Fileless attack detection for Linux in preview

This blog post was co-authored by Aditya Joshi, Senior Software Engineer, Enterprise Protection and Detection.

Attackers are increasingly employing stealthier methods to avoid detection. Fileless attacks exploit software vulnerabilities, inject malicious payloads into benign system processes, and hide in memory. These techniques minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions.

To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018. Our blog post from 2018 explains how Security Center can detect shellcode, code injection, payload obfuscation techniques, and other fileless attack behaviors on Windows. Our research indicates the rise of fileless attacks on Linux workloads as well.

Today, Azure Security Center is happy to announce a preview for detecting fileless attacks on Linux.  In this post, we will describe a real-world fileless attack on Linux, introduce our fileless attack detection capabilities, and provide instructions for onboarding to the preview. 

Real-world fileless attack on Linux

One common pattern we see is attackers injecting payloads from packed malware on disk into memory and deleting the original malicious file from the disk. Here is a recent example:

  1. An attacker infects a Hadoop cluster by identifying the service running on a well-known port (8088) and exploits unauthenticated remote command execution in Hadoop YARN to gain runtime access to the machine. Note that the owner of the subscription could have mitigated this stage of the attack by configuring just-in-time (JIT) VM access in Security Center.
  2. The attacker copies a file containing packed malware into a temp directory and launches it.
  3. The malicious process unpacks the file using shellcode to allocate a new dynamic executable region of memory in the process’s own memory space and injects an executable payload into the new memory region.
  4. The malware then transfers execution to the injected ELF entry point.
  5. The malicious process deletes the original packed malware from disk to cover its tracks. 
  6. The injected ELF payload contains shellcode that listens for incoming TCP connections and carries out the attacker’s instructions.

This attack is difficult for disk-based scanners to detect: the payload is hidden behind layers of obfuscation and is present on disk only briefly. With the fileless attack detection preview, Security Center can now identify these kinds of payloads in memory and inform users of the payload’s capabilities.
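To illustrate why memory, rather than disk, is the place to look: after the packed file is deleted in step 5, the injected payload remains visible in the process’s memory map. The sketch below is a simplified, hypothetical heuristic (not Security Center’s actual analytics) that flags anonymous writable-and-executable regions in a /proc/&lt;pid&gt;/maps listing, a common indicator of injected shellcode:

```shell
# Hypothetical heuristic, not Security Center's detection logic:
# anonymous (no backing file) rwx mappings in /proc/<pid>/maps are a
# common indicator of injected shellcode or unpacked payloads.
has_anon_rwx() {
  # maps fields: address perms offset dev inode [pathname]
  # an anonymous mapping has no pathname field, so NF == 5
  awk '$2 ~ /^rwx/ && NF == 5 { found = 1 } END { exit !found }' "$1"
}

# scan all live processes (run as root for full coverage)
scan_processes() {
  for maps in /proc/[0-9]*/maps; do
    if has_anon_rwx "$maps" 2>/dev/null; then
      pid=${maps#/proc/}; pid=${pid%/maps}
      echo "anonymous rwx mapping in PID $pid"
    fi
  done
}
```

A real detector would go further (entropy checks, ELF headers in anonymous regions, deleted /proc/&lt;pid&gt;/exe links), but even this crude check catches the pattern described in steps 3 through 5.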

Fileless attack detection capabilities

Like fileless attack detection for Windows, this feature scans the memory of all processes for evidence of fileless toolkits, techniques, and behaviors. Over the course of the preview, we will be enabling and refining our analytics to detect the following behaviors of userland malware:

  • Well-known toolkits and cryptocurrency mining software. 
  • Shellcode, injected ELF executables, and malicious code in executable regions of process memory.
  • LD_PRELOAD-based rootkits that preload malicious libraries.
  • Elevation of privilege of a process from non-root to root.
  • Remote control of another process using ptrace.
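As an example of the second-to-last class of behavior above, an LD_PRELOAD-based rootkit typically shows up in one of two places: the system-wide /etc/ld.so.preload file, or an LD_PRELOAD variable in a process’s environment (/proc/&lt;pid&gt;/environ). A minimal, hypothetical check for the latter, assuming the standard NUL-delimited environ format:

```shell
# Hypothetical check, not Security Center's analytics: report any
# LD_PRELOAD entry in a NUL-delimited environment dump, which is the
# format of /proc/<pid>/environ.
preload_in_environ() {
  tr '\0' '\n' < "$1" | grep '^LD_PRELOAD='
}

# usage (root is typically required to read other processes' environ):
#   preload_in_environ /proc/1234/environ
# the system-wide preload file is also worth inspecting:
#   [ -s /etc/ld.so.preload ] && cat /etc/ld.so.preload
```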

When a detection occurs, you receive an alert on the Security alerts page. Alerts contain supplemental information such as the techniques used, process metadata, and network activity. These details help analysts understand the nature of the malware, differentiate between attacks, and make more informed remediation decisions.


The scan is non-invasive and does not affect the other processes on the system.  The vast majority of scans run in less than five seconds. The privacy of your data is protected throughout this procedure as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.

Getting started

To sign up for this specific preview, or for our ongoing preview program, indicate your interest in the “Fileless attack detection preview.”

Once you choose to onboard, this feature is automatically deployed to your Linux machines as an extension to the Log Analytics agent for Linux (also known as the OMS agent), which supports the Linux OS distributions described in this documentation. This solution supports Azure, cross-cloud, and on-premises environments. Participants must be enrolled in the Standard or Standard Trial pricing tier to benefit from this feature.

To learn more about Azure Security Center, visit the Azure Security Center page.

New Azure Firewall certification and features in Q1 CY2020

This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:

  • ICSA Labs Corporate Firewall Certification.
  • Forced tunneling support now in preview.
  • IP Groups now in preview.
  • Customer configured SNAT private IP address ranges now generally available.
  • High ports restriction relaxation now generally available.

Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

ICSA Labs Corporate Firewall Certification

ICSA Labs is a leading provider of third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.

Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. The Azure Firewall certification report is available here; for more information, see the ICSA Labs Firewall Certification program page.

Figure one – Azure Firewall now ICSA Labs certified.

Forced tunneling support now in preview

Forced tunneling lets you redirect all internet-bound traffic from Azure Firewall to your on-premises firewall or a nearby network virtual appliance (NVA) for additional inspection. By default, forced tunneling isn’t allowed on Azure Firewall to ensure all its outbound Azure dependencies are met.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.

Within this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it’s passed to the internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on this subnet. For more information, see the Azure Firewall forced tunneling documentation.
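As a sketch of the management-subnet requirements described above, and assuming the Azure CLI, the setup might look like the following; the resource names and address prefixes are placeholders:

```shell
# Illustrative sketch only; names and prefixes are placeholders.
# The dedicated management subnet must be named exactly
# AzureFirewallManagementSubnet.
az network vnet subnet create \
  --resource-group MyRG --vnet-name MyVNet \
  --name AzureFirewallManagementSubnet \
  --address-prefixes 10.0.2.0/26

# Its only allowed route is a default route to the internet,
# with BGP route propagation disabled.
az network route-table create \
  --resource-group MyRG --name FwMgmtRouteTable \
  --disable-bgp-route-propagation true
az network route-table route create \
  --resource-group MyRG --route-table-name FwMgmtRouteTable \
  --name ToInternet --address-prefix 0.0.0.0/0 --next-hop-type Internet
az network vnet subnet update \
  --resource-group MyRG --vnet-name MyVNet \
  --name AzureFirewallManagementSubnet \
  --route-table FwMgmtRouteTable
```

This is a configuration fragment against a live subscription, not a runnable sample; consult the forced tunneling documentation for the authoritative steps.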


Figure two – Creating a firewall with forced tunneling enabled.

IP Groups now in preview

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can name an IP group and create one by entering IP addresses or uploading a file. IP Groups ease management and reduce the time spent managing IP addresses by letting you use them in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.
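A hedged sketch of the workflow with the Azure CLI (the ip-group commands may require the azure-firewall CLI extension; all names and addresses are placeholders):

```shell
# Illustrative sketch only; names and addresses are placeholders.
az network ip-group create \
  --resource-group MyRG --name BranchOffices \
  --location westeurope \
  --ip-addresses 10.10.0.0/24 10.20.0.0/24 192.168.5.7

# The group can then be referenced by name in firewall rules:
az network firewall network-rule create \
  --resource-group MyRG --firewall-name MyFirewall \
  --collection-name AllowWeb --name BranchToWeb \
  --priority 200 --action Allow \
  --source-ip-groups BranchOffices \
  --destination-addresses '*' --destination-ports 443 --protocols TCP
```

This is a configuration fragment against a live subscription; updating the IP group updates every rule that references it, which is the management saving described above.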


Figure three – Azure Firewall application rules utilize an IP group.

Customer configured SNAT private IP address ranges

Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks, or opts to force tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall not to SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.
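A hedged illustration with the Azure CLI (this likely requires the azure-firewall CLI extension; the firewall name and ranges are placeholders):

```shell
# Illustrative sketch only; firewall name and ranges are placeholders.
# Traffic destined for these additional ranges will not be SNATed.
# IANAPrivateRanges keeps the default RFC 1918 behavior in place.
az network firewall update \
  --resource-group MyRG --name MyFirewall \
  --private-ranges IANAPrivateRanges 100.64.0.0/10 203.0.113.0/24
```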


Figure four – Azure Firewall with custom private IP address ranges.

High ports restriction relaxation now generally available

Since its initial preview release, Azure Firewall had a limitation that prevented network and application rules from including source or destination ports above 64,000. This behavior blocked RPC-based scenarios, notably Active Directory synchronization. With this update, customers can use any port in the 1–65535 range in network and application rules.

Next steps

For more information on everything we covered above, please see the following blogs, documentation, and videos.

Azure Firewall central management partners:

Azure Firewall Manager now supports virtual networks

This post was co-authored by Yair Tor, Principal Program Manager, Azure Networking.

Last November, we introduced the Microsoft Azure Firewall Manager preview for Azure Firewall policy and route management in secured virtual hubs. That release also included integration with key Security as a Service partners: Zscaler, iboss, and soon Check Point. These partners support branch-to-internet and virtual network-to-internet scenarios.

Today, we are extending Azure Firewall Manager preview to include automatic deployment and central security policy management for Azure Firewall in hub virtual networks.

Azure Firewall Manager preview is a network security management service that provides central security policy and route management for cloud-based security perimeters. It makes it easy for enterprise IT teams to centrally define network and application-level rules for traffic filtering across multiple Azure Firewall instances that span different Azure regions and subscriptions in hub-and-spoke architectures for traffic governance and protection. In addition, it empowers DevOps teams with better agility through derived local firewall security policies that are implemented across organizations.

For more information, see the Azure Firewall Manager documentation.


Figure one – Azure Firewall Manager Getting Started page

 

Hub virtual networks and secured virtual hubs

Azure Firewall Manager can provide security management for two network architecture types:

  •  Secured virtual hub—An Azure Virtual WAN Hub is a Microsoft-managed resource that lets you easily create hub-and-spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a secured virtual hub.
  •  Hub virtual network—This is a standard Azure Virtual Network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a hub virtual network. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. It is also possible to manage firewalls in standalone virtual networks that are not peered to any spoke.

Whether to use a hub virtual network or a secured virtual hub depends on your scenario:

  •  Hub virtual network—Hub virtual networks are probably the right choice if your network architecture is based on virtual networks only, requires multiple hubs per region, or doesn’t use hub-and-spoke at all.
  •  Secured virtual hubs—Secured virtual hubs might address your needs better if you need to manage routing and security policies across many globally distributed secured hubs. Secured virtual hubs have high-scale VPN connectivity, SDWAN support, and third-party Security as a Service integration. You can use Azure to secure your internet edge for both on-premises and cloud resources.

The following comparison table in Figure 2 can assist in making an informed decision:

 

  •  Underlying resource. Hub virtual network: virtual network. Secured virtual hub: Virtual WAN hub.
  •  Hub-and-spoke. Hub virtual network: using virtual network peering. Secured virtual hub: automated using hub virtual network connection.
  •  On-premises connectivity. Hub virtual network: VPN Gateway up to 10 Gbps and 30 S2S connections; ExpressRoute. Secured virtual hub: more scalable VPN Gateway up to 20 Gbps and 1,000 S2S connections; ExpressRoute.
  •  Automated branch connectivity using SDWAN. Hub virtual network: not supported. Secured virtual hub: supported.
  •  Hubs per region. Hub virtual network: multiple virtual networks per region. Secured virtual hub: single virtual hub per region; multiple hubs are possible with multiple Virtual WANs.
  •  Azure Firewall with multiple public IP addresses. Hub virtual network: customer provided. Secured virtual hub: auto-generated (to be available by general availability).
  •  Azure Firewall Availability Zones. Hub virtual network: supported. Secured virtual hub: not available in preview; to be available by general availability.
  •  Advanced internet security with third-party Security as a Service partners. Hub virtual network: customer-established and managed VPN connectivity to the partner service of choice. Secured virtual hub: automated via the Trusted Security Partner flow and partner management experience.
  •  Centralized route management to attract traffic to the hub. Hub virtual network: customer-managed UDR (roadmap: UDR default route automation for spokes). Secured virtual hub: supported using BGP.
  •  Web Application Firewall on Application Gateway. Hub virtual network: supported in virtual network. Secured virtual hub: roadmap (can be used in spoke).
  •  Network Virtual Appliance. Hub virtual network: supported in virtual network. Secured virtual hub: roadmap (can be used in spoke).

Figure 2 – Hub virtual network vs. secured virtual hub

Firewall policy

Firewall policy is an Azure resource that contains network address translation (NAT), network, and application rule collections as well as threat intelligence settings. It’s a global resource that can be used across multiple Azure Firewall instances in secured virtual hubs and hub virtual networks. New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps to create local firewall policies on top of organization mandated base policy. Policies work across regions and subscriptions.

Azure Firewall Manager orchestrates Firewall policy creation and association. However, a policy can also be created and managed via REST API, templates, Azure PowerShell, and CLI.

Once a policy is created, it can be associated with a firewall in a Virtual WAN hub (a secured virtual hub) or a firewall in a virtual network (a hub virtual network).
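The policy lifecycle described above, including inheritance from a base policy, can be sketched with the Azure CLI (this likely requires the azure-firewall CLI extension; the resource names are placeholders):

```shell
# Illustrative sketch only; names are placeholders.
# Create an organization-mandated base policy...
az network firewall policy create \
  --resource-group MyRG --name org-base-policy \
  --threat-intel-mode Alert

# ...then a child policy that inherits from it, to which a local
# DevOps team can add its own rule collections.
az network firewall policy create \
  --resource-group MyRG --name emea-policy \
  --base-policy org-base-policy
```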

Firewall Policies are billed based on firewall associations. A policy with zero or one firewall association is free of charge. A policy with multiple firewall associations is billed at a fixed rate.

For more information, see Azure Firewall Manager pricing.

The following table compares the new firewall policies with the existing firewall rules:

 

  •  Contains. Policy: NAT, network, and application rules, plus threat intelligence settings. Rules: NAT, network, and application rules.
  •  Protects. Policy: virtual hubs and virtual networks. Rules: virtual networks only.
  •  Portal experience. Policy: central management using Firewall Manager. Rules: standalone firewall experience.
  •  Multiple firewall support. Policy: Firewall Policy is a separate resource that can be used across firewalls. Rules: manually export and import rules, or use third-party management solutions.
  •  Pricing. Policy: billed based on firewall associations (see pricing). Rules: free.
  •  Supported deployment mechanisms. Policy: portal, REST API, templates, PowerShell, and CLI. Rules: portal, REST API, templates, PowerShell, and CLI.
  •  Release status. Policy: preview. Rules: general availability.

Figure 3 – Firewall Policy vs. Firewall Rules

Next steps

For more information on topics covered here, see the following blogs, documentation, and videos:

Azure Firewall central management partners:

Announcing the preview of Azure Shared Disks for clustered applications

Today, we are announcing the limited preview of Azure Shared Disks, the industry’s first shared cloud block storage. Azure Shared Disks enables the next wave of block storage workloads migrating to the cloud including the most demanding enterprise applications, currently running on-premises on Storage Area Networks (SANs). These include clustered databases, parallel file systems, persistent containers, and machine learning applications. This unique capability enables customers to run latency-sensitive workloads, without compromising on well-known deployment patterns for fast failover and high availability. This includes applications built for Windows or Linux-based clustered filesystems like Global File System 2 (GFS2).

With Azure Shared Disks, customers now have the flexibility to migrate clustered environments running on Windows Server, including Windows Server 2008 (which has reached End-of-Support), to Azure. This capability is designed to support SQL Server Failover Cluster Instances (FCI), Scale-out File Servers (SoFS), Remote Desktop Servers (RDS), and SAP ASCS/SCS running on Windows Server.

We encourage you to get started and request access by filling out this form.

Leveraging Azure Shared Disks

Azure Shared Disks provides a consistent experience for applications running on clustered environments today. This means that any application that currently leverages SCSI Persistent Reservations (PR) can use this well-known set of commands to register nodes in the cluster to the disk. The application can then choose from a range of supported access modes for one or more nodes to read or write to the disk. These applications can deploy in highly available configurations while also leveraging Azure Disk durability guarantees.

The diagram below illustrates a sample two-node clustered database application orchestrating failover from one node to the other.
   2-node failover cluster
The flow is as follows:

  1. The clustered application running on both Azure VM 1 and  Azure VM 2 registers the intent to read or write to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk.
  3. This reservation is enforced on Azure Disk and the database can now exclusively write to the disk. Any writes from the application instance on Azure VM 2 will not succeed.
  4. If the application instance on Azure VM 1 goes down, the instance on Azure VM 2 can now initiate a database failover and take over the disk.
  5. This reservation is now enforced on the Azure Disk, and it will no longer accept writes from the application on Azure VM 1. It will now only accept writes from the application on Azure VM 2.
  6. The clustered application can complete the database failover and serve requests from Azure VM 2.
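On Linux guests, the registration and reservation steps above map to standard SCSI PR commands, which the sg3_utils package exposes via sg_persist. A rough, hedged illustration (the device path and keys are placeholders, and the reservation type varies by cluster manager):

```shell
# Illustrative sketch using sg3_utils; device and keys are placeholders.
# Step 1: each node registers its own reservation key with the shared disk.
sg_persist --out --register --param-sark=0xabc1 /dev/sdc   # on Azure VM 1
sg_persist --out --register --param-sark=0xabc2 /dev/sdc   # on Azure VM 2

# Step 2: VM 1 takes a Write Exclusive (type 1) reservation.
sg_persist --out --reserve --param-rk=0xabc1 --prout-type=1 /dev/sdc

# Steps 4-5: on failover, VM 2 preempts VM 1's reservation.
sg_persist --out --preempt --param-rk=0xabc2 --param-sark=0xabc1 \
  --prout-type=1 /dev/sdc
```

In practice the cluster manager (for example WSFC or Pacemaker) issues these commands on your behalf; they are shown only to make the reservation flow concrete.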

The diagram below illustrates another common workload, in which multiple nodes read data from the disk to run parallel jobs, for example, training machine learning models.
   n-node cluster with multiple readers
The flow is as follows:

  1. The application registers all virtual machines to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk while allowing reads from the other virtual machines.
  3. This reservation is enforced on Azure Disk.
  4. All nodes in the cluster can now read from the disk. Only one node writes results back to the disk on behalf of all the nodes in the cluster.

Disk types, sizes, and pricing

Azure Shared Disks are available on Premium SSDs and support disk sizes of P15 (256 GiB) and greater. Support for Azure Ultra Disk will be available soon. Azure Shared Disks can be enabled as data disks only (not OS disks). Each additional mount of an Azure Shared Disk (Premium SSD) is charged based on disk size. Please refer to the Azure Disks pricing page for details on limited preview pricing.
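As a hedged sketch with the Azure CLI (the resource names are placeholders, and the --max-shares flag may require a recent CLI version; at preview time the documented path is Azure Resource Manager templates):

```shell
# Illustrative sketch only; names are placeholders.
# Create a 256 GiB (P15) Premium SSD data disk that up to two VMs
# can attach simultaneously.
az disk create \
  --resource-group MyRG --name my-shared-disk \
  --size-gb 256 --sku Premium_LRS \
  --max-shares 2
```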

Azure Shared Disks vs Azure Files

Azure Shared Disks provides shared access to block storage that multiple virtual machines can leverage. You will need a cluster manager, such as Windows Server Failover Clustering (WSFC), Pacemaker, or Corosync, for node-to-node communication and to enable write locking. If you are looking for a fully managed file service on Azure that can be accessed using the Server Message Block (SMB) or Network File System (NFS) protocol, check out Azure Premium Files or Azure NetApp Files.

Getting started

You can create Azure Shared Disks using Azure Resource Manager templates. For details on how to get started and use Azure Shared Disks in preview, please refer to the documentation page. For updates on regional availability and Ultra Disk availability, please refer to the Azure Disks FAQ. Here is a video of Mark Russinovich from Microsoft Ignite 2019 covering Azure Shared Disks.

In the coming weeks, we will be enabling Portal and SDK support. Support for Azure Backup and  Azure Site Recovery is currently not available. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

If you are interested in participating in the preview, you can now get started by requesting access.

Microsoft Sustainability Calculator helps enterprises analyze the carbon emissions of their IT infrastructure

an industry wind farm

For more than a decade, Microsoft has been investing to reduce environmental impact while supporting the digital transformation of organizations around the world through cloud services. We strive to be transparent with our commitments, evidenced by our announcement that Microsoft’s cloud datacenters will be powered by 100 percent renewable energy sources by 2025. The commitments and investments we make as a company are important steps in reducing our own environmental impact, but we recognize that the opportunity for positive change is greatest by empowering customers and partners to achieve their own sustainability goals.

An industry first—the Microsoft Sustainability Calculator

Today we’re announcing the availability of the Microsoft Sustainability Calculator, a Power BI application for Azure enterprise customers that provides new insight into the carbon emissions data associated with their Azure services. Migrating from traditional datacenters to cloud services significantly improves efficiency; however, enterprises now want additional insight into the carbon impact of their cloud workloads to help them make more sustainable computing decisions. For the first time, those responsible for reporting on and driving sustainability within their organizations will be able to quantify the carbon impact of each Azure subscription over a given period and datacenter region, as well as see the estimated carbon savings from running those workloads in Azure versus on-premises datacenters. This data is crucial for reporting existing emissions and is a first step in establishing a foundation for further decarbonization efforts.

Microsoft Sustainability Calculator carbon data visualization view

Providing transparency with rigorous methodology

The tool’s calculations are based on a customer’s Azure consumption, informed by the research in the 2018 whitepaper, “The Carbon Benefits of Cloud Computing: A Study of the Microsoft Cloud,” and have been independently verified by Apex, a leading environmental verification body. The calculator factors in inputs such as the energy requirements of the Azure service, the energy mix of the electric grid serving the hosting datacenters, Microsoft’s procurement of renewable energy in those datacenters, and the emissions associated with the transfer of data over the internet. The result is an estimate of the greenhouse gas (GHG) emissions, measured in total metric tons of carbon dioxide equivalent (MTCO2e), related to a customer’s consumption of Azure.

The calculator gives a granular view of the estimated emissions savings from running workloads on Azure by accounting for Microsoft’s IT operational efficiency, IT equipment efficiency, and datacenter infrastructure efficiency compared to that of a typical on-premises deployment. It also estimates the emissions savings attributable to a customer from Microsoft’s purchase of renewable energy.
   Microsoft Sustainability Calculator - Reporting

We also understand customers want transparency into the specific commitments we are making to build a more sustainable cloud. To make that information easily accessible, we’ve built a view within the tool of the renewable energy projects that Microsoft has invested in as part of its carbon neutral and renewable energy commitments. Each year Microsoft purchases renewable energy to cover its annual cloud consumption. Customers can use the world map to learn about projects in regions where they consume Azure services or have a regional presence. The projects are examples of the investments that Microsoft has made since 2012.

A path to actionable insight

Azure enterprise customers can get started by downloading the Microsoft Sustainability Calculator from AppSource now and following the included setup instructions. We’re excited by the opportunity this new tool provides for our customers to gain a deeper understanding of their current infrastructure and drive meaningful sustainability conversations within their organizations. We see this as a first step and plan to deepen and expand the tool’s capabilities in the future. We know our customers would like an even more comprehensive view of the sustainability benefits of our cloud services and look forward to supporting and enabling them in their journey.

Connecting Microsoft Azure and Oracle Cloud in the UK and Canada

In June 2019, Microsoft announced a cloud interoperability collaboration with Oracle that will enable our customers to migrate and run enterprise workloads across Microsoft Azure and Oracle Cloud.

At Oracle OpenWorld in September, the cross-cloud collaboration was a big part of the conversation. Since then, we have fielded interest from mutual customers who want to accelerate their cloud adoption across both Microsoft Azure and Oracle Cloud. Customers are interested in running their Oracle database and enterprise applications on Azure and in the scenarios enabled by the industry’s first cross-cloud interconnect implementation between Azure and Oracle Cloud Infrastructure. Many are also excited about our announcement to integrate Microsoft Teams with Oracle Cloud Applications. We have already enabled the integration of Azure Active Directory with Oracle Cloud Applications and continue to break new ground while engaging with customers and partners.

Interest from the partner community

Partners like Accenture are very supportive of the collaboration between Microsoft Azure and Oracle Cloud. Accenture recently published a white paper, articulating their own perspective and hands-on experiences while configuring the connectivity between Microsoft Azure and Oracle Cloud Infrastructure.

Another Microsoft and Oracle partner that expressed interest early on is SYSCO, a European IT company specializing in solutions for the utilities sector. They offer unique industry expertise combined with highly skilled technology experts within AI and analytics, cloud, infrastructure, and applications. SYSCO is a Microsoft Gold Cloud Platform partner and a Platinum Oracle partner.

In August 2019, we introduced the ability to interconnect Microsoft Azure (UK South) and Oracle Cloud Infrastructure in London, UK providing our joint customers access to a direct, low-latency, and highly reliable network connection between Azure and Oracle Cloud Infrastructure. Prior to that, for partners like SYSCO, the ability to leverage this new collaboration between Microsoft Azure and Oracle Cloud was out of reach.

“The Microsoft Azure and Oracle Cloud Interconnect announcement is one of the best announcements in years for our customers! A direct link provides the Microsoft / Oracle cloud interconnect with a new option for all customers using proprietary business applications. With our expertise across both Microsoft and Oracle, we are thrilled to be one of the first partners to pilot this together with our customers in the utilities industry in Norway.” – Frank Vikingstad, VP International, SYSCO

Azure and Oracle Cloud Infrastructure interconnect in Toronto, Canada

Today we are announcing that we have extended the Microsoft Azure and Oracle Cloud Infrastructure interconnect to include the Azure Canada Central region and Oracle Cloud Infrastructure region in Toronto, Canada.

“This unique Azure and Oracle Cloud Infrastructure solution delivers the performance, easy integration, rigorous service level agreements, and collaborative enterprise support that enterprise IT departments need to simplify their operations. We’ve been pleased by the demand for the interconnected cloud solution by our mutual customers around the world and are thrilled to extend these capabilities to our Canadian customers.” –Clive D’Souza, Sr. Director and Head of Product Management, Oracle Cloud Infrastructure

What this means for you

In addition to being able to run certified Oracle databases and applications on Azure, you now have access to new migration and deployment scenarios enabled by the interconnect. For example, you can rely on tested, validated, and supported deployments of Oracle applications on Azure with Oracle databases, Real Application Clusters (RAC) and Exadata, deployed in Oracle Cloud Infrastructure. You can also run custom applications on Azure backed by Oracle’s Autonomous Database on Oracle Cloud Infrastructure.

To learn more about the collaboration between Oracle and Microsoft and how you can run Oracle applications on Azure please refer to our website.

Azure Stack HCI now running on HPE Edgeline EL8000

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure’s hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance? 

Well, Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft’s Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width which brings additional flexibility for deploying at the deep edge, whether it is in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBs) compliant system delivers secure scalability.

The HPE Edgeline EL8000 Converged Edge system provides:

  • Traditional x86 compute optimized for edge deployments, far from the traditional datacenter, without sacrificing compute performance.
  • Edge-optimized remote system management with wireless capabilities based on Redfish industry standard.
  • Compact form factor, with short-depth and half-width options.
  • Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments including NEBs level three and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) level three/four compliance.
  • Broad accelerator support for emerging edge artificial intelligence (AI) use cases, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs).
  • Up to four independent compute nodes, which are cluster-ready with embedded networks.

Modular design providing broad configuration possibilities

The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation so as use case requirements change, the system’s flexible and modular architecture can scale to meet them.

Seamless management and security features with HPE Edgeline Chassis Manager

The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. It increases efficiency and reliability by managing chassis fan speeds for each installed server blade and by monitoring the health and status of the power supply. It also simplifies firmware upgrade management and implementation.

Microsoft Azure Stack HCI:

Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.

Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote-direct memory access (RDMA) networking.

Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.

You can take advantage of cloud and on-premises working together with a hyperconverged infrastructure platform in the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:

  • Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).

  • Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.

  • Cloud Witness, to use Azure as the lightweight tiebreaker for cluster quorum.

  • Azure Backup for offsite data protection and to protect against ransomware.

  • Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.

  • Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN).

  • Azure File Sync to sync your file server with the cloud.

  • Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.

By deploying the Microsoft and HPE HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency, while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.

Microsoft has validated the Lenovo ThinkSystem SE350 edge server for Azure Stack HCI

Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure’s hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?

Microsoft and Lenovo have teamed up to validate the Lenovo ThinkSystem SE350 for Microsoft’s Azure Stack HCI program. To see all Lenovo servers validated for Azure Stack HCI, visit the Azure Stack HCI catalog.

Lenovo ThinkSystem SE350:

The ThinkSystem SE350 is the latest workhorse for the edge. Designed and built with the unique requirements for edge servers in mind, it is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options and is easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 is a rugged compact-sized edge solution with a focus on smart connectivity, business security, and manageability for the harsh environment.

The ThinkSystem SE350 is an Intel® Xeon® D processor-based server, with a 1U height, half-width, short-depth case that can go anywhere. Mount it on a wall, stack it on a shelf, or install it in a rack. This rugged edge server handles operating temperatures from 0 to 55°C and delivers full performance in high-dust and high-vibration environments.

Information availability is another challenging issue for users at the edge, who require insight into their operations at all times to ensure they are making the right decisions. The ThinkSystem SE350 is designed to provide several connectivity options with wired and secure wireless Wi-Fi and LTE connection ability. This purpose-built compact server is reliable for a wide variety of edge and IoT workloads.

By deploying the Microsoft and Lenovo HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency, while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.

Extended filesystem programming capabilities in Azure Data Lake Storage

Since the general availability of Azure Data Lake Storage Gen2 in February 2019, customers have been getting insights at cloud scale faster than ever before. Integration with analytics engines is critical for their analytics workloads, and equally important is the ability to programmatically ingest, manage, and analyze data. This ability is critical for key areas of enterprise data lakes such as data ingestion, event-driven big data platforms, machine learning, and advanced analytics. Programmatic access is possible today using Azure Data Lake Storage Gen2 REST APIs or Blob REST APIs. In addition, customers can enable continuous integration and continuous delivery (CI/CD) pipelines using Blob PowerShell and CLI capabilities via multi-protocol access. As part of the journey to enable our developer ecosystem, our goal is to make customer application development easier than ever before.

We are excited to announce the public preview of .NET SDK, Python SDK, Java SDK, PowerShell, and CLI for filesystem operations for Azure Data Lake Storage Gen2. Customers who are used to the familiar filesystem programming model can now implement this model using .NET, Python, and Java SDKs. Customers can also now incorporate these filesystem operations into their CI/CD pipelines using PowerShell and CLI, thereby enriching CI/CD pipeline automation for big data workloads on Azure Data Lake Storage Gen2. As part of this preview, the SDKs, PowerShell, and CLI include support for CRUD operations for filesystems, directories, files, and permissions through filesystem semantics for Azure Data Lake Storage Gen2.
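For a feel of the filesystem programming model, here is a minimal sketch using the Python SDK (the azure-storage-file-datalake package). The account, key, filesystem, and path names are placeholders invented for illustration, not values from this announcement:

```python
# Sketch of filesystem-style CRUD against Data Lake Storage Gen2 using the
# azure-storage-file-datalake package. All names below are placeholders.

def dfs_endpoint(account_name: str) -> str:
    """Build the Data Lake Storage Gen2 (DFS) endpoint URL for an account."""
    return f"https://{account_name}.dfs.core.windows.net"

def demo_filesystem_crud(account_name: str, account_key: str) -> None:
    # Imported inside the function so the sketch stays importable
    # even where the SDK is not installed.
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url=dfs_endpoint(account_name), credential=account_key)

    # A filesystem is the Gen2 analogue of a blob container.
    fs = service.create_file_system(file_system="telemetry")

    # Create a directory and a file, then append and flush data.
    directory = fs.create_directory("raw/2019")
    f = directory.create_file("events.csv")
    data = b"id,value\n1,42\n"
    f.append_data(data, offset=0, length=len(data))
    f.flush_data(len(data))

    # With a hierarchical namespace, a directory rename is a single
    # metadata operation, not a per-blob copy.
    directory.rename_directory(f"{fs.file_system_name}/archive/2019")
    fs.delete_file_system()

# Usage (requires a real storage account and key):
#   demo_filesystem_crud("<storage-account>", "<account-key>")
```

The same create/append/flush/rename flow is available through the .NET and Java SDKs, and as PowerShell cmdlets and CLI commands for CI/CD pipelines.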

Detailed reference documentation for all these filesystem semantics is provided in the links below. These links will also help you get started and provide feedback.

This public preview is available globally in all regions. Your participation and feedback are critical to help us enrich your development experience. Join us in our journey.

Multi-protocol access on Data Lake Storage now generally available

We are excited to announce the general availability of multi-protocol access for Azure Data Lake Storage. Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. This is a no-compromise solution that allows both the Azure Blob Storage API and Azure Data Lake Storage API to access data on a single storage account. You can store all your different types of data in one place, which gives you the flexibility to make the best use of your data as your use case evolves. The general availability of multi-protocol access creates the foundation to enable object storage capabilities on Data Lake Storage. This brings together the best of both object storage and Hadoop Distributed File System (HDFS) to enable scenarios that were not possible until today without data copy.
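To make the “same data, two protocols” idea concrete, here is a small illustrative sketch in plain Python (not the Azure SDK) of how a hierarchical, directory-level listing relates to the flat object keys the Blob API exposes; with a hierarchical namespace the service handles this natively rather than through client-side prefix scans:

```python
# Toy illustration: one set of objects, viewed as flat blob keys or as a
# directory hierarchy. Keys and paths are invented for the example.

def list_children(keys, directory):
    """Emulate a one-level hierarchical listing over flat blob keys."""
    prefix = directory.rstrip("/") + "/" if directory else ""
    children = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        remainder = key[len(prefix):]
        head, sep, _ = remainder.partition("/")
        # A separator in the remainder means `head` is a subdirectory.
        children.add(head + ("/" if sep else ""))
    return sorted(children)

# One storage account, one set of objects, reachable through both APIs.
blob_keys = [
    "raw/2019/jan/events.csv",
    "raw/2019/feb/events.csv",
    "curated/summary.parquet",
]

print(list_children(blob_keys, ""))          # ['curated/', 'raw/']
print(list_children(blob_keys, "raw"))       # ['2019/']
print(list_children(blob_keys, "raw/2019"))  # ['feb/', 'jan/']
```

Because both APIs address the same objects, a blob-based ingestion tool and an HDFS-style analytics engine can operate on this data side by side with no copies.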

Multi-protocol access generally available

Broader ecosystem of applications and features

Multi-protocol access provides a powerful foundation to enable integrations and features for Data Lake Storage. Existing object storage applications and connectors can now be used to access data stored in Data Lake Storage with no changes. This vastly accelerates the integration of Azure services and the partner ecosystem with Data Lake Storage. We are also announcing the general availability of multiple Azure service integrations with Data Lake Storage, including Azure Stream Analytics, IoT Hub, Azure Event Hubs Capture, Azure Data Box, and Logic Apps. These Azure services now integrate seamlessly with Data Lake Storage. Real-time scenarios are now enabled by easily ingesting streaming data into Data Lake Storage via IoT Hub, Stream Analytics, and Event Hubs Capture.

Ecosystem partners have also strongly leveraged multi-protocol access for their applications. Here is what our partners are saying:

“Multi-protocol access is a massive paradigm shift that enables cloud analytics to run on a single account for both blob data and analytics data. We believe that multi-protocol access helps customers rapidly achieve integration with Azure Data Lake Storage using our existing blob connector. This brings tremendous value to customers without needing to do costly re-development efforts.” – Rob Cornell, Head of Cloud Alliances, Talend

Our customers are excited about how their existing blob applications and workloads “just work” with the multi-protocol capability. No changes are required for their existing blob applications, saving them precious development and validation resources. We have customers today running multiple workloads seamlessly against the same data using both the blob connector and the Azure Data Lake Storage connector.

We are also making the ability to tier data between hot and cool tiers for Data Lake Storage generally available. This is great for analytics customers who want to keep frequently used analytics data in the hot tier and move less used data to cooler storage tiers for cost efficiencies. As we continue our journey, we will be enabling more capabilities on Data Lake Storage in upcoming releases. Stay tuned for more announcements in the future!

Get started with multi-protocol access

Visit our multi-protocol access documentation to get started. For additional information see our preview announcement. To learn more about pricing, see our pricing page.

Introducing Azure Cost Management for partners

As a partner, you play a critical role in planning and managing long-term cloud implementations for your customers. While the cloud grants the flexibility to scale infrastructure to changing needs, it becomes challenging to control spend when cloud costs can fluctuate dramatically with demand. This is where Azure Cost Management comes in: it helps you track and control cloud costs, prevent overspending, and increase predictability.

We are announcing the general availability of Azure Cost Management for all Cloud Solution Provider (CSP) partners who have onboarded their customers to the new Microsoft Customer Agreement. With this update, partners and their customers can take advantage of the Azure Cost Management tools available to manage cloud spend, similar to the cost management capabilities available to pay-as-you-go (PAYG) and enterprise customers today.

This is the first in a series of periodic updates that enable partners to understand, analyze, and manage costs across all their customers and invoices.

With this update, CSP partners can use Azure Cost Management to:

  • Understand invoiced costs and associate the costs to the customer, subscriptions, resource groups, and services.
  • Get an intuitive view of Azure costs in cost analysis with capabilities to analyze costs by customer, subscription, resource group, resource, meter, service, and many other dimensions.
  • View resource costs that have Partner Earned Credit (PEC) applied in Cost Analysis.
  • Set up notifications and automation using programmatic budgets and alerts when costs exceed budgets.
  • Enable the Azure Resource Manager policy that provides customer access to Cost Management data. Customers can then view consumption cost data for their subscriptions using pay-as-you-go rates.

For more information, see Get started with Azure Cost Management as a partner.

Analyze costs by customer, subscription, tag, resource group, or resource using cost analysis

Using cost analysis, partners can group and filter costs by customer, subscription, tag, resource group, resource, and reseller Microsoft Partner Network identifier (MPN ID), gaining increased visibility into costs for better cost control. Partners can also view and manage costs in the billing currency and in US dollars for billing scopes.

An image showing how you can group and filter costs in cost analysis.
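The grouping behavior described above can be sketched in a few lines. This is illustrative Python over made-up cost records, not the Cost Management API; real data would come from cost analysis or its REST endpoints:

```python
# Illustrative grouping of cost records by arbitrary dimensions, the way
# cost analysis lets partners slice spend. Field names are invented.
from collections import defaultdict

def group_costs(records, *keys):
    """Sum `cost` over records, grouped by the given dimension names."""
    totals = defaultdict(float)
    for r in records:
        totals[tuple(r[k] for k in keys)] += r["cost"]
    return dict(totals)

records = [
    {"customer": "Contoso",  "subscription": "prod", "service": "VMs",     "cost": 120.0},
    {"customer": "Contoso",  "subscription": "dev",  "service": "Storage", "cost": 30.0},
    {"customer": "Fabrikam", "subscription": "prod", "service": "VMs",     "cost": 75.0},
]

print(group_costs(records, "customer"))
# {('Contoso',): 150.0, ('Fabrikam',): 75.0}
print(group_costs(records, "customer", "subscription"))
# {('Contoso', 'prod'): 120.0, ('Contoso', 'dev'): 30.0, ('Fabrikam', 'prod'): 75.0}
```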

Reconcile cost to an invoice

Partners can reconcile costs by invoice across their customers and their subscriptions to understand the pre-tax costs that contributed to the invoice.

An image showing how cost analysis can help analyze Azure spend to reconcile cost.

You can analyze Azure spend for the customers you support and their subscriptions and resources. With this enhanced visibility into your customers’ costs, you can use spending patterns to enforce cost control mechanisms, like budgets and alerts, to manage costs with continued and increased accountability.

Enable cost management at retail rates for your customers

In this update, a partner can also enable cost management features, initially at pay-as-you-go rates, for customers and resellers who have access to subscriptions in the customer’s tenant. As a partner, if you decide to enable cost management for users with access to a subscription, they will have the same capabilities to analyze the services they consume and to set budgets to control costs, computed at pay-as-you-go prices for Azure consumed services. This is just the first of the updates; we have features planned for the first half of 2020 to enable cost management for customers at prices that partners can set by applying a markup on the pay-as-you-go prices.

Partners can set a policy to enable cost management for users with access to an Azure subscription to view costs at retail rates for a specific customer.

An image showing how partners can set a policy to view costs at retail rates for a specific customer.

If the policy is enabled for subscriptions in the customer’s tenant, users with role-based access control (RBAC) access to the subscription can now manage Azure consumption costs at retail prices.

An image showing how customers with RBAC access can manage Azure consumption at retail prices.
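The planned markup model is simple arithmetic. The sketch below is purely illustrative: the rates and the 10 percent markup are invented, and the markup capability itself is still a planned 2020 feature, not part of this update:

```python
# Hypothetical pricing arithmetic: customer-visible cost at pay-as-you-go
# rates, optionally with a partner markup (a planned future capability).
# The unit price, quantity, and markup below are invented for the sketch.

def customer_price(payg_unit_price: float, quantity: float,
                   markup_percent: float = 0.0) -> float:
    """Price shown to the customer: pay-as-you-go rate plus partner markup."""
    return round(payg_unit_price * quantity * (1 + markup_percent / 100), 2)

# Today's capability: costs surfaced at plain pay-as-you-go rates.
print(customer_price(0.096, 730))                      # one VM-month at $0.096/hr

# Planned: the same consumption with a 10% partner markup applied.
print(customer_price(0.096, 730, markup_percent=10))
```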

Set up programmatic budgets and alerts to automate and notify when costs exceed a threshold

As a partner, you can set up budgets and alerts to send notifications to specified email recipients when the cost threshold is exceeded. In the partner tenant, you can set up budgets for costs as invoiced to the partner. You can also set up monthly, quarterly, or annual budgets across all your customers, or for a specific customer, and filter by subscription, resource, reseller MPN ID, or resource group.

An image showing how you can set up budgets and alerts.

Any user with RBAC access to a subscription or resource group can also set up budgets and alerts for Azure consumption costs at retail rates in the customer tenant if the policy for cost visibility has been enabled for the customer.

An image showing how users can create budgets.

When a budget is created for a subscription or resource group in the customer tenant, you can also configure it to call an action group. The action group can perform a variety of different actions when your budget threshold is met. For more information about action groups, see Create and manage action groups in the Azure portal. For more information about using budget-based automation with action groups, see Manage costs with Azure budgets.
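The threshold semantics behind budget alerts can be sketched as follows. This is illustrative Python, not the service’s implementation, and the threshold percentages are invented rather than service defaults:

```python
# Minimal sketch of budget-and-alert semantics: a budget carries alert
# thresholds (percentages of the budget amount), and a notification or
# action group fires for each threshold the current cost has crossed.

def triggered_alerts(budget_amount: float, current_cost: float,
                     thresholds=(50, 80, 100)):
    """Return which percentage thresholds the current cost has crossed."""
    spent_percent = current_cost / budget_amount * 100
    return [t for t in thresholds if spent_percent >= t]

# A $1,000 monthly budget for one customer, $850 consumed so far:
print(triggered_alerts(1000, 850))   # [50, 80] -> notify recipients
print(triggered_alerts(1000, 1200))  # [50, 80, 100] -> over budget
```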

All the experiences that we provide in Azure Cost Management natively are also available as REST APIs for enabling automated cost management experiences.

Coming soon

  • We will be enabling cost recommendation and optimization suggestions for better savings and efficiency in managing Azure costs.
  • We will launch Azure Cost Management at retail rates for customers who are not on the Microsoft Customer Agreement and are supported by CSP partners.
  • Showback features that enable partners to charge a markup on consumption costs are also being planned for 2020.

Try Azure Cost Management for partners today! It is natively available in the Azure portal for all partners who have onboarded customers to the new Microsoft Customer Agreement.

Announcing the general availability of the new Azure HPC Cache service

If data-access challenges have been keeping you from running high-performance computing (HPC) jobs in Azure, we’ve got great news to report! The now-available Microsoft Azure HPC Cache service lets you run your most demanding workloads in Azure without the time and cost of rewriting applications, while storing data where you want it: in Azure or on your on-premises storage. By minimizing latency between compute and storage, the HPC Cache service seamlessly delivers the high-speed data access required to run your HPC applications in Azure.

Use Azure to expand analytic capacity—without worrying about data access

Most HPC teams recognize the potential for cloud bursting to expand analytic capacity. While many organizations would benefit from the capacity and scale advantages of running compute jobs in the cloud, users have been held back by the size of their datasets and the complexity of providing access to those datasets, typically stored on long-deployed network-attached storage (NAS) assets. These NAS environments often hold petabytes of data collected over a long period of time and represent significant infrastructure investment.

Here’s where the HPC Cache service can help. Think of the service as an edge cache that provides low-latency access to POSIX file data sourced from one or more locations, including on-premises NAS and data archived to Azure Blob storage. The HPC Cache makes it easy to use Azure to increase analytic throughput, even as the size and scope of your actionable data expands.

Keep up with the expanding size and scope of actionable data

The rate of new data acquisition in certain industries such as life sciences continues to drive up the size and scope of actionable data. Actionable data, in this case, could be datasets that require post-collection analysis and interpretation that in turn drive upstream activity. A sequenced genome can approach hundreds of gigabytes, for example. As the rate of sequencing activity increases and becomes more parallel, the amount of data to store and interpret also increases—and your infrastructure has to keep up. Your power to collect, process, and interpret actionable data—your analytic capacity—directly impacts your organization’s ability to meet the needs of customers and to take advantage of new business opportunities.

Some organizations address expanding analytic throughput requirements by continuing to deploy more robust on-premises HPC environments with high-speed networking and performant storage. But for many companies, expanding on-premises environments presents increasingly daunting and costly challenges. For example, how can you accurately forecast and more economically address new capacity requirements? How do you best juggle equipment lifecycles with bursts in demand? How can you ensure that storage keeps up (in terms of latency and throughput) with compute demands? And how can you manage all of it with limited budget and staffing resources?

Azure services can help you more easily and cost-effectively expand your analytic throughput beyond the capacity of existing HPC infrastructure. You can use tools like Azure CycleCloud and Azure Batch to orchestrate and schedule compute jobs on Azure virtual machines (VMs). More effectively manage cost and scale by using low-priority VMs, as well as Azure Virtual Machine Scale Sets. Use Azure’s latest H- and N-series Virtual Machines to meet performance requirements for your most complex workloads.

So how do you start? It’s straightforward. Connect your network to Azure via ExpressRoute, determine which VMs you will use, and coordinate processes using CycleCloud or Batch; your burstable HPC environment is ready to go. All you need to do is feed it data. That’s the sticking point, and it’s where you need the HPC Cache service.

Use HPC Cache to ensure fast, consistent data access

Most organizations recognize the benefits of using cloud: a burstable HPC environment can give you more analytic capacity without forcing new capital investments. And Azure offers additional pluses, letting you take advantage of your current schedulers and other toolsets to ensure deployment consistency with your on-premises environment.

But here’s the catch when it comes to data. Your libraries, applications, and location of data may require the same consistency. In some circumstances, a local analytic pipeline may rely on POSIX paths that must be the same whether running in Azure or locally. Data may be linked between directories, and those links may need to be deployed in the same way in the cloud. The data itself may reside in multiple locations and must be aggregated. Above all else, the latency of access must be consistent with what can be realized in the local HPC environment.

To understand how the HPC Cache works to address these requirements, consider it an edge cache that provides low-latency access to POSIX file data sourced from one or more locations. For example, a local environment may contain a large HPC cluster connected to a commercial NAS solution. HPC Cache enables access from that NAS solution to Azure Virtual Machines, containers, or machine learning routines operating across a WAN link. The service accomplishes this by caching client requests (including from the virtual machines) and ensuring that subsequent accesses of that data are serviced by the cache rather than by re-accessing the on-premises NAS environment. This lets you run your HPC jobs at a performance level similar to what you could achieve in your own data center. HPC Cache also lets you build a namespace consisting of data located in multiple exports across multiple sources while displaying a single directory structure to client machines.
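A toy read-through cache captures the behavior just described: the first read of a path crosses the WAN to the origin NAS, and later reads are served from the cache. This is an illustration of the caching pattern, not HPC Cache internals, and the paths and payloads are made up:

```python
# Toy read-through (edge) cache: misses fetch from the origin once;
# subsequent reads of the same path are served locally.

class EdgeCache:
    def __init__(self, origin):
        self.origin = origin          # simulates the on-premises NAS
        self.cache = {}
        self.origin_reads = 0         # counts WAN round trips to the NAS

    def read(self, path):
        if path not in self.cache:    # cache miss: fetch across the WAN once
            self.cache[path] = self.origin[path]
            self.origin_reads += 1
        return self.cache[path]       # hits are served from the cache

nas = {"/genomes/ref.fa": b"ACGT...", "/tools/align.cfg": b"k=21"}
cache = EdgeCache(nas)

# Many clients (VMs, containers) repeatedly reading the same reference data:
for _ in range(1000):
    cache.read("/genomes/ref.fa")
cache.read("/tools/align.cfg")

print(cache.origin_reads)  # 2 -- each file touched the NAS only once
```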

HPC Cache provides a Blob-backed cache (we call it Blob-as-POSIX) in Azure as well, facilitating migration of file-based pipelines without requiring that you rewrite applications. For example, a genetic research team can load reference genome data into the Blob environment to further optimize the performance of secondary-analysis workflows. This helps mitigate any latency concerns when you launch new jobs that rely on a static set of reference libraries or tools.

Azure HPC Cache architecture: diagram showing the placement of Azure HPC Cache in a system architecture that includes on-premises storage, Azure Blob storage, and an Azure compute cluster.

HPC Cache Benefits

Caching throughput to match workload requirements

HPC Cache offers three SKUs: up to 2 gigabytes per second (GB/s), up to 4 GB/s, and up to 8 GB/s throughput. Each of these SKUs can service requests from tens to thousands of VMs, containers, and more. Furthermore, you choose the size of your cache disks to control your costs while ensuring the right capacity is available for caching.

Data bursting from your datacenter

HPC Cache fetches data from your NAS, wherever it is. Run your HPC workload today and figure out your data storage policies over the longer term.

High-availability connectivity

HPC Cache provides high-availability (HA) connectivity to clients, a key requirement for running compute jobs at larger scales.

Aggregated namespace

The HPC Cache aggregated namespace functionality lets you build a namespace out of various sources of data. This abstraction of sources makes it possible to run multiple HPC Cache environments with a consistent view of data.
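Conceptually, an aggregated namespace is a table of junctions mapping client-visible paths to backing exports. The sketch below illustrates the idea only; it is not the HPC Cache implementation, and the source names and paths are invented:

```python
# Illustrative aggregated namespace: several exports from different sources
# presented to clients as one directory tree.

class AggregatedNamespace:
    def __init__(self):
        self.junctions = {}  # virtual path -> (source, export path)

    def add_junction(self, virtual_path, source, export):
        self.junctions[virtual_path] = (source, export)

    def resolve(self, client_path):
        """Map a client-visible path to its backing source and export path."""
        # Longest-prefix match, so nested junctions resolve correctly.
        for vpath, (source, export) in sorted(
                self.junctions.items(), key=lambda kv: -len(kv[0])):
            if client_path.startswith(vpath):
                return source, export + client_path[len(vpath):]
        raise FileNotFoundError(client_path)

ns = AggregatedNamespace()
ns.add_junction("/data/raw", "onprem-nas", "/vol0/raw")
ns.add_junction("/data/ref", "azure-blob", "/reference")

print(ns.resolve("/data/raw/run42/reads.fq"))
# ('onprem-nas', '/vol0/raw/run42/reads.fq')
print(ns.resolve("/data/ref/genome.fa"))
# ('azure-blob', '/reference/genome.fa')
```

Clients see one tree under /data, while the cache services each subtree from a different origin, which is what lets multiple HPC Cache environments present a consistent view of data.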

Lower-cost storage, full POSIX compliance with Blob-as-POSIX

HPC Cache supports Blob-based, fully POSIX-compliant storage. HPC Cache, using the Blob-as-POSIX format, maintains full POSIX support including hard links. If you need this level of compliance, you’ll be able to get full POSIX at Blob price points.

Start here

The Azure HPC Cache service is available today. For the very best results, contact your Microsoft team or related partners; they’ll help you build a comprehensive architecture that optimally meets your specific business objectives and desired outcomes.

Our experts will be attending SC19, the high-performance computing conference in Denver, Colorado, ready and eager to help you accelerate your file-based workloads in Azure!