ExpressRoute Global Reach: Building your own cloud-based global backbone

Connectivity has gone through a fundamental shift as more workloads and services have moved to the cloud. Traditional enterprise Wide Area Networks (WANs) have been fixed in nature, without the ability to dynamically scale to meet modern customer demands. For customers increasingly applying a cloud-first approach as the basis for their application and networking strategy, hybrid cloud enables applications and services to be deployed cross-premises as a fully connected, seamless architecture. Cross-premises connectivity is moving to a more cloud-first model, with services offered by global hyper-scale networks.

Microsoft global network

Microsoft operates one of the largest networks on the globe, spanning over 130,000 miles of terrestrial and subsea fiber cable systems across six continents. Besides Azure, the global network powers all our cloud services, including Bing, Office 365, and Xbox. The network carries more than 30 billion packets per second at any one time and is accessible for peering, private connectivity, and application content delivery through our more than 160 global network PoPs. Microsoft continuously adds new network PoPs to optimize the experience for customers accessing Microsoft services.

Microsoft's Global network map

The global network is built and operated using intelligent software-defined traffic engineering technologies that allow Microsoft to dynamically select optimal paths and route around network faults and congestion in near real-time. The network has multiple redundant paths to ensure maximum uptime and reliability when powering mission-critical workloads for our customers.

Microsoft's Point of Presence (PoP) with connectivity services

ExpressRoute overview

Azure ExpressRoute provides enterprises with a service that bypasses the Internet to securely and privately connect to Azure and to create their own global network. A common scenario is for enterprises to use ExpressRoute to access their Azure virtual networks (VNets) containing their own private IP addresses, making Azure a seamless hybrid extension of their on-premises networks. Another scenario is using ExpressRoute to access public services over a private connection, such as Azure Storage or Azure SQL. Traffic for ExpressRoute enters the Microsoft network at our networking Points of Presence (or PoPs) strategically distributed across the world, hosted in carrier-neutral facilities to give customers options when picking a carrier or telco partner.

ExpressRoute circuits are available in three SKUs:

  • ExpressRoute Local: Available at ExpressRoute sites physically close to an Azure region and can be used only to access the local Azure region. Because the traffic stays in the regional network and does not traverse the global network, the ExpressRoute Local traffic has no egress charge.
  • ExpressRoute Standard: Provides connectivity to any Azure region within the same geopolitical region as the ExpressRoute site, for example, from the London site to the West Europe region.
  • ExpressRoute Premium: Provides connectivity to any Azure region within the cloud environment. For example, an ExpressRoute Premium circuit at the New Zealand site can access Azure regions in Australia or other geographies from Europe or North America.

In addition to connecting through one of the more than 200 ExpressRoute partners, enterprises can connect directly to ExpressRoute routers with the ExpressRoute Direct option, over either 10-Gbps or 100-Gbps physical interfaces. With ExpressRoute Direct, enterprises can divide this physical port into multiple ExpressRoute circuits to serve different business units and use cases.
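
To make that concrete, here is a minimal sketch, using the Azure SDK for Python, of how a Direct port might be split into circuits. The resource names, location, SKU, and bandwidth values are illustrative assumptions, not details from this article:

```python
# Hypothetical example: carve two ExpressRoute circuits out of one
# ExpressRoute Direct port with azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# an already-provisioned 100-Gbps Direct port (name/resource group assumed)
port = client.express_route_ports.get("contoso-rg", "contoso-er-direct")

for name, gbps in [("circuit-finance", 10), ("circuit-engineering", 40)]:
    client.express_route_circuits.begin_create_or_update(
        "contoso-rg",
        name,
        {
            "location": "westus2",
            "sku": {"name": "Premium_MeteredData", "tier": "Premium", "family": "MeteredData"},
            "express_route_port": {"id": port.id},  # tie the circuit to the Direct port
            "bandwidth_in_gbps": gbps,              # each circuit takes a share of the port
        },
    ).result()
```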

Many customers want to take further advantage of their existing architecture and ExpressRoute connections to provide connectivity between their on-premises sites or datacenters. Enabling site-to-site connectivity across our global network is now very easy: with ExpressRoute Global Reach, a first in the public cloud, Azure provides a sleek and simple way to take full advantage of our global backbone assets.

ExpressRoute Global Reach

With ExpressRoute Global Reach, we are democratizing connectivity, allowing enterprises to build cloud-based virtual global backbones by using ExpressRoute and Microsoft’s global network. ExpressRoute Global Reach enables connectivity from on-premises to on-premises, fully routed privately within the Microsoft global backbone. This capability can be a backup to existing network infrastructure, or it can be the primary means of serving enterprise Wide Area Network (WAN) needs. Microsoft takes care of redundancy, the larger global infrastructure investments, and the scale-out requirements, allowing customers to focus on their core mission.

ExpressRoute Global Reach Map

Consider Contoso, a multi-national company headquartered in Dallas, Texas with global offices in London and Tokyo. These three main locations also serve as major connectivity hubs for branch offices and on-premises datacenters. Utilizing a local last-mile carrier, Contoso invests in redundant paths to meet at the ExpressRoute sites in these same locations. After establishing the physical connectivity, Contoso stands up its ExpressRoute connectivity through a local provider or via ExpressRoute Direct and starts advertising routes via the industry-standard Border Gateway Protocol (BGP). Contoso can now connect all these sites together by enabling Global Reach, which takes the on-premises routes and advertises them to the peered circuits in the remote locations, enabling cross-premises connectivity. Contoso has now created a cloud-based Wide Area Network, all within minutes: effectively end-to-end global connectivity without long-haul investments and fixed contracts.
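
As a hedged sketch of that last step, the snippet below enables Global Reach between two existing circuits with the Azure SDK for Python. The circuit names, resource group, and the /29 address prefix are hypothetical:

```python
# Hypothetical example: connect the Dallas and London circuits' private
# peerings with a Global Reach circuit connection (azure-mgmt-network).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

def private_peering(circuit_name):
    # both circuits must already be provisioned with private peering
    circuit = client.express_route_circuits.get("contoso-rg", circuit_name)
    return next(p for p in circuit.peerings if p.name == "AzurePrivatePeering")

dallas = private_peering("er-dallas")
london = private_peering("er-london")

client.express_route_circuit_connections.begin_create_or_update(
    "contoso-rg",
    "er-dallas",             # circuit that owns the Global Reach connection
    "AzurePrivatePeering",   # peering the connection is attached to
    "dallas-to-london",      # connection name
    {
        "express_route_circuit_peering": {"id": dallas.id},
        "peer_express_route_circuit_peering": {"id": london.id},
        "address_prefix": "10.0.0.0/29",  # unused /29 for the Global Reach link
    },
).result()
```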

Modernizing the network and applying the cloud-first model help customers scale with their needs while taking full advantage of, and building onto, their existing cloud infrastructure. As on-premises sites and branches emerge or change, global connectivity should be as easy as a click of a button. ExpressRoute Global Reach enables companies to provide best-in-class connectivity on one of the most comprehensive software-defined networks on the planet.

ExpressRoute Global Reach is generally available in these locations, including Azure US Government.

Azure HBv2 Virtual Machines eclipse 80,000 cores for MPI HPC

HPC-optimized virtual machines now available

Azure HBv2-series Virtual Machines (VMs) are now generally available in the South Central US region. HBv2 VMs will also be available soon in West Europe, East US, West US 2, North Central US, and Japan East.

HBv2 VMs deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation.

Azure HBv2 VMs are the first in the public cloud to feature 200 gigabit per second HDR InfiniBand from Mellanox. HDR InfiniBand on Azure delivers latencies as low as 1.5 microseconds, more than 200 million messages per second per VM, and advanced in-network computing engines like hardware offload of MPI collectives and adaptive routing for higher performance on the largest scaling HPC workloads. HBv2 VMs use standard Mellanox OFED drivers that support all RDMA verbs and MPI variants.
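
As a hedged illustration of MPI on these VMs (not the benchmark behind the figures above, which come from dedicated tools), a minimal mpi4py ping-pong between two HBv2 nodes looks like this:

```python
# Illustrative only: crude MPI latency probe with mpi4py.
# Launch with e.g.: mpirun -np 2 --host vm1,vm2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = bytearray(8)      # tiny 8-byte payload keeps us in the latency-bound regime
reps = 10000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    else:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # one round trip = two one-way messages; Python overhead is included,
    # so expect numbers above the 1.5 microsecond hardware figure
    print(f"~{elapsed / reps / 2 * 1e6:.2f} us one-way")
```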

Each HBv2 VM features 120 AMD EPYC™ 7002-series CPU cores with clock frequencies up to 3.3 GHz, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 VMs provide up to 340 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and three times faster than what most HPC customers have in their datacenters today. An HBv2 virtual machine is capable of up to 4 double-precision teraFLOPS and up to 8 single-precision teraFLOPS.
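
A quick back-of-envelope check of those peak figures, under the assumption (not stated in this article) that each EPYC 7002 core retires 16 double-precision FLOPs per cycle via two 256-bit FMA units at a sustained all-core clock of roughly 2.1 GHz:

```python
# Rough theoretical-peak arithmetic; the clock and FLOPs/cycle are assumptions.
cores = 120
flops_per_cycle_dp = 16   # 2 x 256-bit FMA = 2 * 4 lanes * 2 ops per cycle
clock_ghz = 2.1           # assumed sustained all-core frequency

peak_dp_tflops = cores * flops_per_cycle_dp * clock_ghz / 1000
print(f"{peak_dp_tflops:.1f} DP TFLOPS, {2 * peak_dp_tflops:.1f} SP TFLOPS")
# -> ~4.0 DP TFLOPS and ~8.1 SP TFLOPS, consistent with the figures above
```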

One-year and three-year Reserved Instance, Pay-As-You-Go, and Spot pricing for HBv2 VMs is available now for both Linux and Windows deployments. For information about five-year Reserved Instances, contact your Azure representative.

Disruptive speed for critical weather forecasting

Numerical Weather Prediction (NWP) and simulation has long been one of the most beneficial use cases for HPC. Using NWP techniques, scientists can better understand and predict the behavior of our atmosphere, which in turn drives advances in everything from coordinating airline traffic and shipping goods around the globe to ensuring business continuity and preparing for the most adverse weather. Microsoft recognizes how critical this field is to science and society, which is why Azure shares US hourly weather forecast data produced by the Global Forecast System (GFS) from the National Oceanic and Atmospheric Administration (NOAA) as part of the Azure Open Datasets initiative.

Cormac Garvey, a member of the Azure Global HPC team, has extensive experience supporting weather simulation teams on the world’s most powerful supercomputers. Today, he’s published a guide to running the widely used Weather Research and Forecasting (WRF) Version 4 simulation suite on HBv2 VMs.

Cormac used a 371M grid point simulation of Hurricane Maria, a Category 5 storm that struck the Caribbean in 2017, with a resolution of 1 kilometer. This model was chosen not only as a rigorous benchmark of HBv2 VMs but also because the fast and accurate simulation of dangerous storms is one of the most vital functions of the meteorology community.

WRF v4.1.3 on Azure HBv2 benchmark results

Figure 1: WRF Speedup from 1 to 672 Azure HBv2 VMs.

| Nodes (VMs) | Parallel Processes | Average Time (s) per Time Step | Scaling Efficiency | Speedup (VM-based) |
|---|---|---|---|---|
| 1 | 120 | 18.51 | 100% | 1.00 |
| 2 | 240 | 8.9 | 104% | 2.08 |
| 4 | 480 | 4.37 | 106% | 4.24 |
| 8 | 960 | 2.21 | 105% | 8.38 |
| 16 | 1,920 | 1.16 | 100% | 15.96 |
| 32 | 3,840 | 0.58 | 100% | 31.91 |
| 64 | 7,680 | 0.31 | 93% | 59.71 |
| 128 | 15,360 | 0.131 | 110% | 141.30 |
| 256 | 23,040 | 0.082 | 88% | 225.73 |
| 512 | 46,080 | 0.0456 | 79% | 405.92 |
| 640 | 57,600 | 0.0393 | 74% | 470.99 |
| 672 | 80,640 | 0.0384 | 72% | 482.03 |

Figure 2: Scaling and configuration data for WRF on Azure HBv2 VMs.

Note: For some scaling points, optimal performance is achieved with 30 MPI ranks and 4 threads per rank, while for others 90 MPI ranks were optimal. All tests were run with OpenMPI 4.0.2.
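
For readers who want to verify the derived columns, speedup and scaling efficiency follow directly from the measured time per step, as this short Python check against a few rows of Figure 2 shows:

```python
# Speedup = single-VM time / N-VM time; efficiency = speedup / N.
times = {1: 18.51, 2: 8.9, 128: 0.131, 672: 0.0384}  # avg seconds per time step

t1 = times[1]
for vms, t in times.items():
    speedup = t1 / t
    efficiency = speedup / vms
    print(f"{vms:4d} VMs: speedup {speedup:7.2f}, efficiency {efficiency:6.1%}")
# 128 VMs -> speedup ~141.3, efficiency ~110% (super-linear, matching Figure 2)
```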

Azure HBv2 VMs executed the “Maria” simulation with mostly super-linear scalability up to 128 VMs (15,360 parallel processes). Improvements from scaling continue up to the largest scale tested in this exercise, 672 VMs (80,640 parallel processes), where we measured a 482x speedup over a single VM. At 512 nodes (VMs), we observe a ~2.2x performance increase compared to a leading supercomputer that debuted among the top 20 fastest machines in 2016.

The gating factor to higher levels of scaling efficiency? The 371M grid point model, even as one of the largest known WRF models, is too small at such extreme levels of parallel processing. This opens the door for leading weather forecasting organizations to leverage Azure to build and operationalize even higher-resolution models that deliver higher numerical accuracy and a more realistic understanding of these complex weather phenomena.

Visit Cormac’s blog post on the Azure Tech Community to learn how to run WRF on our family of H-series Virtual Machines, including HBv2.

Better, safer product design from hyper-realistic CFD

Computational fluid dynamics (CFD) is core to the simulation-driven businesses of many Azure customers. A common request from customers is to “10x” their capabilities while keeping costs as close to constant as possible. Specifically, customers often seek ways to significantly increase the accuracy of their models by simulating them at higher resolution. Given that many customers already solve CFD problems with ~500-1,000 parallel processes per job, this is a tall task that implies linear scaling to at least 5,000-10,000 parallel processes. Last year, Azure accomplished one of these objectives when it became the first public cloud to scale a CFD application to more than 10,000 parallel processes. With the launch of HBv2 VMs, Azure’s CFD capabilities are increasing again.

Jon Shelley, also a member of the Azure Global HPC team, worked with Siemens to validate one of its largest CFD simulations ever: a 1-billion-cell model of a sports car named after the famed 24 Hours of Le Mans race, with a 10x higher-resolution mesh than what Azure tested just last year. Jon has published a guide to running Simcenter STAR-CCM+ at large scale on HBv2 VMs.

Siemens Simcenter Star-CCM+ 14.06 benchmark results

Figure 3: Simcenter STAR-CCM+ Scaling Efficiency from 1 to 640 Azure HBv2 VMs

| Nodes (VMs) | Parallel Processes | Solver Elapsed Time (s) | Scaling Efficiency | Speedup (VM-based) |
|---|---|---|---|---|
| 8 | 928 | 337.71 | 100% | 1.00 |
| 16 | 1,856 | 164.79 | 102.5% | 2.05 |
| 32 | 3,712 | 82.07 | 102.9% | 4.11 |
| 64 | 7,424 | 41.02 | 102.9% | 8.23 |
| 128 | 14,848 | 20.94 | 100.8% | 16.13 |
| 256 | 29,696 | 12.02 | 87.8% | 28.10 |
| 320 | 37,120 | 9.57 | 88.2% | 35.29 |
| 384 | 44,544 | 7.117 | 98.9% | 47.45 |
| 512 | 59,392 | 6.417 | 82.2% | 52.63 |
| 640 | 57,600 | 5.03 | 83.9% | 67.14 |

Figure 4: Scaling and configuration data for STAR-CCM+ on Azure HBv2 VMs

Note: A given scaling point may achieve optimal performance with 90, 112, 116, or 120 parallel processes per VM; the plotted data shows the optimal performance figures. All tests were run with HPC-X MPI ver. 2.50.

Once again, Azure HBv2 executed the challenging problem with linear efficiency to more than 15,000 parallel processes across 128 VMs. From there, high scaling efficiency continued, peaking at nearly 99 percent at more than 44,000 parallel processes. At the largest scale of 640 VMs and 57,600 parallel processes, HBv2 delivered 84 percent scaling efficiency. This is among the largest scaling CFD simulations with Simcenter STAR-CCM+ ever performed, and it can now be replicated by Azure customers.

Visit Jon’s blog post on the Azure Tech Community site to learn how to run Simcenter STAR-CCM+ on our family of H-series Virtual Machines, including HBv2.

Extreme HPC I/O meets cost-efficiency

An increasingly common scenario in the cloud is on-demand HPC-grade parallel filesystems. The rationale is straightforward: if a customer needs to perform a large quantity of compute, that customer often needs to move a lot of data into and out of those compute resources. The catch? Simple cost comparisons against traditional on-premises HPC filesystem appliances can be unfavorable, depending on circumstances. With Azure HBv2 VMs, however, NVMeDirect technology can be combined with ultra-low-latency RDMA capabilities to deliver on-demand “burst buffer” parallel filesystems at no additional cost beyond the HBv2 VMs already provisioned for compute purposes.

BeeGFS is one such filesystem, with a rapidly growing user base among both entry-level and extreme-scale users. Its on-demand variant, BeeOND, is even used in production on the novel HPC + AI hybrid supercomputer “Tsubame 3.0.”

Here is a high-level summary of how a sample BeeOND filesystem looks when created across 352 HBv2 VMs, providing 308 terabytes of usable, high-performance namespace.

Overview of example BeeOND filesystem on HBv2 VMs

Figure 5: Overview of example BeeOND filesystem on HBv2 VMs.

Running the widely used IOR test of parallel filesystems across 352 HBv2 VMs, BeeOND achieved peak read performance of 763 gigabytes per second and peak write performance of 352 gigabytes per second.
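
A quick bit of arithmetic puts those aggregate figures in per-VM terms:

```python
# Aggregate IOR throughput divided by VM count (values from the run above).
vms = 352
read_gbs, write_gbs = 763, 352   # peak aggregate GB/s

print(f"read : {read_gbs / vms:.2f} GB/s per VM")   # ~2.17 GB/s
print(f"write: {write_gbs / vms:.2f} GB/s per VM")  # ~1.00 GB/s
```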

Visit Cormac’s blog post on the Azure Tech Community to learn how to run BeeGFS on RDMA-powered Azure Virtual Machines.

10x-ing the cloud HPC experience

Microsoft Azure is committed to delivering to our customers a world-class HPC experience, and maximum levels of performance, price/performance, and scalability.

“The 2nd Gen AMD EPYC processors provide fantastic core scaling, access to massive memory bandwidth and are the first x86 server processors that support PCIe 4.0; all of these features enable some of the best high-performance computing experiences for the industry,” said Ram Peddibhotla, corporate vice president, Data Center Product Management, AMD. “What Azure has done for HPC in the cloud is amazing; demonstrating that HBv2 VMs and 2nd Gen EPYC processors can deliver supercomputer-class performance, MPI scalability, and cost efficiency for a variety of real-world HPC workloads, while democratizing access to HPC that will help drive the advancement of science and research.”

“200 gigabit HDR InfiniBand delivers high data throughput, extremely low latency, and smart In-Network Computing engines, enabling high performance and scalability for compute and data applications. We are excited to collaborate with Microsoft to bring the InfiniBand advantages into Azure, providing users with leading HPC cloud services,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies. “By taking advantage of InfiniBand RDMA and its MPI acceleration engines, Azure delivers higher performance compared to other cloud options based on Ethernet. We look forward to continuing to work with Microsoft to introduce future generations and capabilities.”

Azure Cost Management + Billing updates – February 2020

Whether you’re a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We’re always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Let’s dig into the details.

 

New Power BI reports for Azure reservations and Azure Hybrid Benefit

Azure Cost Management + Billing offers several ways to report on your cost and usage data. You can start in the portal, download data or schedule an automated export for offline analysis, or even integrate with Cost Management APIs directly. But maybe you just need detailed reporting alongside other business reports. This is where Power BI comes in. We last talked about the addition of reservation purchases in the Azure Cost Management Power BI connector in October. Building on top of that, the new Azure Cost Management Power BI app offers an extensive set of reports to get you started, including detailed reservation and Azure Hybrid Benefit reports.

The Account overview offers a summary of all usage and purchases as well as your credit balance to help you track monthly expenses. From here, you can dig into usage costs broken down by subscription, resource group, or service in additional pages. Or, if you simply want to see your prices, take a look at the Price sheet page.

If you’re already using Azure Hybrid Benefit (AHB) or have existing, unused on-prem Windows licenses, check out the Windows Server AHB Usage page. Start by checking how many VMs currently have AHB enabled to determine if you have additional licenses that could help you further lower your costs. If you do have additional licenses, you can also identify eligible VMs based on their core/vCPU count. Apply AHB to your most expensive VMs to maximize your potential savings.

Azure Hybrid Benefit (AHB) report in the new Azure Cost Management Power BI app

If you’re using Azure reservations, or are interested in the potential savings you could benefit from if you did, you’ll want to check out the VM RI coverage pages to identify any new opportunities where you can save with new reservations, including the historical usage that explains why each reservation is recommended. You can drill into a specific region or instance size flexibility group and more. You can see your past purchases in the RI purchases page and get a breakdown of those costs by region, subscription, or resource group in the RI chargeback page, if you need to do any internal chargeback. And don’t forget to check out the RI savings page, where you can see how much you’ve saved so far by using Azure reservations.

Azure reservation coverage report in the new Azure Cost Management Power BI app

This is just the first release of a new generation of Power BI reports. Get started with the Azure Cost Management Power BI quickstart today and let us know what you’d like to see next.

 

Quicker access to help and support

Learning something new can be a challenge, especially when it’s not your primary focus. But given how critical managing costs is to meeting your financial goals, getting help and support needs to be front and center. To support this, Cost Management now includes a contextual Help menu to direct you to documentation and support experiences.

Get started with a quickstart tutorial and, when you’re ready to automate that experience or integrate it into your own apps, check out the API reference. If you have any suggestions on how the experience could be improved for you, please don’t hesitate to share your feedback. If you run into an issue or see something that doesn’t make sense, start with Diagnose and solve problems, and if you don’t see a solution, then please do submit a new support request. We’re closely monitoring all feedback and support requests to identify ways the experience could be streamlined for you. Let us know what you’d like to see next.

Help menu in Azure Cost Management showing options to navigate to a Quickstart tutorial, API reference, Feedback, Diagnose and solve problems, and New support request

 

We need your feedback

As you know, we’re always looking for ways to learn more about your needs and expectations. This month, we’d like to learn more about how you report on and analyze your cloud usage and costs in a brief survey. We’ll use your inputs from this survey to inform ease of use and navigation improvements within Cost Management + Billing experiences. The 15-question survey should take about 10 minutes.

Take the survey.

 

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

  • Get started quicker with the cost analysis Home view
    Azure Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
  • New: More details in the cost by resource view
    Drill in to the cost of your resources to break them down by meter. Simply expand the row to see more details or click the link to open and take action on your resources.
  • New: Explain what “not applicable” means
    Break down “not applicable” to explain why specific properties don’t have values within cost analysis.

Of course, that’s not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

 

Drill in to the costs for your resources

Resources are the fundamental building block in the cloud. Whether you’re using the cloud as infrastructure or componentized microservices, you use resources to piece together your solution and achieve your vision. And how you use these resources ultimately determines what you’re billed for, which breaks down to individual “meters” for each of your resources. Each service tracks a unique set of meters covering time, size, or other generalized units. The more units you use, the higher the cost.

Today, you can see costs broken down by resource or meter with built-in views, but seeing both together requires additional filtering and grouping to get down to the data you need, which can be tedious. To simplify this, you can now expand each row in the Cost by resource view to see the individual meters that contribute to the cost of that resource.
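
If you prefer to pull the same breakdown programmatically, here is a hedged sketch with the azure-mgmt-costmanagement Python package. The dimension names and aggregation column are assumptions based on the public query API, not something prescribed by this post:

```python
# Hypothetical example: month-to-date cost grouped by resource and meter.
from azure.identity import DefaultAzureCredential
from azure.mgmt.costmanagement import CostManagementClient

client = CostManagementClient(DefaultAzureCredential())
scope = "/subscriptions/<subscription-id>"

result = client.query.usage(scope, {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
        "grouping": [
            {"type": "Dimension", "name": "ResourceId"},
            {"type": "Dimension", "name": "Meter"},
        ],
    },
})

for row in result.rows:
    print(row)  # e.g. [cost, resource id, meter, currency]
```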

Cost by resource view showing a breakdown of meters under a resource

This additional clarity and transparency should help you better understand the costs you’re accruing for each resource at the lowest level. And if you see a resource that shouldn’t be running, simply click the name to open the resource, where you can stop or delete it to avoid incurring additional cost.

You can see the updated Cost by resource view in Cost Management Labs today, while in preview. Let us know if you have any feedback. We’d love to know what you’d like to see next. This should be available everywhere within the next few weeks.

 

Understanding why you see “not applicable”

Azure Cost Management + Billing includes all usage, purchases, and refunds for your billing account. Seeing every line item in the full usage and charges file allows you to reconcile your bill at the lowest level, but since each of these records has different properties, aggregating them within cost analysis can result in groups of empty properties. This is when you see “not applicable” today.

Now, in Cost Management Labs, you can see these costs broken down and categorized into separate groups to bring additional clarity and explain what each represents. Here are a few examples:

  • You may see Other classic resources for any classic resources that don’t include resource group in usage data when grouping by resource or resource group.
  • If you’re using any services that aren’t deployed to resource groups, like Security Center or Azure DevOps (Visual Studio Online), you will see Other subscription resources when grouping by resource group.
  • You may recall seeing Untagged costs when grouping by a specific tag. This group is now broken down further into Tags not available and Tags not supported groups. These signify services that don’t include tags in usage data (see How tags are used) and costs that can’t be tagged, like purchases and resources not deployed to resource groups, covered above.
  • Since purchases aren’t associated with an Azure resource, you might see Other Azure purchases or Other Marketplace purchases when grouping by resource, resource group, or subscription.
  • You may also see Other Marketplace purchases when grouping by reservation. This represents other purchases, which aren’t associated with a reservation.
  • If you have a reservation, you may see Unused reservation when viewing amortized costs and grouping by resource, resource group, or subscription. This represents the unused portion of your reservation that isn’t associated with any resources. These costs will only be visible from your billing account or billing profile.

Of course, these are just a few examples. You may see more. When there simply isn’t a value, you’ll see something like No department, as an example, which represents Enterprise Agreement (EA) subscriptions that aren’t grouped into a department.

We hope these changes help you better understand your cost and usage data. You can see this today in Cost Management Labs while in preview. Please check it out and let us know if you have any feedback. This should be available everywhere within the next few weeks.

 

Upcoming changes to Azure usage data

Many organizations use the full Azure usage and charges file to understand what’s being used, identify what charges should be internally billed to which teams, and look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit, just to name a few. If you’re doing any analysis or have set up integration based on product details in the usage data, please update your logic for the following services.

The following change will start effective March 1:

Also, remember the key-based Enterprise Agreement (EA) billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal.
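
As a starting point for that migration, here is a minimal sketch using the azure-mgmt-consumption Python package; the scope string and printed fields are illustrative, not a prescribed integration:

```python
# Hypothetical example: list usage details at subscription scope via the
# Azure Resource Manager Consumption API.
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

sub_id = "<subscription-id>"
client = ConsumptionManagementClient(DefaultAzureCredential(), sub_id)

scope = f"/subscriptions/{sub_id}"
for item in client.usage_details.list(scope, expand="meterDetails"):
    # each record carries the product and meter details used for chargeback
    print(item.name, getattr(item, "product", ""))
```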

 

New videos and learning opportunities

For those visual learners out there, here are two new resources you should check out:

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next!

 

Documentation updates

There were lots of documentation updates. Here are a few you might be interested in:

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What’s next?

These are just a few of the big updates from last month. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

Fileless attack detection for Linux in preview

This blog post was co-authored by Aditya Joshi, Senior Software Engineer, Enterprise Protection and Detection.

Attackers are increasingly employing stealthier methods to avoid detection. Fileless attacks exploit software vulnerabilities, inject malicious payloads into benign system processes, and hide in memory. These techniques minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions.

To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018. Our blog post from 2018 explains how Security Center can detect shellcode, code injection, payload obfuscation techniques, and other fileless attack behaviors on Windows. Our research indicates a rise of fileless attacks on Linux workloads as well.

Today, Azure Security Center is happy to announce a preview for detecting fileless attacks on Linux.  In this post, we will describe a real-world fileless attack on Linux, introduce our fileless attack detection capabilities, and provide instructions for onboarding to the preview. 

Real-world fileless attack on Linux

One common pattern we see is attackers injecting payloads from packed malware on disk into memory and deleting the original malicious file from the disk. Here is a recent example:

  1. An attacker infects a Hadoop cluster by identifying the service running on a well-known port (8088) and uses Hadoop YARN unauthenticated remote command execution support to achieve runtime access on the machine. Note that the owner of the subscription could have mitigated this stage of the attack by configuring Security Center JIT.
  2. The attacker copies a file containing packed malware into a temp directory and launches it.
  3. The malicious process unpacks the file using shellcode to allocate a new dynamic executable region of memory in the process’s own memory space and injects an executable payload into the new memory region.
  4. The malware then transfers execution to the injected ELF entry point.
  5. The malicious process deletes the original packed malware from disk to cover its tracks. 
  6. The injected ELF payload contains a shellcode that listens for incoming TCP connections, through which the attacker transmits instructions.

This attack is difficult for scanners to detect. The payload is hidden behind layers of obfuscation and only present on disk for a short time.  With the fileless attack detection preview, Security Center can now identify these kinds of payloads in memory and inform users of the payload’s capabilities.

Fileless attack detection capabilities

Like fileless attack detection for Windows, this feature scans the memory of all processes for evidence of fileless toolkits, techniques and behaviors. Over the course of the preview, we will be enabling and refining our analytics to detect the following behaviors of userland malware:

  • Well-known toolkits and crypto mining software.
  • Shellcode, injected ELF executables, and malicious code in executable regions of process memory (illustrated in the sketch after this list).
  • LD_PRELOAD-based rootkits that preload malicious libraries.
  • Elevation of privilege of a process from non-root to root.
  • Remote control of another process using ptrace.
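
To make the shellcode bullet concrete, here is a simplified stand-alone heuristic, explicitly not Security Center's implementation: it flags processes that map anonymous memory regions as both writable and executable, a common footprint of injected code on Linux:

```python
# Toy detector: report processes with writable+executable anonymous mappings.
# Real memory forensics is far more involved; this only sketches the idea.
import os

def scan_maps():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as maps:
                for line in maps:
                    fields = line.split()
                    perms = fields[1]            # e.g. "rwxp"
                    anonymous = len(fields) < 6  # no backing file path
                    if perms.startswith("rwx") and anonymous:
                        print(f"pid {pid}: anonymous rwx region {fields[0]}")
                        break
        except (FileNotFoundError, PermissionError):
            continue  # process exited, or we lack privileges to read it

if __name__ == "__main__":
    scan_maps()
```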

In the event of a detection, you receive an alert in the Security alerts page. Alerts contain supplemental information such as the kind of techniques used, process metadata, and network activity. This enables analysts to have a greater understanding of the nature of the malware, differentiate between different attacks, and make more informed decisions when choosing remediation steps.


The scan is non-invasive and does not affect other processes on the system. The vast majority of scans run in less than five seconds. The privacy of your data is protected throughout this procedure, as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.

Getting started

To sign up for this specific preview, or our ongoing preview program, indicate your interest in the “Fileless attack detection preview.”

Once you choose to onboard, this feature is automatically deployed to your Linux machines as an extension to the Log Analytics Agent for Linux (also known as the OMS Agent), which supports the Linux OS distributions described in this documentation. This solution supports Azure, cross-cloud, and on-premises environments. Participants must be enrolled in the Standard or Standard Trial pricing tier to benefit from this feature.

To learn more about Azure Security Center, visit the Azure Security Center page.

Burst 4K encoding on Azure Kubernetes Service

Burst encoding in the cloud with Azure and the Media Excel HERO platform.

Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K to the mix, and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads: Azure Stack supports hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.

Finding a solution

Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises needing to push 4K into the Azure datacenter, the immediate problem we face is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. In general, it’s a best practice to ingest into multiple regions, which further increases the load on the network connection between the encoder and the Azure datacenter.

How do we ingest 4K content reliably into the public cloud?

Alternatively, we can encode the content in the cloud. If we can run 4K/UHD live encoding in Azure, its output can be ingested into Azure Media Services over the intra-Azure network backbone which provides sufficient bandwidth and reliability.

How can we reliably run and scale 4K/UHD live encoding on the Azure cloud as a containerized solution? Let’s explore below. 

Azure Kubernetes Service

With Azure Kubernetes Service (AKS), Microsoft offers a managed Kubernetes platform to customers. It is a hosted Kubernetes platform that spares you the configuration burden of creating a cluster: networking, cluster masters, and OS patching of the cluster nodes. It also comes with pre-configured monitoring that integrates seamlessly with Azure Monitor and Log Analytics, while still offering the flexibility to integrate your own tools. Furthermore, it is still plain vanilla Kubernetes and as such is fully compatible with any existing tooling you might have running on any other standard Kubernetes platform.

Media Excel encoding

Media Excel is an encoding and transcoding vendor offering physical-appliance and software-based encoding solutions. Media Excel has been partnering with Microsoft for many years, engaging in Azure media customer projects, and is listed as a recommended and tested contribution encoder for Azure Media Services for fMP4. Work has also been done by both Media Excel and Microsoft to integrate SCTE-35 timed metadata from the Media Excel encoder to an Azure Media Services origin, supporting Server-Side Ad Insertion (SSAI) workflows.

Networking challenge

With increasing picture quality like 4K and 8K, the burden on both compute and networking becomes a significant architecting challenge. In a recent engagement with a customer, we needed to architect a 4K live streaming platform against the challenge of limited bandwidth capacity from the customer premises to one of our Azure datacenters. We worked with Media Excel to build a scalable containerized encoding platform on AKS, utilizing cloud compute and minimizing network latency between the encoder and the Azure Media Services packager. Multiple bitrates of the same 4K source are generated in the cloud and ingested into the Azure Media Services platform for further processing, including dynamic encryption and packaging. This setup enables the following benefits:

  • Instant scale to multiple AKS nodes
  • Eliminate network constraints between customer and Azure Datacenter
  • Automated workflow for containers and easy separation of concerns with container technology
  • Increased level of security of high-quality generated content to distribution
  • Highly redundant capability
  • Flexibility to provide various types of Node pools for optimized media workloads

In this particular test, we proved that the intra-Azure network is extremely capable of shipping high-bandwidth, latency-sensitive 4K packets from a containerized encoder instance running in West Europe to both the East US and Hong Kong datacenter regions. This allows the customer to place an origin closer to them for further content conditioning.

High-level Architecture of used Azure components for 4K encoding in the Azure cloud.

Workflow:

  1. Azure Pipelines is triggered to deploy onto the AKS cluster. In the YAML file (which you can find on GitHub) there is a reference to the Media Excel container in Azure Container Registry.
  2. AKS starts the deployment and pulls the container from Azure Container Registry.
  3. During container start, a custom PHP script is loaded and the container is added to the HMS (Hero Management Service) and placed into the correct device pool and job.
  4. The encoder loads the source and (in this case) pushes a 4K livestream into Azure Media Services.
  5. Media Services packages the livestream into multiple formats and applies DRM (digital rights management).
  6. Azure Content Delivery Network scales the livestream.

Scale through Azure Container Instances

With Azure Kubernetes Service you get the power of Azure Container Instances out of the box. Azure Container Instances are a way to instantly scale to pre-provisioned compute power at your disposal. When deploying Media Excel encoding instances to AKS, you can specify where these instances will be created. This offers you the flexibility to work with variables like increased density on cheaper nodes for low-cost, low-priority encoding jobs, or more expensive nodes for high-throughput, high-priority jobs. With Azure Container Instances you can instantly move workloads to standby compute power without provisioning time. You only pay for the compute time, offering full flexibility for customer demand and future changes in platform needs. With Media Excel’s flexible live/file-based encoding roles you can easily move workloads across the different compute power offered by AKS and Azure Container Instances.

Container creation on Azure Kubernetes Service (AKS).

Media Excel Hero Management System showing all Container Instances.

Azure DevOps pipeline to bring it all together

All the general benefits that come with containerized workloads apply in the following case. For this particular proof of concept, we created an automated deployment pipeline in Azure DevOps for easy testing and deployment. With a deployment YAML and a pipeline YAML we can easily automate the deployment, provisioning, and scaling of a Media Excel encoding container. Once DevOps pushes the deployment job onto AKS, a container image is pulled from Azure Container Registry. Although container images can be bulky, node-side caching of layers brings any additional container pull down to seconds. With the help of Media Excel, we created a YAML file containing pre- and post-container lifecycle logic that adds and removes a container from Media Excel’s management portal. This offers easy single-pane-of-glass management of multiple instances across multiple node types, clusters, and regions.

This deployment pipeline offers full flexibility to provision certain multi-tenant customers or job priorities on specific node types. This unlocks the possibility of provisioning encoding jobs on GPU-enabled nodes for maximum throughput or using cheaper generic nodes for low-priority jobs.

Deployment Release Pipeline in Azure DevOps.

Azure Media Services and Azure Content Delivery Network

Finally, we push the 4K stream into Azure Media Services. Azure Media Services is a cloud-based platform that enables you to build solutions that achieve broadcast-quality video streaming, enhance accessibility and distribution, analyze content, and much more. Whether you’re an app developer, a call center, a government agency, or an entertainment company, Media Services helps you create apps that deliver media experiences of outstanding quality to large audiences on today’s most popular mobile devices and browsers.

Azure Media Services is seamlessly integrated with Azure Content Delivery Network. With Azure Content Delivery Network we offer a true multi-CDN, with a choice of Azure Content Delivery Network from Microsoft, Azure Content Delivery Network from Verizon, and Azure Content Delivery Network from Akamai, all through a single Azure Content Delivery Network API for easy provisioning and management. As an added benefit, all CDN traffic between the Azure Media Services origin and the CDN edge is free of charge.

With this setup, we’ve demonstrated that Cloud encoding is ready to handle real-time 4K encoding across multiple clusters. Thanks to Azure services like AKS, Container Registry, Azure DevOps, Media Services, and Azure Content Delivery Network, we demonstrated how easy it is to create an architecture that is capable of meeting high throughput time-sensitive constraints.

Azure Security Center for IoT RSA 2020 announcements

We announced the general availability of Azure Security Center for IoT in July 2019. Since then, we have seen a lot of interest from both our customers and partners. Our team has been working on enhancing the capabilities we offer our customers to secure their IoT solutions. As our team gets ready to attend the RSA conference next week, we are sharing the new capabilities we have in Azure Security Center for IoT.

As organizations pursue digital transformation by connecting vital equipment or creating new connected products, IoT deployments will get bigger and more common. In fact, the International Data Corporation (IDC) forecasts that IoT will continue to grow at double-digit rates until IoT spending surpasses $1 trillion in 2022. As these IoT deployments come online, newly connected devices will expand the attack surface available to attackers, creating opportunities to target the valuable data generated by IoT. Organizations are challenged with securing their IoT deployments end to end, from the devices to the applications and data, including the connections between them.

Why Azure Security Center for IoT?

Azure Security Center for IoT provides threat protection and security posture management designed for securing entire IoT deployments, including Microsoft and 3rd party devices. Azure Security Center for IoT is the first IoT security service from a major cloud provider that enables organizations to prevent, detect, and help remediate potential attacks on all the different components that make up an IoT deployment—from small sensors, to edge computing devices and gateways, to Azure IoT Hub, and on to the compute, storage, databases, and AI or machine learning workloads that organizations connect to their IoT deployments. This end-to-end protection is vital to secure IoT deployments.

Added support for Azure RTOS operating system

Azure RTOS is a comprehensive suite of real-time operating systems (RTOS) and libraries for developing embedded real-time IoT applications on microcontroller (MCU) devices. It includes Azure RTOS ThreadX, a leading RTOS with off-the-shelf support for most leading chip architectures and embedded development tools. Azure Security Center for IoT extends support to the Azure RTOS operating system in addition to the Linux (Ubuntu, Debian) and Windows 10 IoT Core operating systems. Azure RTOS will be shipped with a built-in security module that covers common threats on real-time operating system devices. The offering includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that help improve the security hygiene of the device.

New Azure Sentinel connector

As information technology, operational technology, and the Internet of Things converge, customers are faced with rising threats.

Azure Security Center for IoT announces the availability of an Azure Sentinel connector that provides onboarding of IoT data workloads into Sentinel from Azure IoT Hub-managed deployments. This integration provides investigation capabilities on IoT assets from Azure Sentinel, allowing security pros to combine IoT security data with data from across the organization for artificial intelligence or advanced analysis. With the Azure Sentinel connector you can now monitor alerts across all your IoT Hub deployments, act upon potential risks, inspect and triage your IoT incidents, and run investigations to track an attacker’s lateral movement within your network.

With this new announcement, Azure Sentinel is the first security information and event management (SIEM) with native IoT support, allowing SecOps and analysts to identify threats in the complex converged networks.

Microsoft Intelligent Security Association partnership program for IoT security vendors

Through partnering with members of the Microsoft Intelligent Security Association, Microsoft is able to leverage a vast knowledge pool to defend against a world of increasing IoT threats in enterprise, healthcare, manufacturing, energy, building management systems, transportation, smart cities, smart homes, and more. Azure Security Center for IoT’s simple onboarding flow connects solutions, like Attivo Networks, CyberMDX, CyberX, Firedome, and SecuriThings—enabling you to protect your managed and unmanaged IoT devices, view all security alerts, reduce your attack surface with security posture recommendations, and run unified reports in a single pane of glass.

For more information on the Microsoft Intelligent Security Association partnership program for IoT security vendors check out our tech community blog.

Availability in government regions

Starting on March 1, 2020, Azure Security Center for IoT will be available on USGov Virginia and USGov Arizona regions.

Organizations can monitor their entire IoT solution, stay ahead of evolving threats, and fix configuration issues before they become threats. When combined with Microsoft’s secure-by-design devices, services, and the expertise we share with you and your partners, Azure Security Center for IoT provides an important way to reduce the risk of IoT while achieving your business goals.

To learn more about Azure Security Center for IoT please visit our documentation page. To learn more about our new partnerships please visit the Microsoft Intelligent Security Association page. Upgrade to Azure Security Center Standard to benefit from IoT security.

New Azure Firewall certification and features in Q1 CY2020

This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:

  • ICSA Labs Corporate Firewall Certification.
  • Forced tunneling support now in preview.
  • IP Groups now in preview.
  • Customer configured SNAT private IP address ranges now generally available.
  • High ports restriction relaxation now generally available.

Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

ICSA Labs Corporate Firewall Certification

ICSA Labs is a leading vendor in third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.

Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see information here. For more information, see the ICSA Labs Firewall Certification program page.

Front page of the ICSA Labs Certification Testing and Audit Report for Azure Firewall.

Figure one – Azure Firewall now ICSA Labs certified.

Forced tunneling support now in preview

Forced tunneling lets you redirect all internet bound traffic from Azure Firewall to your on-premises firewall or a nearby Network Virtual Appliance (NVA) for additional inspection. By default, forced tunneling isn’t allowed on Azure Firewall to ensure all its outbound Azure dependencies are met.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.

Within this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it’s passed to the Internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on that subnet. For more information, see the Azure Firewall forced tunneling documentation.
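
For illustration, here is a sketch of the management subnet's route configuration with the Azure SDK for Python; the resource names and location are assumptions:

```python
# Hypothetical example: route table for AzureFirewallManagementSubnet with a
# single default route to the Internet and BGP propagation disabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.route_tables.begin_create_or_update(
    "fw-rg",
    "fw-mgmt-routes",
    {
        "location": "westeurope",
        "disable_bgp_route_propagation": True,  # required on the mgmt subnet
        "routes": [
            {
                "name": "default-to-internet",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "Internet",    # the only route allowed here
            }
        ],
    },
).result()
```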

Creating a firewall with forced tunneling enabled

Figure two – Creating a firewall with forced tunneling enabled.

IP Groups now in preview

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP group a name and create one by entering IP addresses or uploading a file. IP Groups eases your management experience and reduces time spent managing IP addresses by letting you use them in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.
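
As a minimal sketch with the Azure SDK for Python (names and addresses are illustrative):

```python
# Hypothetical example: create an IP group, then reference its resource ID
# from firewall rules.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

ip_group = client.ip_groups.begin_create_or_update(
    "fw-rg",
    "branch-office-ips",
    {
        "location": "westeurope",
        "ip_addresses": ["10.10.0.0/24", "10.20.0.0/24", "172.16.5.4"],
    },
).result()

print(ip_group.id)  # use this ID as a source/destination in firewall rules
```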

Azure Firewall application rules utilize an IP group

Figure three – Azure Firewall application rules utilize an IP group.

Customer configured SNAT private IP address ranges

Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks, or opts to force tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall to not SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.
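
The sketch below shows one way this might look with the Azure SDK for Python. The additionalProperties key follows the Azure Firewall REST API; the exact SDK attribute name and the values here are assumptions for illustration:

```python
# Hypothetical example: keep the RFC 1918 defaults and add one custom
# public-as-private range to the firewall's SNAT configuration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

fw = client.azure_firewalls.get("fw-rg", "contoso-fw")
fw.additional_properties = {
    # "IANAPrivateRanges" preserves the defaults; the /24 is an assumed range
    "Network.SNAT.PrivateRanges": "IANAPrivateRanges, 203.0.113.0/24"
}
client.azure_firewalls.begin_create_or_update("fw-rg", "contoso-fw", fw).result()
```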

Azure Firewall with custom private IP address ranges

Figure four – Azure Firewall with custom private IP address ranges.

High ports restriction relaxation now generally available

Since its initial preview release, Azure Firewall had a limitation that prevented network and application rules from including source or destination ports above 64,000. This default behavior blocked RPC based scenarios and specifically Active Directory synchronization. With this new update, customers can use any port in the 1-65535 range in network and application rules.

Next steps

For more information on everything we covered above please see the following blogs, documentation, and videos.

Azure Firewall central management partners:

Azure Firewall Manager now supports virtual networks

This post was co-authored by Yair Tor, Principal Program Manager, Azure Networking.

Last November we introduced Microsoft Azure Firewall Manager preview for Azure Firewall policy and route management in secured virtual hubs. This also included integration with key Security as a Service partners: Zscaler, iboss, and soon Check Point. These partners support branch-to-internet and virtual network-to-internet scenarios.

Today, we are extending Azure Firewall Manager preview to include automatic deployment and central security policy management for Azure Firewall in hub virtual networks.

Azure Firewall Manager preview is a network security management service that provides central security policy and route management for cloud-based security perimeters. It makes it easy for enterprise IT teams to centrally define network- and application-level rules for traffic filtering across multiple Azure Firewall instances that span different Azure regions and subscriptions in hub-and-spoke architectures for traffic governance and protection. In addition, it empowers DevOps for better agility with derived local firewall security policies that are implemented across organizations.

For more information see Azure Firewall Manager documentation.

Azure Firewall Manager getting started page

Figure 1 – Azure Firewall Manager Getting Started page

 

Hub virtual networks and secured virtual hubs

Azure Firewall Manager can provide security management for two network architecture types:

  •  Secured virtual hub—An Azure Virtual WAN Hub is a Microsoft-managed resource that lets you easily create hub-and-spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a secured virtual hub.
  •  Hub virtual network—This is a standard Azure Virtual Network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a hub virtual network. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. It is also possible to manage firewalls in standalone virtual networks that are not peered to any spoke.

Whether to use a hub virtual network or a secured virtual hub depends on your scenario:

  •  Hub virtual network—Hub virtual networks are probably the right choice if your network architecture is based on virtual networks only, requires multiple hubs per region, or doesn’t use hub-and-spoke at all.
  •  Secured virtual hubs—Secured virtual hubs might address your needs better if you need to manage routing and security policies across many globally distributed secured hubs. Secured virtual hubs have high-scale VPN connectivity, SD-WAN support, and third-party Security as a Service integration. You can use Azure to secure your Internet edge for both on-premises and cloud resources.

The following comparison table in Figure 2 can assist in making an informed decision:

 

| | Hub virtual network | Secured virtual hub |
|---|---|---|
| Underlying resource | Virtual network | Virtual WAN hub |
| Hub-and-spoke | Using virtual network peering | Automated using hub virtual network connection |
| On-premises connectivity | VPN Gateway up to 10 Gbps and 30 S2S connections; ExpressRoute | More scalable VPN Gateway up to 20 Gbps and 1,000 S2S connections; ExpressRoute |
| Automated branch connectivity using SD-WAN | Not supported | Supported |
| Hubs per region | Multiple virtual networks per region | Single virtual hub per region; multiple hubs possible with multiple Virtual WANs |
| Azure Firewall – multiple public IP addresses | Customer provided | Auto-generated (to be available by general availability) |
| Azure Firewall Availability Zones | Supported | Not available in preview; to be available by general availability |
| Advanced internet security with third-party Security as a Service partners | Customer established and managed VPN connectivity to partner service of choice | Automated via Trusted Security Partner flow and partner management experience |
| Centralized route management to attract traffic to the hub | Customer-managed UDR; roadmap: UDR default route automation for spokes | Supported using BGP |
| Web Application Firewall on Application Gateway | Supported in virtual network | Roadmap: can be used in spoke |
| Network Virtual Appliance | Supported in virtual network | Roadmap: can be used in spoke |

Figure 2 – Hub virtual network vs. secured virtual hub

Firewall policy

Firewall policy is an Azure resource that contains network address translation (NAT), network, and application rule collections, as well as threat intelligence settings. It’s a global resource that can be used across multiple Azure Firewall instances in secured virtual hubs and hub virtual networks. New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps to create local firewall policies on top of an organization-mandated base policy. Policies work across regions and subscriptions.

Azure Firewall Manager orchestrates Firewall policy creation and association. However, a policy can also be created and managed via REST API, templates, Azure PowerShell, and CLI.
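
As one illustration, a base policy and a derived local policy might be created with the Python SDK roughly as follows. This is a hedged sketch, assuming the azure-mgmt-network FirewallPolicy model and its base_policy property; the resource group, policy names, and region are hypothetical.

```python
# Sketch: create an organization-wide base policy, then a local policy
# that inherits from it. Assumes azure-mgmt-network; names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import FirewallPolicy, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Organization-mandated base policy with threat intelligence in alert mode.
base = client.firewall_policies.begin_create_or_update(
    "security-rg",
    "org-base-policy",
    FirewallPolicy(location="westeurope", threat_intel_mode="Alert"),
).result()

# A derived local policy layered on top of the base policy.
client.firewall_policies.begin_create_or_update(
    "security-rg",
    "west-europe-local-policy",
    FirewallPolicy(location="westeurope", base_policy=SubResource(id=base.id)),
).result()
```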

Once a policy is created, it can be associated with a firewall in a Virtual WAN Hub (aka secured virtual hub) or a firewall in a virtual network (aka hub virtual network).

Firewall Policies are billed based on firewall associations. A policy with zero or one firewall association is free of charge. A policy with multiple firewall associations is billed at a fixed rate.

For more information, see Azure Firewall Manager pricing.

The following table compares the new firewall policies with the existing firewall rules:

 

| | Policy | Rules |
|---|---|---|
| Contains | NAT, network, and application rules, and threat intelligence settings | NAT, network, and application rules |
| Protects | Virtual hubs and virtual networks | Virtual networks only |
| Portal experience | Central management using Firewall Manager | Standalone firewall experience |
| Multiple firewall support | Firewall Policy is a separate resource that can be used across firewalls | Manually export and import rules, or use third-party management solutions |
| Pricing | Billed based on firewall associations (see pricing) | Free |
| Supported deployment mechanisms | Portal, REST API, templates, PowerShell, and CLI | Portal, REST API, templates, PowerShell, and CLI |
| Release status | Preview | General availability |

Figure 3 – Firewall Policy vs. Firewall Rules

Next steps

For more information on topics covered here, see the following blogs, documentation, and videos:

Azure Firewall central management partners:

SQL Server runs best on Azure. Here’s why.

SQL Server customers migrating their databases to the cloud have multiple choices for their cloud destination. To thoroughly assess which cloud is best for SQL Server workloads, two key factors to consider are:

  1. Innovations that the cloud provider can uniquely provide.
  2. Independent benchmark results.

What innovations can the cloud provider bring to your SQL Server workloads?

As you consider your options for running SQL Server in the cloud, it’s important to understand what the cloud provider can offer both today and tomorrow. Can they provide you with the capabilities to maximize the performance of your modern applications? Can they automatically protect you against vulnerabilities and ensure availability for your mission-critical workloads?

SQL Server customers benefit from our continued expertise developed over the past 25 years, delivering performance, security, and innovation. This includes deploying SQL Server on Azure, where we provide customers with innovations that aren’t available anywhere else. One great example of this is Azure BlobCache, which provides fast, free reads for customers. This feature alone provides tremendous value to our customers that is simply unmatched in the market today.

Additionally, we offer preconfigured, built-in security and management capabilities that automate tasks like patching, high availability, and backups. Azure also offers advanced data security that enables both vulnerability assessments and advanced threat protection. Customers benefit from all of these capabilities both when using our Azure Marketplace images and when self-installing SQL Server on Azure virtual machines.

Only Azure offers these innovations.

What are their performance results on independent, industry-standard benchmarks?

Benchmarks can often be useful tools for assessing your cloud options. It’s important, though, to ask if those benchmarks were conducted by independent third parties and whether they used today’s industry-standard methods.

Bar graphs comparing the performance and price-performance differences between Azure and AWS.

The images above show performance and price-performance comparisons from the February 2020 GigaOm performance benchmark blog post.

In December, an independent study by GigaOm compared SQL Server on Azure Virtual Machines to AWS EC2 using a field test derived from the industry standard TPC-E benchmark. GigaOm found Azure was up to 3.4x faster and 87 percent cheaper than AWS. Today, we are pleased to announce that in GigaOm’s second benchmark analysis, using the latest virtual machine comparisons and disk striping, Azure was up to 3.6x faster and 84 percent cheaper than AWS.1 

These results continue to demonstrate that SQL Server runs best on Azure.

Get started today

Learn more about how you can start taking advantage of these benefits today with SQL Server on Azure.

 


1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in February 2020. The study compared price performance between SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in Azure E32as_v4 instance type with P30 Premium SSD Disks and the SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in AWS EC2 r5a.8xlarge instance type with General Purpose (gp2) volumes. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of January 2020. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.

Announcing the preview of Azure Shared Disks for clustered applications

Today, we are announcing the limited preview of Azure Shared Disks, the industry’s first shared cloud block storage. Azure Shared Disks enables the next wave of block storage workloads migrating to the cloud, including the most demanding enterprise applications currently running on-premises on Storage Area Networks (SANs). These include clustered databases, parallel file systems, persistent containers, and machine learning applications. This unique capability enables customers to run latency-sensitive workloads without compromising on well-known deployment patterns for fast failover and high availability. This includes applications built for Windows or Linux-based clustered filesystems like Global File System 2 (GFS2).

With Azure Shared Disks, customers now have the flexibility to migrate clustered environments running on Windows Server, including Windows Server 2008 (which has reached End-of-Support), to Azure. This capability is designed to support SQL Server Failover Cluster Instances (FCI), Scale-out File Servers (SoFS), Remote Desktop Servers (RDS), and SAP ASCS/SCS running on Windows Server.

We encourage you to get started and request access by filling out this form.

Leveraging Azure Shared Disks

Azure Shared Disks provides a consistent experience for applications running on clustered environments today. This means that any application that currently leverages SCSI Persistent Reservations (PR) can use this well-known set of commands to register nodes in the cluster to the disk. The application can then choose from a range of supported access modes for one or more nodes to read or write to the disk. These applications can deploy in highly available configurations while also leveraging Azure Disk durability guarantees.
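
For example, a shared Premium SSD data disk might be provisioned with the Python compute SDK along these lines. This is a sketch under the assumption that the azure-mgmt-compute Disk resource exposes the maxShares setting (max_shares in the SDK); the resource group, disk name, and region are hypothetical.

```python
# Sketch: create a 1 TB Premium SSD that two failover-cluster nodes can
# attach simultaneously. Assumes azure-mgmt-compute; max_shares controls
# how many VMs may attach the disk at once. Names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.disks.begin_create_or_update(
    "cluster-rg",
    "sql-fci-data-disk",
    {
        "location": "eastus",
        "sku": {"name": "Premium_LRS"},
        "disk_size_gb": 1024,
        "creation_data": {"create_option": "Empty"},
        "max_shares": 2,  # allow both cluster nodes to attach the disk
    },
)
shared_disk = poller.result()
```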

The diagram below illustrates a sample two-node clustered database application orchestrating failover from one node to the other.
   2-node failover cluster
The flow is as follows:

  1. The clustered application running on both Azure VM 1 and Azure VM 2 registers the intent to read or write to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk.
  3. This reservation is enforced on Azure Disk and the database can now exclusively write to the disk. Any writes from the application instance on Azure VM 2 will not succeed.
  4. If the application instance on Azure VM 1 goes down, the instance on Azure VM 2 can now initiate a database failover and take over the disk.
  5. This reservation is now enforced on the Azure Disk, and it will no longer accept writes from the application on Azure VM 1. It will now only accept writes from the application on Azure VM 2.
  6. The clustered application can complete the database failover and serve requests from Azure VM 2.

The diagram below illustrates another common workload, in which multiple nodes read data from the disk to run parallel jobs, for example, training machine learning models.
   n-node cluster with multiple readers
The flow is as follows:

  1. The application registers all virtual machines to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk while opening up reads from other Virtual Machines.
  3. This reservation is enforced on Azure Disk.
  4. All nodes in the cluster can now read from the disk. Only one node writes results back to the disk on behalf of all the nodes in the cluster.

Disk types, sizes, and pricing

Azure Shared Disks are available on Premium SSDs and support disk sizes P15 (256 GB) and greater. Support for Azure Ultra Disk will be available soon. Azure Shared Disks can be enabled as data disks only (not OS disks). Each additional mount to an Azure Shared Disk (Premium SSDs) is charged based on disk size. Please refer to the Azure Disks pricing page for details on limited preview pricing.

Azure Shared Disks vs Azure Files

Azure Shared Disks provides shared access to block storage which can be leveraged by multiple virtual machines. You will need to use a cluster manager, such as Windows Server Failover Cluster (WSFC), Pacemaker, or Corosync, for node-to-node communication and to enable write locking. If you are looking for a fully managed file service on Azure that can be accessed using the Server Message Block (SMB) or Network File System (NFS) protocol, check out Azure Premium Files or Azure NetApp Files.

Getting started

You can create Azure Shared Disks using Azure Resource Manager templates. For details on how to get started and use Azure Shared Disks in preview, please refer to the documentation page. For updates on regional availability and Ultra Disk availability, please refer to the Azure Disks FAQ. Here is a video of Mark Russinovich from Microsoft Ignite 2019 covering Azure Shared Disks.

In the coming weeks, we will be enabling Portal and SDK support. Support for Azure Backup and Azure Site Recovery is currently not available. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

If you are interested in participating in the preview, you can now get started by requesting access.

Microsoft Connected Vehicle Platform: trends and investment areas

This post was co-authored by the extended Azure Mobility Team.

The past year has been eventful for a lot of reasons. At Microsoft, we’ve expanded our partnerships, including Volkswagen, LG Electronics, Faurecia, TomTom, and more, and taken the wraps off new thinking such as at CES, where we recently demonstrated our approach to in-vehicle compute and software architecture.

Looking ahead, areas that were once nominally related now come into sharper focus as the supporting technologies are deployed and the various industry verticals mature. The welcoming of a new year is a good time to pause and take in what is happening in our industry and in related ones, with an aim of developing a view on where it’s all heading.

In this blog, we will talk about the trends that we see in connected vehicles and smart cities and describe how we see ourselves fitting in and contributing.

Trends

Mobility as a Service (MaaS)

MaaS (sometimes referred to as Transportation as a Service, or TaaS) is about people getting to goods and services and getting those goods and services to people. Ride-hailing and ride-sharing come to mind, but so do many other forms of MaaS offerings, such as air taxis, autonomous drone fleets, and last-mile delivery services. We inherently believe that completing a single trip—of a person or goods—will soon require a combination of passenger-owned vehicles, ride-sharing, ride-hailing, autonomous taxis, and bicycle- and scooter-sharing services transporting people on land, sea, and in the air (what we refer to as “multi-modal routing”). Service offerings that link these different modes of transportation will be key to making this natural for users.

With Ford, we are exploring how quantum algorithms can help improve urban traffic congestion and develop a more balanced routing system. We’ve also built strong partnerships with TomTom for traffic-based routing, and with AccuWeather for current and forecast weather reports that increase awareness of weather events along the route. In 2020, we will be integrating these routing methods together and making them available as part of the Azure Maps service and API. Because mobility spans experiences throughout the day and across various modes of transportation (finding pickup locations, planning trips from home and work, and doing errands along the way), Azure Maps ties the mobility journey together with cloud APIs and iOS and Android SDKs that deliver in-app mobility and mapping experiences. Coupled with a connected vehicle architecture that integrates federated user authentication and the Microsoft Graph and securely provisions vehicles, digital assistants can support mobility end to end. The same technologies can be used in moving goods and retail delivery systems.

The pressure to become profitable will force changes and consolidation among the MaaS providers and will keep their focus on approaches to reducing costs such as through autonomous driving. Incumbent original equipment manufacturers (OEMs) are expanding their businesses to include elements of car-sharing to continue evolving their businesses as private car ownership is likely to decline over time.

Connecting vehicles to the cloud

We refer holistically to these various signals that can inform vehicle routing (traffic, weather, available modalities, municipal infrastructure, and more) as “navigation intelligence.” Taking advantage of this navigation intelligence will require connected vehicles to become more sophisticated than just logging telematics to the cloud.

The reporting of basic telematics (car-to-cloud) is barely table-stakes; over-the-air updates (OTA, or cloud-to-car) will become key to delivering a market-competitive vehicle, as will command-and-control (more cloud-to-car, via phone apps). Forward-thinking car manufacturers deserve a lot of credit here for showing what’s possible and for creating in consumers the expectation that the appearance of new features in the car after it is purchased isn’t just cool, but normal.

Future steps include the integration of in-vehicle infotainment (IVI) with voice assistants that blend the in- and out-of-vehicle experiences, updating AI models for in-market vehicles for automated driving levels one through five, and of course pre-processing the telemetry at the edge in order to better enable reinforcement learning in the cloud as well as just generally improving services.

Delivering value from the cloud to vehicles and phones

As vehicles become more richly connected and deliver experiences that overlap with what we’ve come to expect from our phones, an emerging question is, what is the right way to make these work together? Projecting to the IVI system of the vehicle is one approach, but most agree that vehicles should have a great experience without a phone present.

Separately, phones are a great proxy for “a vehicle” in some contexts, such as bicycle sharing: they provide speed, location, and various other probe data, and they supply connectivity (while subsidizing the associated costs) for low-powered electronics on the vehicle.

This is probably a good time to mention 5G. The opportunity 5G brings will have a ripple effect across industries. It will be a critical foundation for the continued rise of smart devices, machines, and things. They can speak, listen, see, feel, and act using sensitive sensor technology as well as data analytics and machine learning algorithms without requiring “always on” connectivity. This is what we call the intelligent edge. Our strategy is to enable 5G at the edge through cloud partnerships, with a focus on security and developer experience.

Optimizations through a system-of-systems approach

Connecting things to the cloud, getting data into the cloud, and then bringing the insights gained through cloud-enabled analytics back to the things is how optimizations in one area can be brought to bear in another area. This is the essence of digital transformation. Vehicles gathering high-resolution imagery for improving HD maps can also inform municipalities about maintenance issues. Accident information coupled with vehicle telemetry data can inform better PHYD (pay how you drive) insurance plans as well as the deployment of first responder infrastructure to reduce incident response time.

As the vehicle fleet electrifies, the demand for charging stations will grow. The way in-car routing works for an electric car is based only on knowledge of existing charging stations along the route—regardless of the current or predicted wait-times at those stations. But what if that route could also be informed by historical use patterns and live use data of individual charging stations in order to avoid arriving and having three cars ahead of you? Suddenly, your 20-minute charge time is actually a 60-minute stop, and an alternate route would have made more sense, even if, on paper, it’s more miles driven.
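
A toy calculation makes the point; the numbers below are invented purely for illustration.

```python
# Toy illustration: a route that looks longer on paper can win once
# predicted charger wait times are factored in. All numbers are invented.
def total_trip_minutes(drive_min: int, wait_min: int, charge_min: int) -> int:
    return drive_min + wait_min + charge_min

# Route A: shorter drive, but three cars are predicted ahead at the charger.
route_a = total_trip_minutes(drive_min=90, wait_min=40, charge_min=20)
# Route B: more miles driven, but the charger is predicted to be free.
route_b = total_trip_minutes(drive_min=105, wait_min=0, charge_min=20)

assert route_b < route_a  # 125 minutes vs. 150: the "longer" route is faster
```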

Realizing these kinds of scenarios means tying together knowledge about the electrical grid, traffic patterns, vehicle types, and incident data. The opportunities here for brokering the relationships among these systems are immense, as are the challenges to do so in a way that encourages the interconnection and sharing while maintaining privacy, compliance, and security.

Laws, policies, and ethics

The past several years of data breaches and elections are evidence of the continuously evolving nature of the security threats we face. That kind of environment requires platforms that continuously invest in security as a fundamental cost of doing business.

Laws, regulatory compliance, and ethics must figure into the design and implementation of our technologies to as great a degree as goals like performance and scalability do. Smart city initiatives, where having visibility into the movement of people, goods, and vehicles is key to doing the kinds of optimizations that increase the quality of life in these cities, will confront these issues head-on.

Routing today is informed by traffic conditions but is still fairly “selfish”: routing for “me” rather than for “we.” Cities would like a hand in shaping traffic, especially if they can factor in deeper insights such as the types of vehicles on the road (sending freight one way versus passenger traffic another way), whether or not there is an upcoming sporting event or road closure, weather, and so on.

Doing this in a way that is cognizant of local infrastructure and the environment is what smart cities initiatives are all about.

For these reasons, we have joined the Open Mobility Foundation. We are also involved with Stanford’s Digital Cities Program, the Smart Transportation Council, the Alliance to Save Energy by the 50×50 Transportation Initiative, and the World Business Council for Sustainable Development.

With the Microsoft Connected Vehicle Platform (MCVP) and an ecosystem of partners across the industry, Microsoft offers a consistent horizontal platform on top of which customer-facing solutions can be built. MCVP helps mobility companies accelerate the delivery of digital services across vehicle provisioning, two-way network connectivity, and continuous over-the-air updates of containerized functionality. MCVP provides support for command-and-control, hot/warm/cold paths for telematics, and extension hooks for customer and third-party differentiation. Because it is built on Azure, MCVP also includes the hyperscale, global availability, and regulatory compliance that come as part of Azure. OEMs and fleet operators leverage MCVP as a way to “move up the stack” and focus on their customers rather than spend resources on non-differentiating infrastructure.

Innovation in the automotive industry

At Microsoft, and within the Azure IoT organization specifically, we have a front-row seat on the transformative work that is being done in many different industries, using sensors to gather data and develop insights that inform better decision-making. We are excited to see these industries trending toward converging, mutually beneficial paths. Our colleague Sanjay Ravi shares his thoughts from an automotive industry perspective in this great article.

Turning our attention to our customer and partner ecosystem, the traction we’ve gotten across the industry has been overwhelming:

The Volkswagen Automotive Cloud will be one of the largest dedicated clouds of its kind in the automotive industry and will provide all future digital services and mobility offerings across its entire fleet. More than 5 million new Volkswagen-specific brand vehicles are to be fully connected on Microsoft’s Azure cloud and edge platform each year. The Automotive Cloud subsequently will be rolled out on all Group brands and models.

Cerence is working with us to integrate Cerence Drive products with MCVP. This new integration is part of Cerence’s ongoing commitment to delivering a superior user experience in the car through interoperability across voice-powered platforms and operating systems. Automakers developing their connected vehicle solutions on MCVP can now benefit from Cerence’s industry-leading conversational AI, in turn delivering a seamless, connected, voice-powered experience to their drivers.

Ericsson, whose Connected Vehicle Cloud connects more than 4 million vehicles across 180 countries, is integrating their Connected Vehicle Cloud with Microsoft’s Connected Vehicle Platform to accelerate the delivery of safe, comfortable, and personalized connected driving experiences with our cloud, AI, and IoT technologies.

LG Electronics is working with Microsoft to build its automotive infotainment systems, building management systems and other business-to-business collaborations. LG will leverage Microsoft Azure cloud and AI services to accelerate the digital transformation of LG’s B2B business growth engines, as well as Automotive Intelligent Edge, the in-vehicle runtime environment provided as part of MCVP.

Global technology company ZF Friedrichshafen is transforming into a provider of software-driven mobility solutions, leveraging Azure cloud services and developer tools to promote faster development and validation of connected vehicle functions on a global scale.

Faurecia is collaborating with Microsoft to develop services that improve comfort, wellness, and infotainment as well as bring digital continuity from home or the office to the car. At CES, Faurecia demonstrated how its cockpit integration will enable Microsoft Teams video conferencing. Using Microsoft Connected Vehicle Platform, Faurecia also showcased its vision of playing games on the go, using Microsoft’s new Project xCloud streaming game preview.

Bell has revealed AerOS, a digital mobility platform that will give operators a 360° view into their aircraft fleet. By leveraging technologies like artificial intelligence and IoT, AerOS provides powerful capabilities like fleet master scheduling and real-time aircraft monitoring, enhancing Bell’s Mobility-as-a-Service (MaaS) experience. Bell chose Microsoft Azure as the technology platform to manage fleet information, observe aircraft health, and manage the throughput of goods, products, predictive data, and maintenance.

Luxoft is expanding its collaboration with Microsoft to accelerate the delivery of connected vehicle solutions and mobility experiences. By leveraging MCVP, Luxoft will enable and accelerate the delivery of vehicle-centric solutions and services that will allow automakers to deliver unique features such as advanced vehicle diagnostics, remote access and repair, and preventive maintenance. Collecting real usage data will also support vehicle engineering to improve manufacturing quality.

We are incredibly excited to be a part of the connected vehicle space. With MCVP, our ecosystem partners, and our partnerships with leading automotive players, both vehicle OEMs and automotive technology suppliers, we believe we have a uniquely capable offering that enables, at global scale, the next wave of innovation in the automotive industry as well as in related verticals such as smart cities, smart infrastructure, insurance, transportation, and beyond.

Advancing safe deployment practices

“What is the primary cause of service reliability issues that we see in Azure, other than small but common hardware failures? Change. One of the value propositions of the cloud is that it’s continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable. This requires a very different approach to ensuring quality and stability than the box product or traditional IT approaches — which is to test for long periods of time, and once something is deployed, to avoid changes. This post is the fifth in the series I kicked off in my July blog post that shares insights into what we’re doing to ensure that Azure’s reliability supports your most mission-critical workloads. Today we’ll describe our safe deployment practices, which is how we manage change automation so that all code and configuration updates go through well-defined stages to catch regressions and bugs before they reach customers, or if they do make it past the early stages, impact the smallest number of customers possible. Cristina del Amo Casado from our Compute engineering team authored this post, as she has been driving our safe deployment initiatives.” – Mark Russinovich, CTO, Azure


 

When running IT systems on-premises, you might try to ensure perfect availability by having gold-plated hardware, locking up the server room, and throwing away the key. Software-wise, IT would traditionally prevent as much change as possible — avoiding applying updates to the operating system or applications because they’re too critical, and pushing back on change requests from users. With everyone treading carefully around the system, this ‘nobody breathe!’ approach stifles continued system improvement, and sometimes even compromises security for systems that are deemed too crucial to patch regularly. As Mark mentioned above, this approach doesn’t work for change and release management in a hyperscale public cloud like Azure. Change is both inevitable and beneficial, given the need to deploy service updates and improvements, and given our commitment to you to act quickly in the face of security vulnerabilities. As we can’t simply avoid change, Microsoft, our customers, and our partners need to acknowledge that change is expected, and we plan for it. Microsoft continues to work on making updates as transparent as possible and will deploy the changes safely as described below. Having said that, our customers and partners should also design for high availability and consume maintenance events sent by the platform to adapt as needed. Finally, in some cases, customers can take control of initiating the platform updates at a time suitable for their organization.

Changing safely

When considering how to deploy releases throughout our Azure datacenters, one of the key premises that shapes our processes is to assume that there could be an unknown problem introduced by the change being deployed, plan in a way that enables the discovery of said problem with minimal impact, and automate mitigation actions for when the problem surfaces. While a developer might judge it as completely innocuous and guarantee that it won’t affect the service, even the smallest change to a system poses a risk to the stability of the system, so ‘changes’ here refers to all kinds of new releases and covers both code changes and configuration changes. In most cases a configuration change has a less dramatic impact on the behavior of a system but, just as for a code change, no configuration change is free of risk for activating a latent code defect or a new code path.

Teams across Azure follow similar processes to prevent or at least minimize impact related to changes. Firstly, we ensure that changes meet the quality bar before deployment starts, through test and integration validations. Then, after sign-off, we roll out the change in a gradual manner and measure health signals continuously, so that we can detect in relative isolation any unexpected impact associated with the change that did not surface during testing. We do not want a change causing problems to ever make it to broad production, so steps are taken to ensure we can avoid that whenever possible. The gradual deployment gives us a good opportunity to detect issues at a smaller scale (or a smaller ‘blast radius’) before they cause widespread impact.

Azure approaches change automation, aligned with the high level process above, through a safe deployment practice (SDP) framework, which aims to ensure that all code and configuration changes go through a lifecycle of specific stages, where health metrics are monitored along the way to trigger automatic actions and alerts in case of any degradation detected. These stages (shown in the diagram that follows) reduce the risk that software changes will negatively affect your existing Azure workloads.

A diagram showing how the cost and impact of failures increases throughout the production rollout pipeline, and is minimized by going through rounds of development and testing, quality gates, and integration.

This shows a simplification of our deployment pipeline, starting on the left with developers modifying their code, testing it on their own systems, and pushing it to staging environments. Generally, this integration environment is dedicated to teams for a subset of Azure services that need to test the interactions of their particular components together. For example, core infrastructure teams such as compute, networking, and storage share an integration environment. Each team runs synthetic tests and stress tests on the software in that environment and iterates until it is stable; once the quality results indicate that a given release, feature, or change is ready for production, the team deploys the changes into the canary regions.

Canary regions

Publicly we refer to canary regions as “Early Updates Access Program” regions, and they’re effectively full-blown Azure regions with the vast majority of Azure services. One of the canary regions is built with Availability Zones and the other without them, and the two regions form a region pair so that we can validate data geo-replication capabilities. These canary regions are used for full, production-level, end-to-end validations and scenario coverage at scale. They host some first-party services (for internal customers), several third-party services, and a small set of external customers that we invite into the program to help increase the richness and complexity of scenarios covered, all to ensure that canary regions have patterns of usage representative of our public Azure regions. Azure teams also run stress and synthetic tests in these environments, and periodically we execute fault injections or disaster recovery drills at the region or Availability Zone level, to practice the detection and recovery workflows that would be run if this occurred in real life. Separately and together, these exercises attempt to ensure that software is of the highest quality before the changes touch broad customer workloads in Azure.

Pilot phase

Once the results from canary indicate that there are no known issues detected, the progressive deployment to production can get started, beginning with what we call our pilot phase. This phase enables us to try the changes, still at a relatively small scale, but with more diversity of hardware and configurations. This phase is especially important for software like core storage services and core compute infrastructure services, that have hardware dependencies. For example, Azure offers servers with GPU’s, large memory servers, commodity servers, multiple generations and types of processors, Infiniband, and more, so this enables flighting the changes and may enable detection of issues that would not surface during the smaller scale testing. In each step along the way, thorough health monitoring and extended ‘bake times’ enable potential failure patterns to surface, and increase our confidence in the changes while greatly reducing the overall risk to our customers.

Once we determine that the results from the pilot phase are good, the deployment systems proceed by allowing the change to progress to more and more regions incrementally. Throughout the deployment to the broader Azure regions, the deployment systems endeavor to respect Availability Zones (a change only goes to one Availability Zone within a region) and region pairing (every region is ‘paired up’ with a second region for georedundant storage) so a change deploys first to a region and then to its pair. In general, the changes deploy only as long as no negative signals surface.

Safe deployment practices in action

Given the scale of Azure globally, the entire rollout process is completely automated and driven by policy. These declarative policies and processes (not the developers) determine how quickly software can be rolled out. Policies are defined centrally and include mandatory health signals for monitoring the quality of software as well as mandatory ‘bake times’ between the different stages outlined above. The reason to have software sitting and baking for different periods of time across each phase is to make sure to expose the change to a full spectrum of load on that service. For example, diverse organizational users might be coming online in the morning, gaming customers might be coming online in the evening, and new virtual machines (VMs) or resource creations from customers may occur over an extended period of time.
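
The gating logic can be pictured with a small sketch. This is purely illustrative pseudologic under our own simplifying assumptions, not Azure’s actual deployment tooling; the stage names, bake times, and helper functions are all invented for the example.

```python
# Illustrative-only sketch of SDP-style staged rollout gating: each stage
# bakes for a mandated period while health signals are watched, and any
# regression halts the rollout before it reaches broad production.
import time

STAGES = ["canary", "pilot", "region-pair", "broad"]
BAKE_HOURS = {"canary": 48, "pilot": 48, "region-pair": 24, "broad": 24}

def deploy(change_id: str, stage: str) -> None:
    print(f"deploying {change_id} to {stage}")       # stand-in for real automation

def rollback(change_id: str, stage: str) -> None:
    print(f"rolling back {change_id} from {stage}")  # stand-in for mitigation

def health_signals_ok(stage: str) -> bool:
    return True  # stand-in for monitored error rates and anomaly detection

def roll_out(change_id: str) -> bool:
    for stage in STAGES:
        deploy(change_id, stage)
        time.sleep(BAKE_HOURS[stage])  # mandated bake time (hours shown as seconds)
        if not health_signals_ok(stage):
            rollback(change_id, stage)
            return False               # a regression never reaches broad production
    return True
```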

Global services, which cannot take the approach of progressively deploying to different clusters, regions, or service rings, also practice a version of progressive rollouts in alignment with SDP. These services follow the model of updating their service instances in multiple phases, progressively diverting traffic to the updated instances through Azure Traffic Manager. If the signals are positive, more traffic gets diverted over time to updated instances, increasing confidence and unblocking the deployment from being applied to more service instances over time.
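
The same idea can be sketched for a global service. Again, this is illustrative only; the ramp percentages and helper functions are invented, and the traffic-shifting step stands in for adjusting Traffic Manager endpoint weights.

```python
# Illustrative sketch of progressive traffic shifting for a global service:
# the share of traffic on updated instances grows only while health stays green.
RAMP_PERCENTAGES = [1, 5, 10, 25, 50, 100]

def shift_traffic(percent_to_updated: int) -> None:
    # Stand-in for adjusting Traffic Manager endpoint weights.
    print(f"routing {percent_to_updated}% of traffic to updated instances")

def updated_instances_healthy() -> bool:
    return True  # stand-in for comparing error rates on old vs. new instances

for pct in RAMP_PERCENTAGES:
    shift_traffic(pct)
    if not updated_instances_healthy():
        shift_traffic(0)  # revert all traffic to the previous version
        break
```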

Of course, the Azure platform also has the ability to deploy a change simultaneously to all of Azure, in case this is necessary to mitigate an extremely critical vulnerability. Although our safe deployment policy is mandatory, we can choose to accelerate it when certain emergency conditions are met. For example, to release a security update that requires us to move much more quickly than we normally would, or for a fix where the risk of regression is outweighed by the fix mitigating a problem that’s already very impactful to customers. These exceptions are very rare; in general, our deployment tools and processes intentionally sacrifice velocity to maximize the chance for signals to build up and for scenarios and workflows to be exercised at scale, thus creating the opportunity to discover issues at the smallest possible scale of impact.

Continuing improvements

Our safe deployment practices and deployment tooling continue to evolve with learnings from previous outages and maintenance events, and in line with our goal of detecting issues at a significantly smaller scale. For example, we have learned about the importance of continuing to enrich our health signals and about using machine learning to better correlate faults and detect anomalies. We also continue to improve the way in which we do pilots and flighting, so that we can cover more diversity of hardware with smaller risk. We continue to improve our ability to rollback changes automatically if they show potential signs of problems. We also continue to invest in platform features that reduce or eliminate the impact of changes generally.

With over a thousand new capabilities released in the last year, we know that the pace of change in Azure can feel overwhelming. As Mark mentioned, the agility and continual improvement of cloud services is one of the key value propositions of the cloud – change is a feature, not a bug. To learn about the latest releases, we encourage customers and partners to stay in the know at Azure.com/Updates. We endeavor to keep this as the single place to learn about recent and upcoming Azure product updates, including the roadmap of innovations we have in development. To understand the regions in which these different services are available, or when they will be available, you can also use our tool at Azure.com/ProductsbyRegion.