Azure Cost Management + Billing updates – February 2020

Whether you’re a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We’re always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Let’s dig into the details.

 

New Power BI reports for Azure reservations and Azure Hybrid Benefit

Azure Cost Management + Billing offers several ways to report on your cost and usage data. You can start in the portal, download data or schedule an automated export for offline analysis, or even integrate with the Cost Management APIs directly. But maybe you just need detailed reporting alongside other business reports. This is where Power BI comes in. We last talked about the addition of reservation purchases to the Azure Cost Management Power BI connector in October. Building on top of that, the new Azure Cost Management Power BI app offers an extensive set of reports to get you started, including detailed reservation and Azure Hybrid Benefit reports.

The Account overview offers a summary of all usage and purchases as well as your credit balance to help you track monthly expenses. From here, you can dig into usage costs broken down by subscription, resource group, or service on additional pages. Or, if you simply want to see your prices, take a look at the Price sheet page.

If you’re already using Azure Hybrid Benefit (AHB) or have existing, unused on-prem Windows licenses, check out the Windows Server AHB Usage page. Start by checking how many VMs currently have AHB enabled to determine if you have additional licenses that could help you further lower your costs. If you do have additional licenses, you can also identify eligible VMs based on their core/vCPU count. Apply AHB to your most expensive VMs to maximize your potential savings.

Azure Hybrid Benefit (AHB) report in the new Azure Cost Management Power BI app

If you’re using Azure reservations, or are interested in the savings you could see if you did, check out the VM RI coverage pages to identify new opportunities to save with reservations, including the historical usage that shows why each reservation is recommended. You can drill into a specific region or instance size flexibility group, and more. You can see your past purchases on the RI purchases page and get a breakdown of those costs by region, subscription, or resource group on the RI chargeback page, if you need to do any internal chargeback. And don’t forget the RI savings page, where you can see how much you’ve saved so far by using Azure reservations.

Azure reservation coverage report in the new Azure Cost Management Power BI app

This is just the first release of a new generation of Power BI reports. Get started with the Azure Cost Management Power BI quickstart today and let us know what you’d like to see next.

 

Quicker access to help and support

Learning something new can be a challenge, especially when it’s not your primary focus. But given how critical managing costs is to meeting your financial goals, getting help and support needs to be front and center. To support this, Cost Management now includes a contextual Help menu to direct you to documentation and support experiences.

Get started with a quickstart tutorial and, when you’re ready to automate that experience or integrate it into your own apps, check out the API reference. If you have any suggestions on how the experience could be improved for you, please don’t hesitate to share your feedback. If you run into an issue or see something that doesn’t make sense, start with Diagnose and solve problems, and if you don’t see a solution, then please do submit a new support request. We’re closely monitoring all feedback and support requests to identify ways the experience could be streamlined for you. Let us know what you’d like to see next.

Help menu in Azure Cost Management showing options to navigate to a Quickstart tutorial, API reference, Feedback, Diagnose and solve problems, and New support request

 

We need your feedback

As you know, we’re always looking for ways to learn more about your needs and expectations. This month, we’d like to learn more, through a brief survey, about how you report on and analyze your cloud usage and costs. We’ll use your inputs from this survey to inform ease of use and navigation improvements within Cost Management + Billing experiences. The 15-question survey should take about 10 minutes.

Take the survey.

 

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

  • Get started quicker with the cost analysis Home view
    Azure Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
  • New: More details in the cost by resource view
    Drill in to the cost of your resources to break them down by meter. Simply expand the row to see more details or click the link to open and take action on your resources.
  • New: Explain what “not applicable” means
    Break down “not applicable” to explain why specific properties don’t have values within cost analysis.

Of course, that’s not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today.

 

Drill in to the costs for your resources

Resources are the fundamental building block in the cloud. Whether you’re using the cloud as infrastructure or componentized microservices, you use resources to piece together your solution and achieve your vision. And how you use these resources ultimately determines what you’re billed for, which breaks down to individual “meters” for each of your resources. Each service tracks a unique set of meters covering time, size, or other generalized units. The more units you use, the higher the cost.

Today, you can see costs broken down by resource or meter with built-in views, but seeing both together requires additional filtering and grouping to get down to the data you need, which can be tedious. To simplify this, you can now expand each row in the Cost by resource view to see the individual meters that contribute to the cost of that resource.

Cost by resource view showing a breakdown of meters under a resource

This additional clarity and transparency should help you better understand the costs you’re accruing for each resource at the lowest level. And if you see a resource that shouldn’t be running, simply click the name to open the resource, where you can stop or delete it to avoid incurring additional cost.

You can see the updated Cost by resource view in Cost Management Labs today, while in preview. Let us know if you have any feedback. We’d love to know what you’d like to see next. This should be available everywhere within the next few weeks.

 

Understanding why you see “not applicable”

Azure Cost Management + Billing includes all usage, purchases, and refunds for your billing account. Seeing every line item in the full usage and charges file allows you to reconcile your bill at the lowest level, but since each of these records has different properties, aggregating them within cost analysis can result in groups of empty properties. This is when you see “not applicable” today.

Now, in Cost Management Labs, you can see these costs broken down and categorized into separate groups to bring additional clarity and explain what each represents. Here are a few examples:

  • You may see Other classic resources for any classic resources that don’t include resource group in usage data when grouping by resource or resource group.
  • If you’re using any services that aren’t deployed to resource groups, like Security Center or Azure DevOps (Visual Studio Online), you will see Other subscription resources when grouping by resource group.
  • You may recall seeing Untagged costs when grouping by a specific tag. This group is now broken down further into Tags not available and Tags not supported groups. These signify services that don’t include tags in usage data (see How tags are used) and costs that can’t be tagged, like purchases and resources not deployed to resource groups, covered above.
  • Since purchases aren’t associated with an Azure resource, you might see Other Azure purchases or Other Marketplace purchases when grouping by resource, resource group, or subscription.
  • You may also see Other Marketplace purchases when grouping by reservation. This represents other purchases that aren’t associated with a reservation.
  • If you have a reservation, you may see Unused reservation when viewing amortized costs and grouping by resource, resource group, or subscription. This represents the unused portion of your reservation that isn’t associated with any resources. These costs will only be visible from your billing account or billing profile.

Of course, these are just a few examples. You may see more. When there simply isn’t a value, you’ll see something like No department, as an example, which represents Enterprise Agreement (EA) subscriptions that aren’t grouped into a department.

We hope these changes help you better understand your cost and usage data. You can see this today in Cost Management Labs while in preview. Please check it out and let us know if you have any feedback. This should be available everywhere within the next few weeks.

 

Upcoming changes to Azure usage data

Many organizations use the full Azure usage and charges dataset to understand what’s being used, identify what charges should be billed internally to which teams, and look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit, just to name a few. If you’re doing any analysis or have set up integration based on product details in the usage data, please update your logic for the following services.

The following change will start effective March 1:

Also, remember the key-based Enterprise Agreement (EA) billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal.
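If you’re planning that migration, here’s a minimal sketch of calling the Resource Manager UsageDetails API with Python’s requests library. The bearer token and billing account scope are hypothetical placeholders, and property names in the response vary by account type, so treat this as a starting point only:

import requests

# A minimal sketch: the token and billing account scope below are hypothetical.
TOKEN = "<azure-ad-access-token>"
SCOPE = "providers/Microsoft.Billing/billingAccounts/1234567"

url = (
    "https://management.azure.com/"
    f"{SCOPE}/providers/Microsoft.Consumption/usageDetails"
)
params = {"api-version": "2019-10-01", "$top": 100}
headers = {"Authorization": f"Bearer {TOKEN}"}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()

for record in response.json().get("value", []):
    # Inspect the payload for your account type (EA/PAYG vs. Microsoft
    # Customer Agreement) before building downstream logic on property names.
    print(record["name"], record["properties"])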

 

New videos and learning opportunities

For those visual learners out there, here are two new resources you should check out:

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they’re released and let us know what you’d like to see next!

 

Documentation updates

There were lots of documentation updates. Here are a few you might be interested in:

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What’s next?

These are just a few of the big updates from last month. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

Advancing safe deployment practices

“What is the primary cause of service reliability issues that we see in Azure, other than small but common hardware failures? Change. One of the value propositions of the cloud is that it’s continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable. This requires a very different approach to ensuring quality and stability than the boxed-product or traditional IT approaches — which is to test for long periods of time, and once something is deployed, to avoid changes. This post is the fifth in the series I kicked off in my July blog post that shares insights into what we’re doing to ensure that Azure’s reliability supports your most mission-critical workloads. Today we’ll describe our safe deployment practices, which are how we manage change automation so that all code and configuration updates go through well-defined stages to catch regressions and bugs before they reach customers, or, if they do make it past the early stages, impact the smallest number of customers possible. Cristina del Amo Casado from our Compute engineering team authored this post, as she has been driving our safe deployment initiatives.” – Mark Russinovich, CTO, Azure


 

When running IT systems on-premises, you might try to ensure perfect availability by having gold-plated hardware, locking up the server room, and throwing away the key. Software-wise, IT would traditionally prevent as much change as possible — avoiding updates to the operating system or applications because they’re too critical, and pushing back on change requests from users. With everyone treading carefully around the system, this ‘nobody breathe!’ approach stifles continued system improvement, and sometimes even compromises security for systems that are deemed too crucial to patch regularly. As Mark mentioned above, this approach doesn’t work for change and release management in a hyperscale public cloud like Azure. Change is both inevitable and beneficial, given the need to deploy service updates and improvements, and given our commitment to you to act quickly in the face of security vulnerabilities. As we can’t simply avoid change, Microsoft, our customers, and our partners need to acknowledge that change is expected, and we plan for it. Microsoft continues to work on making updates as transparent as possible and deploys changes safely as described below. Having said that, our customers and partners should also design for high availability and consume the maintenance events sent by the platform to adapt as needed. Finally, in some cases, customers can take control of initiating platform updates at a time that suits their organization.

Changing safely

When considering how to deploy releases throughout our Azure datacenters, one of the key premises that shapes our processes is to assume that the change being deployed could introduce an unknown problem, to plan in a way that enables discovery of that problem with minimal impact, and to automate mitigation actions for when the problem surfaces. Even the smallest change to a system poses a risk to its stability, however innocuous a developer might judge it, so ‘changes’ here refers to all kinds of new releases and covers both code changes and configuration changes. In most cases a configuration change has a less dramatic impact on the behavior of a system, but, just as with a code change, no configuration change is free of the risk of activating a latent code defect or a new code path.

Teams across Azure follow similar processes to prevent, or at least minimize, impact related to changes. First, we ensure that changes meet the quality bar before deployment starts, through test and integration validations. Then, after sign-off, we roll out the change gradually and measure health signals continuously, so that we can detect, in relative isolation, any unexpected impact associated with the change that did not surface during testing. We never want a change that causes problems to reach broad production, so we take steps to avoid that whenever possible. The gradual deployment gives us a good opportunity to detect issues at a smaller scale (a smaller ‘blast radius’) before they cause widespread impact.

Azure approaches change automation, aligned with the high-level process above, through a safe deployment practices (SDP) framework, which aims to ensure that all code and configuration changes go through a lifecycle of specific stages, where health metrics are monitored along the way to trigger automatic actions and alerts if any degradation is detected. These stages (shown in the diagram that follows) reduce the risk that software changes will negatively affect your existing Azure workloads.

A diagram showing how the cost and impact of failures increases throughout the production rollout pipeline, and is minimized by going through rounds of development and testing, quality gates, and integration.

This shows a simplification of our deployment pipeline, starting on the left with developers modifying their code, testing it on their own systems, and pushing it to staging environments. Generally, this integration environment is dedicated to teams for a subset of Azure services that need to test the interactions of their particular components together. For example, core infrastructure teams such as compute, networking, and storage share an integration environment. Each team runs synthetic tests and stress tests on the software in that environment and iterates until it is stable; once the quality results indicate that a given release, feature, or change is ready for production, the team deploys the changes into the canary regions.

Canary regions

Publicly, we refer to canary regions as “Early Updates Access Program” regions; they’re effectively full-blown Azure regions running the vast majority of Azure services. One of the canary regions is built with Availability Zones and the other without them, and the two form a region pair so that we can validate data geo-replication capabilities. These canary regions are used for full, production-level, end-to-end validations and scenario coverage at scale. They host some first party services (for internal customers), several third party services, and a small set of external customers that we invite into the program to help increase the richness and complexity of scenarios covered, all to ensure that canary regions have patterns of usage representative of our public Azure regions. Azure teams also run stress and synthetic tests in these environments, and periodically we execute fault injections or disaster recovery drills at the region or Availability Zone level to practice the detection and recovery workflows that would run if this occurred in real life. Separately and together, these exercises attempt to ensure that software is of the highest quality before the changes touch broad customer workloads in Azure.

Pilot phase

Once the results from canary indicate that there are no known issues, the progressive deployment to production can begin, starting with what we call our pilot phase. This phase enables us to try the changes, still at a relatively small scale, but with more diversity of hardware and configurations. It is especially important for software like core storage services and core compute infrastructure services that have hardware dependencies. For example, Azure offers servers with GPUs, large-memory servers, commodity servers, multiple generations and types of processors, InfiniBand, and more, so flighting the changes this way can surface issues that would not appear during smaller-scale testing. At each step along the way, thorough health monitoring and extended ‘bake times’ enable potential failure patterns to surface, and increase our confidence in the changes while greatly reducing the overall risk to our customers.

Once we determine that the results from the pilot phase are good, the deployment systems proceed by allowing the change to progress to more and more regions incrementally. Throughout the deployment to the broader Azure regions, the deployment systems endeavor to respect Availability Zones (a change goes to only one Availability Zone within a region at a time) and region pairing (every region is ‘paired up’ with a second region for geo-redundant storage), so a change deploys first to a region and then to its pair. In general, the changes continue to deploy only as long as no negative signals surface.

Safe deployment practices in action

Given the scale of Azure globally, the entire rollout process is completely automated and driven by policy. These declarative policies and processes (not the developers) determine how quickly software can be rolled out. Policies are defined centrally and include mandatory health signals for monitoring the quality of software, as well as mandatory ‘bake times’ between the different stages outlined above. The reason to let software sit and bake for different periods of time across each phase is to expose the change to a full spectrum of load on that service. For example, diverse organizational users might come online in the morning, gaming customers might come online in the evening, and new virtual machines (VMs) or resource creations from customers may occur over an extended period of time.
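As a toy sketch of the concept only (this is not Azure’s actual tooling, and the stage names and bake times are illustrative assumptions), a health-gated, policy-driven rollout loop might look like the following:

import time

# A toy illustration of a staged, policy-driven rollout: each stage must bake
# for a mandated period and stay healthy before the change may progress to the
# next, wider stage.
STAGES = [
    ("canary", 24),          # stage name, mandatory bake time in hours
    ("pilot", 24),
    ("paired-regions", 12),
    ("broad", 0),
]

def deploy(stage: str) -> None:
    print(f"Deploying change to {stage}...")  # placeholder for deployment automation

def health_ok(stage: str) -> bool:
    # Placeholder for mandatory health signals (error rates, latency, alerts).
    return True

def rollout() -> bool:
    for stage, bake_hours in STAGES:
        deploy(stage)
        time.sleep(bake_hours * 3600)  # mandatory bake time between stages
        if not health_ok(stage):
            print(f"Regression detected in {stage}; halting rollout and rolling back.")
            return False
    return True

The real system is far richer (per-zone and per-pair ordering, anomaly detection, automated rollback), but the gate-then-widen loop is the core idea.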

Global services, which cannot take the approach of progressively deploying to different clusters, regions, or service rings, also practice a version of progressive rollouts in alignment with SDP. These services follow the model of updating their service instances in multiple phases, progressively diverting traffic to the updated instances through Azure Traffic Manager. If the signals are positive, more traffic is diverted over time to the updated instances, increasing confidence and unblocking the deployment from being applied to more service instances over time.

Of course, the Azure platform also has the ability to deploy a change simultaneously to all of Azure, in case this is necessary to mitigate an extremely critical vulnerability. Although our safe deployment policy is mandatory, we can choose to accelerate it when certain emergency conditions are met — for example, to release a security update that requires us to move much more quickly than we normally would, or for a fix where the risk of regression is outweighed by the fix mitigating a problem that’s already very impactful to customers. These exceptions are very rare; in general, our deployment tools and processes intentionally sacrifice velocity to maximize the chance for signals to build up and for scenarios and workflows to be exercised at scale, thus creating the opportunity to discover issues at the smallest possible scale of impact.

Continuing improvements

Our safe deployment practices and deployment tooling continue to evolve with learnings from previous outages and maintenance events, in line with our goal of detecting issues at a significantly smaller scale. For example, we have learned about the importance of continuing to enrich our health signals and about using machine learning to better correlate faults and detect anomalies. We also continue to improve the way we do pilots and flighting, so that we can cover more hardware diversity with smaller risk. We continue to improve our ability to roll back changes automatically if they show potential signs of problems. And we continue to invest in platform features that reduce or eliminate the impact of changes more generally.

With over a thousand new capabilities released in the last year, we know that the pace of change in Azure can feel overwhelming. As Mark mentioned, the agility and continual improvement of cloud services is one of the key value propositions of the cloud – change is a feature, not a bug. To learn about the latest releases, we encourage customers and partners to stay in the know at Azure.com/Updates. We endeavor to keep this as the single place to learn about recent and upcoming Azure product updates, including the roadmap of innovations we have in development. To understand the regions in which these different services are available, or when they will be available, you can also use our tool at Azure.com/ProductsbyRegion.

ICYMI: A monthly roundup of stuff developers want to know

Posted by Natalie Dao, Google Developers Social Team

Happy New Year … is something we won’t say again until next January, promise. Still. There’s a lot to be thrilled about in 2020. Check out our Top Ten list of videos, blogs, and events to find out why we’re already excited for next month, the month after that, and beyond. It’s been a bit of a slow start, but one thing is for sure: 2020 is going to rule. Let’s get into it.

1. Game On 🎮


Gamers rejoice! The annual Indie Games Festival from Google Play will hit Europe, Japan, and South Korea on April 25th. Whether you’re an indie game developer or a devoted gamer, this is your chance to showcase your unique skills. Submissions close on March 2nd, so get to it!

Learn more about it on the official website.

2. It’s A Dirty Job 🧹

Finally, a vacuum cleaner that doesn’t suck! Wait. Ecovacs Robotics manufactures robotic vacuum cleaners powered by a TensorFlow Lite model to help detect and avoid obstacles.

Read the blog to learn more.

3. Take The DSC Challenge 🏆

Developer Student Clubs from 800+ universities across the globe will use technology to solve local problems within their communities. 10 winning teams (up to 4 members) will be chosen and receive prizes including a curated experience with Googlers to celebrate! Submissions will be accepted between March 15-30, 2020.

Up for the challenge? Learn how you can enter here.

4. You Gotta Check Out This New Podcast 🎙

Sound up! The Assistant on Air podcast from Actions on Google is now streaming. Tune in to listen to your favorite couch-friendly series, where guests chat about building for the Google Assistant.

Get to listening on Google Podcasts, Google Play Music, Apple, and Spotify!

5. Flutter/Dart Do Design And They Do It Well 🎨

Photo courtesy of Fast Company

Look Ma, we made it! Our favorite UI toolkit and the programming language that powers it have been listed in Fast Company’s most important design ideas of the decade. Flutter and Dart allow developers to build beautiful experiences that can be seamlessly deployed across all platforms.

Check out the star-studded lineup on Fast Company.

6. Summit Season Starts Now 🙌

The time is now to register for the TensorFlow Dev Summit! Join the machine learning community in Sunnyvale, CA this March for two full days of highly technical talks, demos, sessions, and networking with the TensorFlow team.

See how you can witness that ML magic on the official event website.

7. Registration Open For Google Cloud Next ’20 ⏩

SO. MANY. EVENTS. Registration for Google Cloud Next ‘20 has been announced! Taking place in the charming city of San Francisco, this epic conference brings together some of the brightest minds in tech for three days of networking, learning, and collaboration. Get the scoop on all the latest products, learn how leading brands use Cloud to solve challenges, immerse yourself in exhibits, and more.

Get your registration locked down on the official event website.

8. New Coral Products For 2020 👍

Coral is a platform of hardware components and software tools that makes prototyping and scaling local AI products easier. Launched last year, this portfolio of products has been used for many applications across different industries ranging from healthcare to agriculture. To kick off the new year, Coral has released new products to expand the possibilities of local AI!

Get all the details on the blog here.

9. SERIES SPOTLIGHT: Get To Know Cloud Firestore 🔥

In this episode of Get to Know Cloud Firestore from Firebase, Todd Kerpelman tackles Cloud Functions and five interesting scenarios you might come across when implementing them in your app.

Watch the full video here and don’t forget to subscribe to the Firebase YouTube channel.

10. Countdown to IO 🕛

#GoogleIO is returning to Mountain View in May! To announce the event, Google launched a collaborative game where users worked together to repair an intergalactic satellite network. Although the date has been decoded by savvy internet detectives, you can still embark on the mission for fun!

More event details are coming soon on the official event website. See you at Shoreline.

Stay connected!

Follow and subscribe to get all the latest news and updates from the Google Developer ecosystem.

Twitter
Instagram
Facebook
YouTube

Azure Cost Management updates – January 2020

Whether you’re a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Azure Cost Management can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Let’s dig into the details. 

Automate reporting for Microsoft Customer Agreement with scheduled exports

You already know you can dig into your cost and usage data from the Azure portal. You may even know you can get rich reporting from the Cost Management Query API or get the full details, in all their glory, from the UsageDetails API. These are both great for ad-hoc queries, but maybe you’re looking for a simpler solution. This is where Azure Cost Management exports come in.

Azure Cost Management exports automatically publish your cost and usage data to a storage account on a daily, weekly, or monthly basis. Until this month, you’ve been able to schedule exports for Enterprise Agreement (EA) and pay-as-you-go (PAYG) accounts. Now, you can also schedule exports for Microsoft Customer Agreement billing accounts, subscriptions, and resource groups.

Learn more about scheduled exports in Create and manage exported data.
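As a rough sketch of the underlying Exports API (the token, subscription ID, and storage account resource ID below are hypothetical placeholders, so verify the schema against the API reference for your account type), creating a daily export might look like this:

import requests

# Hypothetical placeholders; the caller needs Cost Management Contributor
# access on the scope used for the export.
TOKEN = "<azure-ad-access-token>"
SCOPE = "subscriptions/00000000-0000-0000-0000-000000000000"
EXPORT_NAME = "DailyCostExport"

url = (
    "https://management.azure.com/"
    f"{SCOPE}/providers/Microsoft.CostManagement/exports/{EXPORT_NAME}"
)
body = {
    "properties": {
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {
                "from": "2020-02-01T00:00:00Z",
                "to": "2020-12-31T00:00:00Z",
            },
        },
        "format": "Csv",
        "deliveryInfo": {
            "destination": {
                "resourceId": (
                    "/subscriptions/00000000-0000-0000-0000-000000000000"
                    "/resourceGroups/my-rg/providers/Microsoft.Storage"
                    "/storageAccounts/mystorageaccount"
                ),
                "container": "exports",
                "rootFolderPath": "costs",
            }
        },
        # Export month-to-date usage data each day.
        "definition": {"type": "Usage", "timeframe": "MonthToDate"},
    }
}

resp = requests.put(
    url,
    params={"api-version": "2019-11-01"},
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()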

Raising awareness of disabled costs

Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) accounts both offer an option to hide prices and charges from subscription users. While this can be useful to obscure negotiated discounts (including from vendors), it also puts you at risk of overspending, since the teams that deploy and manage resources don’t have visibility into costs and cannot effectively keep them down. To avoid this, we recommend using custom Azure RBAC roles for anyone who shouldn’t see costs, while allowing everyone else to fully manage and optimize costs.

Unfortunately, some organizations may not realize costs have been disabled. This can happen, for example, when you renew your EA enrollment or switch between EA partners. To help raise awareness of these settings, you will see new messaging when costs have been disabled for the organization. Someone who does not have access to see costs will see a message like the following in cost analysis:

Message stating "Cost Management not enabled for subscription users. Contact your subscription account admin about enabling 'Account owner can view charges' on the billing account."

EA billing account admins and MCA billing profile owners will also see a message in cost analysis to ensure they’re aware that subscription users cannot see or optimize costs.

Cost analysis showing a warning to Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) admins that "Subscription users cannot see or optimize costs. Enable Cost Management." with a link to enable view charges for everyone

To enable access to Azure Cost Management, simply click the banner and turn on “Account owners can view charges” for EA accounts or “Azure charges” for MCA accounts. If you’re not sure whether subscription users can see costs on your billing account, check today and unlock new cost reporting, control, and optimization capabilities for your teams.

What’s new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what’s coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

  • Get started quicker with the cost analysis Home view
    Azure Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
  • NEW: Try Preview gives you quick access to preview features (now available in the public portal)
    You already know Cost Management Labs gives you early access to the latest changes. Now you can also opt in to individual preview features from the public portal using the Try preview command in cost analysis.

Of course, that’s not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it’s in the full Azure portal. We’re eager to hear your thoughts and understand what you’d like to see next. What are you waiting for? Try Cost Management Labs today. 

Custom RBAC role preview for management groups

Management groups now support defining custom RBAC roles, allowing you to assign more specific permissions to users, groups, and apps within your organization. One example could be a role that allows someone to create and manage the management group hierarchy as well as manage costs using Azure Cost Management + Billing APIs. Today, this requires both the Management Group Contributor and Cost Management Contributor roles, but these permissions could be combined into a single custom role to streamline role assignment.

If you’re unfamiliar with RBAC, Azure role-based access control (RBAC) is the authorization system used to manage access to Azure resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope, like a resource group, subscription, or in this case, a management group. Cost Management + Billing supports the following built-in Azure RBAC roles, from least to most privileged:

  • Cost Management Reader: Can view cost data, configuration (including budgets and exports), and recommendations.
  • Billing Reader: Lets you read billing data.
  • Reader: Lets you view everything, but not make any changes.
  • Cost Management Contributor: Can view costs, manage cost configuration (including budgets and exports), and view recommendations.
  • Contributor: Lets you manage everything except access to resources.
  • Owner: Lets you manage everything, including access to resources.

While most organizations will find the built-in roles to be sufficient, there are times when you need something more specific. This is where custom RBAC roles come in. Custom RBAC roles allow you to define your own set of unique permissions by specifying a set of wildcard “actions” that map to Azure Resource Manager API calls. You can mix and match actions as needed, whether that’s to allow an action or deny one (using “not actions”). Below are a few examples of the most common actions, followed by a sketch of how they might be combined into a custom role:

  • Microsoft.Consumption/*/read – Read access to all cost and usage data, including prices, usage, purchases, reservations, and resource tags.
  • Microsoft.Consumption/budgets/* – Full access to manage budgets.
  • Microsoft.CostManagement/*/read – Read access to cost and usage data and alerts.
  • Microsoft.CostManagement/views/* – Full access to manage shared views used in cost analysis.
  • Microsoft.CostManagement/exports/* – Full access to manage scheduled exports that automatically push data to storage on a regular basis.
  • Microsoft.CostManagement/cloudConnectors/* – Full access to manage AWS cloud connectors that allow you to manage Azure and AWS costs together in the same management group.
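As a sketch of how these actions might be combined (the role name, action set, and management group ID are illustrative assumptions, not a prescribed role), you could build the role definition and create it with the Azure CLI:

import json

# Illustrative custom role combining hierarchy management with cost management
# permissions, per the example described above.
role = {
    "Name": "Hierarchy and Cost Administrator",
    "Description": "Manage the management group hierarchy and cost configuration.",
    "Actions": [
        "Microsoft.Management/managementGroups/*",  # manage the hierarchy
        "Microsoft.CostManagement/*/read",          # read cost data and alerts
        "Microsoft.CostManagement/exports/*",       # manage scheduled exports
        "Microsoft.Consumption/*/read",             # read usage and purchases
        "Microsoft.Consumption/budgets/*",          # manage budgets
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/providers/Microsoft.Management/managementGroups/contoso-mg"
    ],
}

with open("role.json", "w") as f:
    json.dump(role, f, indent=2)

# The role can then be created with the Azure CLI:
#   az role definition create --role-definition @role.json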

New ways to save money with Azure

Lots of cost optimization improvements over the past month! Here are a few you might be interested in:

Recent changes to Azure usage data

Many organizations use the full Azure usage and charges dataset to understand what’s being used, identify what charges should be billed internally to which teams, and look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit, just to name a few. If you’re doing any analysis or have set up integration based on product details in the usage data, please update your logic for the following services.

All of the following changes were effective January 1:

Also, remember the key-based Enterprise Agreement (EA) billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal. 

Documentation updates

There were lots of documentation updates. Here are a few you might be interested in:

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What’s next?

These are just a few of the big updates from last month. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

Azure Cost Management 2019 year in review

When we talk about cost management, we focus on three core tenets:

  1. Ensuring cost visibility so everyone is aware of the financial impact their solutions have.
  2. Driving accountability throughout the organization to stop bad spending patterns.
  3. Continuously optimizing costs as your usage changes over time to do more with less.

These were the driving forces in 2019 as we set out to build a strong foundation that pulls together all costs across all account types and ensures everyone in the organization has a means to report on, control, and optimize costs. Our ultimate goal is to empower you to lead a healthier, more financially responsible organization.

All costs behind a single pane of glass

On the heels of the Azure Cost Management preview, 2019 started off strong with the general availability of Enterprise Agreement (EA) accounts in February and pay-as-you-go (PAYG) in April. At the same time, Microsoft as a whole embarked on a journey to modernize the entire commerce platform with the new Microsoft Customer Agreement (MCA), which started rolling out for enterprises in March, pay-as-you-go subscriptions in July, and Cloud Solution Providers (CSP) using Azure plan in November. Whether you get Azure through the Microsoft field, directly from Azure.com, or through a Microsoft partner, you have the power of Azure Cost Management at your fingertips. But getting basic coverage of your Azure usage is only part of the story.

To effectively manage costs, you need all costs together, in a single repository. This is exactly what Azure Cost Management brings you. From the unprecedented ability to monitor Amazon Web Services (AWS) costs within the Azure portal in May (a first for any cloud provider), to the inclusion of reservation and Marketplace purchases in June, Azure Cost Management enables you to manage all your costs from a single pane of glass, whether you’re using Azure or AWS.

What’s next?

Support for Sponsorship and CSP subscriptions not on an Azure plan is at the top of the list, to ensure every Azure subscription can use Azure Cost Management. AWS support will become generally available, and then Google Cloud Platform (GCP) support will be added.

Making it easier to report on and analyze costs

Getting all costs in one place is only the beginning. 2019 also saw many improvements that help you report on and analyze costs. You were able to dig in and explore costs with the 2018 preview, but the only way to truly control and optimize costs is to raise awareness of current spending patterns. To that end, reporting in 2019 was focused on making it easier to customize and share.

The year kicked off with the ability to pin customized views to the Azure portal dashboard in January. You could share links in May, save views directly from cost analysis in August, and download charts as an image in September. You also saw a major Power BI refresh in October that no longer required classic API keys and added reservation details and recommendations. Each option helps you not only save time, but also starts that journey of driving accountability by ensuring everyone is aware of the costs they’re responsible for.

Looking beyond sharing, you also saw new capabilities like forecasting costs in June and switching between currencies in July, simpler out-of-the-box options like the new date picker in May and the invoice details view in September, and changes that simply help you get your job done the way you want, like support for the Azure portal dark theme and continuous accessibility improvements throughout the year.

From an API automation and integration perspective, 2019 was also a critical milestone as EA cost and usage APIs moved to Azure Resource Manager. The Resource Manager APIs are forward-looking and designed to minimize your effort when it comes time to transition to Microsoft Customer Agreement by standardizing terminology across account types. If you haven’t started the migration to the Resource Manager APIs, make that your number one resolution for the new year!

What’s next?

2020 will continue down this path, from more flexible reporting and scheduling email notifications to general improvements around ease of use and increased visibility throughout the Azure portal. Power BI will get Azure reservation and Hybrid Benefit reports as well as support for subscription and resource group users who don’t have access to the whole billing account. You can also expect to see continued API improvements to help make it easier than ever to integrate cost data into your business systems and processes.

Flexible cost control that puts the power in your hands

Once you understand what you’re spending and where, your next step is to figure out how to stop the bad spending patterns and keep costs under control. You already know you can define budgets to get notified about and take action on overages. You decide what actions you want to take, whether that be as simple as an email notification or as drastic as deleting all your resources to ensure you won’t be charged. Cost control in 2019 was centered on helping you stay on top of your costs and giving you the tools to control spending as you see fit.

This started with a new, consolidated alerts experience in February, where you can see all your invoice, credit, and budget overage alerts in a single place. Budgets were expanded to support the new account types we talked about above, and to support management groups in June, giving you a view of all your costs across subscriptions. Then in August, you were able to create targeted budgets with filters for fine-grained tracking, whether for an entire service, a single resource, or an application that spans multiple subscriptions (via tags). This also came with an improved budget creation experience to help you better estimate what your budget should be based on historical and forecasted trends.

What’s next?

2020 will take cost control to the next level by allowing you to split shared costs with cost allocation rules and define an additional markup for central teams who typically run on overhead or don’t want to expose discounts to the organization. We’re also looking at improvements around management groups and tags to give you more flexibility to manage costs the way you need to for your organization.

New ways to save and do more with less

Cloud computing comes with a lot of promises, from flexibility and speed to scalability and security. The promise of cost savings is often the driving force behind cloud migrations, yet is also one of the more elusive to achieve. Luckily, Azure delivers new cost optimization opportunities nearly every month! This is on top of the recommendations offered by Azure Advisor, which are specifically tuned to save money on the resources you already have deployed. Here are a few of the over two dozen new cost saving opportunities you saw in 2019:

What’s next?

Expect to see continued updates in these areas through 2020. We’re also partnering with individual service teams to deliver even more built-in recommendations for database, storage, and PaaS services, just to name a few.

Streamlined account and subscription management

Throughout 2019, you may have noticed a lot of changes to Cost Management + Billing in the Azure portal. What was purely focused on PAYG subscriptions in early 2018 became a central hub for billing administrators in 2019 with full administration for MCA accounts in March, new EA account management capabilities in July, and subscription provisioning and transfer updates in August. All of these are helping you get one step closer to having a single portal to manage every aspect of your account.

What’s next?

2020 will be the year of converged and consolidated experiences for Cost Management + Billing. This will start with the Billing and Cost Management experiences within the Azure portal and will expand to include capabilities you’re currently using the EA, Account, or Cloudyn portals for today. Whichever portal you use, expect to see all these come together into a single, consolidated experience that has more consistency across account types. This will be especially evident as your account moves from the classic EA, PAYG, and CSP programs to Microsoft Customer Agreement (and Azure plan), which is fully managed within the Azure portal and offers critical new billing capabilities, like finer-grained access control and grouping subscriptions into separate invoices.

Looking forward to another year

The past 12 months have been packed with one improvement after another, and we’re just getting started! We couldn’t list them all here, but if you only take one thing away, please do check out and subscribe to the Azure Cost Management monthly updates for the latest news on what’s changed and what’s coming. We’ve already talked about what you can expect to see in 2020 for each area, but the key takeaway is:

2020 will bring one experience to manage all your Azure, AWS, and GCP costs from the Azure portal, with simpler, yet more powerful cost reporting, control, and optimization tools that help you stay more focused on your mission.

We look forward to hearing your feedback as these new and updated capabilities become available. And if you’re interested in the latest features, before they’re available to everyone, check out Azure Cost Management Labs (introduced in July) and don’t hesitate to reach out with any feedback. Cost Management Labs gives you a direct line to the Azure Cost Management engineering team and is the best way to influence and make an immediate impact on features being actively developed and tuned for you.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Cost Management feedback forum. See you in 2020!

Customer Provided Keys with Azure Storage Service Encryption

Azure Storage offers several options to encrypt data at rest. With client-side encryption, you can encrypt data prior to uploading it to Azure Storage. You can also have Azure Storage manage encryption operations with storage service encryption, using either Microsoft managed keys or customer managed keys in Azure Key Vault. Today, we present an enhancement to storage service encryption that supports granular encryption settings on a storage account, with keys hosted in any key store. Customer provided keys (CPK) enable you to store and manage keys on-premises or in key stores other than Azure Key Vault, to meet corporate, contractual, and regulatory compliance requirements for data security.

Customer provided keys allow you to pass an encryption key as part of a read or write operation to the storage service using the blob APIs. Since the encryption key is defined at the object level, you can have multiple encryption keys within a storage account. When you create a blob with a customer provided key, the storage service persists the SHA-256 hash of the encryption key with the blob to validate future requests. When you retrieve the object, you must provide the same encryption key as part of the request. For example, if a blob is created with Put Blob using CPK, all subsequent write operations must provide the same encryption key. If a different key is provided, or if no key is provided, the operation fails with 400 Bad Request. Because the encryption key itself is provided in the request, a secure connection must be established to transfer the key. Here’s the process:
 

Figure 1: Customer provided keys

Getting started

Customer Provided Keys may be used with supported blob operations by adding the x-ms-encryption-* headers to the request.

  • x-ms-encryption-key – Required. A Base64-encoded AES-256 encryption key value.
  • x-ms-encryption-key-sha256 – Required. The Base64-encoded SHA-256 hash of the encryption key.
  • x-ms-encryption-algorithm – Required. Specifies the algorithm to use when encrypting data with the given key. Must be AES256.

Request

PUT mycontainer/myblob.txt
x-ms-version: 2019-02-02
x-ms-encryption-key: MDEyMzQ1NjcwMTIzNDU2NzAxMjM0NTY3MDEyMzQ1Njc=
x-ms-encryption-key-sha256: 3QFFFpRA5+XANHqwwbT4yXDmrT/2JaLt/FKHjzhOdoE=
x-ms-encryption-algorithm: AES256
Content-Length:
...
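As a minimal sketch of deriving those headers in code, here’s how you might generate a key, compute its SHA-256 hash, and upload a blob with Put Blob using Python. The SAS URL is a hypothetical placeholder for whatever authorization you use:

import base64
import hashlib
import os

import requests

# Hypothetical SAS URL that authorizes writes to the target blob. The key is
# generated locally; Azure never stores it, only its SHA-256 hash.
BLOB_SAS_URL = "https://myaccount.blob.core.windows.net/mycontainer/myblob.txt?<sas-token>"

key = os.urandom(32)  # a 256-bit AES key; persist it yourself for later reads
headers = {
    "x-ms-version": "2019-02-02",
    "x-ms-blob-type": "BlockBlob",
    "x-ms-encryption-key": base64.b64encode(key).decode(),
    "x-ms-encryption-key-sha256": base64.b64encode(hashlib.sha256(key).digest()).decode(),
    "x-ms-encryption-algorithm": "AES256",
}

resp = requests.put(BLOB_SAS_URL, data=b"hello, CPK", headers=headers)
resp.raise_for_status()  # fails with 400 Bad Request if the key or hash is wrong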

Key management

Azure Storage does not store or manage customer provided encryption keys. Keys are securely discarded as soon as possible after they’ve been used to encrypt or decrypt the blob data. If customer provided keys are used on blobs with snapshots, each snapshot can be provisioned with a different encryption key, so you must keep track of each snapshot and its associated encryption key to pass the correct key with blob operations. If you need to rotate the key associated with an object, download the object and upload it again with the new encryption key.

Next steps

This feature is available now on your storage account with the recent release of the Storage services REST API (version 2019-02-02). You can also use the .NET and Java client libraries. There are no additional charges for customer provided keys.

For more information on customer provided keys, please visit our documentation page. For any further questions, or to discuss your specific scenario, send us an email at [email protected] or post your ideas and suggestions about Azure Storage on our feedback forum.

Azure Cosmos DB recommendations keep you on the right track

The tech world is fast-paced, and cloud services like Azure Cosmos DB get frequent updates with new features, capabilities, and improvements. It’s important—but also challenging—to keep up with the latest performance and security updates and assess whether they apply to your applications. To make it easier, we’ve introduced automatic and tailored recommendations for all Azure Cosmos DB users. A large spectrum of personalized recommendations now show up in the Azure portal when you browse your Azure Cosmos DB accounts.

Some of the recommendations we’re currently dispatching cover the following topics:

  • SDK upgrades: When we detect the usage of an old version of our SDKs, we recommend upgrading to a newer version to benefit from our latest bug fixes and performance improvements.
  • Fixed to partitioned collections: To fully leverage Azure Cosmos DB’s massive scalability, we encourage users of legacy, fixed-size containers that are approaching their storage quota to migrate these containers to partitioned ones.
  • Query page size: For users who define a specific query page size, we recommend using a page size of -1 instead.
  • Composite indexes: Composite indexes can dramatically improve the performance and RU consumption of some queries, so we suggest their usage whenever our telemetry detects queries that can benefit from them.
  • Incorrect SDK usage: We can detect when our SDKs are used incorrectly, like when a client instance is created for each request instead of being reused as a singleton throughout the application; we provide corresponding recommendations in these cases (see the sketch after this list).
  • Lazy indexing: The purpose of Azure Cosmos DB’s lazy indexing mode is rather limited and can impact the freshness of query results in some situations. We advise using the (default) consistent indexing mode instead of lazy indexing.
  • Transient errors: In rare occurrences, some transient errors can happen when a database or collection gets created. SDKs usually retry operations whenever a transient error occurs, but if that’s not the case, we notify our users that they can safely retry the corresponding operation.
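To illustrate a couple of these recommendations, here’s a minimal sketch using the azure-cosmos Python SDK; the account URL, key, and database/container names are hypothetical placeholders:

from azure.cosmos import CosmosClient

# Hypothetical account URL and key. Create the client once and reuse it as a
# singleton across requests, rather than instantiating it per request.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/", credential="<account-key>"
)
container = client.get_database_client("mydb").get_container_client("mycontainer")

# Leaving max_item_count unset (the equivalent of a -1 page size in older SDKs)
# lets the service choose the query page size.
for item in container.query_items(
    query="SELECT * FROM c WHERE c.status = 'active'",
    enable_cross_partition_query=True,
):
    print(item["id"])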

Each of our recommendations includes a link that brings you directly to the relevant section of our documentation, so it’s easy for you to take action.

3 ways to find your Azure Cosmos DB recommendations

1. Click on this message at the top of the Azure Cosmos DB blade:

   A pop-up message in Azure Cosmos DB saying that new notifications are available.

2. Head directly to the new “Notifications” section of your Cosmos DB accounts:

   The Notifications section showing all received Cosmos DB recommendations.

3. Or find them through Azure Advisor, which makes it easier to receive our recommendations if you don’t routinely visit the Azure portal.

Over the coming weeks and months, we’ll expand the coverage of these notifications to include topics like partitioning, indexing, network security, and more. We also plan to surface general best practices to ensure you’re making the most out of Azure Cosmos DB.

Have ideas or suggestions for more recommendations? Email us or leave feedback using the smiley on the top-right corner of the Azure portal!

Hot patching SQL Server Engine in Azure SQL Database

In the world of cloud database services, few things are more important to customers than having uninterrupted access to their data. In industries like online gaming and financial services that experience high transaction rates, even the smallest interruptions can potentially impact the end-user’s experience. Azure SQL Database is evergreen, meaning that it always has the latest version of the SQL Engine, but maintaining this evergreen state requires periodic updates to the service that can take the database offline for a second. For this reason, our engineering team is continuously working on innovative technology improvements that reduce workload interruption.

Today’s post, in collaboration with the Visual C++ Compiler team, covers how we patch SQL Server Engine without impacting workload at all.
A diagram showing the details of how hot patching works.

Figure 1 – This is what hot patching looks like under the covers. If you’re interested in the low-level details, see our technical blog post.

The challenge

The SQL Engine we are running in Azure SQL Database is the very latest version of the same engine customers run on their own servers, except we manage and update it. To update SQL Server or the underlying infrastructure (i.e., Azure Service Fabric or the operating system), we must stop the SQL Server process. If that process hosts the primary database replica, we move the replica to another machine, requiring a failover.

During a failover, the database may be offline for a second and still meet our 99.995 percent SLA. However, failover of the primary replica impacts workload because it aborts in-flight queries and transactions. We built features such as resumable index (re)build and accelerated database recovery to address these situations, but not all running operations are automatically resumable. It may be expensive to restart complex queries or transactions that were aborted due to an upgrade. So even though failovers are quick, we want to avoid them.

SQL Server and the overall Azure platform invest significant engineering effort into platform availability and reliability. In SQL Database, we maintain multiple replicas of every database, and during upgrades we ensure that hot standbys are available to take over immediately.

We’ve worked closely with the broader Azure and Service Fabric teams to minimize the number of failovers. When we first decide to fail over a database for upgrade, we apply updates to all components in the stack at the same time: OS, Service Fabric, and SQL Server. We have automatic scheduling that avoids deploying during an Azure region’s core business hours. Just before failover, we attempt to drain active transactions to avoid aborting them. We even utilize database workload patterns to perform failover at the best time for the workload.

Even with all that, we can’t get away from the fact that to update the SQL Engine to a new version, we must restart the process and fail over the database’s primary replica at least once. Or do we?

Hot patching and results

Hot patching is modifying in-memory code in a running process without restarting the process. In our case, it gives us the capability to modify C++ code in the SQL Engine without restarting sqlservr.exe. Since we don’t restart, we don’t fail over the primary replica or interrupt the workload. We don’t even need to pause SQL Server activity while we patch. Hot patching goes unnoticed by the user workload (other than the effect of the patch payload, of course!).
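The real mechanism rewrites compiled C++ inside sqlservr.exe, which we can’t reproduce in a few lines here. As a loose analogy only, this Python sketch shows the core idea: changing what a running process executes, without restarting it, while a worker keeps serving:

```python
import threading
import time

def handler():
    return "v1: original behavior"

def serve():
    # Long-running worker: keeps calling whatever 'handler' points to right now.
    for _ in range(6):
        print(handler())
        time.sleep(0.5)

worker = threading.Thread(target=serve)
worker.start()
time.sleep(1.5)

# "Hot patch": rebind the symbol in the running process; no restart, no pause.
def patched_handler():
    return "v2: hot-patched behavior"

handler = patched_handler
worker.join()
```

The worker’s output switches from v1 to v2 mid-run, with no restart and no pause, which is the property hot patching gives the SQL Engine (there, of course, via in-memory modification of compiled code rather than rebinding a Python name).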

Hot patching does not replace traditional, restarting upgrades – it complements them. Hot patching currently has limitations that make it unsuitable when there are a large number of changes, such as when a major new feature is introduced. But it is perfect for smaller, targeted changes. More than 80 percent of typical SQL bug fixes are hot patchable. Benefits of hot patching include:

  • Reduced workload disruption – No restart means no database failover and no workload impact.
  • Faster bug fixes – Previously, we weighed the urgency of a bug fix against the impact on customer workloads from deploying it. Sometimes we would deem a bug fix not important enough for worldwide rollout because of the workload impact. With hot patching, we can now deploy bug fixes worldwide right away.
  • Features available sooner – Even with the 500,000+ functional tests that we run several times per day and thorough testing of every new feature, sometimes we discover problems after a new feature has been made available to customers. In such cases, we may have to disable the feature or delay go-live until the next scheduled full upgrade. With hot patching, we can fix the problem and make the feature available sooner.

We did the first hot patch in production in 2018. Since then, we have hot patched millions of SQL Servers every month. Hot patching increases SQL Database ship velocity by 50 percent, while at the same time improving availability.

How hot patching works

For the technically interested, see our technical blog post for a detailed explanation of how hot patching works under the covers. Start reading at section three.

Closing words and next steps

With the capability in place, we are now working to improve the tooling and remove limitations so that more changes are hot patchable with quick turnaround. For now, hot patching is only available in Azure SQL Database, but someday it may also come to SQL Server. Let us know via [email protected] if you would be interested in that.

Please leave comments and questions below or contact us on the email above if you would like to see more in-depth coverage of cool technology we work on.

Azure Files premium tier gets zone redundant storage

Azure Files premium tier is now zone redundant!

We’re excited to announce the general availability of zone redundant storage (ZRS) for Azure Files premium tier. Azure Files premium tier with ZRS replication enables highly performant, highly available file services built on solid-state drives (SSDs).

Consider Azure Files premium tier with ZRS for managed file services where performance and intra-region availability are critical for the business. ZRS provides high availability by synchronously writing three replicas of your data across three different Azure availability zones, protecting your data from cluster, datacenter, or entire zone outages. Zonal redundancy enables you to read and write data even if one of the availability zones is unavailable.

With the release of ZRS for Azure Files premium tier, the premium tier now offers two durability options to meet your storage needs: zone redundant storage (ZRS) for intra-region high availability, and locally redundant storage (LRS) for lower-cost, single-region durable storage.

Getting started

You can create a ZRS premium files account through the Azure portal, Azure CLI, or Azure PowerShell.

Azure Files premium tier requires FileStorage as the account kind. To create a ZRS account in the Azure Portal, set the following properties:

An image showing the account settings.
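If you prefer scripting to the portal, here’s a hedged sketch using the azure-mgmt-storage Python SDK (track 2; begin_create and the property names below should be checked against the SDK version you use). The subscription ID, resource group, and account name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholder subscription ID and names; adjust for your environment.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mypremiumfiles",  # storage account name (must be globally unique)
    {
        "location": "westeurope",        # ZRS premium files is in West Europe today
        "kind": "FileStorage",           # required for Azure Files premium tier
        "sku": {"name": "Premium_ZRS"},  # zone redundant premium storage
    },
)
account = poller.result()
print(account.provisioning_state)
```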

Currently, the ZRS option for Azure Files premium tier is available in West Europe, and we will be gradually expanding regional coverage. Stay up to date on premium tier ZRS region availability through the Azure documentation.

Migrating from an LRS premium files account to a ZRS premium files account requires manually copying or moving data from the existing LRS account to a new ZRS account. Live account migration on request is not yet supported. Please check the migration documentation for the latest information.

Refer to the pricing page for the latest pricing information.

To learn more about premium tier, visit Azure Files premium tier documentation. Give it a try and share your feedback on the Azure Storage forum or email us at [email protected].

Happy sharing!

Plan migration of your Hyper-V servers using Azure Migrate Server Assessment

Azure Migrate is focused on streamlining your migration journey to Azure. We recently announced the evolution of Azure Migrate, which provides a streamlined, comprehensive portfolio of Microsoft and partner tools to meet migration needs, all in one place. An important capability included in this release is upgrades to Server Assessment for at-scale assessments of VMware and Hyper-V virtual machines (VMs).

This is the first in a series of blogs about the new capabilities in Azure Migrate. In this post, I will talk about capabilities in Server Assessment that help you plan for migration of Hyper-V servers. This capability is now generally available as part of the Server Assessment feature of Azure Migrate. After assessing your servers for migration, you can migrate your servers using Microsoft’s Server Migration solution available on Azure Migrate. You can get started right away by creating an Azure Migrate project.

Server Assessment previously supported assessment of VMware VMs for migration to Azure. We’ve now added Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis for Hyper-V VMs. You can now plan at scale, assessing up to 35,000 Hyper-V servers in one Azure Migrate project. If you use VMware as well, you can discover and assess both Hyper-V and VMware servers in the same Azure Migrate project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.

An image of the Overview page of an Azure Migrate assessment.

Azure suitability analysis

The assessment determines whether a given server can be migrated as-is to Azure, checking Azure support for each discovered server. If a server is not ready to be migrated, remediation guidance is provided automatically. You can customize your assessment and regenerate the assessment reports: apply subscription offers and reserved instance pricing to the cost estimates, generate a cost estimate for a VM series of your choice, and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment reports provide detailed cost estimates. You can optimize for cost using performance-based rightsizing assessments: the performance data of your on-premises servers is taken into consideration to recommend an appropriate Azure VM and disk SKU. This helps you right-size on cost as you migrate servers that might be over-provisioned in your on-premises data center.

An image of the Azure readiness section of an Azure Migrate assessment.

Dependency analysis

Once you have established cost estimates and migration readiness, you can plan your migration phases. Use the dependency analysis feature to understand the dependencies between your applications. This helps you see which workloads are interdependent and need to be migrated together, ensuring you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format, and use this feature to divide your servers into groups and refine the groups for migration.

Assess your Hyper-V servers in four simple steps:

  • Create an Azure Migrate project and add the Server Assessment solution to the project.
  • Set up the Azure Migrate appliance and start discovery of your Hyper-V virtual machines. To set up discovery, the Hyper-V host or cluster names are required. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts. You can set up more than one appliance if required.
  • Once you have successfully set up discovery, create assessments and review the assessment reports.
  • Use the application dependency analysis features to create and refine server groups to phase your migration.

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Azure Government, Canada, Europe, India, Japan, United Kingdom, and United States geographies.

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You will be able to automatically carry over the assessment recommendations from Server Assessment into Server Migration. You can read more in our documentation “Migrate Hyper-V VMs to Azure.”

In the coming months, we will add assessment capabilities for physical servers. You will also be able to run a quick assessment by adding inventory information using a CSV file. Stay tuned!

In the upcoming blogs, we will talk about tools for scale assessments, scale migrations, and the partner integrations available in Azure Migrate.

Resources to get started

Azure Archive Storage expanded capabilities: faster, simpler, better

Since launching Azure Archive Storage, we have seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data, such as application backups, healthcare records, and autonomous driving recordings, that might previously have been deleted can instead be stored in Azure Storage’s Archive tier in an offline state, then rehydrated to an online tier when needed. Earlier this month, we made Azure Archive Storage even more affordable by reducing prices by up to 50 percent in some regions, as part of our commitment to provide the most cost-effective data storage offering.

We’ve gathered your feedback regarding Azure Archive Storage, and today, we’re happy to share three archive improvements in public preview that make our service even better.

1. Priority retrieval from Azure Archive

To read data stored in Azure Archive Storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and takes a matter of hours to complete. Today we’re sharing the public preview release of priority retrieval from archive, allowing much faster access to offline data. Priority retrieval lets you flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high-priority action. By paying a little bit more for the priority rehydration operation, your archive retrieval request is placed in front of other requests, and your offline data is expected to be returned in less than one hour.

We recommend priority retrieval for emergency requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals, which complete in less than 15 hours. But on rare occasions, a retrieval time of an hour or less is required. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see Blob Storage Rehydration.

The archive retrieval options now provided under the optional parameter are:

  • Standard rehydrate-priority is the new name for what Archive has provided over the past two years and is the default option for archive SetBlobTier and CopyBlob requests, with retrievals taking up to 15 hours.
  • High rehydrate-priority fulfills the need for urgent data access from archive, with retrievals for blobs under 10 GB typically taking less than one hour.

Regional priority retrieval demand at the time of request can affect the speed at which your data rehydration is completed. In most scenarios, a high rehydrate-priority request may return your Archive data in under one hour. In the rare scenario where archive receives an exceptionally large amount of concurrent high rehydrate-priority requests, your request will still be prioritized over standard rehydrate-priority but may take one to five hours to return your archive data. In the extremely rare case that any high rehydrate-priority requests take over five hours to return archive blobs under a few GB, you will not be charged the priority retrieval rates.
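For example, with the azure-storage-blob Python SDK (v12), a high-priority rehydration looks roughly like the sketch below; the connection string, container, and blob names are placeholders, and the rehydrate_priority keyword reflects our reading of the SDK docs:

```python
from azure.storage.blob import BlobClient

# Placeholder connection string and names; adjust for your environment.
blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="archive.bak"
)

# Rehydrate an archived blob to the hot tier as a high-priority request.
blob.set_standard_blob_tier("Hot", rehydrate_priority="High")
```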

2. Upload blob direct to access tier of choice (hot, cool, or archive)

Blob-level tiering for general-purpose v2 and blob storage accounts allows you to easily store blobs in the hot, cool, or archive access tiers, all within the same container. Previously, when you uploaded an object to your container, it would inherit the access tier of your account, and the blob’s access tier would show as hot (inferred) or cool (inferred) depending on your account configuration settings. As data usage patterns changed, you would change the access tier of the blob manually with the SetBlobTier API or automate the process with blob lifecycle management rules.

Today we’re sharing the public preview release of Upload Blob Direct to Access tier, which allows you to upload your blob using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This new capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. For more information, please see Blob Storage Access Tiers.
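Using the azure-storage-blob Python SDK (v12), a direct-to-archive upload might look like this sketch; the names are placeholders, and the standard_blob_tier keyword is the SDK-level counterpart of the x-ms-access-tier header:

```python
from azure.storage.blob import BlobClient

# Placeholder connection string and names; adjust for your environment.
blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="logs-2020-02.tar.gz"
)

# Upload straight into the archive tier, regardless of the account default.
with open("logs-2020-02.tar.gz", "rb") as data:
    blob.upload_blob(data, standard_blob_tier="Archive")
```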

3. CopyBlob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. This holds especially true for data in Archive that needs to be read but still kept in Archive. The public preview release of CopyBlob enhanced capabilities builds upon our existing CopyBlob API with added support for the archive access tier, priority retrieval from archive, and direct to access tier of choice.

The CopyBlob API is now able to support the archive access tier, allowing you to copy data into and out of the archive access tier within the same storage account. With our access tier of choice enhancement, you can now set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you can also specify the x-ms-rehydrate-priority of how quickly you want the copy created in the destination hot or cool tier. Please see Blob Storage Rehydration and the following table for information on the new CopyBlob access tier capabilities.

 

|                          | Hot tier source | Cool tier source | Archive tier source                                  |
|--------------------------|-----------------|------------------|------------------------------------------------------|
| Hot tier destination     | Supported       | Supported        | Supported within the same account; pending rehydrate |
| Cool tier destination    | Supported       | Supported        | Supported within the same account; pending rehydrate |
| Archive tier destination | Supported       | Supported        | Unsupported                                          |
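Putting the pieces together, here’s a sketch of copying a blob out of archive with the azure-storage-blob Python SDK (v12); per the table above, an archived source must be in the same storage account, and the keyword names reflect our reading of the SDK docs:

```python
from azure.storage.blob import BlobClient

# Archived source blob; must live in the same storage account as the destination.
source_url = "https://<account>.blob.core.windows.net/backups/archive.bak"

dest = BlobClient.from_connection_string(
    "<connection-string>", container_name="scratch", blob_name="archive-copy.bak"
)

# Copy the archived blob into the hot tier with a high rehydrate priority,
# leaving the original untouched in archive.
dest.start_copy_from_url(
    source_url,
    standard_blob_tier="Hot",
    rehydrate_priority="High",
)
```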

Getting Started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and CopyBlob enhancements) are supported by the most recent releases of the Azure portal, .NET client library, Java client library, and Python client library. As always, you can also use the Storage Services REST API directly (version 2019-02-02 and greater). In general, we recommend using the latest version regardless of whether you are using these new features.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and look forward to hearing your feedback about these features via email at [email protected]. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post on the Azure Storage feedback forum.

Thanks, from the entire Azure Storage Team!

Six ways we’re making Azure reservations even more powerful

New Azure reservations features can help you save more on your Azure costs, easily manage reservations, and create internal reports. Based on your feedback, we’ve added the following features to reservations:

 

Azure Databricks pre-purchase plans

You can now save up to 37 percent on your Azure Databricks costs when you pre-purchase Azure Databricks commit units (DBCU) for one or three years. Any Azure Databricks usage is deducted from the pre-purchased DBCUs automatically. You can use the pre-purchased DBCUs at any time during the purchase term.

Databricks SKU selection

See our documentation “Optimize Azure Databricks costs with a pre-purchase” to learn more, or purchase an Azure Databricks plan in the Azure portal.

 

App Service Isolated stamp fee reservations

Save up to 40 percent on your App Service Isolated stamp fee costs with App Service reserved capacity. After you purchase a reservation, the Isolated stamp fee usage that matches the reservation is no longer charged at on-demand rates. App Service workers are charged separately and don’t get the reservation discount.

App Service Reserved Capacity

Visit our documentation “Prepay for Azure App Service Isolated Stamp Fee with reserved capacity” to learn more, or purchase a reservation in the Azure portal.

 

Automatically renew your reservations

Now you can set up your reservations to renew automatically, ensuring that you keep getting the reservation discounts without any gaps. You can opt in to automatic renewal at any time during the term of the reservation and opt out at any time. You can also update the renewal quantity to better align with any changes in your usage pattern. To set up automatic renewal, just go to any reservation that you’ve already purchased and click the Renewal tab.

Renewal setup
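Renewal can also be toggled programmatically. Below is a hedged sketch against the Reservations REST API; the api-version, endpoint shape, and renew property are our reading of the documentation and should be verified before use:

```python
import requests

# Placeholder IDs and token; the api-version shown is an assumption to verify.
order_id, reservation_id = "<reservation-order-id>", "<reservation-id>"
url = (
    "https://management.azure.com/providers/Microsoft.Capacity/"
    f"reservationOrders/{order_id}/reservations/{reservation_id}"
    "?api-version=2019-04-01"
)

resp = requests.patch(
    url,
    headers={"Authorization": "Bearer <token>"},
    json={"properties": {"renew": True}},  # opt this reservation in to auto-renewal
)
resp.raise_for_status()
```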

 

Scope reservation to resource group

You can now scope reservations to a resource group. This feature is helpful in scenarios where the same subscription has deployments from multiple cost centers, represented by their respective resource groups, and the reservation is purchased for a particular cost center. It lets you narrow the reservation application to a resource group, making internal chargeback easier. You can scope a reservation to a resource group at the time of purchase or update the scope after purchase. If you delete or move a resource group, the reservation will have to be rescoped manually.

Resource group scope

Learn more in our documentation “Scope reservations.”

 

Enhanced usage data to help with charge back, savings, and utilization

Organizations rely on their Enterprise Agreement (EA) usage data to reconcile invoices, track usage, and charge back internally. We recently added more details to the EA usage data to make your reservation reporting easier. With these changes, you can easily perform the following tasks:

  • Get reservation purchase and refund charges
  • Know which resource consumed how many hours of a reservation and charge back data for the usage
  • Know how many hours of a reservation were not used
  • Amortize reservation costs
  • Calculate reservation savings

The new data files are available only through the Azure portal, not through the EA portal. Besides the raw data, you can now also see reservations in cost analysis.

You can visit our documentation “Get Enterprise Agreement reservation costs and usage” to learn more.

 

Purchase using API

You can now purchase reservations using REST APIs. The APIs below will help you get the SKUs, calculate the cost, and then make the purchase: