Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we’re introducing the first video showing long-time App Engine developers how to migrate from the App Engine ndb client library that connects to Datastore. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and give them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding “migration module” video, we transitioned that app from App Engine’s original webapp2 framework to Flask, a popular framework in the Python community. Today’s Module 2 content picks up where Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine’s Datastore matured to become its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library, designed so that App Engine ndb users can preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second-generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement for Datastore access, much as you would for opening files. However, aside from moving code inside with blocks, no other changes are required of the original App Engine ndb app code that accesses Datastore. Of course, “YMMV” (your mileage may vary) depending on the complexity of your code, but the team’s goal is to provide as seamless a transition as possible and to preserve “ndb”-style access.


The “diffs” between the App Engine ndb and Cloud NDB versions of the sample app

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration “STARTs” with the Module 1 code completed in the previous codelab (and video). You can use your own solution or grab ours from the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this “FINISH” code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START or compare your solution with our FINISH. Bonus content on migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We also hope to one day cover other legacy runtimes like Java 8, so stay tuned! Developers should also check out the official Cloud NDB migration guide, which provides more migration details, including key differences between the two client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can’t wait, try out its codelab found in the table at the repo above. Migrations aren’t always easy; we hope this content helps you modernize your apps and shows we’re focused on helping existing users as much as new ones.

Migrating from App Engine webapp2 to Flask

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


Migrating web framework

The Google Cloud team recently introduced a series of codelabs (free, self-paced, hands-on tutorials) and corresponding videos designed to help users on one of our serverless compute platforms modernize their apps, with an initial focus on our earliest users running their apps on Google App Engine. We kick off this content by showing users how to migrate from App Engine’s webapp2 web framework to Flask, a popular framework in the Python community.

While users have always been able to use other frameworks with App Engine, webapp2 comes bundled with App Engine, making it the default choice for many developers. One new requirement of App Engine’s next-generation platform (which launched in 2018) is that web frameworks must do their own routing, which, unfortunately, means that webapp2 is no longer supported. The good news is that, as a result, modern App Engine is more flexible, lets users develop in a more idiomatic fashion, and makes their apps more portable.

For example, while webapp2 apps can run on App Engine, Flask apps can run on App Engine, your servers, your data centers, or even on other clouds! Furthermore, Flask has more users, more published resources, and is better supported. If Flask isn’t right for you, you can select from other WSGI-compliant frameworks such as Django, Pyramid, and others.
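To make that portability point concrete, here is a minimal, self-contained Flask app (the names app and root are illustrative, not from the sample app); the same file runs on App Engine, a local development server, or any other WSGI host:

```python
# Minimal illustrative Flask app; names are hypothetical.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def root():
    'single GET route, analogous to a webapp2 handler'
    return 'Hello from Flask!'

# Run locally with `flask run` (or app.run()); the same `app` object
# can be served by any WSGI server, on App Engine or elsewhere.
```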

Video and codelab content

In this “Module 1” episode of Serverless Migration Station (part of the Serverless Expeditions series), Google engineer Martin Omander and I explore this migration and walk developers through it step by step.

In the previous video, we introduced developers to the baseline Python 2 App Engine NDB webapp2 sample app that we’re taking through each of the migrations. In the video above, users see that the majority of the changes are in the main application handler, MainHandler:


The “diffs” between the webapp2 and Flask versions of the sample app

Upon (re)deploying the app, users should see no visible changes to the output from the original version:


VisitMe application sample output

Next steps

Today’s video picks up from where we left off: the Python 2 baseline app in its Module 0 repo folder. We call this the “START”. By the time the migration has completed, the resulting source code, called “FINISH”, can be found in the Module 1 repo folder. If you mess up partway through, you can rewind to the START, or compare your solution with ours, FINISH. We also hope to one day provide a Python 3 version as well as cover other legacy runtimes like Java 8, PHP 5, and Go 1.11 and earlier, so stay tuned!

All of the migration learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. The next video (Module 2) will cover migrating from App Engine’s ndb library for Datastore to Cloud NDB. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Introducing “Serverless Migration Station” Learning Modules

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


Helping users modernize their serverless apps

Earlier this year, the Google Cloud team introduced a series of codelabs (free, online, self-paced, hands-on tutorials) designed for technical practitioners modernizing their serverless applications. Today, we’re excited to announce companion videos, forming a set of “learning modules” made up of these videos and their corresponding codelab tutorials. Modernizing your applications allows you to access continuing product innovation and experience a more open Google Cloud. The initial content is designed with App Engine developers in mind, our earliest users, to help you take advantage of the latest features in Google Cloud. Here are some of the key migrations and why they benefit you:

  • Migrate to Cloud NDB: App Engine’s legacy ndb library used to access Datastore is tied to Python 2 (which has been sunset by its community). Cloud NDB gives developers the same NDB-style Datastore access but is Python 2-3 compatible and allows Datastore to be used outside of App Engine.
  • Migrate to Cloud Run: There has been a continuing shift towards containerization, an app modernization process making apps more portable and deployments more easily reproducible. If you appreciate App Engine’s easy deployment and autoscaling capabilities, you can get the same by containerizing your App Engine apps for Cloud Run.
  • Migrate to Cloud Tasks: while the legacy App Engine taskqueue service is still available, new features and continuing innovation are going into Cloud Tasks, its standalone equivalent letting users create and execute App Engine and non-App Engine tasks.

The “Serverless Migration Station” videos are part of the long-running Serverless Expeditions series you may already be familiar with. In each video, Google engineer Martin Omander and I explore a variety of different modernization techniques. Viewers get an overview of the task at hand, then a deeper-dive screencast takes a closer look at the code and configuration files and, most importantly, walks developers through the migration steps necessary to transform the same sample app across each migration.

Sample app

The baseline sample app is a simple Python 2 App Engine NDB and webapp2 application. It registers every web page visit (saving the visiting IP address and browser/client type) and displays the most recent visits. The entire application is shown below, featuring Visit as the data Kind, the store_visit() and fetch_visits() functions, and the main application handler, MainHandler.


import os
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.webapp import template

class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit entity in Datastore'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

def fetch_visits(limit):
    'get most recent visits'
    return (v.to_dict() for v in Visit.query().order(
            -Visit.timestamp).fetch(limit))

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(10)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

app = webapp2.WSGIApplication([
    ('/', MainHandler),
], debug=True)

Baseline sample application code

Upon deploying this application to App Engine, users will get output similar to the following:


VisitMe application sample output

This application is the subject of today’s launch video, and the main.py file above along with other application and configuration files can be found in the Module 0 repo folder.

Next steps

Each migration learning module covers one modernization technique. A video outlines the migration while the codelab leads developers through it. Developers will always get a starting codebase (“START”) and learn how to do a specific migration, resulting in a completed codebase (“FINISH”). Developers can hit the reset button (back to START) if something goes wrong or compare their solutions to ours (FINISH). The hands-on experience helps users build muscle-memory for when they’re ready to do their own migrations.

All of the migration learning modules, corresponding Serverless Migration Station videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. While there’s an initial focus on Python 2 and App Engine, you’ll also find content for Python 3 users as well as non-App Engine users. We’re looking into similar content for other legacy languages as well so stay tuned. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Modernizing your Google App Engine applications

Posted by Wesley Chun, Developer Advocate, Google Cloud


Next generation service

Since its initial launch in 2008 as the first product from Google Cloud, Google App Engine, our fully-managed serverless app-hosting platform, has been used by many developers worldwide. Since then, the product team has continued to innovate on the platform: introducing new services, extending quotas, supporting new languages, and adding a Flexible environment to support more runtimes, including the ability to serve containerized applications.

With many original App Engine services maturing to become their own standalone Cloud products, along with users’ desire for a more open cloud, the next-generation App Engine launched in 2018 without those bundled proprietary services, but with long-requested language support such as Python 3 and PHP 7, as well as a new Node.js 8 runtime. As a result, users have more options, and their apps are more portable.

With Python 2, Java 8, PHP 5, and Go 1.11 sunset by their respective communities, Google Cloud has reassured users by committing to continued long-term support of these legacy runtimes, including maintaining the Python 2 runtime. So while there is no requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.

Google Cloud has created a set of migration guides for users modernizing from Python 2 to 3, Java 8 to 11, PHP 5 to 7, and Go 1.11 to 1.12+ as well as a summary of what is available in both first and second generation runtimes. However, moving from bundled to unbundled services may not be intuitive to developers, so today we’re introducing additional resources to help users in this endeavor: App Engine “migration modules” with hands-on “codelab” tutorials and code examples, starting with Python.

Migration modules

Each module represents a single modernization technique. Some are strongly recommended, others less so, and, at the other end of the spectrum, some are quite optional. We will guide you on which ones are more important. Similarly, there’s no strict order in which to tackle the modules, since that depends on which bundled services your apps use. Yes, some modules must be completed before others, but again, you’ll be guided as to “what’s next.”

More specifically, modules focus on the code changes that need to be implemented, not changes in new programming language releases as those are not within the domain of Google products. The purpose of these modules is to help reduce the friction developers may encounter when adapting their apps for the next-generation platform.

Central to the migration modules are the codelabs: free, online, self-paced, hands-on tutorials. The purpose of Google codelabs is to teach developers one new skill while giving them hands-on experience, and there are codelabs just for Google Cloud users. The migration codelabs are no exception, teaching developers one specific migration technique.

Developers following the tutorials will make the appropriate updates on a sample app, giving them the “muscle memory” needed to do the same (or similar) with their applications. Each codelab begins with an initial baseline app (“START”), leads users through the necessary steps, then concludes with an ending code repo (“FINISH”) they can compare against their completed effort. Here are some of the initial modules being announced today:

  • Web framework migration from webapp2 to Flask
  • Updating from App Engine ndb to Google Cloud NDB client libraries for Datastore access
  • Upgrading from the Google Cloud NDB to Cloud Datastore client libraries
  • Moving from App Engine taskqueue to Google Cloud Tasks
  • Containerizing App Engine applications to execute on Cloud Run

Examples

What should you expect from the migration codelabs? Let’s preview a pair, starting with the web framework: below is the main driver for a simple webapp2-based “guestbook” app registering website visits as Datastore entities:

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(LIMIT)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

A “visit” consists of a request’s IP address and user agent. After registering the visit, the app queries for the latest LIMIT visits to display to the end user via the app’s HTML template. The tutorial leads developers through a migration to Flask, a web framework with broader support in the Python community. A Flask equivalent app uses decorated functions rather than webapp2’s object model:

@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(LIMIT)
    return render_template('index.html', visits=visits)

The framework codelab walks users through this and other required code changes in its sample app. Since Flask is more broadly used, this makes your apps more portable.

The second example pertains to Datastore access. Whether you’re using App Engine’s ndb or the Cloud NDB client libraries, the code to query the Datastore for the most recent limit visits may look like this:

def fetch_visits(limit):
    'get most recent visits'
    query = Visit.query()
    visits = query.order(-Visit.timestamp).fetch(limit)
    return (v.to_dict() for v in visits)

If you decide to switch to the Cloud Datastore client library, that code would be converted to:

def fetch_visits(limit):
    'get most recent visits'
    query = DS_CLIENT.query(kind='Visit')
    query.order = ['-timestamp']
    return query.fetch(limit=limit)

The query styles are similar but different. While the sample apps are just that, samples, giving you this kind of hands-on experience is useful when planning your own application upgrades. The goal of the migration modules is to help you separate moving to the next-generation service and making programming language updates so as to avoid doing both sets of changes simultaneously.

As mentioned above, some migrations are more optional than others. For example, moving away from the App Engine bundled ndb library to Cloud NDB is strongly recommended, but because Cloud NDB is available for both Python 2 and 3, it’s not necessary for users to migrate further to Cloud Datastore nor Cloud Firestore unless they have specific reasons to do so. Moving to unbundled services is the primary step to giving users more flexibility, choices, and ultimately, makes their apps more portable.

Next steps

For those who are interested in modernizing their apps, a complete table describing each module and links to corresponding codelabs and expected START and FINISH code samples can be found in the migration module repository. We are also working on video content based on these migration modules as well as producing similar content for Java, so stay tuned.

In addition to the migration modules, our team has also set up a separate repo to support community-sourced migration samples. We hope you find all these resources helpful in your quest to modernize your App Engine apps!

SQL Server runs best on Azure. Here’s why.

SQL Server customers migrating their databases to the cloud have multiple choices for their cloud destination. To thoroughly assess which cloud is best for SQL Server workloads, two key factors to consider are:

  1. Innovations that the cloud provider can uniquely provide.
  2. Independent benchmark results.

What innovations can the cloud provider bring to your SQL Server workloads?

As you consider your options for running SQL Server in the cloud, it’s important to understand what the cloud provider can offer both today and tomorrow. Can they provide you with the capabilities to maximize the performance of your modern applications? Can they automatically protect you against vulnerabilities and ensure availability for your mission-critical workloads?

SQL Server customers benefit from our continued expertise developed over the past 25 years, delivering performance, security, and innovation. This includes deploying SQL Server on Azure, where we provide customers with innovations that aren’t available anywhere else. One great example of this is Azure BlobCache, which provides fast, free reads for customers. This feature alone provides tremendous value to our customers that is simply unmatched in the market today.

Additionally, we offer preconfigured, built-in security and management capabilities that automate tasks like patching, high availability, and backups. Azure also offers advanced data security that enables both vulnerability assessments and advanced threat protection. Customers benefit from all of these capabilities both when using our Azure Marketplace images and when self-installing SQL Server on Azure virtual machines.

Only Azure offers these innovations.

What are their performance results on independent, industry-standard benchmarks?

Benchmarks can often be useful tools for assessing your cloud options. It’s important, though, to ask if those benchmarks were conducted by independent third parties and whether they used today’s industry-standard methods.


The images above show performance and price-performance comparisons from the February 2020 GigaOm performance benchmark blog post

In December, an independent study by GigaOm compared SQL Server on Azure Virtual Machines to AWS EC2 using a field test derived from the industry standard TPC-E benchmark. GigaOm found Azure was up to 3.4x faster and 87 percent cheaper than AWS. Today, we are pleased to announce that in GigaOm’s second benchmark analysis, using the latest virtual machine comparisons and disk striping, Azure was up to 3.6x faster and 84 percent cheaper than AWS.1 

These results continue to demonstrate that SQL Server runs best on Azure.

Get started today

Learn more about how you can start taking advantage of these benefits today with SQL Server on Azure.

 


1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in February 2020. The study compared price performance between SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in Azure E32as_v4 instance type with P30 Premium SSD Disks and the SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in AWS EC2 r5a.8xlarge instance type with General Purpose (gp2) volumes. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of January 2020. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.

Building Xbox game streaming with Site Reliability best practices

Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams at Microsoft and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.

This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (Project xCloud). In order to deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked: improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we’ll review some of the key learnings the team collected along the way. To explore the full story, see the journey of the xREO team.

Consistent gameplay requirements and the need to collaborate

A consistent experience is crucial to a successful game streaming session. A game streamed from the cloud has to feel as if it were running on a nearby console. This means creating a globally distributed cloud solution that runs in many data centers, close to end users. Azure’s global infrastructure makes this possible, but operating a system that runs on top of so many Azure regions is a serious challenge.

The Xbox developers who started architecting and building this technology understood that they could not just build the system and “throw it over the wall” to operations. Both teams had to come together and collaborate through the entire application lifecycle so that the system could be designed from the start with consideration for how it would be operated in a production environment.


Architecting a cloud solution with operations in mind

In many large organizations, it is common to see development and operations teams working in silos. Developers don’t always consider operations when planning and building a system, while operations teams are not empowered to touch code even though they deploy and operate it in production. With an SRE approach, system reliability is baked into the entire application lifecycle, and the team that operates the system in production is a valued contributor in the planning phase. Involving the xREO team in the design phase enabled a collaborative environment in which both teams made joint technology choices and architected a system that could meet the requirements needed to scale.

Leveraging containers to clearly define ownership

One of the first technological decisions the development and xREO teams made together was to implement a microservices architecture using container technologies. This allowed the development teams to containerize the .NET Core microservices they would own and decouple them from the cloud infrastructure running the containers, which would be owned by the xREO team.

Another technological decision both teams made early on was to use Kubernetes as the underlying container orchestration platform. This allowed the xREO team to leverage Azure Kubernetes Service (AKS), a managed Kubernetes cloud platform that simplifies the deployment of Kubernetes clusters, removing much of the operational complexity the team would otherwise face running multiple clusters across several Azure regions. These joint choices made ownership clear: the developers are responsible for everything inside the containers, and the xREO team is responsible for the AKS clusters and the other Azure services that make up the cloud infrastructure hosting these containers. Each team owns the deployment, monitoring, and operation of its respective piece in production.
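As an illustrative sketch of that ownership boundary (the resource names below are hypothetical, not from the actual xCloud deployment), a standard Kubernetes Deployment manifest marks the line: the container image is the development team's artifact, while the cluster it is scheduled onto belongs to xREO:

```yaml
# Hypothetical manifest illustrating the ownership split.
apiVersion: apps/v1
kind: Deployment            # runs on an AKS cluster owned by xREO
metadata:
  name: streaming-session   # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: streaming-session
  template:
    metadata:
      labels:
        app: streaming-session
    spec:
      containers:
      - name: session-svc    # dev team owns everything inside the image
        image: example.azurecr.io/session-svc:1.0   # illustrative image
```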

This kind of approach creates clear accountability and allows for easier incident management in production, something that can be very challenging in a monolithic architecture where infrastructure and application logic have code dependencies and are hard to untangle when things go sideways.


Scaling through infrastructure automation

Another best practice the xREO team invested in was infrastructure automation. Deploying multiple cloud services manually on each Azure region was not scalable and would take too much time. Using a practice known as “infrastructure as code” (IaC) the team used Azure Resource Manager templates to create declarative definitions of cloud environments that allow deployments to multiple Azure regions with minimal effort.

With infrastructure managed as code, it can also be deployed using continuous integration and continuous delivery (CI/CD), bringing further automation to the process of deploying new Azure resources to existing data centers, updating infrastructure definitions, or bringing new Azure regions online when needed. IaC and CI/CD together allowed the team to remain lean, avoid repetitive, mundane work, and remove most of the risk of human error that comes with manual steps. Instead of spending time on manual work and checklists, the team can focus on further improving the platform and its resilience.
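To show what such a declarative definition looks like (this is a generic minimal example, not one of the team's actual templates; names and values are hypothetical), an Azure Resource Manager template describes a resource as data, so a CI/CD pipeline can deploy the same file to any region:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "examplestoracct",
      "location": "[parameters('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Because the location is a parameter, deploying to a new region is a matter of re-running the same template with a different value rather than repeating manual portal steps.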

Site Reliability Engineering in action 

The journey of the xREO team started with the need to bring the best customer experience to gamers. It is a great example of how teams that want to delight customers with new experiences through cutting-edge innovation must evolve the way they design, build, and operate software. Shifting its approach to operations and collaborating more closely with the development teams was the true transformation the xREO team underwent.

With this new mindset in place, the team is now well positioned to continue building more resilience and further scale the system, and, in doing so, deliver the promise of cloud game streaming to every gamer.


Achieve operational excellence in the cloud with Azure Advisor

Many customers have questions when it comes to managing cloud operations. How can I implement real-time cloud governance at scale? What’s the best way to monitor my cloud workloads? How can I get help when I need it?

Azure offers a great deal of guidance when it comes to optimizing your cloud operations. At the organizational level, the Microsoft Cloud Adoption Framework for Azure can help you design and implement your approach to management and governance in the cloud. At the cloud resource level, Azure Advisor provides personalized recommendations to help you optimize your Azure workloads for a variety of objectives—including cost savings, security, performance, and availability—based on your usage and configurations.

Recently, Advisor introduced a new recommendation category—operational excellence—to help you follow best practices for process and workflow efficiency, resource manageability, and deployment.

Introducing a new Azure Advisor recommendation category: operational excellence

Azure Advisor now offers a new category of recommendations—operational excellence—to help you optimize your cloud process and workflow efficiency, resource manageability, and deployment practices. You can get these recommendations from Advisor in the operational excellence tab of the Advisor dashboard. They’re also available via Advisor’s CLI and API.

The operational excellence category is launching with nine recommendations, with more on the way. Examples include creating Azure Service Health alerts to be notified when Azure service issues affect you; repairing invalid log alert rules; and following best practices with Azure Policy, such as tag management, geo-compliance requirements, and specifying permitted virtual machine (VM) SKUs for deployment. Together, these recommendations will help you optimize your cloud operations practices.

New operational excellence recommendations

Here’s a quick round-up of the new operational excellence recommendations in Advisor at launch:

  • Create Azure Service Health alerts to be notified when Azure service issues affect you.
  • Design your storage accounts to prevent hitting the maximum subscription limit.
  • Ensure you have access to Azure cloud experts when you need it.
  • Repair invalid log alert rules.
  • Follow best practices using Azure Policy, including tag management, geo-compliance requirements, and VM audits for managed disks.

For more detailed information on Advisor’s operational excellence recommendations, refer to our documentation. Be sure to check back regularly, as we’re constantly adding new recommendations.

Review your operational excellence recommendations today

Visit Advisor in the Azure portal to start optimizing your cloud workloads for operational excellence. For more in-depth guidance, visit our documentation. If you have a suggestion for Advisor, let us know by submitting an idea.

Faster and cheaper: SQL on Azure continues to outshine AWS

Over a million on-premises SQL Server databases have moved to Azure, representing a massive shift in where customers are collecting, storing, and analyzing their data.

Modernizing your databases provides the opportunity to transform your data architecture. SQL Server on Azure Virtual Machines allows you to maintain control over your database and operating system while still benefiting from cloud flexibility and scale. For some, this represents a step in the journey to a fully-managed database, while others choose this deployment option for compatibility with on-premises workloads such as SQL Server Reporting Services.

Whatever the reason, migrating SQL workloads to Azure Virtual Machines is a popular option. Azure customers benefit from our unique built-in security and manageability capabilities, which automate tasks like patching and backups. In addition to providing these unparalleled innovations, it is important to provide customers with the best price-performance possible. Once again, SQL Server on Azure Virtual Machines comes out on top.

Ready for the next stage of your modernization journey? Azure SQL Database is a fully managed service that leads in price-performance for your mission-critical workloads, providing limitless scale, built-in artificial intelligence, and industry-leading availability guarantees.

SQL Server on Azure leads in price-performance

GigaOm, an independent research firm, recently published a study comparing throughput performance between SQL Server on Azure Virtual Machines and SQL Server on AWS EC2. Azure emerged as the clear leader across both Windows and Linux for mission-critical workloads, up to 3.4 times faster and up to 87 percent less expensive than AWS EC2.1

GigaOm Report

The images above are performance and price-performance comparisons from the GigaOm report. The performance metric is throughput (transactions per second, tps); higher is better. The price-performance metric is three-year pricing divided by throughput (tps); lower is better.
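To make the metric concrete, here is a small sketch of how GigaOm's price-performance value is computed. The numbers below are made up for illustration and are not figures from the report:

```python
def price_performance(three_year_cost_usd: float, throughput_tps: float) -> float:
    """GigaOm's metric: three-year cost divided by throughput. Lower is better."""
    return three_year_cost_usd / throughput_tps

# Hypothetical illustration: two platforms with the same three-year cost
# but different measured throughput.
platform_a = price_performance(three_year_cost_usd=500_000, throughput_tps=2_000)
platform_b = price_performance(three_year_cost_usd=500_000, throughput_tps=1_000)

print(platform_a)  # 250.0 dollars per tps
print(platform_b)  # 500.0 dollars per tps: worse, despite identical spend
```

The division explains why a faster platform can win on price-performance even at the same (or higher) list price.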

With Azure Ultra Disk, GigaOm was able to achieve 80,000 input/output operations per second (IOPS) on a single disk, maxing out the virtual machine’s throughput limit and well exceeding the capabilities of AWS provisioned IOPS.2

A key reason why Azure price-performance is superior to AWS is Azure BlobCache, which provides free reads. Given that most online transaction processing (OLTP) workloads today come with a ten-to-one read-to-write ratio, this provides customers with significant savings.
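A back-of-the-envelope sketch shows why free reads matter so much at that ratio. This simplified model assumes every read is served free from cache, which is an idealization rather than a guarantee:

```python
def billed_io_fraction(read_write_ratio: float) -> float:
    """Fraction of I/O operations billed when reads are free.

    With a read:write ratio of r, out of every (r + 1) operations
    only the 1 write is billed.
    """
    return 1 / (read_write_ratio + 1)

# For the ten-to-one OLTP ratio cited above, only ~9% of I/O
# operations would incur a charge under this simplified model.
print(round(billed_io_fraction(10), 3))  # 0.091
```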

Unmatched innovation from the team that brought SQL Server to the world

With a proven track record over 25 years, the engineering team behind SQL Server continues to drive security and innovation to meet our customers’ changing needs. Whether executing on-premises, in the cloud, or on the edge, the result is the most comprehensive, consistent, and secure solution for your data.

Azure SQL Virtual Machines offer unique built-in security and manageability, including automatic security patching, automated high availability, and database recovery to a specific point in time. Azure’s unique security capabilities include advanced data security for SQL Server on Azure Virtual Machines, which enables both vulnerability assessments and advanced threat protection. Customers self-installing SQL Server on virtual machines in the cloud can now register with our resource provider to enable this same functionality.

Get started with SQL in Azure today

Migrate from SQL Server on-premises to SQL Server 2019 in Azure Virtual Machines today. Get started with preconfigured Azure SQL Virtual Machine images on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, and Windows in minutes. Take advantage of the Azure Hybrid Benefit to reuse your existing on-premises Windows server and SQL Server licenses in Azure for significant savings.

When you add it up, SQL databases are simply best on Azure. Learn more about why SQL Server is best on Azure, and use $200 in Azure credits with a free account3 or Azure Dev or Test credits4 for additional cost savings.

 


1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 4x P30 1TB Storage Pool data (Read-Only Cache) + 1x P20 0.5TB log (No Cache) and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 4TB gp2 data + 1x 1TB gp2 log. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of October 2019. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs.  Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

2Claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price-performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 1x Ultra 1.5TB with 650MB per sec throughput and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 1.5TB io1 provisioned log + data. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. The Field Test is based on a mixture of read-only and update intensive transactions that simulate activities found in complex OLTP application environments. Price-performance is calculated by GigaOm as the cost of running the cloud platform continuously for three years divided by transactions per second throughput. Prices are based on publicly available US pricing in north Europe for SQL Server on Azure Virtual Machines and Ireland for AWS EC2 as of October 2019. Price-performance results are based upon the configurations detailed in the GigaOm Analytic Field Test.  Actual results and prices may vary based on configuration and region.

3Additional information about $200 Azure free account available at https://azure.microsoft.com/en-us/free/.

4Dev or Test Azure credits and pricing available for paid Visual Studio subscribers only.

A year of bringing AI to the edge

This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

In an age where low latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet both needs when harnessing artificial intelligence (AI).

Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. The combination of both the most comprehensive set of domain-specific AI services in the market and containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.

 

These are some of the highlights from the past year:

Employing anomaly detection for predictive maintenance

Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers for developing a proof of concept in predictive maintenance. The company runs Anomaly Detector for immediately spotting unusual behavior in voltage levels to mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening the data scientist team, Airbus can scale this critical capability across the business globally.

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.”  —Peter Weckesser, Digital Transformation Officer, Airbus

Automating data extraction for highly-regulated businesses

As enterprises grow, they begin to acquire thousands of hours of repetitive but critically important work every week. High-value domain specialists spend too much of their time on this. Today, innovative organizations use robotic process automation (RPA) to help manage, scale, and accelerate processes, and in doing so free people to create more value.

Automation Anywhere, a leader in robotic process automation, partners with these companies eager to streamline operations by applying AI. IQ Bot, their unique RPA software, automates data extraction from documents of various types. By deploying Cognitive Services in containers, Automation Anywhere can now handle documents on-premises and at the edge for highly regulated industries:

“Azure Cognitive Services in containers gives us the headroom to scale, both on-premises and in the cloud, especially for verticals such as insurance, finance, and health care where there are millions of documents to process.” —Prince Kohli, Chief Technology Officer for Products and Engineering, Automation Anywhere

For more about Automation Anywhere’s partnership with Microsoft to democratize AI for organizations, check out this blog post.

Delighting customers and employees with an intelligent virtual agent

Lowell, one of the largest credit management services in Europe, wants credit to work better for everybody. So, it works hard to make every consumer interaction as painless as possible with the help of AI. Partnering with Crayon, a global leader in cloud services and solutions, Lowell set out to fix the outdated processes that kept the company’s highly trained credit counselors too busy with routine inquiries and created friction in the customer experience. Lowell turned to Cognitive Services to create an AI-enabled virtual agent that now handles 40 percent of all inquiries—making it easier for service agents to deliver greater value to consumers and better outcomes for Lowell clients.

With GDPR requirements, chatbots weren’t an option for many businesses before containers became available. Now companies like Lowell can ensure that data handling meets stringent compliance standards while running Cognitive Services in containers. As Carl Udvang, Product Manager at Lowell, explains:

“By taking advantage of container support in Cognitive Services, we built a bot that safeguards consumer information, analyzes it, and compares it to case studies about defaulted payments to find the solutions that work for each individual.”

One-to-one customer care at scale in data-sensitive environments has become easier to achieve.

Empowering disaster relief organizations on the ground

A few years ago, there was a major Ebola outbreak in Liberia. A team from USAID was sent to help mitigate the crisis. Their first task on the ground was to find and categorize information such as the state of healthcare facilities, Wi-Fi networks, and population density centers. They tracked this information manually and had to extract insights from a complex corpus of data to determine the best course of action.

With the rugged versions of Azure Stack Edge, teams responding to such crises can carry a device running Cognitive Services in their backpack. They can upload unstructured data like maps, images, and pictures of documents, and then extract content, translate it, draw relationships among entities, and apply a search layer. With these cloud AI capabilities available offline, at their fingertips, response teams can find the information they need in a matter of moments. In Satya’s Ignite 2019 keynote, Dean Paron, Partner Director of Azure Storage and Edge, walks us through how Cognitive Services in Azure Stack Edge can be applied in such disaster relief scenarios (starting at 27:07).

Transforming customer support with call center analytics

Call centers are a critical customer touchpoint for many businesses, and being able to derive insights from customer calls is key to improving customer support. With Cognitive Services, businesses can transcribe calls with Speech to Text, analyze sentiment in real-time with Text Analytics, and develop a virtual agent to respond to questions with Text to Speech. However, in highly regulated industries, businesses are typically prohibited from running AI services in the cloud due to policies against uploading, processing, and storing any data in public cloud environments. This is especially true for financial institutions.
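Running a Cognitive Services container on-premises generally comes down to pulling an image and starting it with billing details. The sketch below is illustrative, not a deployment guide: the image path is the one published on the Microsoft Container Registry at the time of writing, and the endpoint and key placeholders must come from your own Azure Speech resource:

```shell
# Pull the Speech to Text container image from the Microsoft Container Registry.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text

# Run it locally; audio is processed on-premises, while the Billing endpoint
# and ApiKey are used only for metering. The EULA must be explicitly accepted.
docker run --rm -it -p 5000:5000 --memory 4g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept \
  Billing={YOUR_SPEECH_ENDPOINT_URI} \
  ApiKey={YOUR_API_KEY}
```

Because the container only calls home for metering, the audio itself never has to leave the regulated environment.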

A leading bank in Europe addressed regulatory requirements and brought the latest transcription technology to their own on-premises environment by deploying Cognitive Services in containers. Through transcribing calls, customer service agents could not only get real-time feedback on customer sentiment and call effectiveness, but also batch process data to identify broad themes and unlock deeper insights on millions of hours of audio. Using containers also gave them flexibility to integrate with their own custom workflows and scale throughput at low latency.

What’s next?

These stories touch on just a handful of the organizations leading innovation by bringing AI to where data lives. As running AI anywhere becomes more mainstream, the opportunities for empowering people and organizations will only be limited by the imagination.

Visit the container support page to get started with containers today.

For a deeper dive into these stories, visit the following resources.

Sharing the DevOps journey at Microsoft

Today, more and more organizations are focused on delivering new digital solutions to customers and finding that the need for increased agility, improved processes, and collaboration between development and operation teams is becoming business-critical. For over a decade, DevOps has been the answer to these challenges. Understanding the need for DevOps is one thing, but the actual adoption of DevOps in the real world is a whole other challenge. How can an organization with multiple teams and projects, with deeply rooted existing processes, and with considerable legacy software change its ways and embrace DevOps?

At Microsoft, we know something about these challenges. As a company that has been building software for decades, Microsoft consists of thousands of engineers around the world who deliver many different products. From Office, to Azure, to Xbox, we also found we needed to adapt to a new way of delivering software. The new era of the cloud unlocks tremendous potential for innovation to meet our customers’ growing demand for richer and better experiences, while our competition is not slowing down. The need to accelerate innovation and to transform how we work is real and urgent.

The road to transformation is not easy and we believe that the best way to navigate this challenging path is by following the footsteps of those who have already walked it. This is why we are excited to share our own DevOps journey at Microsoft with learnings from teams across the company who have transformed through the adoption of DevOps.

 

More than just tools

An organization’s success is achieved by providing engineers with the best tools and the latest practices. At Microsoft, the One Engineering System (1ES) team drives various efforts to help teams across the company become high performing. The team initially focused on tool standardization and saw some good results: source control issues decreased, and build times and build reliability improved. But over time it became clear that a focus on tooling is not enough; to help teams, 1ES had to focus on culture change as well. Approaching culture change can be tricky: do you start with quick wins, or try to make a fundamental change at scale? What is the right engagement model for teams of different sizes and maturity levels? Learn more about the experimental journey of the One Engineering System team.

Redefining IT roles and responsibilities

The move to the cloud can challenge the definitions of responsibilities in an organization. As development teams embrace cloud innovation, IT operations teams find that the traditional models of ownership over infrastructure no longer apply. The Manageability Platforms team in the Microsoft Core Service group (previously Microsoft IT), found that the move to Azure required rethinking the way IT and development teams work together. How can the centralized IT model be decentralized so the team can move away from mundane, day-to-day work while improving the relationship with development teams? Explore the transformation of the Manageability Platforms team.

Streamlining developer collaboration

Developer collaboration is a key component of innovation. With that in mind, Microsoft open-sourced the .NET Framework to invite the community to collaborate and innovate on .NET. As the project was open-sourced over time, its scale and complexity became apparent. The project spanned many repositories, each with its own structure and using multiple different continuous integration (CI) systems, making it hard for developers to move between repositories. The .NET infrastructure team at Microsoft decided to invest in streamlining developer processes. The challenge was approached by standardizing repo structure, sharing tooling, and converging on a single CI system so that both internal and external contributors to the project would benefit. Learn more about the investments made by the .NET infrastructure team.

A journey of continuous learning

DevOps at Microsoft is a journey, not a destination. Teams adapt, try new things, and continue to learn how to change and improve. As there is always more to learn, we will continue to share the transformation stories of additional teams at Microsoft in the coming months. As an extension of this continuous internal learning journey, we invite you to join us, learn how to embrace DevOps, and empower your teams to build better solutions faster and deliver them to happier customers.

Azure. Invent with purpose.

10 user experience updates to the Azure portal

We’re constantly working to improve your user experience in the Azure portal. Our goal is to offer you a productive, easy-to-use single pane of glass where you can build, manage, and monitor your Azure services, applications, and infrastructure. In this post, I’d like to share the highlights of our latest experience improvements, including:

Improved portal home experience

We have improved the Azure portal home page to increase focus and clarity and to make things that are important to you easily accessible.

Image of the simplified Azure portal home.
  Figure 1 – simplified Azure portal home.

We’ve organized these into differentiated sections for ease of use:

  • Services and resources (dynamic): the top section has dynamic content that gets adjusted based on your usage without requiring any additional customizations. The more you use the portal, the more it adjusts to you!
  • Common entry points and useful info (static): the lower section contains static content with common entry points to provide quick access to main navigation flows that are always there, enabling users to develop muscle memory for repeated usage.

Screenshot showing the new sections of the Azure home page.

Figure 2 – sections of the home page.

The Azure services section provides quick access to the Azure Marketplace, a list of eight of the most-used Azure services, and access to browse the entire Azure offering. The list of services is populated by default with some of our most popular services and gets automatically updated with your most recently used services. The Recent resources section shows a list of your recently used resources. Both lists get updated as you use the product. Our goal is to bring relevant services and instances front and center without requiring customization. The more you use the product, the more useful it gets for you! The rest of the sections are static, providing important points of reference for navigation and access to key Azure products, services, content, and training.

The overall home experience has been streamlined by hiding the left navigation bar under an always-present menu button in the top navigation bar:

A screenshot pointing out the menu button in the Azure portal.

Figure 3 – The menu button

The main motivation for this change is improving focus, reducing distractions and redundancy, and enabling more immersive experiences. Before this change, when you were immersed in a workload in the portal, you always had two vertical menus side by side: the left navigation bar and the menu for the experience. The left navigation bar is still available with all its functionality, including favorites, through the menu button in the top bar, always only one click away.

An image comparing the new and old left navigation bars.

Figure 4 – The new experience allows for more focus.

If you prefer the old visual, having the left navigation always present, you can always bring it back using the Portal Settings panel.

New service cards

We have added hover cards associated with each service that show contextual information and provide direct access to some of the most common workflows. These hover cards are displayed after the cursor has rested on a service tile for about a second. We used the same interaction pattern and design that Outlook uses for identities (users and groups), which are well established with our customer base.

A gif of the Azure services page and the Virtual machines hover card.

Figure 5 – hover card for virtual machines.

The cards expose relevant contextual information and actions for a service, including:

  • Create an instance: this provides quick access to a very common flow, bypassing intermediate screens to launch the creation directly.
  • Browse instances: browse the full list of instances of that service.
  • Recently used: the last three recently used instances of that service, providing direct contextual access.
  • Microsoft Learn content: specialized free training curated for that service. The curation has been done by the Microsoft Learn team based on usage data and customer feedback.
  • Links to documents: key documents to learn or use the product (quick starts, technical docs, pricing.)
  • Free offerings available: if the service has free options available, surface them.

A screenshot showing the anatomy of the Virtual machines service card.

Figure 6 – Anatomy of the card

The cards help improve multiple aspects of the experience, including more efficient customer journeys, better discoverability, and contextualized information, all presented in the context of one service. The card also helps customers at all levels of expertise: while new customers can benefit from Microsoft Learn content and free offerings, advanced customers have a faster path to create instances or access their recently used instances of that service.

The card does not only show on the home page. It is available everywhere we display a service, such as the left navigation bar and the All services list, as well as the Azure home page.

Extended Microsoft Learn integration

Microsoft Learn provides official high-quality free learning material for Microsoft technologies. In this portal update we have introduced several contextual integration points:

  • Service browsing: contextual integration at the service category level (compute, storage, web, etc.)
  • Service cards: contextual integration at the service level (virtual machine, Cosmos DB, etc.) available in Azure home page, left navigation, and service browsing experience.
  • Azure QuickStart center: integration of most popular trainings in the landing page
  • Azure home: direct access to the main Microsoft Learn entry point

Moving forward, the Azure portal and Microsoft Learn integration will continue to grow, to help you improve your Azure journey!

Enhanced service browsing experience

Azure is big and gets bigger every day. Navigating through Azure’s offering in the portal can be intimidating and challenging due to the vast set of available services. To make this easier, we’ve made the following updates:

  • Improved global search: improved performance and functionality when searching for services in the global search box in the top bar of the portal. This improved search is always present and available in your portal session.
  • Improved service browsing experience: improved the All services experience adding an overview category supporting progressive disclosure of services, reducing visual clutter, and adding contextual Microsoft Learn content.

For service browsing, we introduced an overview category with the goal of progressively disclosing information.

A screenshot showing the progressive disclosure of information.

Figure 7 – progressive disclosure of information and better discoverability

The new Overview category presents a list of 15 of Azure’s most popular services, curated Microsoft Learn training content, and access to key functionality like Azure QuickStart center and free offerings.

If the service that you are looking for is not available on this screen, you can use the service search functionality at the top left, or you can browse through the different categories at the left of the screen. When displaying a category, we now surface contextual and free Microsoft Learn content to assist you in your Azure learning journey.

A screenshot of the service categories.

Figure 8 – service category with contextual and free Microsoft Learn integration. The training offered in this category is contextual and related to databases in this case.

Improved instance browsing experience

The resource instance browsing experience (going through the lists of instances and services) is one of the most common entry points for customers using the portal. We are introducing an updated experience that leverages the power of Azure Resource Graph to provide improved performance, better filtering and sorting options, better grouping, and the ability to export your resource lists to a CSV file.

A screenshot showcasing the improved resource browsing experience.

Figure 9 – improved resource browsing experience

As of this month, this experience will be available for more than 70 services and over the next few months it will be rolled out across the entire platform.

Improved Azure Resource Graph experience

The Azure Resource Graph Explorer available in the portal enables you to write queries and create dashboards using the full power of Azure Resource Graph. Here is a video that shows how to use Resource Graph to write queries and create an inventory dashboard for your Azure subscriptions.

We have now introduced Azure Resource Graph queries in the Azure portal as a new top-level resource: you can save any Kusto Query Language (KQL) query as a resource in your Azure subscription. Like any other resource, you can share it with colleagues, set permissions, check activity logs, and tag it.
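For instance, a simple inventory-style query like the one below counts your resources by type. This is an illustrative example of the kind of KQL query you could run in Resource Graph Explorer and then save and share as a Resource Graph query:

```kusto
// Count all resources in the subscription, grouped by resource type,
// with the most common types first.
Resources
| summarize count() by type
| order by count_ desc
```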

A screenshot showing Azure Graph Queries.

Figure 10 – Azure Graph Queries

Automatic refresh in Azure Dashboards

We have added automatic refresh to Azure dashboards, allowing you to refresh your dashboards automatically at a choice of time intervals.

A screenshot showing how to configure automatic refresh, and choose the time period.

Figure 11 – Configuring automatic refresh

Improved service icons

We’ve updated all of the service icons in the Azure portal with a more consistent and modern look. All these icons have been designed together as a family to provide better visual consistency and reduce distractions.

An image showing all of the improved icons for the Azure portal.

Figure 12 – Improved icons

Simplified settings panel

The settings panel has been simplified. The main reason for this change is that many customers could not find the “Language & region” settings in the previous design and were asking us for capabilities that were already available in the portal. The new design separates the general settings from the Language & region settings; the portal supports 18 languages and dozens of regional formats, and the old combined layout was a common source of confusion for many of our users.

Screenshots showcasing the different portal settings tabs for General and language & region.

Figure 13 – separation of general and localization settings

New landing page for Azure Mobile application

The Azure mobile app enables you to stay connected, informed, and in control of your Azure assets while on the go. The app is available for iOS and Android devices.

We have added a brand-new landing screen to the Azure Mobile App that brings all important information together as soon as you open the application. The new Home experience is composed of multiple cards with support for:

  • Azure services
  • Recent resources
  • Latest alerts
  • Service Health
  • Resource groups
  • Favorites

The home view is fully customizable: you can decide which sections to show and in which order to show them.

An image showing the new home page in the Azure Mobile App.

Figure 14 – new home in the Azure Mobile App

If you have not tried the Azure Mobile app yet, make sure to try it out.

Let us know what you think

We’ve gone through a lot of new capabilities, and we still did not cover everything that is coming in this release! The team is always hard at work improving the experience and is eager to get your feedback and learn how we can make your experience better.



Secure and compliant APIs for a hybrid and multi-cloud world

APIs are everywhere. The broad proliferation of applications throughout enterprises often results in large silos of opaque processes and services, making it hard for IT to manage and govern APIs in a systematic way, and for development teams to gain visibility into and make use of APIs that already exist.

Entire industries, such as financial services, are embracing APIs as a means to become more open, for example with open banking initiatives. Open banking is an API-first approach to creating more open, rich ecosystems that encourage third-party participation and usage of the services financial institutions have previously kept behind the scenes.

Products such as Azure API Management were created to address these issues. By letting you manage all APIs in a single, centralized location, you can impose authentication, authorization, throttling, and transformation policies and easily monitor the usage of the APIs associated with your applications, giving you much-needed macro-level visibility into your application portfolio.
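To make the throttling policy concrete, here is a minimal Python sketch of a client calling an API fronted by an API Management gateway and backing off when a rate-limit policy answers with HTTP 429. The `Ocp-Apim-Subscription-Key` header is the one API Management uses for subscription keys; the gateway URL and key below are placeholders, not real endpoints.

```python
import time
import urllib.error
import urllib.request

# Hypothetical gateway endpoint and key -- substitute your own values.
GATEWAY_URL = "https://contoso.azure-api.net/orders/v1/orders"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def call_api(url, key, max_retries=3, opener=urllib.request.urlopen):
    """Call an API Management-fronted endpoint, retrying on 429 throttling."""
    request = urllib.request.Request(url, headers={
        # Header API Management checks for subscription-based access control.
        "Ocp-Apim-Subscription-Key": key,
    })
    for attempt in range(max_retries + 1):
        try:
            with opener(request) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code == 429 and attempt < max_retries:
                # Rate-limit policy hit: honor Retry-After if present.
                wait = int(err.headers.get("Retry-After", 1))
                time.sleep(wait)
            else:
                raise
```

The `opener` parameter is injected only so the retry logic can be exercised without a live gateway; in normal use the default `urllib.request.urlopen` is fine.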

To succeed in an increasingly connected world, it is key to adopt an API-first approach that lets you:

  • Embrace innovation by creating vibrant API ecosystems.
  • Secure and manage APIs seamlessly in a hybrid world.

APIs can be a bridge to an uncertain future, helping you safely cross turbulent waters.

Embrace innovation by creating vibrant API ecosystems

Microsoft offers all of the tools you need to capitalize immediately on new opportunities as they emerge in the business landscape. Our infrastructure technologies, such as Kubernetes and serverless computing, accelerate development velocity and help developers move faster than ever before. Our API technologies, such as API management, accelerate the speed at which new opportunities can be acted upon by immediately providing channels for partners, developers, customers, and other third parties to leverage new technology as it is created. These types of activities are often enabled by tools such as an API developer portal.

Azure API Management’s developer portal lets you easily grant (and control) access to APIs. The developer portal provides documentation on how to use the APIs and gives people a simple, easy way to get started. A developer portal is an integral part of any API-first approach, which is why we’re announcing the general availability of our greatly improved developer portal experience.

You can now easily customize the developer portal with a visual user interface, helping you create a branded experience. The developer portal is open source and built with extensibility in mind: you can easily fork our existing repository and customize it to meet your needs. It was created using contemporary JAMstack technologies that significantly reduce page load times, making the user experience as frictionless as possible.

You can learn more about this announcement by reading our Azure Update on the release.

Secure and manage APIs seamlessly in a hybrid world

Today’s most popular API management solutions run in public clouds. And while a purely cloud-based API management service works for many scenarios, it’s not always the best choice. Perhaps compliance requirements mandate that information must stay on the corporate network, or maybe accessing the cloud is prohibited by company policy. Whatever the reason, scenarios like these can’t use an API management service running in a public cloud; the service must run on-premises.

To meet your hybrid requirements, we’re announcing the preview of Azure Arc enabled API Management, a self-hosted API gateway. The new self-hosted API gateway doesn’t replace the primary cloud-based API management service. Instead, it augments this service by providing the essential aspects of API management in software that organizations can run wherever they choose.

Azure Arc enabled API Management

It adds a containerized version of the Azure API Management gateway that you can host on-premises or in any other environment that supports the deployment of Docker containers. It enables more efficient call patterns for internal-only as well as mixed internal and external APIs, and it is managed from a cloud-based Azure API Management instance. Azure Arc enabled API Management lets you run the self-hosted gateway in your own on-premises datacenter or in another cloud.
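One way to picture the “more efficient call patterns” point: clients inside the corporate network can be routed to the self-hosted gateway so their traffic never leaves the premises, while external clients keep using the cloud-hosted gateway. The sketch below is illustrative only; the gateway URLs and the internal address range are assumptions, not values fixed by the product.

```python
import ipaddress

# Illustrative gateway endpoints -- substitute your own deployment's URLs.
CLOUD_GATEWAY = "https://contoso.azure-api.net"
SELF_HOSTED_GATEWAY = "https://apim-gateway.corp.internal"

def gateway_for(client_ip, internal_net="10.0.0.0/8"):
    """Pick the gateway closest to the caller.

    Callers inside the (assumed) corporate range go to the self-hosted
    gateway so API traffic stays on the corporate network; everyone else
    uses the cloud-based gateway.
    """
    if ipaddress.ip_address(client_ip) in ipaddress.ip_network(internal_net):
        return SELF_HOSTED_GATEWAY
    return CLOUD_GATEWAY
```

In practice this routing would usually live in DNS or a load balancer rather than application code, but the decision logic is the same.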

Read the whitepaper we’ve released, API management in a hybrid and multi-cloud world, which goes into further technical detail on Azure Arc enabled API Management, as well as the strategic benefits you receive when adopting this approach.

Or, you can start a free trial of Microsoft Azure and check out API Management for yourself.

Heading into the future

APIs are the way that businesses will continue to communicate. API growth continues to accelerate, and the rise of the API product is happening right now. Many companies now offer API-first products, a powerful reminder that a well-thought-out API strategy will be key to any business’s strategy moving forward.

To learn more about what APIs and API Management can do for you, you can visit API Management on Azure.

