Plan your Next ’20 journey: Session guide available now

Get ready to make the most of Google Cloud Next ’20: our session guide is now available.

At Google Cloud Next, our aim is to give you the tools you need to sharpen your technical skills, expand your network, and accelerate personal development. Across our hundreds of breakout sessions you’ll get the chance to connect and learn about all aspects of Google Cloud—from multi-cloud deployments, to application modernization, to next-level collaboration and productivity. Developers, practitioners, and operators from all over the world will come together at Next, and we hope you’ll join them.

This year we’re going deep on the skills and knowledge enterprises need to be successful in the cloud. Our catalog of technical content keeps growing, and this year we’re offering more than 500 breakout sessions, panels, bootcamps, and hands-on labs. These sessions will give you in-depth knowledge in seven core areas: 

  • Infrastructure—Migrate and modernize applications and systems on premises and in the cloud.
  • Application modernization—Develop, deploy, integrate, and manage both your existing apps and new cloud-native applications.
  • Data management and analytics—Take advantage of highly available and scalable tools to store and manage your structured and unstructured data, then derive meaningful insights from that data.
  • Cloud AI and machine learning—Leverage your data by applying artificial intelligence and machine learning to transform your business.
  • Business application development—Reimagine app development by helping you innovate with no-code development, workflow automation, app integration, and API management. 
  • Cloud security—Keep your systems, apps, and users better protected with world-class security tools.
  • Productivity and collaboration—Transform the ways teams grow, share, and work together.

This means you can troubleshoot and debug microservices in Kubernetes, get a primer on big data and machine learning fundamentals, then finish up your day by learning to build, deploy, modernize, and manage apps using Anthos, or pick from hundreds of other topics.

Want to learn which sessions you don’t want to miss? Beginning in March, we’ll be publishing guides to Next from Google experts. Keep an eye on our blog.

Bringing confidential computing to Kubernetes

Historically, data has been protected at rest through encryption in data stores, and in transit using network technologies. However, as soon as that data is processed in the CPU of a computer, it is decrypted and in plain text. New confidential computing technologies are game changing because they protect data with secure hardware enclaves even while code is running on the CPU. Today, we are announcing that we are bringing confidential computing to Kubernetes workloads.

Confidential computing with Azure

Azure is the first major cloud platform to support confidential computing, building on Intel® Software Guard Extensions (Intel SGX). Last year, we announced the preview of the DC-series of virtual machines, which run on Intel® Xeon® processors and are confidential computing ready.

This confidential computing capability provides an additional layer of protection, even from potentially malicious insiders at a cloud provider; it reduces the chances of data leaks and may help address some regulatory compliance needs.

Confidential computing enables several previously impossible use cases. Customers in regulated industries can now collaborate on sensitive partner or customer data to detect fraud scenarios without giving the other party visibility into that data. In another example, customers can perform mission-critical payment processing in secure enclaves.

How it works for Kubernetes

With confidential computing for Kubernetes, customers can now get this additional layer of data protection for their Kubernetes workloads, with code running on the CPU inside secure hardware enclaves. Use the Open Enclave SDK for confidential computing in your code. Create a Kubernetes cluster on hardware that supports Intel SGX, such as the DC-series virtual machines running Ubuntu 16.04 or Ubuntu 18.04, and install the confidential computing device plugin into those virtual machines. The device plugin (running as a DaemonSet) surfaces the Enclave Page Cache (EPC) RAM as a schedulable resource for Kubernetes. Kubernetes users can then schedule pods and containers that use the Open Enclave SDK onto hardware that supports Trusted Execution Environments (TEEs).

The following pod specification demonstrates how you would schedule a pod with access to a TEE, by defining a limit on the EPC memory that the device plugin (available in preview) advertises to the Kubernetes scheduler.

How to schedule a pod to access a TEE
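The specification itself appeared as an image in the original post. As a minimal sketch of what such a pod spec might look like, assuming the device plugin registers EPC memory under the resource name `kubernetes.azure.com/sgx_epc_mem_in_MiB` and assuming a hypothetical container image (check the device plugin's documentation for the exact resource name it advertises):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sgx-enclave-app
spec:
  containers:
  - name: sgx-enclave-app
    # hypothetical image built with the Open Enclave SDK
    image: example.azurecr.io/sgx-enclave-app:latest
    resources:
      limits:
        # EPC memory surfaced as a schedulable resource by the
        # confidential computing device plugin (name may differ)
        kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
```

Because the limit names an extended resource, the scheduler will only place this pod on nodes where the DaemonSet has advertised available EPC capacity.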

Now the pods in these clusters can run containers using secure enclaves and take advantage of confidential computing. There is no additional fee for running Kubernetes containers on top of the base DC-series cost.

The Open Enclave SDK was recently open sourced by Microsoft and contributed to the Confidential Computing Consortium, under the Linux Foundation, for standardization, with the goal of creating a single uniform API that works with a variety of hardware components and software runtimes across the industry.

Try out confidential computing for Kubernetes with Azure today. Let us know what you think in our survey or on GitHub.

Microsoft Azure AI hackathon’s winning projects

We are excited to share the winners of the first Microsoft Azure AI Hackathon, hosted on Devpost. Developers of all backgrounds and skill levels were welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models or by building ML models from scratch. Over 900 participants joined in, and 69 projects were submitted. A big thank you to all who participated and many congratulations to the winners.

First place—Trashé


Submitted by Nathan Glover and Stephen Mott, Trashé is a SmartBin that aims to help people make more informed recycling decisions. What I enjoyed most was watching the full demo of Trashé in action! It’s powerful when you see not just the intelligence, but the end-to-end scenario of how it can be applied in a real-world environment.

This team used many Azure services to connect the hardware, intelligence, and presentation layers; you can see this is a well-researched architecture that is reusable in multiple scenarios. Azure Custom Vision was a great choice in this case, enabling the team to create a well-performing model with very little training data. The more we recycle, the better the model will get. It was great to see setup instructions included for building unique versions of Trashé, so users can contribute to helping the environment by recycling correctly within their local communities; this community approach is incredibly scalable.

Second place—AfriFarm


Niza Siwale’s app recognizes crop diseases from images using the Azure Machine Learning service and publicly publishes the findings so anyone can track disease outbreaks. This also provides real-time updates so government agencies can act quickly and provide support to affected communities. As Niza notes, this project could reach as many as 200 million farmers in Africa whose livelihoods depend on farming.

Niza created a simple Android application where users can take photos of crops to analyze, so each farmer gets information when they need it. Users can also contribute their own findings back to the community around them, keeping everyone more informed and connected. Using the popular Keras framework along with the Azure Machine Learning service, this project built and deployed a well-performing plant disease recognition model that can be called from the application. Any future work or improved versions of the model can be monitored and deployed in a development cycle, so the progression of the model can be tracked over time.

Third place—Water Level Anomaly Detector


Roy Kincaid’s project identifies drastic changes in water levels using an ultrasonic sensor, which could be useful for detecting potential floods and natural disasters. This information can then be used to provide adequate warning so people can prepare for major changes in their environment. The Water Level Anomaly Detector could also be beneficial for long-term analysis of the effects of climate change. This is another great example of an end-to-end intelligent solution.

Roy is highly skilled in the hardware and connectivity parts of this project, so it was brilliant to see the easy integration of the Anomaly Detector API from Azure Cognitive Services and to hear how quickly Roy could start using the service. Many IoT scenarios have a similar need for detecting rates and levels, and I hope to see Roy’s hinted-at coffee-level detector in the future (sign me up for one of those!). In a world where we all want to do our part to help the environment, it’s a great example of how monitoring lets us measure changes over time and be alerted when issues arise.

Getting involved

We’d like to thank everyone involved in the Azure AI Hackathon, including our participants, judges, and Devpost. We’re very eager to watch how these projects continue to progress and look forward to possibly hosting another Hackathon in the future.

Get involved with the Global AI Community and join our upcoming Global AI Bootcamp.

For more information, please reference Microsoft Azure AI Hackathon Official Rules.

Microsoft and SWIFT extend partnership to make cloud native payments a reality

This blog post is co-authored by George Zinn, Corporate VP, Microsoft Treasurer.

This week at Sibos, the world’s largest financial services event, Microsoft and SWIFT are showcasing the evolution of the cloud-native proof of concept (POC) announced at last year’s event. Building on the relationship between Microsoft Azure, SWIFT, and Microsoft Treasury, the companies are entering a long-term strategic partnership to bring SWIFT Cloud Connect on Azure to market. Together we have built an end-to-end architecture that utilizes various Azure services to ensure SWIFT Cloud Connect meets the resilience, security, and compliance demands of material workloads in the financial services industry. Microsoft is the first cloud provider working with SWIFT to build public cloud connectivity and will soon make this solution available to the industry.

SWIFT is the world’s leading provider of secure financial messaging services used and trusted by more than 11,000 financial institutions in more than 200 countries and territories. Today, enterprises and banks conduct these transactions by sending payment messages over the highly secure SWIFT network, leveraging on-premises installations of SWIFT technology. SWIFT Cloud Connect creates a bank-like wire transfer experience with the added operational, security, and intelligence benefits the Microsoft cloud offers.

To demonstrate the potential of the production-ready service, Microsoft Treasury has successfully run test payment transactions through the SWIFT production network to their counterparty Bank of New York-Mellon (BNY Mellon) for payment confirmations through SWIFT on Azure. BNY Mellon is a global investments company dedicated to helping its clients manage and service their financial assets throughout the investment lifecycle. The company’s Treasury Services group, which delivers high-quality performance in global payments, trade services and cash management, provides payments services for Microsoft Treasury.

“At BNY Mellon, we focus on delivering world class solutions that exceed our clients’ expectations,” said Bank of New York Mellon Treasury Services CEO Paul Camp. “Together with SWIFT, we continuously work to enhance the payments experience for clients around the world. We’re excited to join now with our Microsoft Treasury client and with SWIFT to help make Cloud Connect real, leveraging Microsoft’s cloud expertise to expand the frontiers of financial technology. Building on the positive experience with Cloud Connect, we look forward to exploring additional opportunities with Microsoft Treasury to advance their digital payments strategy.”

In response to the rapidly evolving cyber threat landscape, SWIFT introduced the Customer Security Program (CSP). This introduces a set of mandatory security controls that many financial institutions find challenging to implement in their on-premises environments. To simplify and support control implementation and enable continuous monitoring and audit, Microsoft has developed a blueprint for the CSP framework. Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources and policies that implement and adhere to standards, patterns, and control requirements. Azure Blueprints allows customers to set up governed Azure environments at scale to aid secure and compliant production implementations. The SWIFT CSP Blueprint is now available in preview.

Microsoft Treasury performed its testing with SWIFT by leveraging the Azure Logic Apps service to process payment transactions. Such an implementation used to take months, but this one was completed in just a few weeks. Treasury integrated its backend SAP systems via Logic Apps with SWIFT to process payment transactions and business acknowledgments. As part of this processing, the transactions are validated and checked for duplicates or anomalies using the rich capabilities of Logic Apps.
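The post doesn't show the validation logic itself, and in Logic Apps it would be expressed as workflow steps and connectors rather than code. Purely as an illustrative sketch of the kind of duplicate-and-anomaly screening described (all names here are invented for the example, not part of the actual implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Payment:
    reference: str    # unique end-to-end transaction reference
    amount: float
    beneficiary: str

def screen(payments, history_mean, history_std, seen=None):
    """Flag payments with a repeated reference (duplicates) or an
    amount far outside the historical distribution (anomalies)."""
    seen = set() if seen is None else seen
    accepted, flagged = [], []
    for p in payments:
        if p.reference in seen:
            flagged.append((p, "duplicate reference"))
        elif abs(p.amount - history_mean) > 3 * history_std:
            flagged.append((p, "amount anomaly"))
        else:
            seen.add(p.reference)
            accepted.append(p)
    return accepted, flagged
```

In the real service, each branch of a check like this would map to a condition or action in the Logic Apps designer, with flagged transactions routed to a review step instead of onward to the SWIFT network.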

Logic Apps is Microsoft Azure’s integration platform as a service (iPaaS) and now provides native understanding of SWIFT messaging, enabling customers to accelerate the modernization of their payments infrastructure by leveraging the cloud. With hybrid VNet-connected integration capabilities to on-premises applications as well as a wide array of Azure services, Logic Apps provides more than 300 connectors for intelligent automation, integration, data movement, and more to harness the power of Azure.

Microsoft Treasury is able to quickly leverage the power of Azure to enable a seamless transfer of payment transactions. With Azure Monitor and Log Analytics, it can also monitor, manage, and correlate payment transactions for full end-to-end process visibility.

We are thrilled to extend our partnership with SWIFT as we believe this will become an integral offering for the industry. We thank BNY Mellon for their part in confirming the potential of SWIFT Cloud Connect. To see it in action, stop by the Microsoft booth in the North Event Hall, Z131.

Improved developer experience for Azure Blockchain development kit

As digital transformation expands beyond the walls of one company and into processes shared across organizations, businesses are looking to blockchain as a way to share workflow data and logic.

This spring we introduced Azure Blockchain Service, a fully-managed blockchain service that simplifies the formation, management, and governance of consortium blockchain networks. With a few simple clicks, users can create and deploy a permissioned blockchain network and manage consortium membership using an intuitive interface in the Azure portal.

To help developers building applications on the service, we also introduced our Azure Blockchain development kit for Ethereum. Delivered via Visual Studio Code, the dev kit runs on all major operating systems, and brings together the best of Microsoft and open source blockchain tooling, including deep integration with leading OSS tools from Truffle. These integrations enable developers to create, compile, test, and manage smart contract code before deploying it to a managed network in Azure.

We’re constantly looking and listening to feedback for areas where we can lean in and help developers go further, faster. This week for TruffleCon, we’re releasing some exciting new features that make it easier than ever to build blockchain applications:

  • Interactive debugger: Debugging Ethereum smart contracts has so far been a challenging effort. While there are some great command-line tools (e.g., the Truffle Debugger), they aren’t integrated into integrated development environments (IDEs) like Visual Studio Code. Native integration of the Truffle Debugger into Visual Studio Code brings all the standard debugging features developers have come to rely on (e.g., breakpoints, step in/over/out, call stacks, watch windows, and IntelliSense pop-ups), letting developers quickly identify, debug, and resolve issues.
  • Auto-generated prototype UI: The dev kit now generates a UI that is rendered and activated inside Visual Studio Code. This allows developers to interact with their deployed contracts directly in the IDE, without having to build a separate UI or custom software simply to test out basic contract functionality. Having a simple GUI for exercising contracts inside the IDE, without writing code, is a huge productivity improvement.

Interactive contract UI in Visual Studio Code

With the addition of these new debugger capabilities, we are bringing all the major components of software development, including build, debug, test, and deploy, for Smart Contracts into the popular Visual Studio Code developer environment.

If you’re in Redmond, Washington this weekend, August 2-4, 2019, come by TruffleCon to meet the team or head to the Visual Studio Marketplace to try these new features today!

Microsoft Azure welcomes customers, partners, and industry leaders to SIGGRAPH 2019!

SIGGRAPH is back in Los Angeles and so is Microsoft Azure! I hope you can join us at Booth #1351 to hear from leading customers and innovative partners.

Teradici, BeBop, Support Partners, Blender, and more will be there to showcase the latest in cloud-based rendering and media workflows:

  • See a real-time demonstration of Teradici’s PCoIP Workstation Access Software, showcasing how it enables a world-class end-user experience for graphics-accelerated applications on Azure’s NVIDIA GPUs.
  • Experience a live demonstration of industry-standard visual effects, animation, and other post-production tools on the BeBop platform. It is the leading solution for cloud-based media and entertainment workflows, creativity, and collaboration.
  • Learn more about how cloud-integrator Support Partners enables companies to run complex and exciting hybrid workflows in Azure.
  • Be the first to hear about Azure’s integration with Blender’s render manager Flamenco and how users can easily deploy a completely virtual render farm and file server. The Azure Flamenco Manager will be freely available on GitHub, and we can’t wait to hear how it is being used and get your feedback.

We’re also demonstrating how you can simplify the creation and management of hybrid cloud rendering environments, get the most out of your on-premises investments while bursting to the cloud for on-demand scale, and increase your output with high-performance GPUs. The Microsoft Avere, HPC, and Batch teams will be onsite to answer your questions about these new technologies, which are all generally available at SIGGRAPH 2019.

  • Azure Render Hub simplifies the creation and management of hybrid cloud rendering environments in Azure, providing integration with your existing AWS Thinkbox Deadline or PipelineFX Qube! render farm; support for Tractor and OpenCue is coming soon. It also orchestrates infrastructure setup and provides pay-per-use licensing and governance controls, including detailed cost tracking. The Azure Render Hub web app is available from GitHub, where we welcome feedback and feature requests.
  • Maximize your resource pools by integrating your existing network attached storage (NAS) and Azure Blob Storage using Azure FXT Edge Filer. This on-premises caching appliance optimizes access to data in your datacenter, in Azure, and across a wide-area network (WAN). A combination of software and hardware, Microsoft Azure FXT Edge Filer delivers high throughput and low latency for hybrid storage infrastructure supporting large rendering workloads. You can learn more by visiting the Azure FXT Edge Filer product page.
  • Support powerful remote visualization workloads and other graphics-intensive applications using Azure NV-series VMs, backed by NVIDIA GPUs. Large memory, support for premium disks, and hyper-threading mean these VMs offer double the number of vCPUs compared to the previous generation. Learn more about the NVIDIA and Azure partnership.

The Microsoft team and our partners will also be in room #512 for our Azure Customer Showcase and Training Program.

  • Tuesday and Wednesday morning Azure engineers, software partners, and top production companies will share unique insights on cloud enabled workflows that can help you improve efficiency and lower production costs.
  • Then, in the afternoon, we have a three-hour deep dive into studio workflows on Azure. This will cover everything from Azure infrastructure, networking, and storage capabilities to how to enable Avere caching technology and set up burst render environments with popular render farm managers. At the end of every training session, industry leaders will join us for a fireside chat about the cloud. Seating is first-come, first-served, so get there early! Full schedule below.
    • Tuesday, July 30:  2pm – 5pm
    • Wednesday, July 31:  2pm – 5pm
    • Thursday, August 1: 10am – 1pm

If you’re curious about our Xbox Adaptive Controllers, come and check them out at the Adaptive Tech area of the Experience Hall and dive deep into new technologies by adding the following Tech Talks to your agenda:

  • Monday, July 29, 2019 from 12:30pm – 2pm room 504: Living in a Virtual World – With the VFX and animation industry moving into a new frontier of studio infrastructure and pipeline, join us as we delve into the best practices of moving your studio into a virtual environment securely, efficiently and economically.
  • Tuesday, July 30, 2019 from 2pm – 3:30pm room 503: Going Cloud Native – Join a continued discussion with key representatives from the graphics community who will compare experiences and explore techniques related to pushing the production pipeline and correlated resources toward the cloud.
  • Wednesday, July 31, 2019 from 12pm – 1pm room 309: Volumetric Video Studios – Volumetric Video providers gather to discuss their experiences, challenges, and opportunities in the early days of this new medium. Where is the market now, and where will it go? Topics include successes and lessons learned so far, most/least active scenarios, creator and consumer perceptions, technology evolution, trends in the market, and predictions for the years ahead.
  • Wednesday, July 31, 2019 from 2pm to 4pm room 406A: Volumetric Video Creators – Content creators discuss the advantages of using volumetric video captures as a way to tell stories, entertain, and educate, as well as lessons learned along the way. Topics covered include the funding landscape, best methods of reaching audiences, most effective storytelling methods, and future creative directions.

If you haven’t registered yet or are looking for a pass, you can register now for a free guest pass using code MICROSOFT19.

We hope to see you at the show and will look forward to learning more about your projects and requirements!

Highlights from SIGMOD 2019: New advances in database innovation

The emergence of the cloud and the edge as the new frontiers for computing is an exciting direction—data is now dispersed within and beyond the enterprise, on-premises, in the cloud, and at the edge. We must enable intelligent analysis, transactions, and responsible governance for data everywhere, from creation through to deletion (through the entire lifecycle of ingestion, updates, exploration, data prep, analysis, serving, and archival).

Our commitment to innovation is reflected in our unique collaborative approach to product development. Product teams work in synergy with research and advanced development groups, including Cloud Information Services Lab, Gray Systems Lab, and Microsoft Research, to push boundaries, explore novel concepts and challenge hypotheses.

The Azure Data team continues to lead the way in on-premises and cloud-based database management. SQL Server has been identified as the top DBMS by Gartner for four consecutive years.  Our aim is to re-think and redefine data management by developing optimal ways to capture, store and analyze data.

I’m especially excited that this year we have three teams presenting their work: “Socrates: The New SQL Server in the Cloud,” “Automatically Indexing Millions of Databases in Microsoft Azure SQL Database,” and the Gray Systems Lab research team’s “Event Trend Aggregation Under Rich Event Matching Semantics.” 

The Socrates paper describes the foundations of Azure SQL Database Hyperscale, a revolutionary new cloud-native solution purpose-built to address common cloud scalability limits. It enables existing applications to elastically scale beyond fixed limits, without rearchitecting applications, and with storage of up to 100 TB.

Its highly scalable storage architecture enables a database to expand on demand, eliminating the need to pre-provision storage resources and providing the flexibility to optimize performance for workloads. The downtime to restore a database or to scale up or down is no longer tied to the volume of data in the database, and database point-in-time restores are very fast, typically minutes rather than hours or even days. For read-intensive workloads, Hyperscale provides rapid scale-out by provisioning additional read replicas instantaneously, without any data copy needed.

Learn more about Azure SQL Database Hyperscale.

Azure SQL Database also introduced a new serverless compute option: Azure SQL Database serverless. Serverless allows compute and memory to scale independently and on demand based on workload requirements. Compute is automatically paused and resumed, eliminating the need to manage capacity and reducing cost, making it an efficient option for applications with unpredictable or intermittent compute requirements.

Learn more about Azure SQL Database serverless.

Index management is a challenging task even for expert human administrators. The ability to create efficiencies and fully automate the process is of critical significance to business, as discussed in the Data team’s presentation on the auto-indexing feature in Azure SQL Database.

This, coupled with the identification of how to achieve optimal query performance for complex real-world applications, underpins the auto-indexing feature.

The auto-indexing feature is generally available and generates index recommendations for every database in Azure SQL Database. If the customer chooses, it can automatically implement index changes on their behalf and validate these index changes to ensure that performance improves. This feature has already significantly improved the performance of hundreds of thousands of databases.
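Azure SQL Database's actual recommendation engine is far more sophisticated (it models query cost and validates changes against real performance), but the core idea of mining a workload for frequently filtered columns can be sketched in a few lines. Everything below is a toy illustration with invented names, not the service's algorithm:

```python
from collections import Counter
import re

def recommend_indexes(workload, top_n=2):
    """Toy heuristic: count how often each column appears in a
    comparison predicate across a workload of SQL text, and
    recommend indexes on the most frequently filtered columns."""
    usage = Counter()
    for query in workload:
        # crude predicate extraction: a word followed by a comparison
        for col in re.findall(r"(\w+)\s*(?:=|<|>|LIKE)", query, re.IGNORECASE):
            usage[col] += 1
    return [col for col, _ in usage.most_common(top_n)]
```

A production system must also estimate whether the index's benefit outweighs its write and storage overhead, which is why the service validates each change and reverts it if performance regresses.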

Discover the benefits of the auto-tuning feature in Azure SQL Database.

In the world of streaming systems, the key challenges are supporting rich event matching semantics (e.g., Kleene patterns to capture event sequences of arbitrary lengths) and scalability (i.e., controlling memory pressure and latency at very high event throughputs).

The advanced research team focused on supporting this class of queries at very high scale and compiled their findings in “Event Trend Aggregation Under Rich Event Matching Semantics.” The key intuition is to incrementally maintain the coarsest-grained aggregates that can support a given query’s semantics, which controls memory pressure and attains very good latency at scale. By carefully implementing this insight, the team built a research prototype that achieves a six-orders-of-magnitude speed-up and up to seven orders of magnitude less memory compared to state-of-the-art approaches.
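To make the intuition of aggregating instead of enumerating concrete, here is a heavily simplified sketch (not the paper's algorithm, which handles much richer semantics): for a Kleene query matching every non-empty strictly increasing price sequence, the number of matching trends can be maintained incrementally, per event, without ever materializing the exponentially many matches.

```python
def count_increasing_trends(stream):
    """Count all non-empty strictly increasing subsequences of the
    price stream without enumerating them.

    For each event we store (price, trends_ending_here): a new event
    starts one trend of its own and extends every earlier trend whose
    last price is smaller.
    """
    ends_at = []   # list of (price, count of trends ending at that event)
    total = 0
    for price in stream:
        c = 1 + sum(c_j for p_j, c_j in ends_at if p_j < price)
        ends_at.append((price, c))
        total += c
    return total
```

The stream [1, 2, 3] has 2³ − 1 = 7 increasing trends, yet the function touches each event only once and stores one counter per event, which is the essence of trading match enumeration for compact aggregates.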

Microsoft has the unique advantage of a world-class data management system in SQL Server and a leading public cloud in Azure. This is especially exciting at a time when cloud-native architectures are revolutionizing database management.

There has never been a better time to be part of database systems innovation at Microsoft, and we invite you to explore the opportunities to be part of our team.

Enjoy SIGMOD 2019; it’s a fantastic conference! 

Azure Security Expert Series: Learn best practices and Customer Lockbox general availability


With more computing environments moving to the cloud, the need for stronger cloud security has never been greater. But what constitutes effective cloud security, and what best practices should you be following?

While Microsoft Azure delivers unmatched built-in security, it is important that you understand the breadth of security controls and take advantage of them to protect your workloads.

We launched the Azure Security Expert Series, which provides ongoing virtual content to help security professionals protect hybrid cloud environments. Ann Johnson, CVP of Cybersecurity Solutions Group at Microsoft, kicked off the series and shared five cloud security best practices:

  1. Strengthen Access Control
  2. Increase your security posture
  3. Secure apps and data
  4. Manage networking
  5. Mitigate threats

Make sure you are up to speed with each of these important best practices as you secure your own organization.

Customer Lockbox for Microsoft Azure

During Ann’s main talk, she announced the general availability of Customer Lockbox for Microsoft Azure. Customer Lockbox for Azure extends our commitment to customer privacy while also giving you help when you need it most. With Customer Lockbox for Microsoft Azure, customers can review and approve or reject requests from Microsoft engineers to access their data during a support case. Access is granted only if approved and the entire process is audited with records stored in the Activity Logs.

Customer Lockbox is now generally available and currently enabled for remote desktop access to virtual machines. To learn more, please see the Customer Lockbox for Microsoft Azure documentation.

What will you learn?

Missed the broadcast, or want to dive deeper into SIEM, IoT, networking, or Security Center?

Check out the Azure Security Expert series which includes the best practice session with Ann, and additional drill-down sessions including:

  • Get started with Azure Sentinel, a cloud-native SIEM
  • What is cloud-native Azure Network Security?
  • Securing the hybrid cloud with Security Center
  • What makes IoT Security different?

Until June 26th, 2019, you will have a chance to win a Microsoft Xbox One S. To enter, watch the sessions, complete the knowledge check on the entry form, and submit your entry.**

‘Ask Us Anything’ with Azure security experts

Have more questions? The Azure security team will be hosting an ‘Ask Us Anything’ session on Twitter on Monday June 24, 2019 from 10 am – 11:30 am PT (1 pm – 2:30 pm ET). Our product and engineering teams will be available to answer questions about Azure security services.

Post your questions to Twitter by mentioning @AzureSupport and using the hashtag #AzureSecuritySeries.

If there are follow-ups or additional questions that come up after the Twitter session, no problem! We’re happy to continue the dialogue afterward through Twitter or send your questions to [email protected].

Save the date

How do I learn more about Azure security and connect with the tech community?

There are several ways to stay connected and access new executive talks, on-demand sessions, or other types of valuable content covering a range of cloud security topics to help you get started or accelerate your cloud security plan.

**The Sweepstakes will run exclusively between June 19 – June 26 11:59 PM Pacific Time. No purchase necessary. To enter, you must be a legal resident of the 50 United States (including the District of Columbia), and be 18 years of age or older. You will need to complete all the Knowledge Check questions in the entry form to qualify for the sweepstakes. Please refer to our official rules for more details.

Azure Security Expert Series: Best practices from Ann Johnson

June 19th 10 am – 11 am PT (1 pm – 2 pm ET)

With more computing environments moving to the cloud, the need for stronger cloud security has never been greater. But what constitutes effective cloud security, and what best practices should you be following?

We are excited to launch the Azure Security Expert Series on June 19 for security operations and IT professionals. The series kicks off with Ann Johnson, CVP of Cybersecurity at Microsoft, joining other industry experts to discuss a wide range of cloud security topics.

Save the date


What is included in the event?

You’ll hear from Ann Johnson on the following:

  • Cloud security best practices with Hayden Hainsworth, GM and Partner, Cybersecurity Engineering at Microsoft.
  • The latest Azure security innovations.
  • Partnership with Ran Nahmias, Head of Cloud Security at Check Point Software Technologies.
  • Security principles from a Microsoft enterprise customer.

During the streaming event, you’ll have the opportunity to participate in a live chat with Microsoft cloud security experts.

You will also be able to watch our new on-demand sessions at your own pace, all led by Microsoft security product experts, to gain practical knowledge on these topics:

  • Get started with Azure Sentinel, a cloud-native SIEM.
  • What is cloud-native Azure Network Security?
  • Securing the hybrid cloud with Security Center.
  • What makes IoT Security different?

That’s not all! You will have a chance to win a Microsoft Xbox One S after the event. All you have to do is watch all the sessions, complete the knowledge check questions on the sweepstakes form, and submit your entry!**

Tune in

So mark your calendars today, and we’ll see you online on Wednesday, June 19 at 10 am PT (1 pm ET).

Ask Us Anything with Azure security experts

Have more questions? The Azure security team will be hosting an ‘Ask Us Anything’ session on Twitter on Monday, June 24, 2019 from 10 am – 11:30 am PT (1 pm – 2:30 pm ET). Members of the product and engineering teams will be available to answer questions about Azure security services.

Post your questions to Twitter by mentioning @AzureSupport and using the hashtag #AzureSecuritySeries.

If there are follow-ups or additional questions that come up after the Twitter session, no problem! We’re happy to continue the dialogue afterwards through Twitter or send your questions to [email protected].

How do I learn more about Azure security and connect with the tech community?

There are several ways to stay connected and access new executive talks, on-demand sessions, or other types of valuable content covering a range of cloud security topics to help you get started or accelerate your cloud security plan.

  • Watch for content on the Azure Security Expert Series page.
  • Visit Microsoft Azure for product details.
  • Follow Microsoft Azure on Twitter for Azure security news and updates.
  • Join our Security Community to connect with the engineering teams, participate in previews and group discussions, and give feedback.
  • Accelerate your knowledge of Azure security capabilities with hands-on training courses on Microsoft Learn (watch for new security training sessions in the coming months).
  • Attend Microsoft Ignite for specialized security learning paths to learn from the experts and connect with your peers.

**The Sweepstakes will run exclusively between June 19 and June 26, 11:59 PM Pacific Time. No purchase necessary. To enter, you must be a legal resident of the 50 United States (including the District of Columbia) and be 18 years of age or older. You will need to complete all the Knowledge Check questions in the entry form to qualify for the sweepstakes. Please refer to our official rules for more details.

5 cloud sessions from Google I/O ’19, from basic to advanced

Our goal is to make Google Cloud the best place for developers, and Google I/O is one of our favorite ways to spend quality time with the developer community to better understand your needs and challenges. During I/O, we provided a number of breakout sessions aimed at supporting you as you build on Google Cloud, and these are all recorded so that anyone—not just I/O attendees—can learn more and uplevel their skills.

Below are five of our favorite Google Cloud sessions from this year. We’ve ordered these from introductory to advanced, so you can move at your own pace. Start with the basics, then work up to expert topics like building your own machine learning model.

1. Google Cloud Platform (GCP) Essentials
From compute to storage to databases, to say nothing of things like continuous integration tools, DevOps, and machine learning, Google Cloud provides so many options, but not everyone knows where to begin. This session gives you a complete overview of GCP and will leave you with an understanding of the tools available to meet your needs and how to get started.

2. Code, Build, Run, and Observe with Google Cloud
Creating great backend services requires great tools and infrastructure, and our goal with GCP has always been to give developers the resources they need to build. This session offers an overview of GCP products that make it easy to code, build, run, and observe your applications and services with Google Cloud.

3. Making the Right Decisions for Your Serverless Architecture
Chomping at the bit to build a complete end-to-end service entirely on serverless technologies? There are many things you might want to keep in mind as you’re building. This session explains the thought process and methodology we use inside Google, and introduces the constraints of working in environments without persistence.

4. Train Custom Machine Learning Models with No Data Science Expertise
Want to create high quality custom machine learning models but are not an ML expert? Cloud AutoML leverages Google’s state-of-the-art neural architecture search technology to help you do exactly that. Learn how to build and deploy with AutoML Tables, AutoML Video Intelligence, and AutoML Natural Language—and even see how AutoML would fare if it were to participate in data science competitions.

5. Live Coding a Machine Learning Model from Scratch
In far and away our most popular cloud session at this year’s I/O, developer advocate Sara Robinson takes you from an empty Colab notebook to a working model: coding it with TensorFlow and Keras, training it, deploying it to Cloud AI Platform for serving, and generating predictions. This is an excellent session for anyone interested in building a machine learning model in a Jupyter notebook and serving it in production with ease.

Want more? You can find recordings of all our Google Cloud sessions at I/O here.

Join Microsoft at ISC2019 in Frankfurt

Computing plays a deep and wide role in addressing issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that can be hard to scale, take a long time to deliver, and were traditionally limited to a few organizations. Microsoft Azure delivers high-performance computing (HPC) capabilities and tools, integrated into a global-scale cloud platform, to power solutions that address these challenges.

Join us in Frankfurt, Germany from June 17–19, 2019 at the world’s second-largest supercomputing show, ISC High Performance 2019. Learn how Azure customers harness the flexibility and elasticity of the cloud, and how to integrate both our specialized compute virtual machines (VMs) and bare-metal offerings from Cray.

Microsoft booth presentations and topics include:

  • How to achieve high-performance computing on Azure
  • Cray Supercomputing on Azure
  • Cray ClusterStor on Azure with H-Series VMs
  • AI and HPC with NVIDIA
  • Autonomous driving
  • Live demos
  • Case studies from partners and customers
  • More about our recently launched HB and HC virtual machines

To learn more, please come by the Microsoft booth, K-530, and say “hello” between June 17 and June 19.

Microsoft, AMD, and Cray breakfast at ISC

Please join us for a breakfast co-hosted by Microsoft, AMD, and Cray on June 19, 2019, where we will discuss how to successfully support your large-scale HPC jobs in the cloud. In this session we will discuss our recently launched offerings with Cray in Azure, as well as the Azure HB-series VMs optimized for applications driven by memory bandwidth, all powered by AMD EPYC processors. The breakfast is at the Frankfurt Marriott in Gold I-III (1st Floor) from 7:45 AM – 9:00 AM. Please feel free to register for this event.

Supercomputing in the cloud

Building on our strong relationship with Cray, we’re excited to showcase our three new dedicated offerings at ISC. We look forward to showcasing our accelerated innovation and delivery of next generation HPC and AI technologies to Azure customers.

We’re looking forward to seeing you at ISC.

Microsoft’s ISC schedule

Tuesday, June 18, 2019

  • 10:30 AM – 10:50 AM | Burak Yenier, CEO, TheUberCloud Inc. | UberCloud and Microsoft are helping customers move their engineering workloads to Azure
  • 11:30 AM – 11:50 AM | Mohammad Zamaninasab, AI TSP GBB, Microsoft | Artificial intelligence with Azure Machine Learning, Cognitive Services, and Databricks
  • 12:30 PM – 12:50 PM | Dr. Ulrich Knechtel, CSP Manager – EMEA, NVIDIA | Accelerate your HPC workloads with NVIDIA GPUs on Azure
  • 1:30 PM – 1:50 PM | Uli Plechschmidt, Storage Marketing, Cray | Why moving large-scale, extremely I/O-intensive HPC applications to Microsoft Azure is now possible
  • 2:30 PM – 2:50 PM | Joseph George, Executive Director, Cray Inc. | 5 reasons why you can maximize your manufacturing environment with Cray in Azure
  • 3:30 PM – 3:50 PM | Martin Hilgeman, Senior Manager, AMD HPC Centre of Excellence | Turbocharging HPC in the cloud with AMD EPYC
  • 4:30 PM – 4:50 PM | Evan Burness, Principal Program Manager, Azure HPC, Microsoft | HPC infrastructure in Azure
  • 5:30 PM – 5:50 PM | Niko Dukic, Senior Program Manager for Azure Storage, Microsoft | Azure Storage ready for HPC

Wednesday, June 19, 2019

  • 10:30 AM – 10:50 AM | Gabriel Broner, Vice President and General Manager of HPC, Rescale Inc. | Rescale HPC platform on Microsoft Azure
  • 11:30 AM – 11:50 AM | Martijn de Vries, CTO, Bright Computing | Enabling hybrid clusters that span on-premises and Microsoft Azure
  • 12:30 PM – 12:50 PM | Rob Futrik, Program Manager, Microsoft | HPC cluster management in Azure via Microsoft programs: CycleCloud / Azure Batch / HPC Pack
  • 1:30 PM – 1:50 PM | Christopher Woll, CTO, GNS Systems | Digital Engineering Center – the HPC workplace of tomorrow, today
  • 2:30 PM – 2:50 PM | Addison Snell, CEO, Intersect360 Research | HPC and AI market update
  • 3:30 PM – 3:50 PM | Rick Watkins, Director of Appliance and Cloud Solutions, Altair | Altair HyperWorks Unlimited Virtual Appliance (HWUL-VA) – easy-to-use HPC-powered CAE solvers running on Azure
  • 4:30 PM – 4:50 PM | Gabriel Sallah, PSE GBB, Microsoft | Deploying autonomous driving on Azure
  • 5:30 PM – 5:50 PM | Brock Taylor, Engineering Director and HPC Solutions Architect | HPC as a service: on-premises and off-premises considerations for the cloud

Microsoft hosts HL7 FHIR DevDays

This blog post was co-authored by Greg Moore, Corporate Vice President, Microsoft Healthcare and Peter Lee, Corporate Vice President, Microsoft Healthcare.

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus next week for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference June 10–12, 2019 on our Redmond campus, and to engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together to help advance the FHIR specification.

We believe that FHIR will be an incredibly important piece of the healthcare future. Its modern design enables a new generation of AI-powered applications and services, and it provides an extensible, standardized format that makes it possible for all health IT systems to not only share data so that it can get to the right people where and when they need it, but also turn that data into knowledge. While real work has been underway for many years on HL7 FHIR, today it has become one of the most critical technologies in health data management, leading to major shifts in both the technology and policy of healthcare. 

Given the accelerating shift of healthcare to the cloud, FHIR in the cloud presents a potentially historic opportunity to advance health data interoperability. For this reason, last summer in Washington, DC, we stood with leaders from AWS, Google, IBM, Oracle, and Salesforce to make a joint pledge to adopt technologies that promote the interoperability of health data. But we all know that FHIR is not magic. To make the liberation of health data a reality, developers and other stakeholders will need to work together, which is why community events like HL7 FHIR DevDays are so important. They allow us to try out new ideas in code and discuss a variety of areas, from the basics of FHIR, to its use with medical devices, imaging, research, security, privacy, and patient empowerment.

The summer of 2019 may indeed be the coming of age for FHIR, with the new version of the standard called “FHIR release 4” (R4) reaching broader adoption, new product updates from Microsoft, and new interop policies from the US government that will encourage the industry to adopt FHIR more broadly.

New FHIR standard progressing quickly

Healthcare developers can start building with greater confidence that FHIR R4 will help connect people, data, and systems. R4 is the first version to be “normative,” which means it is an official, stable part of the specification and all future versions will be backward compatible with it.

Microsoft adding more FHIR functionality to Azure

Microsoft is doing its part to realize the benefits of health data interoperability with FHIR, and today we’re announcing that our open source FHIR Server for Azure now supports FHIR R4, available today.

We have added a new data persistence provider implementation to the open source FHIR Server for Azure. The new SQL persistence provider enables developers to configure their FHIR server instance to use either an Azure Cosmos DB backed persistence layer, or a persistence layer using a SQL database, such as Azure SQL Database. This will make it easier for customers to manage their healthcare applications by adding more capabilities for their preferred SQL provider. It will extend the capability of a FHIR server in Azure to support key business workloads with new features such as chained queries and transactions.

Growing ecosystem of customers and partners

Our Azure API for FHIR already has a broad partner ecosystem in place and customers using the preview service to centralize disparate data.

Northwell Health, the largest employer in New York state with 23 hospitals and 700 practices, is using the Azure API for FHIR to build interoperability into its data flow solution to reduce excess days for patients. This helps ensure that patients stay only as long as clinical care requires, and that non-clinical delays in discharging a patient do not occur.

Our open source FHIR Server for Azure is already creating a tighter feedback loop with the developers and partners who have quickly innovated on top of the project.

Darena Solutions used the open source FHIR Server for Azure to develop its Blue Button application, BlueButtonPRO. This will allow patients to import their data from the Centers for Medicare & Medicaid Services (CMS) through Blue Button. More importantly, it gives patients a simple and secure way to download, view, manage, and share healthcare data from any FHIR portals they have access to.

US Health IT Policy proposal to adopt FHIR

The DevDays conference also comes on the heels of the US government’s proposed ruling to improve interoperability of health data embodied in the 21st Century Cures Act, which includes the use of FHIR.

Microsoft supports the focus in these proposed rules on reducing barriers to interoperability because we are confident that the result will be good for patients. Interoperability and the seamless flow of health data will enable a more informed and empowered consumer. We expect the health industry will respond with greater efficiency, better care, and cost savings.

We’re at a pivotal moment for health interoperability, where all the bottom-up development in the FHIR community is meeting top-down policy decisions at the federal level.

Health data interoperability at Microsoft

Integrating health data into our platforms is a huge commitment for Microsoft, and Azure with FHIR is just the start. Now that FHIR is baked into the core of Azure, the Microsoft cloud will natively speak FHIR as the language for health data, and we plan for all our services to inherit that ability.

Healthcare today and into the future will demand a broad perspective and creative, collaborative problem-solving. Looking ahead, Microsoft intends to continue an open, collaborative dialogue with the industry and community, from FHIR DevDays to the hallways of our customers and partners.

FHIR is a part of our healthcare future, and FHIR DevDays is a great place to start designing for that future.