New study: The state of AI in the enterprise

Editor’s note: Today we hear from one of our Premier partners, Deloitte. Deloitte’s recent report, The State of AI in the Enterprise, 2nd Edition, examines how businesses are thinking about—and deploying—AI services.

From consumer products to financial services, AI is transforming the global business landscape. In 2017, we began our relationship with Google Cloud to help our joint customers deploy and scale AI applications for their businesses. These customers frequently tell us they’re seeing steady returns on their investments in AI, and as a result, they’re interested in more ways to increase those investments.

We regularly conduct research on the broader market trends for AI, and in November of 2018, we released our second annual “State of AI in the Enterprise” study. It showed that industry trends at large reflect what we hear from our customers: the business community remains bullish on AI’s impact.

In this blog post, we’ll examine some of the key takeaways from our survey of 1,100 IT and line-of-business executives and discuss how these findings are relevant to our customers.

Enterprises are doubling down on AI—and seeing financial benefits

More than 95 percent of respondents believe that AI will transform both their businesses and their industries. A majority of survey respondents have already made large-scale investments in AI, with 37 percent saying they have committed $5 million or more to AI-specific initiatives. Nearly two-thirds of respondents (63 percent) feel AI has completely upended the marketplace and they need to make large-scale investments to catch up with rivals—or even to open a narrow lead.

A surprising 82 percent of our respondents told us they’ve already gained a financial return from their AI investments. But that return is not equal across industries. Technology, media, and telecom companies, along with professional services firms, have made the biggest investments and realized the highest returns. In contrast, the public sector and financial services, with lower investments, lag behind. With 88 percent of surveyed companies planning to increase AI spending in the coming year, there’s a significant opportunity to increase both revenue and cost savings across all industries. However, as with past transformative technologies, selecting the right AI use cases will be key to realizing near- and long-term benefits.

Enterprises are using a broad range of AI technologies, increasingly in the cloud

Our findings show that enterprises are employing a wide variety of AI technologies. More than half of respondents say their businesses are using statistical machine learning (63 percent), robotic process automation (59 percent), or natural language processing and generation (53 percent). Just under half (49 percent) are still using expert or rule-based systems, and 34 percent are using deep learning.

When asked how they were accessing these AI capabilities, 59 percent said they relied on enterprise software with AI capabilities (much of which is available in the cloud) and 49 percent said, “AI as a service” (again, presumably in the cloud). Forty-six percent, a surprisingly high number, said they were relying upon automated machine learning—a set of capabilities that are only available in the cloud. It’s clear, then, that the cloud is already having a major effect on AI use in these large enterprises.

These trends suggest that public cloud providers can become the primary way businesses access AI services. As a result, we believe this could lower the cost of those services and enhance their capabilities at the same time. In fact, our research shows that AI technology companies are investing more R&D dollars into enhancing cloud-native versions of AI systems. If this trend continues, it seems likely that enterprises seeking best-of-breed AI solutions will increasingly need to access them from cloud providers.

There are still challenges to overcome

Given the enthusiasm surrounding AI technologies, it is not surprising that organizations also need to supplement their investments in talent. Although 31 percent of respondents listed “lack of AI skills” as a top-three concern—below such issues as implementation, integration, and data—HR teams need to look beyond technology skills to understand their organization’s pain points and end goals. Companies should try to build teams that bring a mix of business and technology experience to help fully realize the potential of their AI projects.

Our respondents also had concerns about AI-related risks. A little more than half (51 percent) are worried about cybersecurity issues around AI, and 43 percent are concerned about “making the wrong strategic decisions based on AI recommendations.” Companies have also begun to recognize ethical risks from AI, the most common being “using AI to manipulate information and create falsehoods” (43 percent).

In conclusion

Despite some challenges, our study suggests that enterprises are enthusiastic about AI, have already seen value from their investments, and are committed to expanding those investments. Looking forward, we expect to see substantial growth in AI and its cloud-based implementations, and that businesses will increasingly turn to public cloud providers as their primary method of accessing them.

Deloitte was proud to be named Google Cloud’s Global Services Partner of the Year for 2017, in part due to our joint investments in AI. To learn more about how we can help you accelerate your organization’s AI journey, contact [email protected].

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.

New device modeling experience in Azure IoT Central

On the Azure IoT Central team, we are constantly talking with our customers to understand how we can continue to provide more value. One of our top pieces of product feedback has been a request for a clearer device modeling experience that separates the device instance from the device template. Previously, viewing the device and editing the device template took place on the same page through an “Edit Template” button, which made it unclear whether a change applied only to that device or to every device built from that template. Recently we’ve begun a flighted rollout of a new device modeling experience that begins to address this feedback directly.

Device modeling experience in Azure IoT Central

For app builder roles, we have introduced a new “Device Templates” navigation tab that replaces the existing “Application Builder” tab, and we have updated the way you view and edit your device templates. To edit a device template, visit the “Device Templates” tab. To view or interact with a device instance, you can still go to the “Explorer” tab. We’re excited to get this first set of changes into your hands so that device templates and the device explorer can continue to evolve independently of one another, in order to best support how our users interact with their devices. These changes both optimize the operator experience of viewing and interacting with devices and streamline the builder workflow of creating or modifying a template.

These changes are an important first step toward optimizing your device workflow for easier management and greater clarity. Please leave us feedback at Azure IoT Central UserVoice as we continue to invest in understanding and addressing our customers’ needs.

To learn more, please visit our documentation, “Set up a device template.”

Creating IoT applications with Azure Database for PostgreSQL

There are numerous IoT use cases across industries, with common categories like predictive maintenance, connected vehicles, anomaly detection, asset monitoring, and many others. For example, in water treatment facilities in the state of California, IoT devices can be installed in water pumps to measure horsepower, flow rate, and electricity usage. The events emitted from these devices are sent to an IoT hub every 30 seconds for aggregation and processing. A water treatment company could build a dashboard to monitor the pumps and configure notifications that alert the maintenance team when the event data crosses a certain threshold, for example when the flow rate drops dangerously low and a pump needs repair. This is a very typical predictive maintenance IoT use case.

Azure IoT is a complete stack of IoT solutions: a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets. The common set of components in the Azure IoT core subsystem includes:

  • IoT devices that stream events
  • A cloud gateway, most often Azure IoT Hub, which enables communication to and from devices and edge devices
  • Stream processing that ingests events from the devices and triggers actions based on the output of the analysis; a common workflow takes input telemetry encoded in Avro and returns output telemetry encoded in JSON for storage
  • Storage, usually a database, that holds IoT event data for reporting and visualization

Let’s take a look at how we implement an end-to-end Azure IoT solution and use Azure Database for PostgreSQL to store IoT event data in the JSONB format. Using PostgreSQL as the NoSQL data store has its own advantages: strong native JSON processing, indexing capabilities, and the plv8 extension, which further enhances it by integrating the JavaScript V8 engine with SQL. Besides the managed service capabilities and lower cost, one of the key advantages of using Azure Database for PostgreSQL is its native integration with the Azure ecosystem, which enables modern applications with improved developer productivity.

In this implementation, we use Azure Database for PostgreSQL with the plv8 extension as a persistent layer for the IoT telemetry stream, supporting storage, analytics, and reporting. The high-speed streaming data is first loaded into the PostgreSQL database (the master server) as the persistent layer. The master server handles high-speed data ingestion, while the read replicas are used for reporting and downstream data processing to take data-driven actions. You can use Azure IoT Hub as the event processing hub and an Azure Function to trigger the processing steps, extracting what’s needed from emitted events and storing it in Azure Database for PostgreSQL.


In this post, we’ll walk through the high-level implementation to get you started. Our GitHub repository has sample applications and a detailed QuickStart tutorial with step-by-step instructions for implementing the solution below. The QuickStart uses Node.js applications to send telemetry to the IoT Hub.

Step 1: Create an Azure IoT Hub and register a device with the Hub

In this implementation, the IoT sensor simulators constantly emit temperature and humidity data back to the cloud. The first step is to create an Azure IoT Hub in the Azure portal using these instructions. Next, register the device in the IoT Hub so that the hub can receive and process telemetry from registered devices.

In GitHub, you will find sample scripts that register the device using the Azure CLI and export the IoT Hub service connection string.
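If you’d rather script this step in Node.js than use the CLI, a minimal sketch along the same lines might look like the following. It assumes the azure-iothub package, an iothubowner connection string exported as IOTHUB_CONNECTION_STRING, and a placeholder device ID; the repository’s CLI scripts remain the reference.

```javascript
// register-device.js -- a minimal sketch, not the QuickStart's actual script.
const iothub = require('azure-iothub');

// The IoT Hub service (iothubowner) connection string, exported beforehand.
const connectionString = process.env.IOTHUB_CONNECTION_STRING;
const registry = iothub.Registry.fromConnectionString(connectionString);

// 'mypump01' is a placeholder device ID used only for illustration.
registry.create({ deviceId: 'mypump01' }, (err, device) => {
  if (err) {
    console.error('Could not register device:', err.message);
    return;
  }
  console.log('Registered device:', device.deviceId);
});
```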

Step 2: Create an Azure Database for PostgreSQL server and an IoT demo database to store the telemetry data stream

Provision an Azure Database for PostgreSQL server with the appropriate size, using either the Azure portal or the Azure CLI.

In the database, you will enable the plv8 extension and create a sample plv8 function that extracts the temperature value from the JSON documents so it can be queried directly. You can use a table with a JSONB column to store the IoT telemetry data. The script that creates the database and table and enables the plv8 extension is in GitHub; a rough sketch of what it does follows.
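As an illustration only, here is a hedged sketch of the same setup written as a Node.js script with the pg client. The table name (iot_events), its column layout, and the function name (get_temperature) are assumptions for this example, not the repository’s actual identifiers.

```javascript
// setup-db.js -- a minimal sketch, assuming the pg package and an
// Azure Database for PostgreSQL connection string in DATABASE_URL.
const { Client } = require('pg');

async function setupDatabase() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Enable the plv8 extension on this database.
  await client.query('CREATE EXTENSION IF NOT EXISTS plv8;');

  // Store each telemetry event as a JSONB document.
  await client.query(`
    CREATE TABLE IF NOT EXISTS iot_events (
      id        bigserial PRIMARY KEY,
      device_id text,
      payload   jsonb,
      received  timestamptz DEFAULT now()
    );`);

  // A plv8 function that pulls the temperature value out of a JSON document.
  // plv8 passes jsonb arguments to the function body as JavaScript objects.
  await client.query(`
    CREATE OR REPLACE FUNCTION get_temperature(payload jsonb)
    RETURNS numeric AS $$
      return payload.temperature;
    $$ LANGUAGE plv8 IMMUTABLE;`);

  await client.end();
}

setupDatabase().catch(console.error);
```

With a function like this in place, a reporting query against a read replica could compute, for example, the average temperature per device: SELECT device_id, avg(get_temperature(payload)) FROM iot_events GROUP BY device_id.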

Step 3: Create an Azure Function with an Event Hub trigger to extract messages and store them in PostgreSQL

Next, you will create a JavaScript Azure Function with an Event Hub trigger binding to the Azure IoT Hub created in Step 1. Use the JavaScript index.js sample to create this function. The function is triggered for each incoming message in the IoT Hub; it extracts the JSON payload and inserts the data into the PostgreSQL database created in Step 2.
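The index.js sample in the repository is the authoritative version. As a hedged sketch of the general shape of such a function, assuming a batch (cardinality: many) Event Hub trigger binding named IoTHubMessages and the illustrative iot_events table from the previous step, it could look like this:

```javascript
// index.js (sketch) -- an Event Hub-triggered Azure Function that writes
// IoT Hub messages into Azure Database for PostgreSQL. The binding name,
// payload fields, and table name are assumptions for illustration.
const { Client } = require('pg');

module.exports = async function (context, IoTHubMessages) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  try {
    for (const message of IoTHubMessages) {
      // With a JSON content type, each message arrives as a parsed object,
      // e.g. { deviceId: 'mypump01', temperature: 22.5, humidity: 61.2 }.
      await client.query(
        'INSERT INTO iot_events (device_id, payload) VALUES ($1, $2)',
        [message.deviceId, JSON.stringify(message)]
      );
    }
  } finally {
    await client.end();
  }
};
```

In a production function you would typically reuse a connection pool across invocations instead of connecting on every trigger; this sketch keeps things minimal.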

Getting started by running the IoT solution end to end

We recommend trying this solution out using the sample application in our GitHub repository. In GitHub, you will find steps for running the Node.js application that simulates the generation of event data, creating an IoT Hub and registering a device, sending the event data to the IoT Hub, deploying the Azure Function that extracts the data from the JSON messages, and inserting it into Azure Database for PostgreSQL.
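For a sense of what the simulator does (the repository’s Node.js application is the reference implementation; the device ID and payload fields below are placeholders), a minimal telemetry sender using the azure-iot-device and azure-iot-device-mqtt packages could look like this:

```javascript
// simulate-device.js -- a minimal sketch of a telemetry simulator, assuming
// a device connection string exported as DEVICE_CONNECTION_STRING.
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

const client = Client.fromConnectionString(
  process.env.DEVICE_CONNECTION_STRING,
  Mqtt
);

// Send a simulated temperature/humidity reading every 30 seconds.
setInterval(() => {
  const telemetry = {
    deviceId: 'mypump01', // placeholder device ID
    temperature: 20 + Math.random() * 15,
    humidity: 60 + Math.random() * 20,
  };
  const message = new Message(JSON.stringify(telemetry));

  client.sendEvent(message, (err) => {
    if (err) {
      console.error('Send failed:', err.message);
    } else {
      console.log('Sent:', telemetry);
    }
  });
}, 30000);
```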

After implementing all the steps in GitHub, you will be able to query and analyze the data using reporting tools like Power BI, which lets you build real-time dashboards as shown below.

A sample real-time dashboard built in Power BI from the IoT event data

We hope that you enjoy working with the latest features and functionality available in Azure Database for PostgreSQL. Be sure to share your feedback via UserVoice for PostgreSQL.

If you need any help or have questions, please check out the Azure Database for PostgreSQL documentation.

Acknowledgements

Special thanks to Qingqing Yuan, Bassu Hiremath, Parikshit Savjani, Anitah Cantele, and Rachel Agyemang for their contributions to this post.

How to Install Matomo Web Analytics on CentOS 7

Matomo (formerly Piwik) is a free and open-source web analytics application, developed by a team of international developers, that runs on a PHP/MySQL web server. This tutorial will show you how to install Matomo on a CentOS 7 system using Nginx as the web server, and we will secure the website with a Let’s Encrypt SSL certificate.

Exploring container security: How DroneDeploy achieved ISO-27001 certification on GKE

Editor’s note: Aerial data mapping company DroneDeploy wanted to migrate its on-premises Kubernetes environment to Google Kubernetes Engine—but only if it would pass muster with auditors. Read on to learn how the firm leveraged GKE’s native security capabilities to smooth the path to ISO-27001 certification.

At DroneDeploy, we put a lot of effort into securing our customers’ data. We’ve always been proud of our internal security efforts, and receiving compliance certifications validates these efforts, helping us formalize our information security program, and keeping us accountable to a high standard. Recently, we achieved ISO-27001 certification—all from taking advantage of the existing security practices in Google Cloud and Google Kubernetes Engine (GKE). Here’s how we did it.

As a fast-paced, quickly growing B2B SaaS startup in San Francisco, our mission is to make aerial data accessible and productive for everyone. We do so by providing our users with image processing, automated mapping, 3D modeling, data sharing, and flight controls through iOS and Android applications. Our Enterprise Platform provides an admin console for role-based access and monitoring of flights, mapped routes, image capture, and sharing. We serve more than 4,000 customers across 180 countries in the construction, energy, insurance, and mining industries, and ingest more than 50 terabytes of image data from over 30,000 individual flights every month.

Many of our customers and prospects are large enterprises that have strict security expectations of their third-party service providers. In an era of increased regulation (such as Europe’s GDPR) and heightened data security concerns, the scrutiny on information security management has never been higher. Compliance initiatives are one piece of the overall security strategy that help us communicate our commitment to securing customer data. At DroneDeploy, we chose to start our compliance story with ISO-27001, an international information security standard that is recognized across a variety of industries.

DroneDeploy’s Architecture: Google Kubernetes Engine (GKE)

DroneDeploy was an early adopter of Kubernetes, and we have long since migrated all our workloads from virtual machines to containers orchestrated by Kubernetes. We currently run more than 150,000 Kubernetes jobs each month with run times ranging from a few minutes to a few days. Our tooling for managing clusters evolved over time, starting with hand-crafted bash and Ansible scripts, to the now ubiquitous (and fantastic) kops. About 18 months ago, we decided to re-evaluate our hosting strategy given the decreased costs of compute in the cloud. We knew that managing our own Kubernetes clusters was not a competitive advantage for our business and that we would rather spend our energy elsewhere if we could.

We investigated the managed Kubernetes offerings of the top cloud providers and did some technical due diligence before making our selection—comparing not only what was available at the time but also future roadmaps. We found that GKE had several key features that were missing in other providers such as robust Kubernetes-native autoscaling, a mature control plane, multi-availability zone masters, and extensive documentation. GKE’s ability to run on pre-emptible node pools for ephemeral workloads was also a huge plus.

Proving our commitment to security hardening

But if we were going to make the move, we needed to document our information security management policies and process and prove that we were following best practices for security hardening.

Specifically, when it comes to ISO-27001 certification, we needed to follow the general process:

  1. Document the processes you perform to achieve compliance
  2. Prove that the processes convincingly address the compliance objectives
  3. Provide evidence that you are following the process
  4. Document any deviations or exceptions

While Google Cloud offers hardening guidance for GKE and several GCP blogs to guide our approach, we still needed to prove that we had security best practices in place for our critical systems. With newer technologies, though, it can be difficult to provide clear evidence to an auditor that those best practices are in place; they often live in the form of blog posts by core contributors and community leaders versus official, documented best practices. Fortunately, standards have begun to emerge for Kubernetes. The Center for Internet Security (CIS) recently published an updated compliance benchmark for Kubernetes 1.11 that is quite comprehensive. You can even run automated checks against the CIS benchmark using the excellent open source project kube-bench. Ultimately though, it was the fact that Google manages the underlying GKE infrastructure that really helped speed up the certification process.  

Compliance with less pain thanks to GKE

As mentioned, one of the main reasons we switched from running Kubernetes in-house to GKE was to reduce our investment in manually maintaining and upgrading our Kubernetes clusters, including the work that goes into our compliance initiatives. GKE reduces the overall footprint that our team has to manage, since Google itself manages and documents much of the underlying infrastructure. We’re now able to focus on improving and documenting the parts of our security procedures that are unique to our company and industry, rather than having to meticulously document the foundational technologies of our infrastructure.

For Kubernetes, here’s a snippet of how we documented our infrastructure using the four steps described above:

  1. We implemented security best practices within our Kubernetes clusters by ensuring all of them are benchmarked using the Kubernetes CIS guide. We use kube-bench for this process, which we run on our clusters once every quarter.
  2. A well respected third-party authority publishes this benchmark, which confirms that our process addresses best practices for using Kubernetes securely.
  3. We provided documentation that we assessed our Kubernetes clusters against the benchmark, including the tickets to track the tasks.
  4. We provided the results of our assessment and documented any policy exceptions and proof that we evaluated those exceptions against our risk management methodology.

Similar to the physical security sections of the ISO-27001 standard, the CIS benchmark has large sections dedicated to security settings for Kubernetes masters and nodes. Because we run on GKE, Google handled 95 of the 104 line items in the benchmark applicable to our infrastructure. For those items that could not be assessed against the benchmark (because GKE does not expose the masters), we provided links to Google’s security documentation on those features (see Cluster Trust and Control Plane Security).

Beyond GKE, we were also able to take advantage of many other Google Cloud services that made it easier for us to secure our cloud footprint (although the shared responsibility model for security means we can’t rely on Google Cloud alone):

  • For OS-level security, we were able to document strong security practices because we use Google’s Container-Optimized OS (COS), which provides many security best practices by default, such as a read-only root file system. All that was left for us to do was follow best practices to help secure our workloads.
  • We use node auto-upgrade on our GKE nodes to handle patch management at the OS layer. For the level of effort involved, we found that node auto-upgrade strikes a good balance between staying patched and maintaining stability. To date, we have not had any issues with our software as a result of node auto-upgrade.
  • We use Container Analysis (which is built into Google Container Registry) to scan for known vulnerabilities in our Docker images.
  • ISO-27001 requires that you demonstrate the physical security of your network infrastructure. Because we run our entire infrastructure in the cloud, we were able to directly rely on Google Cloud’s physical and network security for portions of the certification (Google Cloud is ISO-27001 certified amongst other certifications).

DroneDeploy is dedicated to giving our customers access to aerial imaging and mapping technologies quickly and easily. We handle vast amounts of sensitive information on behalf of our customers, and we want them to know that we are following security best practices even when the underlying technology gets complicated, as in the case of Kubernetes. For DroneDeploy, switching to GKE and Google Cloud has helped us reduce our operational overhead and increase the velocity with which we achieve key compliance certifications. To learn more about DroneDeploy and our experience using Google Cloud and GKE, feel free to reach out to us.

Recursion Pharmaceuticals accelerates drug discovery with Google Cloud

Despite advances in scientific research and medical technology, the process of drug discovery has become increasingly slower and more expensive over the last decade. While the pharmaceutical industry has spent more money on research and development each year, this has not resulted in an increase in the number of FDA-approved new medicines. Recursion, headquartered in Salt Lake City, is looking to address this declining productivity by combining rich biological datasets with the latest in machine learning to reinvent the drug discovery and development process.

Today, Recursion has selected Google Cloud as their primary public cloud provider as they build a drug discovery platform that combines chemistry, automated biology, and cloud computing to reveal new therapeutic candidates, potentially cutting the time to discover and develop a new medicine by a factor of 10.

In order to fulfill their mission, Recursion developed a data pipeline that incorporates image processing, inference engines and deep learning modules, supporting bursts of computational power that weigh in at trillions of calculations per second. In just under two years, Recursion has created hundreds of disease models, generated a shortlist of drug candidates across several diseases, and advanced drug candidates into the human testing phase for two diseases.

Starting with wet biology—plates of glass-bottom wells containing thousands of healthy and diseased human cells—biologists run experiments on the cells, applying stains that help characterize and quantify the features of the cellular samples: their roundness, the thickness of their membranes, the shape of their mitochondria, and other characteristics. Automated microscopes capture this data by snapping high-resolution photos of the cells at several different light wavelengths. The data pipeline, which sits on top of Google Kubernetes Engine (GKE) and Confluent Kafka, all running on GCP, extracts and analyzes cellular features from the images. Then the data are processed by deep neural networks to find patterns, including those humans might not recognize. The neural nets are trained to compare healthy and diseased cell signatures with those of cells before and after a variety of drug treatments. This process yields promising new potential therapeutics.

To train its deep learning models, Recursion uses on-premises GPUs, then they use GCP CPUs to perform inference on new images in the pipeline using these models. Recursion is currently evaluating cloud-based alternatives including using Cloud TPU technology to accelerate and automate image processing. Since Recursion is already using TensorFlow to train its neural networks in its proprietary biological domains, Cloud TPUs are a natural fit. Additionally, Recursion is exploring using GKE On-Prem, the foundation of Cloud Services Platform, to manage all of their Kubernetes clusters from a single, easy-to-use console.

We’re thrilled to collaborate with Recursion in their quest to more rapidly and inexpensively discover new medicines for dozens of diseases, both rare and common. Learn more about how Recursion is using Google Cloud solutions to better execute its mission of “decoding biology to radically improve lives” here. You can also learn more about solutions for life sciences organizations and our Google Cloud for Startups Program.

Everyday AI: beyond spell check, how Google Docs is smart enough to correct grammar

Written communication is at the heart of what drives businesses. Proposals, presentations, emails to colleagues—this all keeps work moving forward. This is why we’ve built features into G Suite to help you communicate effectively, like Smart Compose and Smart Reply, which use machine learning smarts to help you draft and respond to messages quickly. More recently, we’ve introduced machine translation techniques into Google Docs to flag grammatical errors within your documents as you draft them.

If you’ve ever questioned whether to use “a” versus “an” in a sentence, or if you’re using the correct verb tense or preposition, you’re not alone. Grammar is nuanced and tricky, which makes it a great problem to solve with the help of artificial intelligence. Here’s a look at how we built grammar suggestions in Docs.

The gray areas of grammar

Although we generally think of grammar as a set of rules, these rules are often complex and subjective. In spelling, you can reference a resource that tells you whether a word exists or how it’s spelled: dictionaries (Remember those?).

Grammar is different. It’s a harder problem to tackle because its rules aren’t fixed. It varies based on language and context, and may change over time, too. To make things more complicated, there are many different style books—whether it be MLA, AP or some other style—which makes consistency a challenge.

Given these nuances, even the experts don’t always agree on what’s correct. For our grammar suggestions, we worked with professional linguists to proofread sample sentences to get a sense of the true subjectivity of grammar. During that process, we found that linguists disagreed on grammar about 25 percent of the time. This raised the obvious question: how do we automate something that doesn’t run on definitive rules?

Where machine translation makes a mark

Much like having someone red-line your document with suggestions on how to replace “incorrect” grammar with “correct” grammar, we can use machine translation technology to help automate that process. At a basic level, machine translation performs substitution and reorders words from a source language to a target language, for example, substituting a “source” word in English (“hello!”) for a “target” word in Spanish (¡hola!). Machine translation techniques have been developed and refined over the last two decades throughout the industry, in academia and at Google, and have even helped power Google Translate.

Along similar lines, we use machine translation techniques to flag “incorrect” grammar within Docs using blue underlines, but instead of translating from one language to another like with Google Translate, we treat text with incorrect grammar as the “source” language and correct grammar as the “target.”

Grammar suggestions in Google Docs

Working with the experts

Before we could train models, we needed to define “correct” and “incorrect” grammar. What better way to do so than to consult the experts? Our engineers worked with a collection of computational and analytical linguists, with specialties ranging from sociology to machine learning. This group supports a host of linguistic projects at Google and helps bridge the gap between how humans and machines process language (and not just in English—they support over 40 languages and counting).

For several months, these linguists reviewed thousands of grammar samples to help us refine machine translation models, from classic cases like “there” versus “their” versus “they’re,” to more complex rules involving prepositions and verb tenses. Each sample received close attention—three linguists reviewed each case to identify common patterns and make corrections. The third linguist served as the “tie breaker” in case of disagreement (which happened a quarter of the time).

A grammar suggestion in Google Docs

Once we identified the samples, we then fed them into statistical learning algorithms—along with “correct” text gathered from high-quality web sources (billions of words!)—to help us predict outcomes using stats like the frequency at which we’ve seen a specific correction occur. This process helped us build a basic spelling and grammar correction model.

We iterated over these models by rolling them out to a small portion of people who use Docs, and then refined them based on user feedback and interactions. For example, in earlier models of grammar suggestions, we received feedback that suggestions for verb tenses and the correct singular or plural form of a noun or verb were inaccurate. We’ve since adjusted the model to solve for these specific issues, resulting in more precise suggestions. Although it’s impossible to catch 100 percent of issues, we’re constantly evaluating our models at Google to ensure bias does not surface in results such as these.

Better grammar. No ifs, ands or buts.

So if you’ve ever asked yourself “how does it know what to suggest when I write in Google Docs,” these grammar suggestion models are the answer.  They’re working in the background to analyze your sentence structure, and the semantics of your sentence, to help you find mistakes or inconsistencies. With the help of machine translation, here are some mistakes that Docs can help you catch:

Examples of mistakes the grammar suggestion models can catch in Docs

Evolving grammar suggestions, just like language

When it comes to grammar, we’re constantly improving the quality of each suggestion to make corrections as useful and relevant as possible. With our AI-first approach, G Suite is in the best position to help you communicate smarter and faster, without sweating the small stuff. Learn more.

Announcing the general availability of Azure Lab Services

Today, we are very excited to announce the general availability of Azure Lab Services – your computer labs in the cloud.

With Azure Lab Services, you can easily set up and provide on-demand access to preconfigured virtual machines (VMs) to teach a class, train professionals, run hackathons or hands-on labs, and more. Simply input what you need in a lab and let the service roll it out to your audience. Your users go to a single place to access all their VMs across multiple labs, and connect from there to learn, explore, and innovate.

Since our preview announcement, we have had many customers use the service to conduct classes, training sessions, boot camps, hands-on labs, and more! For classroom or professional training, you can provide students with a lab of virtual machines configured with exactly what you need for class and give each student a specified number of hours to use the VMs for homework or personal projects. You can run a hackathon or a hands-on lab at conferences or events and scale up to hundreds of virtual machines for your attendees. You can also create an invite-only private lab of virtual machines installed with your prerelease software to give preview customers access to early trials, or set up interactive sales demos.

Top three reasons customers use Azure Lab Services

Automatic management of Azure infrastructure and scale

Azure Lab Services is a managed service, which means that provisioning and management of a lab’s underlying infrastructure is handled automatically by the service. You can just focus on preparing the right lab experience for your users. Let the service handle the rest and roll out your lab’s virtual machines to your audience. Scale your lab to hundreds of virtual machines with a single click.

Publishing a template in Azure Lab Services

Simple experience for your lab users

Users who are invited to your lab get immediate access to the resources you give them inside your labs. They just need to sign in to see the full list of virtual machines they have access to across multiple labs. They can click on a single button to connect to the virtual machines and start working. Users don’t need Azure subscriptions to use the service.

Screen shot displaying a sample virtual machine in Azure Lab Services

Cost optimization and tracking 

Keep your budget in check by controlling exactly how many hours your lab users can use the virtual machines. Set up schedules in the lab to allow users to use the virtual machines only during designated time slots, or set up recurring auto-shutdown and start times. Keep track of individual users’ usage and set limits.

An example screen shot of how to set up schedules in Azure Lab Services

Get started now

Try Azure Lab Services today! Get started by creating a lab account for your organization or team. All labs are managed under a lab account. You can give permissions to people in your organization to create labs in your lab account.

To learn more, visit the Azure Lab Services documentation. Ask any questions you have on Stack Overflow. Last of all, don’t forget to subscribe to our Service Updates and view other Azure Lab Services posts on the Azure blog to get the latest news.

General availability pricing

Azure Lab Services GA pricing goes into effect on May 1, 2019. Until then, you will continue to be billed based on the preview pricing. Please see the Azure Lab Services pricing page for complete details.

What’s next

We continue to listen to our customers to prioritize and ship new features and updates. Several key features will be enabled in the coming months:

  • Ability to reuse and share custom virtual machine images across labs
  • Feature to enable connections between a lab and on-premises resources
  • Ability to create GPU virtual machines inside the labs

We always welcome any feedback and suggestions. You can make suggestions or vote on priorities on our UserVoice feedback forum.