5 need-to-know networking sessions at Next ‘19

Google Cloud Next ‘19 has everything you need to navigate all the networking products, services, and innovations GCP has to offer. With almost 20 networking sessions at Google Cloud Next this year, we have something for you, whether you’re just starting to move data to Google Cloud or you’re looking to modernize your traffic management using the latest advancements in networking. Here are five sessions that you definitely shouldn’t miss.

1. A Year in GCP Networking
Ferris Bueller said it best, “[Networking] moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” This session provides a 360-degree view of the advancements we have made in networking over the past year, across the 20+ networking products in our portfolio, along the pillars of connecting, securing, optimizing, scaling, and modernizing your network. But we don’t just talk about these advancements in theory. U.S. retailer Target will share how they are using some of the latest networking products and services, right now, to advance their business objectives. Learn more.

2. The High-Performance Network
Google’s network backbone has thousands of miles of fiber optic cable, uses advanced software-defined networking, and provides edge caching services to deliver fast, consistent, and scalable performance. Get an inside look at this premium global network—built around the world and under the sea—and see how Google’s software innovations are designed to make the internet faster. Learn more.

3. Think Big, Think Global
If you’re a global organization, or you want to be one, Google’s global Virtual Private Cloud (VPC) offers the flexibility to scale and control how workloads connect regionally and globally. Learn the advantages of multi-region deployments and check out tips and tricks to keep your VPC secure, how to extend it to on-prem, how to deploy highly available services, and much more. Learn more.

4. Traffic Director and Envoy-Based L7 ILB for Production-Grade Service Mesh and Istio
Service mesh is one of the most important networking paradigms to emerge for delivering multi-cloud applications and (micro)services. Istio is a leading open-source service mesh built using open proxies like Envoy. Be one of the first people to get a close look at Traffic Director, our new GCP-managed service that provides configuration and traffic control service for service mesh. Also get a preview of L7 internal load balancing, which is essentially fully managed Traffic Director and Envoy proxies under the hood but looks like a traditional load balancer, making it easier to bring the benefits of service mesh to brownfield environments. Learn more.

5. Open Systems: Key to Unlocking Multi-Cloud and New Business With Lyft, Juniper, Google
Hear directly from leaders at Juniper, Google and Lyft as they unpack what “open” means to them and how open source, open interfaces, and open systems are paving the path to seamless multi-cloud services and new business models. You will also hear in-depth about several open-source projects including Kubernetes, gRPC, Envoy, Traffic Director, and Tungsten Fabric (Open Contrail) and get a chance to ask questions about bringing these technologies to your own environments. Learn more.

While these five sessions are certainly highlights, it doesn’t end there. From network security, visibility, and monitoring to partner and third-party services discussions, Google Cloud Next ‘19 has the information you need to help you get the most from your network. Be sure to check out the session list here, and register here.

Good vibes only—don’t miss these Cloud Next ‘19 sessions on inclusivity, sustainability

At Google Cloud, we’re excited to join with our customers and build a world that works for everyone. Technology and innovation can help businesses grow sustainably, create richer, more interactive learning experiences, power economies where more people have an opportunity to thrive, and advance inclusion for all. We’ve picked a few of the sessions at Next ‘19 that focus on building a cleaner, more accessible, and more inclusive future for generations to come. These are the ideas that give us good vibes about building with Google Cloud, so be sure not to miss them!

1. Prioritizing diversity and inclusion
Workforce diversity starts with recruiting and hiring diversity. In Inclusive by Design: Engage and Recruit Diverse Talent with AI, you’ll hear how companies like Cox are reaching a larger and more diverse talent pool and making racial, ethnic and gender diversity a key driver of innovation and growth. It takes a more diverse workforce for companies to truly build for all customers. It also makes business sense. In The Business Case for Product Inclusion, you’ll hear from a panel of Google leaders and Google Cloud customers about demonstrating the business value of inclusive products. And in the Chief Diversity Officer Panel: Building Dynamic Inclusive Cultures, you’ll hear from a panel of Chief Diversity Officers about how they are advancing vibrant, inclusive cultures across their organizations.

Another amazing panel during Next ‘19 will share the stories of female technical leaders across Google. In Women of Cloud: How to Grow our Clout 2.0, senior women will discuss their past year in both the field and their careers, and answer your questions about career development and company culture. They’ll likely touch on allyship, or advocating for groups who have been historically excluded from the tech industry. If you want to learn more about allyship, and practice it, Allyship: The Fundamentals will run on both Wednesday and Thursday. This session will give you the chance to practice identity-based leadership by examining your position in social struggles and putting yourself in another’s shoes.

2. Building a more sustainable future
If you’re curious about your environmental impact as a cloud user, join us for Building Sustainability Into Our Infrastructure, Your Goals and New Products. We’ll share what it took to build a cloud with sustainability built-in, and National Geographic will share how they incorporate a corporate focus on the earth into an IT one. SunPower will also join for an exciting announcement as part of their journey toward making home solar accessible to all.  

Making renewable energy like solar a primary energy source for the globe is a big challenge, and so are the challenges faced in global ocean exploitation. The oceans are big—140 million square miles big, or about 70% of the earth’s surface. But less than 5% has been explored. That presents a problem for sustainable fishing management, particularly with dark vessels that do not have any associated location data and could be fishing illegally. In Making Planet-Scale GIS Possible with Google Earth Engine and BigQuery GIS, Google Cloud customer Global Fishing Watch will share how they use Google Earth Engine to automatically extract vessel locations from massive amounts of radar imagery, then use BigQuery GIS to elucidate the dark vessels.

Overfishing is just one example of the natural resource challenges we face. In 2018, the global demand for resources was 1.7 times what the earth can support in one year. Google Cloud and SAP came together to help address this challenge by hosting a sustainability contest for social entrepreneurs. In Circular Economy 2030: Cloud Computing for a Sustainable Revolution, you can learn more about how cloud computing can be mobilized for a sustainable future with responsible consumption and production, and hear the anticipated announcement of the five finalists of Circular Economy 2030.

3. Nonprofit organizations making a positive impact
Global nonprofit organizations are tackling big challenges with Google technology. One area with a promising future is using data analytics and other new technologies to solve some of the world’s greatest challenges, such as unemployment or sustainable development. In Data for Good: Driving Social and Environmental Impact with Big Data Solutions, you’ll hear about how Google Cloud is working to empower nonprofits around the world, and how we’ve collaborated with organizations like the Global Partnership for Sustainable Development Data (GPSDD) to mobilize data for sustainable development across our Data Solutions for Change, Visualize 2030, and Circular Economy 2030 initiatives. In Empowering Global Nonprofits to Drive Impact with G Suite, we’ll talk about how nonprofits are embracing technology to improve how they collaborate, engage with their community, and fundraise for their cause. You’ll hear from two Bay Area organizations making a positive impact on the lives of local youth.

4. Using technology to help the visually impaired
As part of a strategic initiative by the Library of Congress to support users who are visually impaired, one Google Cloud customer is building an app to make reading books more accessible. In Making Books Accessible to the Visually Impaired, SpringML will share how users can now search for and play an audiobook from a Google Home device. Hear about the development process SpringML went through to make almost 1 TB of audio content available via their application. In addition to partnering with our customers on applications like the one SpringML is building, we’re working on improving accessibility with Google Cloud products. In Empowering Entrepreneurs and Employees With Disabilities Using G Suite, the Blind Institute of Technology will share how they used G Suite to establish workflows that are effective and efficient for their employees, some of whom happen to be visually impaired. There’s more on this in the G Suite and Chrome Accessibility Features for an Inclusive Organization session, which goes into depth on the built-in accessibility features of G Suite and Chromebooks.

And finally, a session focused on those who will be building this new world in a few years. Did you know that more than half of school-aged children in the U.S. use Google in their classrooms? Join For Parents and Guardians: How Your Child Uses Google in Class to learn about the tools that are transforming learning outcomes, curriculum, and opportunities for children across the nation.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there.

Taking charge of your data: Understanding re-identification risk and quasi-identifiers with Cloud DLP

Preventing the exposure of personally identifiable information, a.k.a. PII, is a big concern for organizations—and not so easy to do. Google’s Cloud Data Loss Prevention (DLP) can help, offering a variety of techniques to identify and hide PII through an intuitive and flexible platform.

In previous “Taking charge of your data” posts, we talked about how to use Cloud DLP to gain visibility into your data and how to protect sensitive data with de-identification, obfuscation, and minimization techniques. In this post, we’re going to talk about another kind of risk: re-identification, and how to measure and reduce it.

A recent Google Research paper defines re-identification risk as “the potential that some supposedly anonymous or pseudonymous data sets could be de-anonymized to recover the identities of users.” In other words, data that can be connected to an individual can expose information about them and this can make the data more sensitive. For example, the number 54,392 alone isn’t particularly sensitive. However, if you learned this was someone’s salary alongside other details about them (e.g., their gender, zip code, alma mater), the risk of associating that data with them goes up.

Thinking about re-identification risks

There are various factors that can increase or decrease re-identification risks and these factors can shift over time as data changes. In this blog post, we present a way to reason about these risks using a systematic and measurable approach.

Let’s say you want to share data with an analytics team and you want to ensure lower risk of re-identification; there are two main types of identifiers to consider:

  • Direct identifiers – These are identifiers that directly link to and identify an individual. For example, a phone number, email address, or social security number usually qualify as direct identifiers since they are typically associated with a single individual.
  • Quasi-identifiers – These are identifiers that do not uniquely identify an individual in most cases but can in some instances or when combined with other quasi-identifiers. For example, data like someone’s job title may not identify most people in a population, since many people share the same job title. But some values like “CEO” or “Vice President” may apply to only a small group or a single individual.

When assessing re-identification risk, you want to consider how to address both direct identifiers and quasi-identifiers. For direct identifiers, you can consider options like redaction or replacement with a pseudonym or token. To identify risk in quasi-identifiers, one approach is to measure the statistical distribution to find any unique values. For example, take the data point “age 27”. How many people in your dataset are age 27? If there are very few people aged 27 in your dataset, there’s a higher potential risk of re-identification, whereas if there are a lot of people aged 27, the risk is reduced.

Understanding k-anonymity

K-anonymity is a property that indicates how many individuals share the same value or set of values. Continuing with the example above, imagine you have 1M rows of data including a column of ages, and in those 1M rows only one person has age=27. In that case, the “age” column has a k value of 1. If there are at least 10 people for every age, then you have a k value of 10. You can measure this property across a single column, like age, or across multiple columns, like age + zip code. If there is only one person aged 27 in zip code 94043, then that group (27, 94043) has a k value of 1.
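As a toy illustration (plain Java, not the Cloud DLP API), you could compute the k-value of each age + zip code combination by counting how many rows share it; the sample rows below are made up:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KAnonymityExample {

    public static void main(String[] args) {
        // Each row holds the quasi-identifier tuple (age, zip code).
        List<String[]> rows = Arrays.asList(
                new String[]{"27", "94043"},
                new String[]{"27", "94043"},
                new String[]{"31", "10001"});

        Map<String, Integer> counts = new HashMap<>();
        for (String[] row : rows) {
            String group = row[0] + "|" + row[1];   // age + zip as one key
            counts.merge(group, 1, Integer::sum);
        }

        // The k-value of a group is simply the number of rows that share it:
        // (27, 94043) has k=2, while (31, 10001) has k=1 and is more re-identifiable.
        counts.forEach((group, k) -> System.out.println(group + " -> k=" + k));
    }
}
```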

Understanding the lowest k value for a set of columns is important, but you also want to know the distribution of those k values. That is, does 10% of your data have a low k value or does 90% of your data have a low k value? In other words, can you simply drop the rows that have low k values, or do you need to fix it another way? A technique called generalization can be helpful here by allowing you to retain more rows at the cost of revealing less information per row; for example, “bucketing” ages into five-year spans would replace age=27 with age=”26-30”, allowing you to retain utility in the data while making it less distinguishing.
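For instance, a simple generalization helper that buckets an exact age into a five-year span, mirroring the age=27 to “26-30” example above, might look like this (a toy sketch, not the Cloud DLP bucketing transformation):

```java
public class AgeGeneralization {

    // Bucket an exact age into a five-year range so more rows share the
    // same quasi-identifier value, raising each group's k-value.
    static String generalizeAge(int age) {
        int lower = ((age - 1) / 5) * 5 + 1;   // 27 -> 26
        return lower + "-" + (lower + 4);      // 27 -> "26-30"
    }

    public static void main(String[] args) {
        System.out.println(generalizeAge(27));  // prints 26-30
    }
}
```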

[Figure: values not k-anonymous]

Understanding how much of your data falls below a certain k threshold, and deciding whether to drop that data or “generalize” it, are both ways of weighing re-identification risk against data loss and utility. In this trade-off you are asking questions like:

  • What k threshold is acceptable for this use case?
  • Am I okay with dropping the percentage of data that falls below that threshold?
  • Does generalization allow me to retain more data value compared to dropping rows?

Let’s walk through one more example

Imagine you have a database that contains users’ age and zip code and you want to ensure that no combination of age + zip is identifying below a certain threshold (like k=10). You can use Cloud DLP to measure this distribution and use Cloud Data Studio to visualize it (how-to guide here). Below is what this looks like on our sample dataset:

[Figure: risk analysis 1]

This chart shows the percentage of rows (blue) and unique values (red) that correlate to each k-value. In the example above, we see that 100% of the data falls in groups of fewer than 10 people. To fix this without dropping 100% of the rows, we applied generalization to convert ages into age ranges. Here is the graph after the transform:

[Figure: risk analysis 2]

Now only 3.9% of the rows and 21.15% of the unique values fall below the k=10 threshold. As a result, we reduced re-identifiability while preserving much of the data utility, dropping only 3.9% of rows.
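If you want to kick off this kind of measurement programmatically, here is a hedged sketch using the google-cloud-dlp Java client; the project, dataset, table, and column names are hypothetical, and the analysis runs as an asynchronous DLP job whose results you would fetch separately:

```java
import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.BigQueryTable;
import com.google.privacy.dlp.v2.CreateDlpJobRequest;
import com.google.privacy.dlp.v2.DlpJob;
import com.google.privacy.dlp.v2.FieldId;
import com.google.privacy.dlp.v2.PrivacyMetric;
import com.google.privacy.dlp.v2.PrivacyMetric.KAnonymityConfig;
import com.google.privacy.dlp.v2.ProjectName;
import com.google.privacy.dlp.v2.RiskAnalysisJobConfig;

public class KAnonymityRiskJob {

    public static void main(String[] args) throws Exception {
        String projectId = "my-project";            // hypothetical project ID

        // Hypothetical BigQuery table holding the data to analyze.
        BigQueryTable table = BigQueryTable.newBuilder()
                .setProjectId(projectId)
                .setDatasetId("my_dataset")
                .setTableId("users")
                .build();

        // Treat age and zip_code as the quasi-identifiers to analyze together.
        KAnonymityConfig kAnonymity = KAnonymityConfig.newBuilder()
                .addQuasiIds(FieldId.newBuilder().setName("age").build())
                .addQuasiIds(FieldId.newBuilder().setName("zip_code").build())
                .build();

        RiskAnalysisJobConfig riskJob = RiskAnalysisJobConfig.newBuilder()
                .setSourceTable(table)
                .setPrivacyMetric(PrivacyMetric.newBuilder()
                        .setKAnonymityConfig(kAnonymity)
                        .build())
                .build();

        try (DlpServiceClient dlp = DlpServiceClient.create()) {
            DlpJob job = dlp.createDlpJob(CreateDlpJobRequest.newBuilder()
                    .setParent(ProjectName.of(projectId).toString())
                    .setRiskJob(riskJob)
                    .build());
            System.out.println("Started risk analysis job: " + job.getName());
        }
    }
}
```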

All hands on deck to prevent data loss

Of course, k-anonymity is just one way to assess quasi-identifiers and your risk of re-identification. Cloud DLP, for example, lets you assess other properties like l-diversity, k-map, and delta-presence. To learn more, check out this resource.


In addition, we plan to present a research paper on Estimating Reidentifiability and Joinability of Large Data at Scale at the IEEE conference in May, covering techniques for doing this kind of analysis at incredibly large scale. We also explore how these techniques can be used to understand additional use cases around join-ability and data flow. These techniques are very useful for data owners who want to have a risk-based approach towards anonymization, while gaining insights into their data. Hope to see you there!

Accelerate Java application development on GCP with Micronaut

Editor’s note: Want to develop microservices in Java? Today we hear from Object Computing, Inc. (OCI), a Google Cloud partner that is also the driving force behind the Micronaut JVM framework. Here, OCI senior software engineer Sergio del Amo talks about how to use Micronaut on GCP to build serverless applications, and walks you through an example.

Traditional application architectures are being replaced by new patterns and technologies. Organizations are discovering great benefits to breaking so-called monolithic applications into smaller, service-oriented applications that work together in a distributed system. The new architectural patterns introduced by this shift call for the interaction of numerous, scope-limited, independent applications: microservices.

To support microservices, modern applications are built on cloud computing technologies, such as those provided by Google Cloud. Rather than managing the health of servers and data centers, organizations can deploy their applications to platforms where the details of servers are abstracted away, and services can be scaled, redeployed, and monitored using sophisticated tooling and automation.

In a cloud-native world, optimizing how a Java program’s logic is interpreted and run on cloud servers via annotations and other compilation details takes on new importance. Additionally, serverless computing adds incentive for applications to be lightweight and responsive and to consume minimal memory. Today’s JVM frameworks need to ease not just development, as they have done over the past decade, but also operations.  

Enter Micronaut. Last year, a team of developers at OCI released this open-source JVM framework that was designed to simplify developing and deploying microservices and serverless applications.

Micronaut comes with built-in support for GCP services and hosting. In addition to out-of-the-box auto-configurations, job scheduling, and myriad security options, Micronaut provides a suite of built-in cloud-native features, including:

  • Service discovery. Service discovery means that applications are able to find each other (and make themselves findable) on a central registry, eliminating the need to look up URLs or hardcode server addresses in configuration. Micronaut builds service-discovery support directly into the @Client annotation, meaning that performing service discovery is as simple as supplying the correct configuration and then using the “service ID” of the desired service.
  • Load balancing. When multiple instances of the same service are registered, Micronaut provides a form of “round-robin” load-balancing, cycling requests through the available instances to ensure that no one instance is overwhelmed or underutilized. This is a form of client-side load-balancing, where the client cycles through the available instances of the service, spreading the load automatically.
  • Retry mechanism and circuit breakers. When interacting with other services in a distributed system, it’s inevitable that at some point, things won’t work out as planned—perhaps a service goes down temporarily or drops a request. Micronaut offers a number of tools to gracefully handle these mishaps, as sketched below. Retry provides the ability to re-invoke failed operations. Circuit breakers protect the system from repetitive failures.
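To illustrate the service-discovery and retry features above, here is a hedged sketch of a Micronaut declarative client. The service ID, path, and method are hypothetical, and the annotation packages shown are those used in the Micronaut 1.0.x line (they may differ in other versions); @CircuitBreaker could be used in place of @Retryable to stop calling a service that keeps failing.

```java
package example.micronaut;

import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.Client;
import io.micronaut.retry.annotation.Retryable;

// "inventory" is a hypothetical service ID. With service discovery configured,
// Micronaut resolves it against the registry and round-robins requests across
// the discovered instances; @Retryable re-invokes calls that fail.
@Client(id = "inventory")
@Retryable(attempts = "3", delay = "1s")
public interface InventoryClient {

    @Get("/stock/{sku}")
    Integer stock(String sku);
}
```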

As a result of this cloud-native construction, you can use Micronaut in scenarios that would not be feasible with a traditional Model-View-Controller framework on the JVM, including low-memory microservices, Android applications, serverless functions, IoT deployments, and CLI applications.

Micronaut also provides a reactive HTTP server and client based on Netty, an asynchronous networking framework that offers high performance and a reactive, event-driven programming model.

Sample App: Google Cloud Translate API

To see how easy it is to integrate a Micronaut application with Google Cloud services, review this tutorial for building a sample application that consumes the Google Cloud Translation API.

Step 1: Install Micronaut

You can build Micronaut from the source on GitHub or download it as a binary and install it on your shell path. However, the recommended way to install Micronaut is via SDKMAN!. If you do not have SDKMAN! installed already, you can do so in any Unix-based shell with the following commands:

You can now install Micronaut itself with the following SDKMAN! command (use sdk list micronaut to view available versions; at the time of this writing, the latest is 1.0.3):

Confirm that you have installed Micronaut by running _mn -v_:

Step 2: Create the project

The mn command serves as Micronaut’s CLI. You can use this command to create your new Micronaut project. 

For this exercise, we will create a stock Java application, but you can also choose Groovy or Kotlin as your preferred language by supplying the -lang flag (-lang groovy or -lang kotlin).

The `mn` command accepts a features flag, where you can specify features that add support for various libraries and configurations in your project. You can view available features by running mn profile-info service.

We’re going to use the spock feature to add support for the Spock testing framework to our Java project. Run the following command:

Note that we can supply a default package prefix (example.micronaut) to the project name (translator). If we did not do so, the project name would be used as a default package. This package will contain the Application class and any classes generated using the CLI commands (as we will do shortly).

By default, the create-app command generates a Gradle build. If you prefer Maven as your build tool, you can select it using the -build flag (e.g., -build maven). This exercise uses the default Gradle project.

At this point, you can run the application using the Gradle run task.

TIP: If you would like to run your Micronaut project using an IDE, be sure that your IDE supports Java annotation processors and that this support is enabled for your project. In IntelliJ IDEA, the relevant setting can be found under Preferences -> Build, Execution, Deployment -> Compiler -> Annotation Processors -> Enabled.

Step 3: Create a simple interface

Create a Java interface to define the translation contract:
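A minimal sketch of what that contract might look like; the interface and return-type names here are assumptions inferred from the rest of this walkthrough:

```java
package example.micronaut;

public interface TranslationService {

    // Translate the given text from the source language to the target language.
    TranslationResult translate(String text, String source, String target);
}
```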

If you want to translate “Hello World” to Spanish, you can invoke any available implementation of the previous interface with translationService.translate(“Hello World”, “en”, “es”).

We create a POJO to encapsulate the translation result.
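For example, something along these lines; the class and property names are assumptions, and Micronaut serializes the object to JSON by default:

```java
package example.micronaut;

public class TranslationResult {

    private String translatedText;

    // A no-args constructor is needed for JSON deserialization.
    public TranslationResult() {
    }

    public TranslationResult(String translatedText) {
        this.translatedText = translatedText;
    }

    public String getTranslatedText() {
        return translatedText;
    }

    public void setTranslatedText(String translatedText) {
        this.translatedText = translatedText;
    }
}
```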

Step 4: Expose an endpoint

Similar to other MVC frameworks such as Grails or Spring Boot, you can expose an endpoint by creating a controller.

The endpoint, which we will declare in a moment, consumes a JSON payload that encapsulates the translation request. We can map such a JSON payload to a POJO.
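Here is a hedged sketch of such a binding POJO; the class name is an assumption, and the constraints match the description that follows:

```java
package example.micronaut;

import javax.validation.constraints.NotBlank;

public class TranslationRequest {

    @NotBlank
    private String text;

    @NotBlank
    private String source;

    @NotBlank
    private String target;

    public TranslationRequest() {
    }

    public TranslationRequest(String text, String source, String target) {
        this.text = text;
        this.source = source;
        this.target = target;
    }

    // Getters and setters are required so Micronaut can bind the JSON payload.
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
    public String getSource() { return source; }
    public void setSource(String source) { this.source = source; }
    public String getTarget() { return target; }
    public void setTarget(String target) { this.target = target; }
}
```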


Please note that the previous class uses the @javax.validation.constraints.NotBlank annotation to declare text, source, and target as required. Micronaut’s validation is built on the standard framework, JSR 380, also known as Bean Validation 2.0.

Hibernate Validator is the reference implementation of the validation API, and you need an implementation of that API on the classpath. Thus, add the next snippet to _build.gradle_.

Next, create a controller:

src/main/java/example/micronaut/TranslationController.java
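A hedged sketch of what the controller might look like, consistent with the points listed below (the request and result class names follow the earlier sketches):

```java
package example.micronaut;

import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.validation.Validated;

import javax.validation.Valid;

@Validated
@Controller("/translate")
public class TranslationController {

    private final TranslationService translationService;

    // Micronaut supplies the collaborator via constructor injection.
    public TranslationController(TranslationService translationService) {
        this.translationService = translationService;
    }

    // Consumes and produces application/json by default.
    @Post("/")
    public TranslationResult translate(@Body @Valid TranslationRequest request) {
        return translationService.translate(request.getText(),
                request.getSource(), request.getTarget());
    }
}
```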

There are several things worth mentioning about the previous code listing:

  • The controller exposes a _/translate_ endpoint, which can be invoked with a POST request.
  • The value of the _@Controller_ and _@Post_ annotations is an RFC-6570 URI template.
  • Via constructor injection, Micronaut supplies a collaborator: _TranslationService_.
  • Micronaut controllers consume and produce JSON by default.
  • _@Body_ indicates that the method argument is bound from the HTTP body.
  • To validate the incoming request, you need to annotate your controller with _@Validated_ and the binding POJO with _@Valid_.

In addition to constructor injection, as illustrated in the previous snippet, Micronaut supports the following types of dependency injection: field injection, JavaBean property injection, and method parameter injection.

Integrate with Google Cloud Translation API

Now you want to add a dependency on the Google Cloud Translate library:

Micronaut implements the JSR 330 specification for Java dependency injection, which provides a set of semantic annotations under the javax.inject package (such as @Inject and @Singleton) to express relationships between classes within the DI container.

Create a singleton implementation of _TranslationService_ that uses the Google Cloud Translation API.
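A sketch of such an implementation, assuming the google-cloud-translate Java client library is on the classpath; the class name is hypothetical, and application default credentials are assumed to be available:

```java
package example.micronaut;

import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;

import javax.annotation.PostConstruct;
import javax.inject.Singleton;

@Singleton
public class GoogleTranslationService implements TranslationService {

    private Translate translate;

    // Runs once the bean has been constructed and fully injected.
    @PostConstruct
    void init() {
        translate = TranslateOptions.getDefaultInstance().getService();
    }

    @Override
    public TranslationResult translate(String text, String source, String target) {
        com.google.cloud.translate.Translation translation = translate.translate(
                text,
                TranslateOption.sourceLanguage(source),
                TranslateOption.targetLanguage(target));
        return new TranslationResult(translation.getTranslatedText());
    }
}
```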

Here are a few things to mention about the above code:

  • The _@Singleton_ annotation is used to declare the class as a singleton.

  • A method annotated with _@PostConstruct_ will be invoked once the object is constructed and fully injected.

Test the app

Thanks to Micronaut’s fast startup time, it is easy to write functional tests.

Here’s how to write a functional test that verifies the behavior of the whole application.
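The project was created with the Spock feature, but to keep this excerpt in plain Java, here is a hedged JUnit-style sketch of such a test; the happy-path assertion assumes Google Cloud credentials are available to the application, and the class names follow the earlier sketches:

```java
package example.micronaut;

import io.micronaut.context.ApplicationContext;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.exceptions.HttpClientResponseException;
import io.micronaut.runtime.server.EmbeddedServer;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

public class TranslationControllerTest {

    private static EmbeddedServer server;
    private static HttpClient client;

    @BeforeClass
    public static void setup() {
        // Boot the whole application on a random port.
        server = ApplicationContext.run(EmbeddedServer.class);
        client = HttpClient.create(server.getURL());
    }

    @AfterClass
    public static void cleanup() {
        client.close();
        server.close();
    }

    @Test
    public void translatesAValidPayload() {
        // Calls the real Translation API; the expected value is illustrative.
        TranslationRequest body = new TranslationRequest("Hello World", "en", "es");
        TranslationResult result = client.toBlocking()
                .retrieve(HttpRequest.POST("/translate", body), TranslationResult.class);
        assertEquals("Hola Mundo", result.getTranslatedText());
    }

    @Test
    public void rejectsABlankText() {
        TranslationRequest body = new TranslationRequest("", "en", "es");
        try {
            client.toBlocking().exchange(HttpRequest.POST("/translate", body));
            fail("Expected validation to fail");
        } catch (HttpClientResponseException e) {
            // Validation failures surface as 400 Bad Request.
            assertEquals(HttpStatus.BAD_REQUEST, e.getStatus());
        }
    }
}
```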

Here are a few things to note about the above code:

  • It’s easy to run the application from a test with the _EmbeddedServer_ interface.
  • You can easily create an HTTP Client bean to consume the embedded server.
  • Micronaut HTTP Client makes it easy to parse JSON into Java objects.
  • Creating HTTP requests is easy thanks to Micronaut’s fluent API.
  • We verify that the server responds with a 400 (Bad Request) status code when validation of the incoming JSON payload fails.

Deploy to Google Cloud

There are multiple ways to deploy a Micronaut application to Google Cloud. You may choose to containerize your app or deploy it as a fat JAR. Check out these tutorials to learn more:

Deploy a Micronaut application to Google Cloud App Engine

Deploy a Micronaut application containerized with Jib to Google Kubernetes Engine

Micronaut performance

In addition to its cloud-native features, Micronaut also represents a significant step forward in microservice frameworks for the JVM, by supporting common Java framework features such as dependency injection (DI) and aspect-oriented programming (AOP), without compromising startup time, performance, and memory consumption.

Micronaut features a custom-built DI and AOP model that does not use reflection. Instead, an abstraction over the Java annotation processor tool (APT) API and Groovy abstract syntax tree (AST) lets developers build efficient applications without giving up features they know and love.

By moving the work of the DI container to the compilation phase, there is no longer a link between the size of the codebase and the time needed to start an application or the memory required to store reflection metadata. As a result, Micronaut applications written in Java typically start within a second.

This approach has opened doors to a variety of framework features that are more easily achieved with AOT compilation and that are unique to Micronaut.


Cloud-native development is here to stay, and Micronaut was built with this landscape in mind. Like the cloud-native architecture that motivated its creation, Micronaut’s flexibility and modularity allow developers to create systems that even its designers could not have foreseen.

To learn more about using Micronaut for your cloud-based projects, check out the Micronaut user guide. Learn how to use Micronaut in concert with Google Cloud Platform services, such as Cloud SQL, Kubernetes, and Google’s Instance Metadata Server in our upcoming webinar. There’s also a small but growing selection of step-by-step tutorials, including guides for all three of Micronaut’s supported languages: Java, Groovy, and Kotlin.

Finally, the Micronaut community channel on Gitter is an excellent place to meet other developers who are already building applications with the framework and interact directly with the core development team.

Exploring container security: the shared responsibility model in GKE

Editor’s note: This post is part of our blog post series on container security at Google.

Security in the cloud is a shared responsibility between the cloud provider and the customer. Google Cloud is committed to doing its part to protect the underlying infrastructure, with features like encryption at rest by default, and to providing capabilities you can use to protect your workloads, like access controls in Cloud Identity and Access Management (IAM). As newer infrastructure models emerge, though, it’s not always easy to figure out what you’re responsible for versus what’s the responsibility of the provider. In this blog post, we aim to clarify, for Google Kubernetes Engine (GKE), what we do and don’t do—and where to look for resources to lock down the rest.

Google Cloud’s shared responsibility model

The shared responsibility model depends on the workload—the more we manage, the more we can protect. This starts from the bottom of the stack and moves upwards, from the infrastructure as a service (IaaS) layer where only the hardware, storage, and network are the provider’s responsibility, up to software as a service (SaaS) where almost everything except the content and its access are up to the provider. (For a deep dive check out the Google Infrastructure Security Design Overview whitepaper). Platform as a service (PaaS) layers like GKE fall somewhere in the middle, hence the ambiguity that arises.

[Figure: Security in GKE]

For GKE, at a high level, we are responsible for protecting:

  • The underlying infrastructure, including hardware, firmware, kernel, OS, storage, network, and more. This includes encrypting data at rest by default, encrypting data in transit, using custom-designed hardware, laying private network cables, protecting data centers from physical access, and following secure software development practices.
  • The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
  • The Kubernetes distribution. GKE provides the latest upstream versions of Kubernetes, and supports several minor versions. Providing updates to these, including patches, is our responsibility.
  • The control plane. In GKE, we manage the control plane, which includes the master VMs, the API server and other components running on those VMs, as well as the etcd database. This includes upgrades and patching, scaling, and repairs, all backed by an SLO.
  • Google Cloud integrations, for IAM, Cloud Audit Logging, Stackdriver, Cloud Key Management Service, Cloud Security Command Center, etc. These bring the controls available to IaaS workloads across Google Cloud to GKE as well.

Conversely, you are responsible for protecting:

  • The nodes that run your workloads, including VM images and their configurations. This includes keeping your nodes updated, as well as leveraging Compute Engine features and other Google Cloud products to help protect your nodes. Note that we already manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading.
  • The workloads themselves, including your application code, dockerfiles, container images, data, RBAC/IAM policy, and containers and pods that you are running. This means leveraging GKE features and other Google Cloud products to help protect your containers.

Hardening the control plane is Google’s responsibility

Google is responsible for securing the control plane, the component of Kubernetes that manages how Kubernetes communicates with the cluster and applies the user’s desired state. The control plane includes the master VM, API server, scheduler, controller manager, cluster CA, root-of-trust key material, IAM authenticator and authorizer, audit logging configuration, etcd, and various other controllers. All of your control plane components run on Compute Engine instances that we own and operate. These instances are single tenant, meaning each instance runs the control plane and its components for only one customer. (You can learn more about GKE control plane security here.)

We make changes to the control plane to further harden these components on an ongoing basis—as attacks occur in the wild, when vulnerabilities are announced, or when new patches are available. For example, we updated clusters to use RBAC rather than ABAC by default, and locked down and eventually disabled the Kubernetes dashboard.

How we respond to vulnerabilities depends on which component the vulnerability is found in:

  • The kernel or an operating system: We apply the patch to affected components, including obtaining and applying the patch to the host images for Kubernetes, COS and Ubuntu. We automatically upgrade the master VMs, but you are responsible for upgrading nodes. Spectre/Meltdown and L1TF are examples of such vulnerabilities.
  • Kubernetes: With Googlers on the Kubernetes Product Security Team, we often help develop and test patches for Kubernetes vulnerabilities when they are discovered. Since GKE is an official distribution, we receive the patch as part of the Private Distributors’ List. We’re responsible for rolling out these changes to the master VMs, but you are responsible for upgrading your nodes. Take a look at these security bulletins for the latest examples of such vulnerabilities: CVE-2017-1002101, CVE-2017-1002102, and CVE-2018-1002105.
  • Components used in GKE’s default configuration, like the Calico components for Network Policy, or etcd: We don’t control the open-source projects used in GKE; however, we select open-source projects that have demonstrated robust security practices and that take security seriously. For these projects, we may receive a patch from upstream Kubernetes, a partner, or the distributor list of another open-source project. We are responsible for rolling out these changes, and/or notifying you if there is action required. TTA-2018-001 is an example of such a vulnerability that we patched automatically.
  • GKE: If a vulnerability is discovered in GKE, for example through our Vulnerability Reward Program, we are responsible for developing and applying the fix.

In all of these cases, we make these patches available as part of general GKE releases (patch releases and bug fixes) as soon as possible given the level of risk, embargo time, and any other contextual factors.

We do most of the hard work to protect nodes, but it’s your responsibility to upgrade and reap the benefits

Your worker nodes in Kubernetes Engine consist of a few different surfaces that need to be protected, including the node OS, the container runtime, Kubernetes components like the kubelet and kube-proxy, and Google system containers for monitoring and logging. We’re responsible for developing and releasing patches for these components, but you are responsible for upgrading your system to apply these patches.

Kubernetes components like kube-proxy and kube-dns, and Google-specific add-ons to provide logging, monitoring, and other services run in separate containers. We’re responsible for these containers’ control plane compatibility, scalability, upgrade testing, as well as security configurations. If these need to be patched, it’s your responsibility to upgrade to apply these patches.

To ease patch deployment, you can use node auto-upgrade. Node auto-upgrade applies updates to nodes on a regular basis, including updates to the operating system and Kubernetes components from the latest stable version. This includes security patches. Notably, if a patch contains a critical fix and can be rolled out before the public vulnerability announcement without breaking embargo, your GKE environment will be upgraded before the vulnerability is even announced.

Protecting workloads is still your responsibility

What we’ve been talking about so far is the underlying infrastructure that runs your workload—but you of course still have the workload itself. Application security and other protections to your workload are your responsibility.

You’re also responsible for the Kubernetes configurations that pertain to your workloads. This includes setting up a NetworkPolicy to restrict pod-to-pod traffic and using a PodSecurityPolicy to restrict pod capabilities. For an up-to-date list of the best practices we recommend to protect your clusters, including node configurations, see Hardening your cluster’s security.

If there is a vulnerability in your container image or application, however, it is fully your responsibility to patch it. There are tools you can use to help.

Incident response in GKE

So what if you’ve done your part, we’ve done ours, and your cluster is still attacked? Damn! Don’t panic.

Google Cloud takes the security of our infrastructure—including where user workloads run—very seriously, and we have documented processes for incident response. Our security team’s job is to protect Google Cloud from potential attacks and protect the components outlined above. For the pieces you’re responsible for, if you’re looking to further protect yourself from potential container-specific attacks, Google Cloud already has a range of container security partners integrated with the Cloud Security Command Center.

If you are responding to an incident, you can leverage Stackdriver Incident Response & Management (alpha) to help you reduce your time to incident mitigation, refer to sample queries for Kubernetes audit logs, and check out the Cloud Forensics 101 talk from Next ‘18 to learn more about conducting forensics.

What’s the tl;dr of GKE security? For GKE, we’re responsible for protecting the control plane, which includes your master VM, etcd, and controllers; and you’re responsible for protecting your worker nodes, including deploying patches to the OS, runtime, and Kubernetes components, and of course securing your own workload. An easy way to do your part is to

  1. use node auto-upgrade,
  2. protect your workload from common image and application vulnerabilities, and
  3. follow the Google Kubernetes Engine hardening guide.

If you follow those three steps, together we can build GKE environments that are resilient to attacks and vulnerabilities, to deliver great uptime and performance.

Amazon Aurora with PostgreSQL Compatibility Supports Logical Replication

Amazon Aurora with PostgreSQL compatibility now supports logical replication. With logical replication, you can replicate data changes from your Aurora PostgreSQL database to other databases using native PostgreSQL replication slots, or data replication tools such as the AWS Database Migration Service (DMS). Logical replication is supported with Aurora PostgreSQL versions 2.2.0 and 2.2.1, compatible with PostgreSQL 10.6.

To enable logical replication with Aurora PostgreSQL, set up logical replication slots on your instance and stream changes from the database through these slots. This is enabled by setting the parameter rds.logical_replication to 1; you can set this parameter in just a few clicks in the Amazon RDS Management Console. The rds_replication role, assigned to the master user by default, can be used to grant permissions to manipulate and stream data through replication slots. This feature also enables Aurora PostgreSQL to be used as a source for AWS Database Migration Service (DMS). You can learn more about using logical replication with Aurora PostgreSQL in the Aurora documentation.

Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. For more information, please visit the Amazon Aurora product page, and see the AWS Region Table for complete regional availability.


Resource governance in Azure SQL Database

This blog post continues the Azure SQL Database architecture series where we share background on how we run the service, as described by the architects who originally created the service. The first two posts covered data integrity in Azure SQL Database and how cloud speed helps SQL Server database administrators. In this blog post, we will talk about how we use governance to help achieve a balanced system.

Allocated and governed resources

When you choose a specific Azure SQL Database service tier, you are selecting a pre-defined set of allocated resources across several dimensions such as CPU, storage type, storage limit, memory, and more. Ideally you will select a service tier that meets the workload demands of your application; however, if you over- or under-size your selection, you can easily scale up or down accordingly.

With each service tier selection, you are also inherently selecting a set of resource usage boundaries or limits. For example, a business critical, Gen 4 database with eight cores has the following resource allocations and associated limits:

  • Compute size: BC_Gen4_8
  • Memory (GB): 56
  • In-memory OLTP storage (GB): 8
  • Storage type: Local SSD
  • Max data size (GB): 650
  • Max log size (GB): 195
  • TempDB size (GB): 256
  • IO latency (approximate): 1-2 millisecond (write), 1-2 millisecond (read)
  • Target IOPS (64KB)
  • Log rate limits (MBps): 48
  • Max concurrent workers (requests): 1600
  • Max concurrent logins (requests): 1600
  • Max allowed sessions: 30000
  • Number of replicas: 4

As you increase the resources in your tier, you may also see changes in limits up to a certain threshold. Furthermore, these limits can be automatically relaxed over time, but never further restricted without penalty to the customer.

We document resource allocation by service tier and also the associated resource governance limits in the following resources:

While resource allocation by service tier is intuitive to customers (the more you pay, the more resources you get), resource governance and boundaries have historically been a less clear subject for customers. While we are increasing transparency around these governing mechanisms, it is important to understand the broader purposes behind resource governance in a database as a service (DBaaS). For this, we’ll talk next about what it takes to achieve a balanced system.

Providing a balanced database as a service (DBaaS)

For the context of this blog post, we define a system as balanced if all resources are sufficiently maximized without encountering bottlenecks. This balance includes an interplay of resources such as CPU, IO, memory, network paired with an application’s workload characteristics, maximum tolerated latency, and desired throughput.

With Azure SQL Database, our view of a balanced system must also take a broad and comprehensive perspective in order to meet articulated DBaaS requirements and customer expectations.

Azure SQL Database surfaces a familiar and popular database ecosystem with the intent of giving customers the following additional benefits:

  • Elasticity of scale – Customers can provision a database based on the throughput requirements of their application. As throughput requirements change, the customer can easily scale up or down.
  • Automated backups with self-service restore to any point in time – Database backups are automatically handled by the service, with log backups generally occurring every five to ten minutes.
  • High availability – Azure SQL Database supports a differentiated availability SLA with a maximum of 99.995 percent, backed by availability zone resilience to infrastructure failures.
  • Predictable performance – Customers on the same provisioned resource level always get the same performance with the same workload.
  • Predictable scalability – Customers using the hyperscale service tier can rely on predictable latency of the online scaling operations, backed by a verifiable scaling SLA. This gives the customer a reliable tool to react to changing compute capacity demands in a timely manner.
  • Automatic upgrades – Azure SQL Database is designed to facilitate transparent hardware and software upgrades, and periodic, lightweight software updates.
  • Global scale – Customers can deploy databases around the world and easily provision geographically distributed database replicas enabling regional data access and disaster recovery solutions. These solutions are backed by strong geo-replication and failover SLAs.

For the Azure SQL Database engineering team, providing a balanced DBaaS system for customers goes well beyond simply providing the purchased CPU, IO, memory, and storage. We must also honor all aforementioned factors and aim to balance these key DBaaS factors along with overall performance requirements.

The following figure shows some of the key resources that are governed within the service.

Figure 1: Governed resources in Azure SQL Database

We need to provide this balanced system in such a way that allows us to continually improve the service over time. This requirement for continual improvement implies a necessary level of component abstraction and over-arching governance. Governance in Azure SQL Database ensures that we properly balance requirements around scale, high availability, recoverability, disaster recovery, and predictable performance.

To illustrate, let’s use transaction log rate governance as an example of why we actively manage in order to provide a balanced DBaaS. Transaction log governance is a process in Azure SQL Database used to limit high ingestion rates for workloads such as bulk insert, select into, and index builds.

Why govern this type of activity? Consider the following dimensions and the impact of transaction log generation rate.

  • Database recoverability – We make guarantees around the maximum window of possible data loss based on transaction log backup frequency.
  • High availability – Local replicas must remain within a recoverability and availability (up-time) range that aligns with our SLAs.
  • Disaster recovery – Globally distributed replicas must remain within a recoverability range that minimizes data loss.
  • Predictable performance – Log generation rates must not over-saturate the system or create unpredictable performance.

Log rates are set such that they can be achieved and sustained in a variety of scenarios, while the overall system can maintain its functionality with minimized impact to the user load. Log rate governance ensures that transaction log backups stay within published recoverability SLAs and prevents an excessive backlog on secondary replicas. We have similar impact and interdependencies across other governed areas including CPU, memory, and data IOPs.

How we govern resources in Azure SQL Database

While we use a multi-faceted approach to governance, today we rely primarily on three main technologies: Job Objects, File Server Resource Manager (FSRM), and SQL Server Resource Governor.

Job Objects

Azure SQL Database leverages multiple mechanisms for governing overall performance for a database. One of the features we leverage is Windows Job Objects, which allows a group of processes to be managed and governed as a unit. We use this functionality to govern file virtual memory commit, working set caps, CPU affinity, and rate caps. We onboard new governance capabilities as the Windows team releases them.

File Server Resource Manager (FSRM)

Available in Windows Server, we use FSRM to govern file directory quotas.

SQL Server Resource Governor

A SQL Server instance has multiple consumers of resources, including user requests and system tasks. SQL Server Resource Governor was introduced to ensure fair sharing of resources and prevent out-of-control requests from starving other requests. This feature was introduced in SQL Server years ago and over time was extended to help govern several resources, including CPU, physical IO, memory, and more, for a SQL Server instance. We use this functionality in Azure SQL Database as well, to help govern IOPS (both local and remote), CPU caps, memory, worker counts, session counts, memory grant limits, and the maximum number of concurrent requests.

Beyond the three main technologies, we also created additional mechanisms for governing transaction log rate.

Configurations for safe and predictable operations

Consider all the settings one must configure for a well-tuned on-premises SQL Server instance, including database file settings, max memory, max degree of parallelism, and more. In Azure SQL Database we pre-configure several settings based on similar best practices. And as mentioned earlier, we pre-configure SQL Server Resource Governor, FSRM, and Job Objects to deliver fairness and prevent starvation. The reasoning behind this is to aim for safe and predictable operation. We can also provide varying settings for customers based on their workload and specific needs, assuming they conform to the safety limits defined for the service.

Improvements over time

Sometimes we deploy software changes that improve the performance and scalability of specific operations. Customers benefit automatically and we might exceed the defined limits and/or increase them for all customers in the future. Furthermore, as we enhance the hardware of machines, storage, and network, these benefits may also be transparently available to an application. This is because we have defined this DBaaS abstraction layer instead of just providing a specific physical machine.

Evolving governance

The Azure SQL Database engineering team regularly enhances governance capabilities used in the service. We continually review our models based on feedback and production telemetry and we modify our limits to maximize available resources, increase safety, and reduce the impact of system tasks.

If you have feedback to share, we would like to hear from you. To contact the engineering team with feedback or comments on this subject, please email [email protected].

Umanis lifts the hood on their AI implementation methodology

Microsoft creates deep, technical content to help developers enhance their proficiency when building solutions using the Azure AI Platform. Our preferred training partners redeliver our LearnAI Bootcamps for customers around the globe on topics including Azure Databricks, Azure Machine Learning service, Azure Search, and Cognitive Services. Umanis, a systems integrator and preferred AI training partner based in France, has been innovating in Big Data and Analytics in numerous verticals for more than 25 years and has developed an effective methodology for guiding customers into the Intelligent Cloud. Here, Philippe Harel, the AI Practice Director at Umanis, describes this methodology and shares lessons learned to empower customers to do more with data and AI.

2019 is the year when artificial intelligence (AI) and machine learning (ML) are shifting from being mere buzzwords to real-world adoption and rollouts across the enterprise. This year reminds us of the cloud adoption curve a few years ago, when it was no longer an option to stay on-premises alone, but a question of how to make the shift. As you draw up plans on how to best use AI, here are some learnings and methodologies that Umanis is following.

Given the ever-increasing speed of change in technology, along with the variety of sectors and industries Umanis works in, they focused on building a methodology that could be standardized across AI implementations from project to project. This methodology follows an iterative cycle: assimilate, learn, and act, with the goal of adding value with each iteration.

The Azure platform acts as an enabler of this methodology as seen in the image below.


In most data and artificial intelligence (AI) projects implemented at Umanis, several trends are gaining momentum and are likely to amplify in 2019:

  • More unstructured, big, and real-time data.
  • An increased need for fast and reliable AI solutions to scale up.
  • Increasing expectations from customers.

In this blog post, we will explain how you can address these kinds of projects, and how Umanis maps their approach to the Azure offering to deliver solutions that are easy to use, operationalize, and maintain.

The 3 phases of the AI implementation methodology

1. Assimilate

In this initial phase, you can be hit by anything, from the good to the big, bad, and ugly: databases, text, logs, telemetry, images, videos, social networks, and more are flowing in. The challenge is to make sense of everything so you can serve the next phase (Learn) successfully. By assimilating, we mean:

  • Ingest: The performance of an algorithm depends on the quality of the data. We consider “ingesting” to be checking the quality of the data, the quality of the transmission, and building the pipelines to feed the subsequent parts.
  • Store: Since the data will be used by highly demanding algorithms (I/O, processing power) that will mix data from various sources, you need to store the data in the most efficient way for future access by algorithms or data visualizations.
  • Structure: Finally, you’ll need to prepare the data for consumption by algorithms and execute as many transformation, preprocessing, and cleaning tasks as you can to speed up the data scientists’ activities and algorithms.

2. Learn

This is the heart of any AI project: Creating, deploying, and managing models.

  • Create: Data scientists use available data to design algorithms, train their models, and compare the results. There are two key points to this:
  1. Don’t make them wait for results! Data scientists are rare resources and their time is precious.
  2. Allow any language or combination of languages. On that front, Azure Databricks is a great solution, as it addresses this natively by allowing different languages to be used in a single block of code.
  • Use: Once algorithms are deployed as APIs and consumed, the need for parallelization goes up. SLAs and testing the performance of the sending, processing, and receiving pipeline are crucial.
  • Refine: Refining the quality of algorithms ensures reliable results over time. The easy part of this activity is automatic re-training on a regular basis. The less obvious one is what we call the “human in the loop” activity. In short, a Power BI report showing the results of predictions that a human can re-classify quickly as needed, and the machine uses this human expertise to get better at its task.

3. Act

All of the above phases are useless unless you actually make good use of the algorithm’s added value.

  • Inform: Any mistake in code, misunderstanding in requirements, or bug can be devastating, as first user impressions are crucial. Therefore, instead of a “big bang” of visualizations, start very small, iterate very quickly, and bring a few key users on board to secure adoption before widening the audience.
  • Connect: Systems that use the information from algorithms need to be plugged in. This is called RPA, IPA, or automation in general, and the architectures can vary greatly on each project. Don’t overlook the need for human monitoring of this activity. Consider the impact of an algorithm’s worst possible answer, and you will get a good feel for the need for human supervision.
  • Dialog: When dealing with human interaction, so much comes into play that to be successful, the scope of the interaction needs to be narrowed down to the actions that really add value and are not trivial. (This is not easily possible via classic interfaces.)


This methodology will certainly change and adapt over time. Nevertheless, Umanis has found it to be a robust way of rolling out end-to-end data and AI projects while minimizing friction and risk. By using this approach to present a Data & AI project to both customers and internal teams, everyone can get a good feeling of what activities, technologies, and challenges are involved. It’s one way to address the “urgent need to build shared context, trust, and credibility with your team,” as Satya Nadella states in his book, Hit Refresh. This methodology is a great way to build trust in your relationships.

If you want more information about the methodology used by Umanis, you can find them at upcoming conferences in the next two months (in French) discussing this topic in Luxembourg, Paris, and Nantes.

Learn More

Learn more about the Azure Machine Learning service

Get started with a free trial of Azure Machine Learning service

AWS Key Management Service Increases Resource Limits

AWS Key Management Service (KMS) has increased the limits for a set of KMS resources, including Customer Master Keys (CMKs), Aliases, and Grants per CMK. The limits have been increased from 1,000 to 10,000 for customer-managed CMKs, from 1,100 to 10,000 for Aliases, and from 2,500 to 10,000 for Grants per CMK in all regions where KMS is available. These limit increases make it easier for you to scale your KMS operations.

AWS Makes it Easier for You to Discover Relevant Products in AWS Marketplace

AWS Marketplace, a curated digital catalog, has announced an easier way to discover relevant products. With this new feature, you can see products related to the one you are looking at under a “Related Products” section that’s made available on the detail page. Relevant products are provided based on correlations between thousands of products in AWS Marketplace. You can learn more about a related product by clicking into its detail page directly from the page you’re on.

Announcing the Ability to Pick the Time for Amazon EC2 Scheduled Events

Today we are announcing the ability to pick the time at which an Amazon EC2 scheduled event in your account will be implemented, providing you more flexibility when managing your EC2 instances.

After you are notified about a scheduled event, you can pick the time for the event through the AWS Management Console, API, and CLI. This feature is now available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), EU (Stockholm), South America (São Paulo), China (Beijing), and China (Ningxia) regions.

To learn more about scheduled events and how to pick the time for scheduled events, see the user guide for EC2 Scheduled Events.

Amazon Transcribe enhances custom vocabulary with custom pronunciations and display forms

Amazon Transcribe is a fully-managed automatic speech recognition (ASR) service that makes it easy for you to add a speech-to-text capability to your applications. Amazon Transcribe now supports custom pronunciations and display forms, augmenting the capability of the custom vocabulary feature.

You can give Amazon Transcribe more information about how to process speech in your input audio or video file by creating a custom vocabulary. A custom vocabulary is a list of specific words that you want Amazon Transcribe to recognize in your audio input. These are generally domain-specific words and phrases, words that Amazon Transcribe isn’t recognizing, or proper nouns.

Now, with the use of characters from the International Phonetic Alphabet (IPA), you can enhance each custom term with a corresponding custom pronunciation. Alternatively, you can also use the standard orthography of the language to mimic the way that the word or phrase sounds.

Additionally, you can now designate exactly how a custom term should be displayed when it is transcribed (e.g., “Street” as “St.” versus “ST”).

The custom pronunciation and display forms enhancements to custom vocabulary are available in all regions where Amazon Transcribe is available. Try out the new custom vocabulary features via the Amazon Transcribe console or use the Command Line Interface (CLI) and AWS SDKs. For more information, visit this documentation page.