A new open source content library from Google


Posted by Sebastian Trzcinski-Clément, Program Manager, Developer Relations

Developers around the world are constantly creating open source tools and tutorials, but they have a hard time getting them discovered. The content they publish often spans many different sites – from GitHub to Medium. So we decided to create a space where we can highlight the best projects related to Google technologies in one place – introducing the Developer Library.

GIF scrolling through Developer Library

The platform showcases blog posts and open source tools with easy-to-use navigation. Content is categorized by product area: Machine Learning, Flutter, Firebase, Angular, Cloud, and Android, with more to come.

What makes the Developer Library unique is that every piece featured on the site is reviewed in detail by a team of Google experts for accuracy and relevance, so you know the content you find there carries Google’s stamp of approval.

To demonstrate the breadth of content on the site, here are some examples of published content pieces and video interviews with the developers who authored these posts:

There are two ways you can help us grow the Developer Library.

First, if you have great content that you would like to see published on the Developer Library, please submit it for review here.

Second, the team welcomes feedback, so if there is anything you’d like to see added or changed on the Developer Library site, do complete this short feedback form or just file an issue on GitHub.

We can’t wait to see what you build together!

Simpler Google Pay integration for React and web developers

Posted by Soc Sieng, Developer Advocate

The Google Pay API enables fast, simple checkout for your website.

The Google Pay JavaScript library does not depend on external libraries or frameworks and will work regardless of which framework your website uses (if it uses any at all). While this ensures wide compatibility, we know that it doesn’t necessarily make it easier to integrate when your website uses a framework. We’re doing something about it.

Introducing the Google Pay button for React

React is one of the most widely used tools for building web UIs, so we are launching the Google Pay Button for React to provide a streamlined integration experience. This component will make it easier to incorporate Google Pay into your React website, whether you are new to React or a seasoned pro, and whether this is your first Google Pay integration or you’ve done this before.

We’re making this component available as an open source project on GitHub and publishing it to npm. We’ve authored the React component with TypeScript to bring code completion to supported editors, and if your website is built with TypeScript you can also take advantage of type validation to identify common issues as you type.

Get real-time code completion and validation in supported editors as you integrate.

Getting started

The first step is to install the Google Pay button module from npm:

npm install @google-pay/button-react

Adding and configuring the button

The Google Pay button can be added to your React component by first importing it:

import GooglePayButton from '@google-pay/button-react';

And then rendering it with the necessary configuration values:

<GooglePayButton
  environment="TEST"
  paymentRequest={{ ... }}
  onLoadPaymentData={() => {}}
/>

Try it out for yourself on JSFiddle.

Refer to the component documentation for a full list of supported configuration properties.

Note that you will need to provide a Merchant ID in paymentRequest.merchantInfo to complete the integration. Your Merchant ID can be obtained from the Google Pay Business Console.
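To give a sense of the shape of that configuration, here is a fuller sketch. The values are illustrative placeholders based on the Google Pay API documentation — in particular, the tokenizationSpecification gateway parameters depend on your payment processor:

<GooglePayButton
  environment="TEST"
  paymentRequest={{
    apiVersion: 2,
    apiVersionMinor: 0,
    allowedPaymentMethods: [
      {
        type: 'CARD',
        parameters: {
          allowedAuthMethods: ['PAN_ONLY', 'CRYPTOGRAM_3DS'],
          allowedCardNetworks: ['MASTERCARD', 'VISA'],
        },
        // Placeholder gateway settings — substitute your processor's values.
        tokenizationSpecification: {
          type: 'PAYMENT_GATEWAY',
          parameters: {
            gateway: 'example',
            gatewayMerchantId: 'exampleGatewayMerchantId',
          },
        },
      },
    ],
    merchantInfo: {
      // Replace with your own Merchant ID from the Google Pay Business Console.
      merchantId: '12345678901234567890',
      merchantName: 'Demo Merchant',
    },
    transactionInfo: {
      totalPriceStatus: 'FINAL',
      totalPrice: '100.00',
      currencyCode: 'USD',
    },
  }}
  onLoadPaymentData={paymentData => {
    // Send paymentData to your server to process the payment.
    console.log('load payment data', paymentData);
  }}
/>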


Support for other frameworks

We also want to provide an improved developer experience for developers using other frameworks, or no framework at all. That’s why we are also releasing the Google Pay button Custom Element.

Custom elements are great because they work with almost any framework – or with no framework at all – using APIs built right into the browser.

Like the React component, the Google Pay button custom element is hosted on GitHub and published to npm. In fact, the React component and the custom element share the same repository and a large portion of their code. This ensures that both versions maintain feature parity and receive the same level of care and attention.
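As a minimal sketch, usage from plain JavaScript might look like the following. The package name and event name here are assumptions drawn from the shared repository, so check them against the documentation:

import '@google-pay/button-element';

// Create and configure the button like any other DOM element.
const button = document.createElement('google-pay-button');
button.environment = 'TEST';
button.paymentRequest = { /* same configuration as the React example */ };

// The custom element dispatches events rather than taking callback props.
button.addEventListener('loadpaymentdata', event => {
  console.log('load payment data', event.detail);
});

document.body.appendChild(button);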

Try it out on JSFiddle.

Google Pay JavaScript library

There’s no change to the existing Google Pay JavaScript library, and if you prefer, you can continue to use this directly instead of the React component or custom element. Both of these components provide a convenience layer over the Google Pay JavaScript library and make use of it internally.

Your feedback

This is the first time that we (the Google Pay team) have released a framework-specific library, and we would love to hear your feedback.

Aside from React, most frameworks can use the Web Component version of the Google Pay Button. We may consider adding support for other frameworks based on interest and demand.

If you encounter any problems with the React component or custom element, please raise a GitHub issue. Alternatively, if you know what the problem is and have a solution in mind, feel free to raise a pull request. For other Google Pay related requests and questions, use the Contact Support option in the Google Pay Business Console.

What do you think?

Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDev.

Simplifying service mesh with Istio 1.4

Istio, the open-source service mesh that we created with IBM and Lyft, is now at version 1.4, and we’re very excited by how quickly the project is evolving and being adopted by end users. 

When we released Istio 1.1 in March, we announced that we would move to quarterly releases to get functionality out faster, and with this fourth release of the year, we’re happy to be fulfilling that promise.

Much of the work we are doing in open source Istio comes from what we’ve learned working with users of Google’s Anthos and Anthos Service Mesh, the hybrid application deployment platform and Istio-based service mesh that we released earlier this year to help enterprises monitor, secure and manage traffic in complex deployments. 

Working with Anthos users, we saw that we needed to focus on Istio usability and performance. In Istio 1.4 we are particularly excited about the advances in “mixerless telemetry”—a simplified architecture that allows full fidelity and pluggability of L7 telemetry, with a much smaller CPU footprint. Istio’s Envoy proxies can now send telemetry to Prometheus or Stackdriver without first having to install, run and scale Mixer instances.

“Many of the customers I talk to love the observability that they get with Istio but didn’t love the amount of resources that Mixer consumed,” said Mandar Jog, lead for the Istio Policies and Telemetry working group. “Istio’s goal is to be both feature-rich and performant, and we’re well on the way with this release.”

We also noticed that Anthos Service Mesh users often use it to enforce access policies among their services. To help with that, we redesigned Istio’s authorization APIs, simplifying them and making them easier to use.

It’s also getting easier for operators to install and upgrade Istio, thanks to simpler configuration options via the Kubernetes Operator mechanism. This will help not only Anthos customers but all open source Istio users—that includes Google Kubernetes Engine (GKE) customers who use the Istio on GKE add-on to install open-source Istio in their GKE clusters.

Accelerating Istio for all

As we increased our contributions to Istio, the whole community grew as well. In fact, GitHub recently noted that Istio is in the top five projects in contributor growth over the last year—across all projects on GitHub! Of course, the success of an open source project is as much about building an ecosystem as it is about building a community, and that’s been happening too, with the arrival of Istio-based service mesh products from companies small and large—from Aspen Mesh and Banzai Cloud to MuleSoft and VMware.

Finally, we’re happy to see people talking about their own journey to service mesh with Istio. AutoTrader UK announced recently that Istio and GKE have let them migrate 300 services from VMs in a data center to the cloud. And at KubeCon this week, we heard from the likes of ING Bank, Freddie Mac and Yahoo! about how they’re using Istio. 

Onwards to Istio 1.5!

The Go language turns 10: A Look at Go’s Growth in the Enterprise

Posted by Steve Francia, Go Team

Go's gopher mascot

The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.

November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.

The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, and at dozens of meetups around the world.

In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go everyday for critical workloads.

New to Go?

Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.

Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.

Millions of Gophers!

Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.

As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built on Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.

Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and Mercado Libre. For many enterprises, Go has become their language of choice for building on the cloud.

An Example of Go In the Enterprise

One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem and to improve cost-efficiency and system response times.

MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this big, multi-layered framework soon ran into scalability issues.

Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.

“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”

Using Go allowed MercadoLibre to cut the number of servers they use for this service to one-eighth the original number (from 32 servers down to four), and each server can operate with less power (originally four CPU cores, now down to two). With Go, the company eliminated 88 percent of its servers and cut CPU usage on the remaining ones in half—producing tremendous cost savings.

With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.

Today, roughly half of MercadoLibre’s traffic is handled by Go applications.

“We really see eye-to-eye with the larger philosophy of the language,” Kohan explains. “We love Go’s simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production.”

Visit go.dev to Learn More

We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.

Building on that growth, we’re excited to announce go.dev, a new hub for Go developers.

There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.

MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.

You can read more about MercadoLibre’s success with Go in the full case study.

Kubernetes development, simplified—Skaffold is now GA

Back in 2017, we noticed that developers creating Kubernetes-native applications spent a long time building and managing container images across registries, manually updating their Kubernetes manifests, and redeploying their applications every time they made even the smallest code changes. We set out to create a tool to automate these tasks, helping them focus on writing and maintaining code rather than managing the repetitive steps required during the edit-debug-deploy ‘inner loop’. From this observation, Skaffold was born.

Today, we’re announcing our first generally available release of Skaffold. Skaffold simplifies common operational tasks that you perform when doing Kubernetes development, letting you focus on your code changes and see them rapidly reflected on your cluster. It’s the underlying engine that drives Cloud Code, and a powerful tool in and of itself for improving developer productivity.

Skaffold’s central command, skaffold dev, watches local source code for changes, and rebuilds and redeploys applications to your cluster in real time. But Skaffold has grown to be much more than just a build and deployment tool—instead, it’s become a tool to increase developer velocity and productivity.

Feedback from Skaffold users bears this out. “Our customers love [Kubernetes], but consistently gave us feedback that developing on Kubernetes was cumbersome. Skaffold hit the mark in addressing this problem,” says Warren Strange, Engineering Director at ForgeRock. “Changes to a Docker image or a configuration that previously took several minutes to deploy now take seconds. Skaffold’s plugin architecture gives us the ability to deploy to Helm or Kustomize and use various Docker build plugins such as Kaniko. Skaffold replaced our bespoke collection of utilities and scripts with a streamlined tool that is easy to use.”

A Kubernetes developer’s best friend

Skaffold is a command line tool that saves developers time by automating most of the development workflow from source to deployment in an extensible way. It natively supports the most common image-building and application deployment strategies, making it compatible with a wide variety of both new and pre-existing projects. Skaffold also operates completely on the client-side, with no required components on your cluster, making it super lightweight and high-performance.

Skaffold’s inner development loop

By taking care of the operational tasks of iterative development, Skaffold removes a large burden from application developers and substantially improves productivity.

Over the last two years, there have been more than 5,000 commits from nearly 150 contributors to the Skaffold project, resulting in 40 releases, and we’re confident that Skaffold’s core functionality is mature. To commemorate this, let’s take a closer look at some of Skaffold’s core features.

Fast iterative development
When it comes to development, skaffold dev is your personal ops assistant: it knows about the source files that comprise your application, watches them while you work, and rebuilds and redeploys only what’s necessary. Skaffold comes with highly optimized workflows for local and remote deployment, giving you the flexibility to develop against local Kubernetes clusters like Minikube or Kind, as well as any remote Kubernetes cluster.

“Skaffold is an amazing tool that simplified development and delivery for us,” says Martin Höfling, Principal Consultant at TNG Technology Consulting GmbH. “Skaffold hit our sweet spot by covering two dimensions: First, the entire development cycle from local development, integration testing to delivery. Second, Skaffold enabled us to develop independently of the platform on Linux, OSX, and Windows, with no platform-specific logic required.”

Skaffold’s dev loop also automates typical developer tasks. It automatically tails logs from your deployed workloads, and port-forwards the remote application to your machine, so you can iterate directly against your service endpoints. Using Skaffold’s built-in utilities, you can do true cloud-native development, all while using a lightweight, client-side tool.

Production-ready CI/CD pipelines
Skaffold can be used as a building block for your production-level CI/CD pipelines. Taylor Barrella, Software Engineer at Quora, says that “Skaffold stood out as a tool we’d want for both development and deployment. It gives us a common entry point across applications that we can also reuse for CI/CD. Right now, all of our CI/CD pipelines for Kubernetes applications use Skaffold when building and deploying.”

Skaffold can be used to build images and deploy applications safely to production, reusing most of the same tooling that you use to run your applications locally. skaffold run runs an entire pipeline from build to deploy in one simple command, and can be decomposed into skaffold build and skaffold deploy for more fine-tuned control over the process. skaffold render can be used to build your application images, and output templated Kubernetes manifests instead of actually deploying to your cluster, making it easy to integrate with GitOps workflows.
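For reference, that decomposition looks like this (a quick sketch; each command reads the same skaffold.yaml):

skaffold run     # build and deploy in a single step
skaffold build   # build (and optionally push) your images
skaffold deploy  # deploy the application to your cluster
skaffold render  # output hydrated Kubernetes manifests without deploying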

Profiles let you use the same Skaffold configuration across multiple environments, express the differences via a Skaffold profile for each environment, and activate a specific profile using the current Kubernetes context. This means you can push images and deploy applications to completely different environments without ever having to modify the Skaffold configuration. This makes it easy for all members of a team to share the same Skaffold project configuration, while still being able to develop against their own personal development environments, and even use that same configuration to do deployments to staging and production environments.

On-cluster application debugging
Skaffold can help with a whole lot more than application deployment, not least of which is debugging. Skaffold natively supports direct debugging of Golang, NodeJS, Java, and Python code running on your cluster!

The skaffold debug command runs your application with a continuous build and deploy loop, and forwards any required debugging ports to your local machine. This allows Skaffold to automatically attach a debugger to your running application. Skaffold also takes care of any configuration changes dynamically, giving you a simple yet powerful tool for developing Kubernetes-native applications. skaffold debug powers the debugging features in Cloud Code for IntelliJ and Cloud Code for Visual Studio Code.


Cloud Code: Kubernetes development in the IDE

Cloud Code comes with tools to help you write, deploy, and debug cloud-native applications quickly and easily. It provides extensions to IDEs such as Visual Studio Code and IntelliJ to let you rapidly iterate, debug, and deploy code to Kubernetes. If that sounds similar to Skaffold, that’s because it is—Skaffold powers many of the core features that make Cloud Code so great! Things like local debugging of applications deployed to Kubernetes and continuous deployment are baked right into the Cloud Code extensions with the help of Skaffold.

To get the best IDE experience with Skaffold, try Cloud Code for Visual Studio Code or IntelliJ IDEA!

What’s next?

Our goal with Skaffold and Cloud Code is to offer industry-leading tools for Kubernetes development, and since Skaffold’s inception, we’ve engaged the broader community to ensure that Skaffold evolves in line with what users want. There are some amazing ideas from external contributors that we’d love to see come to fruition, and with the Kubernetes development ecosystem still in a state of flux, we’ll prioritize features that will have the most impact on Skaffold’s usefulness and usability. We’re also working closely with the Cloud Code team to surface Skaffold’s capabilities inside your IDE.

With the move to general availability, there’s never been a better time to start using (or continue using) Skaffold, trusting that it will provide an excellent, production-ready development experience that you can rely on.

For more detailed information and docs, check out the Skaffold webpage, and as always, you can reach out to us on GitHub and Slack.


Special thanks to all of our contributors (you know who you are) who helped make Skaffold the awesome tool it is today!

Enabling developers and organizations to use differential privacy

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office

Whether you’re a city planner, a small business owner, or a software developer, gaining useful insights from data can help make services work better and answer important questions. But, without strong privacy protections, you risk losing the trust of your citizens, customers, and users.

Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual’s data to be distinguished or re-identified. This type of analysis can be implemented in a wide variety of ways and for many different purposes. For example, if you are a health researcher, you may want to compare the average amount of time patients remain admitted across various hospitals in order to determine if there are differences in care. Differential privacy is a high-assurance, analytic means of ensuring that use cases like this are addressed in a privacy-preserving manner.

Today, we’re rolling out the open-source version of the differential privacy library that helps power some of Google’s core products. To make the library easy for developers to use, we’re focusing on features that can be particularly difficult to execute from scratch, like automatically calculating bounds on user contributions. It is now freely available to any organization or developer that wants to use it.

A deeper look at the technology

Our open source library was designed to meet the needs of developers. In addition to making it freely accessible, we wanted it to be easy to deploy and useful.

Here are some of the key features of the library:

  • Statistical functions: Most common data science operations are supported by this release. Developers can compute counts, sums, averages, medians, and percentiles using our library.
  • Rigorous testing: Getting differential privacy right is challenging. Besides an extensive test suite, we’ve included an extensible ‘Stochastic Differential Privacy Model Checker library’ to help prevent mistakes.
  • Ready to use: The real utility of an open-source release is in answering the question “Can I use this?” That’s why we’ve included a PostgreSQL extension along with common recipes to get you started. We’ve described the details of our approach in a technical paper that we’ve just released today.
  • Modular: We designed the library so that it can be extended to include other functionalities such as additional mechanisms, aggregation functions, or privacy budget management.

Investing in new privacy technologies

We have driven the research and development of practical, differentially-private techniques since we released RAPPOR to help improve Chrome in 2014, and continue to spearhead their real-world application.

We’ve used differentially private methods to create helpful features in our products, like showing how busy a business is over the course of a day or how popular a particular restaurant’s dish is in Google Maps, and to improve Google Fi.

Screen recording of a phone checking a restaurant’s popular times

This year, we’ve announced several open-source privacy technologies—TensorFlow Privacy, TensorFlow Federated, Private Join and Compute—and today’s launch adds to this growing list. We’re excited to make this library broadly available and hope developers will consider leveraging it as they build out their comprehensive data privacy strategies. From medicine, to government, to business, and beyond, it’s our hope that these open-source tools will help produce insights that benefit everyone.

Acknowledgements

Software Engineers: Alain Forget, Bryant Gipson, Celia Zhang, Damien Desfontaines, Daniel Simmons-Marengo, Ian Pudney, Jin Fu, Michael Daub, Priyanka Sehgal, Royce Wilson, William Lam

How I replicated an $86 million project in 57 lines of code

When an experiment with existing open source technology does a “good enough” job

The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. They call this system BlueNet.

To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.

Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.

After a bit of googling, I discovered that the Victoria Police had recently trialled a similar device, and that the estimated cost of rollout was somewhere in the vicinity of $86,000,000. One astute commenter pointed out that the $86M cost to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.

Surely we can do a bit better than that.

Existing stationary license plate recognition systems

The Success Criteria

Before getting started, I outlined a few key requirements for product design.

Requirement #1: The image processing must be performed locally

Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.

Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to learn whether a local, on-device implementation would be “good enough”.

Requirement #2: It must work with low quality images

Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video represents the overall quality of footage you’d expect from vehicle-mounted cameras.

Requirement #3: It needs to be built using open source technology

Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.

My solution

At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.

The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (which it only uses to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.

If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.

This is really all that’s involved to recognize the characters on a license plate:
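The code sample from the original post isn’t reproduced here, but a minimal sketch of the idea — shelling out to openalpr’s command-line tool, using its JSON output flag and a country hint as described in the openalpr documentation — looks something like this:

const { execFile } = require('child_process');

// Ask the open source openalpr CLI to read plates from a single frame.
// -j requests JSON output; -c hints at the Australian plate style.
function recognisePlates(imagePath) {
  return new Promise((resolve, reject) => {
    execFile('alpr', ['-j', '-c', 'au', imagePath], (error, stdout) => {
      if (error) return reject(error);
      const { results } = JSON.parse(stdout);
      // Each result carries the plate text and a confidence score.
      resolve(results.map(r => ({ plate: r.plate, confidence: r.confidence })));
    });
  });
}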

A Minor Caveat
Public access to the VicRoads APIs is not available, so license plate checks occur via web scraping for this prototype. While scraping is generally frowned upon, this is a proof of concept, and I’m not slamming anyone’s servers.

Here’s what the dirtiness of my proof-of-concept scraping looks like:
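The original snippet isn’t reproduced here either; conceptually, it amounts to something like the sketch below, where the URL and the CSS selector are hypothetical stand-ins for the real VicRoads page:

const fetch = require('node-fetch');
const cheerio = require('cheerio');

// Query the public registration-check page and scrape the status
// out of the returned HTML. The URL and selector are placeholders.
async function checkRegistration(plate) {
  const response = await fetch(
    'https://vicroads.example/registration-check?plate=' + encodeURIComponent(plate)
  );
  const $ = cheerio.load(await response.text());
  return $('.registration-status').first().text().trim();
}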

Results

I must say I was pleasantly surprised.

I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.

The solution was able to recognise license plates in a wide field of view.

Annotations added for effect. Number plate identified despite reflections and lens distortion.

The solution would occasionally have issues with particular letters, though.

Incorrect reading of plate, mistook the M for an H

But … the solution would eventually get them correct.

A few frames later, the M is correctly identified and at a higher confidence rating

As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.

I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate and then sorting by the highest confidence rating. Alternatively, a threshold could be set that only accepts a confidence greater than 90% before going on to validate the registration number.
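As a sketch — with reads standing in for the per-frame results from the recogniser and validateRegistration as a hypothetical helper:

// Keep only high-confidence reads, then validate the best one.
const best = reads
  .filter(read => read.confidence > 90)
  .sort((a, b) => b.confidence - a.confidence)[0];

if (best) {
  validateRegistration(best.plate);
}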

Those are very straightforward, code-first fixes, and they don’t preclude training the license plate recognition software with a local data set.

The $86,000,000 Question

To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.

I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high frequency, low latency querying of license plates several times per second, per vehicle.

On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if BlueNet isn’t particularly accurate and there are no large-scale IT projects to decommission or upgrade dependent systems.

Future Applications

While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system scanning fellow motorists for an abductor’s car that automatically alerts authorities and family members to its current location and direction.

Tesla vehicles are already brimming with cameras and sensors and can receive OTA updates — imagine turning these into a fleet of virtual good Samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.

Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.

Part 2 — I’ve published an update, in which I test with my own footage and catch an unregistered vehicle, over here:

Remember the $86 million license plate scanner I replicated? I caught someone with it.



Understand GCP Organization resource hierarchies with Forseti Visualizer

Google Cloud Platform (GCP) includes a powerful resource hierarchy that establishes who owns a specific resource, and through which you can apply access controls and organizational policies. But understanding the GCP resource hierarchy can be hard. For example, what does a GCP Organization “look” like? What networks exist within it? Do specific resources violate established security policies? To which service accounts and groups do you have access?

To help answer those questions, as well as others, we recently open-sourced Forseti Visualizer, which lets you, er, visualize and interact with your GCP Organization. It’s built on top of the open-source Forseti Security, and we drew inspiration from our colleague Mike Zinni’s post, Visualizing GCP Architecture using Forseti 2.0 and D3.js.


Forseti Visualizer does a number of things:

1. Dynamically renders your entire GCP Organization. Forseti Visualizer leverages Forseti Security’s Inventory via connectivity to a Cloud SQL / MySQL database, so it’s always up to date with the most recent inventory iteration.

2. Finds all networks or a given set of resource types across an Organization. Again using Forseti Inventory, Visualizer tackles dynamic data processing and filtering of resources. Through a simple series of clicks on filtered resource types, combined with expanding the tree structure, you can quickly find all networks.


3. Finds violations. Using Forseti Scanner, Visualizer quickly shows you when a given resource is in violation of one of your Forseti policies.


4. Displays access permissions. With the help of Forseti IAM Explain and Visualizer, you can quickly figure out whether or not you have access to a given resource—a question that’s otherwise difficult to answer, particularly if you have multiple projects. 

The future for Forseti Visualizer

These are powerful features in and of themselves, but we’re just getting started with Forseti Visualizer. Here’s a sampling of other extensions and features that could be useful:

  • Visualization Scaling – Internal performance testing shows degradation when over 500 resources are open and rendered on the page. An extension to limit the total number of resources and dynamically render content while scrolling through the visualization would help prevent this.

  • Visualization spacing for vertical / horizontal / wide-view

  • Multiple sub-visualizations

  • Full Forseti Explain functionality

  • More detailed GCP resource metadata

When it comes to Forseti Visualizer, the sky’s the limit. To get started, check out the getting started pages. If you have feedback or suggestions on the visualization, interactivity, or future features, reach out to me on our Forseti Slack channel.

Happy birthday Knative! Celebrating one year of portable serverless computing

Today marks the one-year anniversary of Knative, an open-source project initiated by Google that helps developers build, deploy and manage modern serverless workloads. What started as a Google-led project now has a rich ecosystem with partners from around the world, and together, we’ve had an amazing year! Here are just a few notable stats and milestones that Knative has achieved this year:

  • Seven releases since launch
  • A thriving, growing ecosystem: over 3,700 pull requests from 400+ contributors associated with over 80 different companies, including industry leaders like IBM, Red Hat, SAP, TriggerMesh and Pivotal. 
  • Addition of non-Google contributors at the approver, lead, and steering committee level
  • 20% monthly growth in contributions

With all this momentum for the project, we thought now would be a good time to reflect on why we initially created Knative, the project’s ecosystem, and how it relates to Google Cloud’s serverless vision.

Why we created Knative
Serverless computing provides developers with a number of benefits: the ability to run applications without having to worry about managing the underlying infrastructure, to execute code only when needed, to autoscale workloads from zero to N depending on traffic, and many more. But while traditional serverless offerings provide the velocity that developers love, they lack flexibility. Serverless traditionally requires developers to use specific languages and proprietary tools. It also locks developers into a cloud provider and prevents them from being able to easily move their workloads to other platforms.

In other words, most serverless offerings force developers to choose between the velocity and simple developer experience of serverless, and the flexibility and portability of containers. We asked ourselves, what if we could offer the best of both worlds?

Kubernetes has become the de facto standard for running containers. Even with all that Kubernetes offers, many platform providers and operators were implementing their own platforms to solve common needs like building code, scaling workloads, and connecting services with events. Not only was this a duplicative effort for everyone, it led to vendor lock-in and proprietary systems for developers. And thus, Knative was born.

What is Knative?
Knative offers a set of components that standardize mundane but difficult tasks such as building applications from source code to container images, routing and managing traffic during deployment, auto-scaling of workloads, and binding running services to a growing ecosystem of event sources. 

Idiomatic developer experience
Knative provides an idiomatic developer experience: developers can use any language or framework, such as Django, Ruby on Rails, Spring and many more; common development patterns such as GitOps, DockerOps, or ManualOps; and easily plug into existing build and CI/CD toolchains.


A growing Knative ecosystem
When we first announced Knative, it included three main components: build, eventing, and serving, all of which have received significant investment and adoption from the community. Recently the build component was spun out of Knative into a new project, Tekton, which focuses on solving a much broader set of continuous integration use cases than Knative originally intended to address.

But perhaps the biggest indicator of Knative’s momentum is the increase in commercial Knative-based products on the market. Our own Cloud Run is based on Knative, and several members of the community also have products based on Knative, including IBM, Red Hat, SAP, TriggerMesh and Pivotal.

“We are excited to be partnering with Google on the Knative project. Knative enables us to build new innovative managed services in the cloud, easily, without having to recreate the essential building blocks. Knative is a game-changer, finally making serverless workload portability a reality.” – Sebastien Goasguen, Co-Founder, TriggerMesh

“Red Hat has been working alongside the community and innovators like Google on Knative since its inception. By adding the Knative APIs to Red Hat OpenShift, our enterprise Kubernetes platform, developers have the ability to build portable serverless applications. We look forward to enabling more serverless workloads with Red Hat OpenShift Serverless based on Knative as the project nears general availability. This has the potential to improve the general ease of Kubernetes for developers, helping teams to run modern applications across hybrid architectures.” – William Markito Oliveira, senior principal product manager, Red Hat 

To learn more about Knative and the community, look out for an upcoming interview with Evan Anderson, Google Cloud engineer and a Knative technical lead, on the SAP Customer Experience Labs podcast.

Knative: the basis of Google Cloud Run
At Google Cloud Next 2019, we announced Cloud Run, our newest serverless compute platform that lets you run stateless request-driven containers without having to worry about the underlying infrastructure—no more configuration, provisioning, patching and managing servers. Cloud Run autoscales your application from zero to N depending on traffic and you only pay for the resources that you use. Cloud Run is available both as a fully managed offering and also as an add-on in Google Kubernetes Engine (GKE). 

We believe Cloud Run is the best way to use Knative. With Cloud Run, you choose how to run your serverless workloads: fully managed on Google Cloud or on GKE. You can even choose to move your workloads on-premises running on your own Kubernetes cluster or to a third-party cloud. Knative makes it easy to start with Cloud Run and later move to Cloud Run on GKE, or start in your own Kubernetes cluster and migrate to Cloud Run in the future. Because it uses Knative as the underlying platform, you can move your workloads freely across platforms, while significantly reducing switching costs.

Customers such as Percy.io use both Cloud Run and Cloud Run on GKE, and love the fact that they can leverage the same experience and UI wherever they need.

“We first started running our workloads on Cloud Run as fully managed on Google Cloud, but then wanted to leverage some of the benefits of Google Kubernetes Engine (GKE), so we decided to move some services to Cloud Run on GKE. The fact we can seamlessly move from one platform to another by just changing the endpoint is amazing, and that they both have the same UI and interface makes it extremely easy to manage.” – David Jones, Director of Engineering, Percy.io

Get started with Knative today!
Knative brings portability to your serverless workloads and a simple, easy developer experience to your Kubernetes platform. It is truly the best of both worlds. If you operate your own Kubernetes environment, check out Knative today. If you’re a developer, check out Cloud Run as an easy way to experience the benefits of Knative. Get started with your free trial on Google Cloud—we can’t wait to see what you will build.

Building the cloud-native future at Google Cloud

From its first open-source commit five years ago to now, Kubernetes has become the industry standard for modern application architecture. It was built on over a decade of Google’s experience as the world’s largest user of containerized applications. And it’s from this deep and continued investment that Google Cloud provides industry-leading solutions for running workloads at enterprise scale.

One of the most exciting outcomes of this shift toward cloud-native computing is the innovation built on top of Kubernetes. At Google, we love to solve challenging problems, and then share our experiences at scale with the world. This ethos is what brought Kubernetes to life, and it’s also the force behind Knative, Istio, gVisor, Kubeflow, Tekton, and other cloud-native open-source projects that we lead.

We think of it as our job to not only dream about the future, but also to design and implement it. Here’s an overview of open-source projects tied to Kubernetes that we’re working on. We know that speculating about the future can be tricky, but these projects offer a glimpse into how we’re building a cloud-native future. Let’s take a look.


Start with Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It is the industry’s de facto container orchestrator, and is the heart of the cloud-native movement.

We’re proud of our contributions to the Kubernetes project, and we serve the community in many important ways. Google remains the top technical contributor to the project, and is actively involved in nearly all special interest groups (SIGs) and subprojects, on the steering committee, and as code approvers and reviewers. We constantly integrate our real-world experience at scale into the project, just as we have from the beginning.

When we look at the future of Kubernetes, we see the API extension ecosystem maturing and growing even further. We also see a more holistic approach to scalability, so it’s not just about how many nodes or pods are deployed, but how Kubernetes is used across real-world, production environments with widely-varying requirements. Improved reliability is another important facet of this work, as even more mission-critical workloads move to Kubernetes.

Istio

Istio is a service mesh that helps manage, secure and observe traffic between services. The project evolved out of the need for developers adopting microservices to help understand and control the traffic between those services without requiring code changes.

Istio uses the Envoy proxy as a sidecar to collect detailed network traffic statistics and other data from the co-located application, as well as provide logging and tracing. It optionally secures traffic using mTLS (and automatically generates and rotates certificates). Finally, it provides Kubernetes-style APIs to provide advanced networking functionality (for example, the ability to run canary tests, change retry policy at runtime, or add circuit-breaking).

The upcoming version, 1.2, will feature a new operator-based installer and numerous testing and quality improvements. For the rest of 2019, componentization and ease of use will take center stage, as well as architectural improvements that will increase modularity, allow powerful dataplane extensibility, and enhance reliability and performance.

Knative

Knative is a Kubernetes-based platform to build, deploy, and manage modern stateless workloads. Knative components abstract away the complexity and enable developers to focus on what matters to them—solving important business problems.

Just last week, the Knative team released the latest version, v0.6. Besides incremental reliability and stability enhancements, this release also exposes more powerful routing capabilities and improved support for GitOps-like operational use cases. Also, starting with this release, developers can now easily migrate simple apps from Kubernetes Deployments without changes, making service deployment easier for anyone who’s familiar with the Kubernetes resource model.

Since it was announced 10 months ago, a number of commercial offerings have been built on underlying Knative primitives. Today, the Knative community includes 400+ contributors associated with over 50 different companies, who with the v0.6 release have made 4,000+ pull requests. We are excited about this momentum and look forward to working with the community on further improving the developer experience on Kubernetes.

gVisor

gVisor is an open-source, OCI-compatible sandbox runtime that provides a virtualized container environment. It runs containers with a new user-space kernel, delivering a low-overhead container security solution for high-density applications. gVisor integrates with Docker, containerd and Kubernetes, making it easier to improve the security isolation of your containers while still using familiar tooling. Additionally, gVisor supports a variety of underlying mechanisms for intercepting application calls, allowing it to run in diverse host environments, including cloud-hosted virtual machines.

gVisor was open sourced in May 2018 at KubeCon EU. Since then, the gVisor team has added multi-container support for Kubernetes, released a suite of tests containing more than 1,500 individual tests, released a minikube add-on, integrated it with containerd, and further improved isolation and compatibility. The gVisor team recently began hosting community meetings and is working to grow the users and community around container isolation and gVisor.

Tekton

Tekton is a set of standardized Kubernetes-native primitives for building and running Continuous Delivery workflows. It allows users to express their Continuous Integration, Deployment and Delivery pipelines as Kubernetes CRDs, and run them in any Kubernetes cluster.

We started Tekton last year and donated it to the open Continuous Delivery Foundation earlier this year. Tekton APIs are still in alpha, but we look forward to stabilizing them and adding support for automated deployments, vendor-agnostic pull requests, GitOps workflows, automated compliance-as-code and more!

Forseti Security

Forseti Security is a collection of community-driven, open-source tools to help you expand upon the security of your Google Cloud Platform (GCP) environments. It takes a snapshot of your GCP resources metadata, audits those resources by comparing the configuration with the policies you defined, and notifies you of violations on an ongoing basis.

With Forseti, you can ensure your GKE clusters are provisioned with security and governance guardrails by scanning your GKE resource metadata and making sure the configurations are as expected. Forseti’s Validator Scanner lets you define custom security and governance constraints in Rego to check for violations in your GKE resource metadata.

In addition, you can reuse these constraints for pre-deployment checks with Terraform Validator. A set of canned constraints are available in the Policy Library. The Forseti community will continue contributing new constraints to harden your GKE environment. Get started with Forseti Validator Scanner here.

Kubeflow

Kubeflow is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Its goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML on a variety of infrastructures. The Kubeflow project is supported by 100+ contributors from 20+ organizations.

Kubeflow is on the road to 1.0, and we’re hard at work building a powerful development experience that will allow data scientists to build, train and deploy from notebooks, as well as the enterprise stability and features ML operations teams need to deploy and scale advanced data science workflows. Hear more about this effort in this session from KubeCon NA 2018, and follow us on Twitter @kubeflow.

Skaffold

Skaffold is a command line tool that makes it fast and easy to develop applications on Kubernetes. Skaffold automates the local development loop for you; skaffold dev rebuilds your images and redeploys your app to Kubernetes on every code change. You can also use Skaffold as a building block for CI/CD pipelines with skaffold run. It’s language-agnostic and has an increasing number of configurable, flexible image builders (jib, docker, bazel, kaniko), deployers (kustomize, kubectl, helm) and automated tagging policies, making it a great fit for more and more Kubernetes development workflows.

We use Skaffold under the hood for Cloud Code for IntelliJ and VSCode and also for Jenkins-X. Skaffold is currently in beta, and will soon graduate to 1.0.0.

Follow our progress on our GitHub repo, and share your thoughts with the #skaffold hashtag on Twitter!

Gatekeeper

Gatekeeper is a customizable admission webhook. It allows cluster administrators and security practitioners to develop, share and enforce policies and config validation via parameterized, easily configurable constraint CRDs. Constraints are portable and could also be used to validate commits to the source-of-truth repo in CI/CD pipelines.

With Gatekeeper, you can help developers comply with internal governance and best practices, freeing up your time and theirs. You can do things like require developers to set ownership labels, apply resource limits to their pods, or prohibit them from using the :latest tag. Using Gatekeeper’s audit functionality, you can easily find any pre-existing resources that are in violation of current best practices.

Google is proud to be collaborating with Microsoft and Styra (the creators of Open Policy Agent) on this project. Gatekeeper is currently in alpha and we welcome user feedback and contributions.

Krew

Krew is a plugin manager for kubectl that helps users discover and install kubectl plugins to improve their kubectl experiences. Originally developed at Google, Krew is now a part of Kubernetes SIG CLI.  

The future is now

Building cloud-native apps on top of Kubernetes isn’t some abstract, aspirational goal. The tools you need are here today, and they’re only getting better. To stay up to date on what else is happening in the cloud-native community, both from Google and beyond, we urge you to subscribe to the Kubernetes Podcast.

Bringing the best of open source to Google Cloud customers

Google’s belief in an open cloud stems from our deep commitment to open source. We believe that open source is the future of public cloud: It’s the foundation of IT infrastructure worldwide and has been a part of Google’s foundation since day one. This is reflected in our contributions to projects like Kubernetes, TensorFlow, Go, and many more.

Today, we’re taking our commitment to open source to the next level by announcing strategic partnerships with leading open source-centric companies in the areas of data management and analytics, including:

  • Confluent
  • DataStax
  • Elastic
  • InfluxData
  • MongoDB
  • Neo4j
  • Redis Labs

We’ve always seen our friends in the open-source community as equal collaborators, and not simply a resource to be mined. With that in mind, we’ll be offering managed services operated by these partners that are tightly integrated into Google Cloud Platform (GCP), providing a seamless user experience across management, billing and support. This makes it easier for our enterprise customers to build on open-source technologies, and it delivers on our commitment to continually support and grow these open-source communities.

Making open source even more accessible with a cloud-native experience

The open-source database market is big, and growing fast. According to SearchDataManagement.com, “more than 70% of new applications developed by corporate users will run on an open source database management system, and half of the existing relational database installations built on commercial DBMS technologies will be converted to open source platforms or [are] in the process of being converted.”

This mirrors what we hear from our customers—that you want to be able to use open-source technology easily and in a cloud-native way. The partnerships we are announcing today make this possible by offering an elevated experience similar to Google’s native services. It also means that you aren’t locked in or out when you are using these technologies—we think that’s important for our customers and our partners.

Here are some of the benefits these partnerships will offer:

  • Fully managed services running in the cloud, with best efforts made to optimize performance and latency between the service and application.
  • A single user interface to manage apps, which includes the ability to provision and manage the service from the Google Cloud Console.
  • Unified billing, so you get one invoice from Google Cloud that includes the partner’s service.
  • Google Cloud support for the majority of these partners, so you can manage and log support tickets in a single window and not have to deal with different providers.

To further our mission of making GCP the best destination for open source-based services, we will work with our partners to build integrations with native GCP services like Stackdriver for monitoring and IAM, validate these services for security, and optimize performance for users.

Partnering with leaders in open source

The partners we are announcing today include several of the top-ranked databases in their respective categories. We’re working alongside these creators and supporting the growth of these companies’ technologies to inspire strong customer experiences and adoption. These new partners include:

Confluent: Founded by the team that built Apache Kafka, Confluent builds an event streaming platform that lets companies easily access data as real-time streams. Learn more.  

DataStax: DataStax powers enterprises with its always-on, distributed cloud database built on Apache Cassandra and designed for hybrid cloud. Learn more.

Elastic: As the creators of the Elastic Stack, Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for search use cases, like logging, security, and analytics. Learn more.

InfluxData: InfluxData’s time series platform can instrument, observe, learn and automate any system, application and business process across a variety of use cases. InfluxDB (developed by InfluxData) is an open-source time series database optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, IoT sensor data, and real-time analytics. Learn more.

MongoDB: MongoDB is a modern, general-purpose database platform that brings software and data to developers and the applications they build, with a flexible model and control over data location. Learn more.

Neo4j: Neo4j is a native graph database platform specifically optimized to map, store, and traverse networks of highly connected data to reveal invisible contexts and hidden relationships. By analyzing data points and the connections between them, Neo4j powers real-time applications. Learn more.

Redis Labs: Redis Labs is the home of Redis, the world’s most popular in-memory database, and commercial provider of Redis Enterprise. It offers performance, reliability, and flexibility for personalization, machine learning, IoT, search, e-commerce, social, and metering solutions worldwide. Learn more.

As we look to an open source-powered cloud future, we’re pleased to bring these partner technologies to you. Partnering with the companies that invest in developing open-source technologies means you get benefits like expertise in operating these services at scale, additional enterprise features, and shorter cycles in bringing the latest innovation to the cloud.   

We’re looking forward to seeing what you build with these open source technologies. Learn more here about open source on GCP.

TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models

Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0 to make machine learning models easier to use and deploy.

TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via tf.data pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter Notebooks. The TensorFlow team will continue to work on improving TensorFlow 2.0 alpha with a general release candidate coming later in Q2 2019.
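Here's a minimal sketch (with made-up data) of what that looks like in practice: eager execution is on by default, a tf.data pipeline feeds a tf.keras model, and training happens with a single fit call:

import numpy as np
import tensorflow as tf

print(tf.executing_eagerly())  # True by default in TensorFlow 2.0

# Build a tf.data pipeline over some placeholder training data
features = np.random.rand(1000, 10).astype("float32")
labels = np.random.randint(2, size=(1000, 1)).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(32)

# Train a small tf.keras model directly on the pipeline
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=3)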

Making ML easier to use

The TensorFlow team’s focus on developer productivity and ease of use doesn’t stop at IPython notebooks and Colab: it extends to tf.keras (now the standard high-level API), so API components integrate far more intuitively, and to TensorFlow Datasets, which lets users import common preprocessed datasets with a single line of code. Data ingestion pipelines can be orchestrated with tf.data, pushed into production with TensorFlow Extended (TFX), and scaled to multiple nodes and hardware architectures with minimal code changes using distribution strategies, as sketched below.
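For instance, assuming the tensorflow-datasets package is installed, a sketch of a one-line dataset load combined with a distribution strategy might look like this:

import tensorflow as tf
import tensorflow_datasets as tfds

# One line to load a common, preprocessed dataset as a tf.data pipeline
mnist_train = tfds.load("mnist", split="train", as_supervised=True)

# The same Keras code scales across available GPUs under a distribution strategy
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Normalize pixel values, batch, and train
train_data = mnist_train.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y)).batch(64)
model.fit(train_data, epochs=1)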

The TensorFlow engineering team has created an upgrade tool and several migration guides to support users who wish to migrate their models from TensorFlow 1.x to 2.0. TensorFlow is also hosting a weekly community testing stand-up for users to ask questions about TensorFlow 2.0 and migration support. If you’re interested, you can find more information on the TensorFlow website.
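As an example, the upgrade script can be invoked from the command line; the file and directory names below are placeholders:

tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py

# or convert a whole project tree at once
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/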

Upgrading a model with the tf_upgrade_v2 tool.

Experiment and iterate

Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution the default in TensorFlow 2.0, researchers can use intuitive Python control flow, optimize their eager code with tf.function, and save time with improved error messages. Creating and experimenting with models in TensorFlow has never been easier.
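A minimal sketch: ordinary Python control flow works in eager code, and a tf.function decorator traces the same function into an optimized graph:

import tensorflow as tf

@tf.function  # trace this Python function into an optimized TensorFlow graph
def clipped_sum(x, y):
    # AutoGraph converts this Python `if` on a tensor into graph control flow
    if tf.reduce_sum(x) > 0:
        return x + y
    return x - y

print(clipped_sum(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0])))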

Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel’s MKL library, which supports CPU-based Compute Engine instances.

Through add-on extensions, TensorFlow expands to help you build advanced models. For example, TensorFlow Federated lets you train models collaboratively both in the cloud and on remote (IoT or embedded) devices; oftentimes, those remote devices hold training data that your centralized training system cannot access. We also recently announced the TensorFlow Privacy extension, which uses techniques such as differentially private training to help keep models from memorizing sensitive details, like personally identifiable information (PII), in your training data. Finally, TensorFlow Probability extends TensorFlow’s abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.
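As a small, illustrative sketch (assuming the tensorflow-probability package is installed), TensorFlow Probability exposes distributions you can sample and score directly:

import tensorflow_probability as tfp

tfd = tfp.distributions
normal = tfd.Normal(loc=0.0, scale=1.0)   # a standard normal distribution
samples = normal.sample(5)                # draw five samples
print(normal.log_prob(samples))           # score them under the distribution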

Deploy your ML model in a variety of environments and languages

A core strength of TensorFlow has always been the ability to deploy models into production. In TensorFlow 2.0, the TensorFlow team is making it even easier. TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance, or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.
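For example, here’s a sketch of post-training quantization with the TensorFlow Lite converter; the SavedModel path is a placeholder:

import tensorflow as tf

# Convert an exported SavedModel to TensorFlow Lite with post-training quantization
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the quantized flatbuffer to disk for deployment on-device
with open("model.tflite", "wb") as f:
    f.write(tflite_model)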

Exploring and analyzing data with TensorFlow Data Validation.

JavaScript is one of the world’s most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. The TensorFlow team announced TensorFlow.js version 1.0. With this release, you can not only train and run models in the browser, but also run TensorFlow as part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now has better performance than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

How to get started

If you’re eager to get started with the TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. If you’re just looking to run a notebook anywhere, TensorFlow 2.0 is available in Colab via pip install. Perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.
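For example, at the time of writing the alpha build can be installed with pip:

pip install tensorflow==2.0.0-alpha0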

Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships. In collaboration with O’Reilly Media, we’re hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses for learners who are new to ML and TensorFlow. The first is deeplearning.ai’s Course 1 – Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: from Basics to Mastery series. The second is Udacity’s Intro to TensorFlow for Deep Learning.

If you’re using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity and DeepLearning.ai.