Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we’re introducing the first video showing long-time App Engine developers how to migrate from the App Engine ndb client library that connects to Datastore. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and gives them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding “migration module” video, we transitioned that app from App Engine’s original webapp2 framework to Flask, a popular framework in the Python community. Today’s Module 2 content picks up where that Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine’s Datastore matured into its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed for App Engine ndb users to preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement in a similar way to opening files, but for Datastore access. However, aside from moving code inside with blocks, no other changes are required of the original App Engine ndb app code that accesses Datastore. Of course, “YMMV” (your mileage may vary) depending on the complexity of your code, but the goal of the team is to provide as seamless a transition as possible as well as to preserve “ndb”-style access.
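The shape of that change can be sketched with the standard library alone. The ToyClient below is a hypothetical stand-in for google.cloud.ndb.Client (whose real context() call requires GCP credentials); it only illustrates how a with block scopes Datastore-style access:

```python
from contextlib import contextmanager

class ToyClient:
    'hypothetical stand-in for google.cloud.ndb.Client (illustration only)'
    def __init__(self):
        self.active = False

    @contextmanager
    def context(self):
        'scope all Datastore-style calls to one with block, as Cloud NDB does'
        self.active = True
        try:
            yield self
        finally:
            self.active = False

    def query(self):
        'refuse to run outside a context, mimicking Cloud NDB behavior'
        if not self.active:
            raise RuntimeError('no context; wrap calls in client.context()')
        return ['entity1', 'entity2']

client = ToyClient()
with client.context():          # App Engine ndb code needed no such block
    visits = client.query()
```

The real migration is mostly mechanical: create one client, then wrap each handler's Datastore calls in client.context().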

The “diffs” between the App Engine ndb and Cloud NDB versions of the sample app

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample “STARTs” with the Module 1 code completed in the previous codelab (and video). You can use your own solution or grab ours in the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this “FINISH” code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. Bonus content migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 and others, so stay tuned! Developers should also check out the official Cloud NDB migration guide which provides more migration details, including key differences between both client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can’t wait, try out its codelab found in the table at the repo above. Migrations aren’t always easy; we hope this content helps you modernize your apps and shows we’re focused on helping existing users as much as new ones.

Migrating from App Engine webapp2 to Flask

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating web framework

The Google Cloud team recently introduced a series of codelabs (free, self-paced, hands-on tutorials) and corresponding videos designed to help users on one of our serverless compute platforms modernize their apps, with an initial focus on our earliest users running their apps on Google App Engine. We kick off this content by showing users how to migrate from App Engine’s webapp2 web framework to Flask, a popular framework in the Python community.

While users have always been able to use other frameworks with App Engine, webapp2 comes bundled with App Engine, making it the default choice for many developers. One new requirement of App Engine’s next-generation platform (which launched in 2018) is that web frameworks must do their own routing, which, unfortunately, means that webapp2 is no longer supported, so here we are. The good news is that as a result, modern App Engine is more flexible, lets users develop in a more idiomatic fashion, and makes their apps more portable.

For example, while webapp2 apps can run on App Engine, Flask apps can run on App Engine, your servers, your data centers, or even on other clouds! Furthermore, Flask has more users, more published resources, and is better supported. If Flask isn’t right for you, you can select from other WSGI-compliant frameworks such as Django, Pyramid, and others.
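What “doing your own routing” means mechanically can be sketched with nothing but the standard library; the route decorator and ROUTES table below are illustrative stand-ins for what Flask and other WSGI frameworks provide, not App Engine code:

```python
from wsgiref.util import setup_testing_defaults

ROUTES = {}

def route(path):
    'register a handler for a path -- the job webapp2 used to do for you'
    def register(func):
        ROUTES[path] = func
        return func
    return register

@route('/')
def home(environ):
    return 'welcome'

def app(environ, start_response):
    'a bare WSGI application that dispatches from its own routing table'
    handler = ROUTES.get(environ.get('PATH_INFO', '/'))
    if handler is None:
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'not found']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [handler(environ).encode('utf-8')]
```

Because the contract is just the WSGI callable, any framework that implements it (Flask, Django, Pyramid, or this toy) runs on the next-generation platform.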

Video and codelab content

In this “Module 1” episode of Serverless Migration Station (part of the Serverless Expeditions series), Google engineer Martin Omander and I explore this migration and walk developers through it step-by-step.

In the previous video, we introduced developers to the baseline Python 2 App Engine NDB webapp2 sample app that we’re taking through each of the migrations. In the video above, users see that the majority of the changes are in the main application handler, MainHandler:

The “diffs” between the webapp2 and Flask versions of the sample app

Upon (re)deploying the app, users should see no visible changes to the output from the original version:

VisitMe application sample output

Next steps

Today’s video picks up from where we left off: the Python 2 baseline app in its Module 0 repo folder. We call this the “START”. By the time the migration has completed, the resulting source code, called “FINISH”, can be found in the Module 1 repo folder. If you mess up partway through, you can rewind to the START, or compare your solution with ours, FINISH. We also hope to one day provide a Python 3 version as well as cover other legacy runtimes like Java 8, PHP 5, and Go 1.11 and earlier, so stay tuned!

All of the migration learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. The next video (Module 2) will cover migrating from App Engine’s ndb library for Datastore to Cloud NDB. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Introducing “Serverless Migration Station” Learning Modules

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Helping users modernize their serverless apps

Earlier this year, the Google Cloud team introduced a series of codelabs (free, online, self-paced, hands-on tutorials) designed for technical practitioners modernizing their serverless applications. Today, we’re excited to announce companion videos, forming a set of “learning modules” made up of these videos and their corresponding codelab tutorials. Modernizing your applications allows you to access continuing product innovation and experience a more open Google Cloud. The initial content is designed with App Engine developers in mind, our earliest users, to help you take advantage of the latest features in Google Cloud. Here are some of the key migrations and why they benefit you:

  • Migrate to Cloud NDB: App Engine’s legacy ndb library used to access Datastore is tied to Python 2 (which has been sunset by its community). Cloud NDB gives developers the same NDB-style Datastore access but is Python 2-3 compatible and allows Datastore to be used outside of App Engine.
  • Migrate to Cloud Run: There has been a continuing shift towards containerization, an app modernization process making apps more portable and deployments more easily reproducible. If you appreciate App Engine’s easy deployment and autoscaling capabilities, you can get the same by containerizing your App Engine apps for Cloud Run.
  • Migrate to Cloud Tasks: while the legacy App Engine taskqueue service is still available, new features and continuing innovation are going into Cloud Tasks, its standalone equivalent letting users create and execute App Engine and non-App Engine tasks.

The “Serverless Migration Station” videos are part of the long-running Serverless Expeditions series you may already be familiar with. In each video, Google engineer Martin Omander and I explore a variety of modernization techniques. Each episode gives viewers an overview of the task at hand, then a deeper-dive screencast takes a closer look at the code and configuration files and, most importantly, walks developers through the migration steps needed to transform the same sample app across each migration.

Sample app

The baseline sample app is a simple Python 2 App Engine NDB and webapp2 application. It registers every web page visit (saving visiting IP address and browser/client type) and displays the most recent queries. The entire application is shown below, featuring Visit as the data Kind, the store_visit() and fetch_visits() functions, and the main application handler, MainHandler.

import os
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.webapp import template

class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit entity in Datastore'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

def fetch_visits(limit):
    'get most recent visits'
    return (v.to_dict() for v in Visit.query().order(
            -Visit.timestamp).fetch(limit))

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(10)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

app = webapp2.WSGIApplication([
    ('/', MainHandler),
], debug=True)

Baseline sample application code

Upon deploying this application to App Engine, users will get output similar to the following:

VisitMe application sample output

This application is the subject of today’s launch video, and the main.py file above along with other application and configuration files can be found in the Module 0 repo folder.

Next steps

Each migration learning module covers one modernization technique. A video outlines the migration while the codelab leads developers through it. Developers will always get a starting codebase (“START”) and learn how to do a specific migration, resulting in a completed codebase (“FINISH”). Developers can hit the reset button (back to START) if something goes wrong or compare their solutions to ours (FINISH). The hands-on experience helps users build muscle-memory for when they’re ready to do their own migrations.

All of the migration learning modules, corresponding Serverless Migration Station videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. While there’s an initial focus on Python 2 and App Engine, you’ll also find content for Python 3 users as well as non-App Engine users. We’re looking into similar content for other legacy languages as well so stay tuned. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Modernizing your Google App Engine applications

Posted by Wesley Chun, Developer Advocate, Google Cloud

Modernizing your Google App Engine applications header

Next generation service

Since its initial launch in 2008 as the first product from Google Cloud, Google App Engine, our fully-managed serverless app-hosting platform, has been used by many developers worldwide. Since then, the product team has continued to innovate on the platform: introducing new services, extending quotas, supporting new languages, and adding a Flexible environment to support more runtimes, including the ability to serve containerized applications.

With many original App Engine services maturing to become their own standalone Cloud products along with users’ desire for a more open cloud, the next generation App Engine launched in 2018 without those bundled proprietary services, but coupled with desired language support such as Python 3 and PHP 7 as well as introducing Node.js 8. As a result, users have more options, and their apps are more portable.

With the sunset of Python 2, Java 8, PHP 5, and Go 1.11 by their respective communities, Google Cloud has reassured users by expressing continued long-term support of these legacy runtimes, including maintaining the Python 2 runtime. So while there is no requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.

Google Cloud has created a set of migration guides for users modernizing from Python 2 to 3, Java 8 to 11, PHP 5 to 7, and Go 1.11 to 1.12+ as well as a summary of what is available in both first and second generation runtimes. However, moving from bundled to unbundled services may not be intuitive to developers, so today we’re introducing additional resources to help users in this endeavor: App Engine “migration modules” with hands-on “codelab” tutorials and code examples, starting with Python.

Migration modules

Each module represents a single modernization technique. Some are strongly recommended, others less so, and, at the other end of the spectrum, some are quite optional. We will guide you as far as which ones are more important. Similarly, there’s no real order of modules to look at since it depends on which bundled services your apps use. Yes, some modules must be completed before others, but again, you’ll be guided as far as “what’s next.”

More specifically, modules focus on the code changes that need to be implemented, not changes in new programming language releases as those are not within the domain of Google products. The purpose of these modules is to help reduce the friction developers may encounter when adapting their apps for the next-generation platform.

Central to the migration modules are the codelabs: free, online, self-paced, hands-on tutorials. The purpose of Google codelabs is to teach developers one new skill while giving them hands-on experience, and there are codelabs just for Google Cloud users. The migration codelabs are no exception, teaching developers one specific migration technique.

Developers following the tutorials will make the appropriate updates on a sample app, giving them the “muscle memory” needed to do the same (or similar) with their applications. Each codelab begins with an initial baseline app (“START”), leads users through the necessary steps, then concludes with an ending code repo (“FINISH”) they can compare against their completed effort. Here are some of the initial modules being announced today:

  • Web framework migration from webapp2 to Flask
  • Updating from App Engine ndb to Google Cloud NDB client libraries for Datastore access
  • Upgrading from the Google Cloud NDB to Cloud Datastore client libraries
  • Moving from App Engine taskqueue to Google Cloud Tasks
  • Containerizing App Engine applications to execute on Cloud Run


What should you expect from the migration codelabs? Let’s preview a pair, starting with the web framework: below is the main driver for a simple webapp2-based “guestbook” app registering website visits as Datastore entities:

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(LIMIT)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

A “visit” consists of a request’s IP address and user agent. After visit registration, the app queries for the latest LIMIT visits to display to the end-user via the app’s HTML template. The tutorial leads developers through a migration to Flask, a web framework with broader support in the Python community. A Flask equivalent app uses decorated functions rather than webapp2’s object model:

@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(LIMIT)
    return render_template('index.html', visits=visits)

The framework codelab walks users through this and other required code changes in its sample app. Since Flask is more broadly used, this makes your apps more portable.

The second example pertains to Datastore access. Whether you’re using App Engine’s ndb or the Cloud NDB client libraries, the code to query the Datastore for the most recent limit visits may look like this:

def fetch_visits(limit):
    'get most recent visits'
    query = Visit.query()
    visits = query.order(-Visit.timestamp).fetch(limit)
    return (v.to_dict() for v in visits)

If you decide to switch to the Cloud Datastore client library, that code would be converted to:

def fetch_visits(limit):
    'get most recent visits'
    query = DS_CLIENT.query(kind='Visit')
    query.order = ['-timestamp']
    return query.fetch(limit=limit)

The query styles are similar but different. While the sample apps are just that, samples, giving you this kind of hands-on experience is useful when planning your own application upgrades. The goal of the migration modules is to help you separate moving to the next-generation service and making programming language updates so as to avoid doing both sets of changes simultaneously.

As mentioned above, some migrations are more optional than others. For example, moving away from the App Engine bundled ndb library to Cloud NDB is strongly recommended, but because Cloud NDB is available for both Python 2 and 3, it’s not necessary for users to migrate further to Cloud Datastore or Cloud Firestore unless they have specific reasons to do so. Moving to unbundled services is the primary step toward giving users more flexibility and choices, and ultimately, it makes their apps more portable.

Next steps

For those who are interested in modernizing their apps, a complete table describing each module and links to corresponding codelabs and expected START and FINISH code samples can be found in the migration module repository. We are also working on video content based on these migration modules as well as producing similar content for Java, so stay tuned.

In addition to the migration modules, our team has also setup a separate repo to support community-sourced migration samples. We hope you find all these resources helpful in your quest to modernize your App Engine apps!

Now, you can explore Google Cloud APIs with Cloud Code

Applications often rely on external services to provide capabilities such as data storage, messaging and networking with the help of APIs. Google Cloud offers a wide array of such APIs—covering everything from translating text, building AI/ML models, managing database operations, through to secret management and storage. But adding an API to your application often means performing a number of somewhat repetitive steps outside of the integrated development environment (IDE), across different websites.

We are pleased to announce that we have streamlined this process and made it easy to add Google Cloud APIs to your project and start using them without leaving the IDE, with the help of a new API manager in Cloud Code.

Cloud Code is our set of extensions for VS Code and the JetBrains family of integrated development environments (IDEs). With extensions for VS Code, IntelliJ, GoLand, PyCharm, and WebStorm, Cloud Code can help you develop, deploy, and debug Kubernetes applications.

The Cloud Code API manager further enhances the existing Cloud Code feature set by providing several features directly within your favorite IDE that you can use to add Google Cloud APIs to your application, whether it runs on Kubernetes or otherwise:

  1. Browse and enable Google Cloud APIs
  2. Install corresponding client libraries, with support for Java, NodeJS, Python and Go
  3. Access detailed API documentation

Each of these features reduces the amount of “context switching” you need to do and lets you spend more time focused on writing code. Let’s look at each of these Cloud Code features in a little more depth.

Browse and enable Google Cloud APIs

Finding the right API to add to your application can take time. For example, even a simple app like the “bookshelf” getting-started app requires enabling the Cloud Storage, Logging and Error Reporting APIs. For more complex applications that use more services, it’s even more difficult. The API browser in Cloud Code lets you browse all the Google Cloud APIs, which are categorized into logical groups and presented in an easy-to-view format, from within the IDE. You can sort and search for your favorite Google Cloud API and click on it to view more details. In the details page, you can also view the status of a Google Cloud API and enable it for a GCP project.

Here, you can see how to navigate between various Google Cloud APIs, view the status of an API, enable an API, and automatically add a Maven dependency for Java projects in IntelliJ IDEA.

Install client libraries

In addition to showing information about a Google Cloud API in the details page, Cloud Code provides instructions for installing client libraries. Client libraries allow you to consume your preferred Cloud API in the programming language of your choice, instead of directly consuming low-level REST APIs or protobufs. Currently, installation instructions are available for Java, NodeJS, Python and Go. If you are using Java, diamond dependencies are handled automatically through the Google Cloud libraries BOM.

With Cloud Code, you can now browse the Google Cloud APIs, view documentation for and the status of an API, and copy installation instructions.

Access detailed documentation

So far, the API manager has made it easy to discover and add an API into your code base. When it comes to using the API, you may also often need to refer to the reference documentation. Cloud Code’s API manager brings all of the critical links right into context inside the IDE so that you can easily find examples, review the structure of the overall API and discover details about pricing and additional detailed use cases.

Get started

Cloud Code helps you get started on various Google Cloud APIs in a seamless manner from within your favorite IDE. To learn more, check out the documentation for Cloud Code for VS Code and JetBrains IDEs. If you are new to Cloud Code, start by learning how to install Cloud Code.

New Application Manager brings GitOps to Google Kubernetes Engine

Kubernetes is the de facto standard for managing containerized applications, but developers and app operators often struggle with end-to-end Kubernetes lifecycle management—things like authoring, releasing and managing Kubernetes applications. 

To simplify the management of application lifecycle and configuration, today we are launching Application Manager, an application delivery solution delivered as an add-on to Google Kubernetes Engine (GKE). Now available in beta, Application Manager allows developers to easily create a dev-to-production application delivery flow, while incorporating Google’s best practices for managing release configurations. Application Manager lets you get your applications running in GKE efficiently, securely and in line with company policy, so you can succeed with your application modernization goals. 

Addressing the Kubernetes application lifecycle

The Kubernetes application lifecycle consists of three main stages: authoring, releasing and managing. Authoring includes writing the application source code and app-specific Kubernetes configuration. Releasing includes making changes to code and/or config, then safely deploying those changes to different release environments. The managing phase includes operationalizing applications at scale and in production. Currently, there are no well-defined standards for these stages, and users often ask us for best practices and recommendations to help them get started.

In addition, Kubernetes application configurations can be too long and complex to manage at scale. In particular, an application that is deployed across test, staging and production release environments might have duplicate configurations stored in multiple Git repositories. Any change to one config needs to be replicated to the others, creating the potential for human error. 

Application Manager embraces GitOps principles, leveraging Git repositories to enable declarative configuration management. It allows you to audit and review changes before they are deployed to environments. It also automatically scaffolds and enforces recommended Git repository structures, and allows you to perform template-free customization for configurations with Kustomize, a Kubernetes-native configuration management tool.

Application Manager runs inside your GKE cluster as a cluster add-on, and performs the following tasks: 

  • It pulls Kubernetes manifests from a Git repository (within a git branch, tag or commit) and deploys the manifests as an application in the cluster. 

  • It reports metadata about deployed applications (e.g. version, revision history, health, etc.) and visualizes the applications in Google Cloud Console.
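Under the hood this is a pull-based reconcile loop: compare the desired manifests from Git against what is live in the cluster and apply the difference. A toy sketch (plain dicts standing in for Kubernetes manifests and cluster state; not the add-on's actual code) shows the idea:

```python
def reconcile(desired, live):
    'return the kubectl-style actions needed to make live match desired'
    actions = []
    for name, manifest in desired.items():
        if name not in live:
            actions.append(('create', name))
        elif live[name] != manifest:
            actions.append(('update', name))
    for name in live:
        if name not in desired:
            actions.append(('delete', name))
    return sorted(actions)

# desired state pulled from the deployment repo (a Git branch, tag or commit)
desired = {'bookstore-svc': {'replicas': 3}, 'bookstore-db': {'replicas': 1}}
# what is currently running in the cluster (hypothetical example values)
live = {'bookstore-svc': {'replicas': 2}, 'old-job': {'replicas': 1}}
```

Because the loop only ever reads from Git, the repository stays the single source of truth for what runs in the cluster.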

Releasing an application with Application Manager

Now, let’s dive into more details on how to use Application Manager to release or deploy an application, from scaffolding Git repositories and defining release environments to deploying the application in clusters. You can do all these tasks by executing simple commands in appctl, Application Manager’s command line interface.

Here’s an example workflow of how you can release a “bookstore” app to both staging and production environments. 

First, initialize it by running 

appctl init bookstore --app-config-repo=github.com/$USER_OR_ORG/bookstore. 

This creates two remote Git repositories: 1) an application repository, for storing application configuration files in kustomize format (for easier configuration management), and 2) a deployment repository, for storing auto-generated, fully-rendered configuration files as the source of truth of what’s deployed in the cluster. 

After the Git repositories are initialized, you can add a staging environment to the bookstore app by running appctl env add staging --cluster=$MY_STAGING_CLUSTER, and do the same for the prod environment. At this point, the application repository contains a base configuration plus per-environment overlays.

Here, we are using kustomize to manage environment-specific differences in the configuration. With kustomize, you can declaratively manage distinctly customized Kubernetes configurations for different environments using only Kubernetes API resource files, by patching overlays on top of the base configuration.
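The overlay idea can be sketched as a merge: the environment patch wins wherever it overlaps the base. This is a shallow-merge simplification (real kustomize performs strategic merges of full Kubernetes resources), and the field names below are made up for illustration:

```python
def apply_overlay(base, overlay):
    'patch an environment overlay over the base config (shallow merge)'
    merged = dict(base)        # never mutate the shared base
    merged.update(overlay)     # overlay fields win on overlap
    return merged

# hypothetical base config plus per-environment patches
base = {'image': 'gcr.io/demo/bookstore', 'replicas': 1, 'log_level': 'info'}
staging_overlay = {'replicas': 2}
prod_overlay = {'replicas': 10, 'log_level': 'warning'}
```

Each environment thus declares only its differences, and the base stays the single definition of everything shared.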

When you’re ready to release the application to the staging environment, simply create an application version with git tag in the application repository, and then run appctl prepare staging. This automatically generates hydrated configurations from the tagged version in the application repository, and pushes them to the staging branch of the deployment repository for an administrator to review. 

With this Google-recommended repository structure, Application Manager provides a clean separation between the easy-to-maintain kustomize configurations in the application repository, and the auto-generated deployment repository—an easy-to-review single source of truth; it also prevents these two repositories from diverging. 

Once the commits to hydrated configurations are reviewed and merged into the deployment repository, run appctl apply staging to deploy this application to the staging cluster. 

Promotion from staging to prod is as easy as appctl apply prod --from-env staging. To roll back in case of failure, simply run appctl apply staging --from-tag=OLD_VERSION_TAG.

What’s more, this appctl workflow can be automated and streamlined by executing it in scripts or pipelines. 
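One way to script the workflow is to compose each appctl invocation as an argv list and hand it to subprocess.run. The helper below is a hypothetical sketch for a pipeline, not part of appctl itself:

```python
import subprocess

def appctl_args(action, env, promote_from=None, version_tag=None):
    'compose a single appctl command (prepare/apply) as an argv list'
    cmd = ['appctl', action, env]
    if promote_from:                       # e.g. promote prod from staging
        cmd += ['--from-env', promote_from]
    if version_tag:                        # e.g. roll back to an older tag
        cmd.append('--from-tag={}'.format(version_tag))
    return cmd

def run(cmd, dry_run=True):
    'a pipeline would set dry_run=False to actually execute the command'
    if dry_run:
        print(' '.join(cmd))
    else:
        subprocess.run(cmd, check=True)
```

Building the argv lists separately keeps the release logic testable without touching any cluster.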

Application Manager for all your Kubernetes apps 

Now, with Application Manager, it’s easy to create a dev-to-production application delivery flow with a simple and declarative approach that’s recommended by Google. We are also working with our partners on the Google Cloud Marketplace to enable seamless updates of the Kubernetes applications you procure there, so you get automated updates and rollbacks of your partner applications. You can find more information here. For a detailed overview of Application Manager, please see this demo video. When you’re ready to get started, follow the steps in this tutorial.

Showing the C++ developer love with new client libraries

We use a lot of C++ at Google, and we’ve heard that many of you do as well. So whether you’re using C++ for your next amazing game, your high-frequency trading platform, your massively parallel scientific computations, or any of a variety of other applications, we want Google Cloud to be an excellent platform for you.

To that end, we are happy to say that we’re now building open-source C++ client libraries to help you access Google Cloud services. These are idiomatic C++ libraries that we intend to work well with your application and development workflow. Already, hundreds of GCP projects use generally available C++ libraries every day, including Google Cloud Storage (example code) and Cloud Bigtable (example code). We also have a beta release of our Cloud Spanner C++ library (example code), and we expect it to become generally available very soon. And we’re actively working on open-source client libraries for all the remaining cloud services. Several of these libraries are already being used by important Google services handling $IMPRESSIVE_NUMBER of data per $TIME_UNIT.

If you’re looking for more C++ client libraries, please let us know how we can help. You can contact your sales rep, or even feel free to directly contact the engineers by filing issues on our GitHub project page. We look forward to hearing from you and helping you get the most out of Google Cloud with your C++ application!

From Sheets to Apps: how to curate and send content automatically with a simple script

No matter the size of your business or the industry, sharing information is natural. It’s what makes a company run. And if you work in marketing, you understand that content can be valuable to people long after it’s been shared. 

From whitepapers to ebooks to videos, businesses put a lot of effort into making content, but this information can often get buried. As time passes, it may no longer be easy to find on websites, in old email threads, or through online search. Instead of requiring individuals to “dig” to find your information, consider making it available at their request. In this post, we’ll go over how to curate and send content using an online form and a simple script in G Suite.

Curate and send marketing materials automatically
Let’s say that you work for a gardening business and your customers are interested in receiving marketing materials on specific subjects, like sustainability, community gardening, nutrition and more. You can use Google Forms as the primary interface to gather requests and automatically curate and email relevant information on these topics. When a user checks off boxes in the form, they get an email with links to the assets they selected thanks to a bit of Apps Script, Google’s JavaScript platform in the cloud.

You can even use a Google Docs template to make the email appear professional with a special header, custom font or inserted imagery. 

How to set up the script
If you’d like to try this for yourself, follow these step-by-step instructions in our G Suite Solution Gallery, which houses many free scripts that you can use. You’ll first need to make a copy of a spreadsheet, and then access Apps Script within the spreadsheet interface to have the code set up for you. In just seven steps, you can set up a workflow to automatically email users content that they desire. You can also customize your code as you see fit.

What’s great is that the email workflow kicks off every time a user submits a form response, thanks to the onFormSubmit trigger. By activating the sheet’s trigger under Tools > Script editor, the script turns the spreadsheet into a basic app that sends content follow-ups without any need to fine-tune the code.

Use Google Sheets to analyze performance
To take it a step further, you can measure the performance of the content that’s downloaded from your “spreadsheet application” using Google Sheets’ built-in data analysis and visualization tools. Use pivot tables or cell functions within Sheets to tally and analyze the total requests for each content asset. Or, more conveniently, try the Explore feature at the bottom of your spreadsheet and have Sheets do the analysis for you (with the help of machine learning). Ask questions like “which content topics had the highest count?” to get insights.

Once you’re ready to present findings, Sheets integrates closely with Google Slides and Docs, so you can insert charts and tables within documents to share with others. Click the “update” button on each visual to have the data refreshed in real time, so you always present the latest information.

Next steps? Copy the code to build your own “spreadsheet application” and automatically share content with others. Or check out this article to learn how the code works in more detail. If you want more inspiration, check out our G Suite Solutions Gallery to see what else you can build.

You can cook turkey in a toaster oven, but you don’t have to

When I was in college and couldn’t make it home for the Thanksgiving holiday, I would get together with other students in the same situation and do the next best thing: cook a traditional Thanksgiving feast of roast turkey, mashed potatoes and gravy, stuffing, and green beans by ourselves. In a dorm room. Using the kitchen equipment we had available: a toaster oven and a popcorn popper. 

The resulting dinner wasn’t terrible, but it didn’t hold a candle to the meal my family was enjoying back home, made with the benefit of an oven, high-BTU range, food processor, standing mixer—you get the idea.

Software development teams are sometimes in a similar situation. They need to build something new and have only a few tools on hand, so they build their application using what they have. Like our dorm-room Thanksgiving dinner, this can work, but it probably won’t be a good experience and may not get the best result.

Today, with cloud computing, software development teams have a lot more resources available to them. But sometimes teams move to the cloud yet keep using the same old tools, just at a larger scale. That’s like moving from a toaster oven to a wall of large ovens, but never looking into how things like convection or microwave ovens, broilers, sous-vide cooking, instant pots, griddles, breadmakers, or woks can help you make a meal.

In short, if you’re an application developer and you’ve moved to the cloud, you should really explore all the new kinds of tools you can use to run your code, beyond configuring and managing virtual machines.

Like the number of side dishes on my parents’ holiday table, the number of Google Cloud Platform products you might use can be overwhelming. Here are a few you might want to look at first:

  • App Engine Standard Environment is a serverless platform for web applications. You bring your own application code and let the platform handle the web server itself, along with scaling and monitoring. It can even scale to zero, so if there are idle periods without traffic, you won’t be paying for compute time you aren’t using.

  • Some of the code you need might not be an application, but just a handler to deal with events as they happen, such as new data arriving or some operation being ready to start. Cloud Functions is another serverless platform that runs code written in supported languages in response to many kinds of events. Cloud Run can do similar tasks for you, with fewer restrictions on what languages and binaries you can run, but requiring a bit more management on your part.

  • Do you need regular housekeeping tasks performed, such as generating daily reports or deleting stale data? Instead of running a virtual machine just so you can trigger a cron job, you can have Cloud Scheduler do the triggering for you. If you want to get really fancy (like your aunt’s bourbon pecan pie), you can implement the task itself with another serverless offering such as Cloud Functions, and have it run at specified intervals.

  • Instead of installing and managing a relational database server, use Cloud SQL. It’s reliable and secure, and handles backups and replication for you.

  • Maybe you don’t need (or just don’t want to use) a relational database. Cloud Firestore is a serverless NoSQL database that’s easy to use and that will scale up or down as needed. It also replicates your data across multiple regions for extremely high availability.

  • After Thanksgiving dinner, you may feel like a blob. Or you may just need to store blobs of data, such as files. But you don’t want to use a local filesystem, you want replicated and backed up storage. Some teams put these blobs into general purpose databases, but that’s not a good fit and can be expensive. Cloud Storage is designed to store and retrieve blob-format data on demand, affordably and reliably.
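To make the Cloud Scheduler idea above concrete, here is a hedged sketch of the trigger setup from the gcloud CLI; the job name, cron schedule, and function URL are illustrative placeholders, not details from this post:

```shell
# Hypothetical sketch: schedule a nightly report without keeping a VM
# around just to run cron. All names and URLs below are placeholders.
create_nightly_report_job() {
  gcloud scheduler jobs create http nightly-report \
      --schedule="0 2 * * *" \
      --uri="https://us-central1-example-project.cloudfunctions.net/generateReport" \
      --http-method=POST
}
```

Here Cloud Scheduler issues an HTTP POST to a Cloud Function on the cron schedule `0 2 * * *` (02:00 daily); the report logic lives in the function, so nothing sits idle between runs.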

These products are great starting points in rethinking what kind of infrastructure your application could be built on, once you have adopted cloud computing. You might find they give you a better development experience and great outcomes relative to launching and managing more virtual machines. Now if you’ll excuse me, dinner’s ready!

Stackdriver Logging comes to Cloud Code in Visual Studio Code

A big part of troubleshooting your code is inspecting the logs. At Google Cloud, we offer Cloud Code, a plugin to popular integrated development environments (IDEs) to help you write, deploy, and debug cloud-native applications quickly and easily. Stackdriver Logging, meanwhile, is the go-to tool for all Google Cloud Platform (GCP) logs, providing advanced searching and filtering as well as detailed information about them. 

But deciphering logs can be tedious. Even worse, you need to leave your IDE to access Stackdriver Logging. Now, with the Cloud Code plugin, you can access your Stackdriver logs in the Visual Studio Code IDE directly! The new Cloud Code logs viewer helps you simplify and streamline the diagnostics process with three new features:

  • Integration with Stackdriver Logging 
  • A customizable logs viewer
  • Kubernetes-specific filtering  

View Stackdriver logs in VS Code

With the new Cloud Code logs viewer you can access your Stackdriver logs in VS Code directly. Simply open the logs viewer and Cloud Code displays all your Stackdriver logs. You can edit the filters just like you do in Stackdriver, and if you would like to see more detailed information you can easily return to Stackdriver Logging from the IDE with your filters in place.


In contrast to kubectl logs, Stackdriver logs are natively integrated with Google Cloud. Learn more about Stackdriver Logging here.

Improved log exploration 

The new logs viewer provides a structured log-viewing experience with several new features, including severity filters, colorized output, streaming capabilities, and timezone conversions. It presents an organized view of your logs and lets you filter and search them from within VS Code. Think of the logs viewer as the first stop for all of your logs, without having to leave your IDE. The logs viewer also supports kubectl logs.


Kubernetes-specific filtering 

Kubernetes logs are complex. The new logs viewer lets you filter on Kubernetes-specific elements, including namespace, deployment, pod, container, and keyword. This lets you easily see the logs for a specific pod, or all the logs from a given deployment, so you can navigate complex logs more effectively.

In addition to manual filtering, you can access the logs viewer from the Cloud Code resource browser and use the tree view to filter your logs. This way, you can locate a resource with the context around it. The tree view shows status and context information that can help you find important logs such as unhealthy or orphaned pods.


Get started 

Accessing Stackdriver Logs in VS Code with Cloud Code brings your logs closer to your code, with advanced filtering options that help you stay focused and in your IDE. To learn more, check out this guide to getting started with the Log Viewer. If you are new to Cloud Code or Stackdriver Logging, start by learning how to install Cloud Code and set up Stackdriver. If you are already using Cloud Code and Stackdriver Logging, there are no prerequisites to get started—just open the new logs viewer with Cloud Code and you’re ready to go!

Don’t just move to the cloud, modernize with Google Cloud

Our customers tell us they don’t just want to migrate their applications from point A to point B, they want to modernize their applications with cloud-native technologies and techniques, wherever those applications may be. 

Today, we’re excited to tell you about a variety of new customers that are using Anthos to transform their application portfolio, as well as new cloud migration, API management, and application development offerings:

  • New customers leveraging Anthos for a variety of on-prem, cloud, and edge use cases
  • The general availability of Migrate for Anthos
  • The general availability of Apigee hybrid
  • The general availability of Cloud Code

Accelerating app modernization with Anthos

Anthos was the first open app modernization platform to offer a unified control plane and service delivery across diverse cloud environments—managed cloud, on-premises and edge. Since it became generally available in the spring, organizations across a variety of industries and geographies have turned to Anthos to bring the benefits of cloud, containers and microservices to their applications. 

According to the findings from Forrester’s Total Economic Impact study, customers adopting Anthos have seen up to 5x return on investment based on the savings from ongoing license and support costs, and the incremental savings from operations and developer productivity. For one customer in the financial services industry, rolling out new features and updates to their core banking application used to take at least a quarter. Now with Anthos, they were able to eliminate months-long development and release cycles and roll out on a weekly basis. That’s a 13x improvement in time to market.

This week, several new European Anthos customers will take the stage at Next UK to talk about how they’re using Anthos to transform their IT operations. 

Kaeser Kompressoren SE of Coburg, Germany, is a provider of compressed air products and services. The company needed a consistent platform to deploy and manage existing on-prem SAP workloads, like SAP Data Hub, and also wanted to be able to tap into other services running in Google Cloud to get more value from those environments. 

“Application modernization is enabling business innovation for Kaeser,” said Falko Lameter, CIO. “To gain better insights from data, we knew we needed to incorporate advanced machine learning and data analytics in all our applications. We chose Google Cloud’s Anthos because it offered the flexibility to incrementally modernize our legacy application on-premises without business disruption, while allowing us to run other applications on Anthos in Google Cloud and take advantage of its managed data analytics and ML/AI services.”

Then there’s DenizBank. Based in Turkey, DenizBank provides a variety of commercial banking services, and established the first Digital Banking Department in Turkey in 2012. DenizBank turned to Anthos for an open application modernization platform to help it develop its next-generation mobile banking applications.

“We operate in 11 different countries and have to comply with various regulatory requirements like data locality and sovereignty, which mandates some or all applications to reside on premises in certain countries, while the rest of the apps can move to the cloud in other countries,” said Dilek Duman, COO of DenizBank. “We chose Google Cloud’s Anthos for its flexibility to modernize our existing application investments with ease, and to deliver AI/ML powered software faster while improving operational security and governance. Anthos gives us the ability to have a unified management view of our hybrid deployments, giving us a consistent platform to run our banking workloads across environments.” 

Anthos is even starting to be deployed to edge locations, where, thanks to its 100% software-based design, it can run on any number of hardware form factors. We’re in advanced discussions with customers in telecommunications, retail, manufacturing and entertainment about using Anthos for edge use cases, as well as with global hardware OEMs.

Move and modernize with Migrate for Anthos

In addition to leveraging cloud technology for their on-premises environments with Anthos, customers also want to simultaneously migrate to the cloud and modernize with containers. That’s why we’re happy to announce the general availability of Migrate for Anthos, which provides a fast, low-friction path to convert physical servers or virtual machines from a variety of sources (on-prem, Amazon AWS, Microsoft Azure, or Google Compute Engine) directly into containers in Anthos GKE.

Migrate for Anthos makes it easy to modernize your applications without a lot of manual effort or specialized training. After upgrading your on-prem systems to containers with Migrate for Anthos, you’ll benefit from a reduction in OS-level management and maintenance, more efficient resource utilization, and easy integration with Google Cloud services for data analytics, AI and ML, and more. 

DevFactory aims to offload repetitive tasks in software development so that dev teams can focus on coding and productivity. As advocates for optimization through containers, they found Migrate for Anthos a key way to help deliver on their goals:  

“We usually see less than 1% resource utilization in data centers. Migrate for Anthos is a remarkable tool that allows us to migrate data center workloads to the cloud in a few simple steps,” said Rahul Subramaniam, CEO, DevFactory. “By automatically converting servers and virtual machines into containers with Migrate for Anthos, we get better resource utilization and dramatically reduced costs along with managed infrastructure in the end state, which makes this a very exciting and much-needed solution.” 

Migrate for Anthos is available at no additional cost, and can be used with or without an Anthos subscription.

API-first, everywhere, with Apigee hybrid 

To drive modernization and innovation, enterprises are increasingly adopting API-first approaches to connecting services across hybrid and multi-cloud environments. To address the need for hybrid API management, we’re announcing the general availability of Apigee hybrid, giving you the flexibility to deploy your API runtimes in a hybrid environment, while using cloud-based Apigee capabilities such as developer portals, API monitoring, and analytics. Apigee hybrid can be deployed as a workload on Anthos, giving you the benefits of an integrated Google Cloud stack, with Anthos’ automation and security benefits. 

Gap Inc. uses Apigee to publish, secure, and analyze APIs and easily onboard the development teams working with those APIs. Apigee hybrid will help Gap Inc. overcome the traditional tradeoffs between on-premises and cloud, providing the best of both worlds.   

“With Apigee hybrid, we can have an easy to manage, localized runtime for scenarios where latency or data sensitivity require it. At the same time, we can continue to enjoy all the benefits of Apigee such as Apigee’s developer portal and its rich API-lifecycle management capabilities,” said Patrick McMichael, Enterprise Architect at Gap Inc. 

Simplifying the developer experience

Google Cloud application development tools are designed to help you simplify creating apps for containers and Kubernetes, incorporate security and compliance into your pipelines, and scale up or down depending on demand, so you only pay for what you use. 

With these goals in mind, last week, we announced the general availability of Cloud Run and Cloud Run for Anthos. Cloud Run is a managed compute platform on Google Cloud that lets you run serverless containers in a fully managed environment or on Anthos. With Cloud Run fully managed, you can easily deploy and run stateless containers written in any language, and enjoy serverless benefits such as automatic scale-up and scale-down and pay-for-use, without having to manage the underlying infrastructure. 
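As a minimal sketch of that deployment flow (the service name, image path, and region below are illustrative placeholders, not details from this post), deploying a stateless container to fully managed Cloud Run is a single gcloud call:

```shell
# Hypothetical sketch: deploy a stateless container to fully managed
# Cloud Run. Service name, image, and region are placeholders.
deploy_to_cloud_run() {
  gcloud run deploy my-service \
      --image=gcr.io/example-project/my-image \
      --platform=managed \
      --region=us-central1 \
      --allow-unauthenticated
}
```

Pointing roughly the same command at an Anthos GKE cluster (via `--platform=gke`) is the Cloud Run for Anthos path; either way, the container contract stays the same.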

Cloud Run for Anthos, meanwhile, brings that same serverless developer experience to Anthos-managed clusters, giving developers access to a modern, serverless compute platform while their organization modernizes its on-prem environment with Kubernetes. 

Easier Kubernetes development with Cloud Code

Today, we’re excited to announce the general availability of another important member of the Google Cloud application development stack: Cloud Code, which lets developers write, debug and deploy code to Google Cloud or any Kubernetes cluster through extensions to popular Integrated Developer Environments (IDEs) such as Visual Studio Code and IntelliJ. 

Developers are most productive while working in their favorite IDE. By embracing developers’ existing workflow and tools, Cloud Code makes working with Kubernetes feel like you are working with a local application, while preserving the investment you’ve made to configure your tools to your own specific needs. Cloud Code dramatically simplifies the creation and maintenance of Kubernetes applications.

In addition, Cloud Code speeds up development against Kubernetes by extending the edit-debug-review “inner loop” to the cloud. You get rapid feedback on your changes, ensuring that they’re of high quality. And when it comes to moving code to the production environment, Cloud Code supports popular continuous integration and delivery (CI/CD) tools like Cloud Build. 

Finally, with Cloud Code, diagnosing issues does not require a deep understanding of Kubernetes, thanks to connected debuggers and cluster-wide logging that help you address issues all from the context of your favorite tool. 

Toward modern, efficient applications

Application modernization means a lot of things to a lot of people. Depending on your environment, it can mean updating VMs to containers and Kubernetes, it can mean moving them to the cloud, or it can mean distributing them to edge locations and unifying workloads with consistent API and service management. For others, application modernization means using cloud-native tools and concepts like serverless and CI/CD. Whatever your definition, we can help you realize your business and modernization goals, achieving greater agility while improving overall governance.

Kubernetes development, simplified—Skaffold is now GA

Back in 2017, we noticed that developers creating Kubernetes-native applications spent a long time building and managing container images across registries, manually updating their Kubernetes manifests, and redeploying their applications every time they made even the smallest code changes. We set out to create a tool to automate these tasks, helping them focus on writing and maintaining code rather than managing the repetitive steps required during the edit-debug-deploy ‘inner loop’. From this observation, Skaffold was born.

Today, we’re announcing our first generally available release of Skaffold. Skaffold simplifies common operational tasks that you perform when doing Kubernetes development, letting you focus on your code changes and see them rapidly reflected on your cluster. It’s the underlying engine that drives Cloud Code, and a powerful tool in and of itself for improving developer productivity.

Skaffold’s central command, skaffold dev, watches local source code for changes, and rebuilds and redeploys applications to your cluster in real time. But Skaffold has grown to be much more than just a build and deployment tool—instead, it’s become a tool to increase developer velocity and productivity.

Feedback from Skaffold users bears this out. “Our customers love [Kubernetes], but consistently gave us feedback that developing on Kubernetes was cumbersome. Skaffold hit the mark in addressing this problem,” says Warren Strange, Engineering Director at ForgeRock. “Changes to a Docker image or a configuration that previously took several minutes to deploy now take seconds. Skaffold’s plugin architecture gives us the ability to deploy to Helm or Kustomize and use various Docker build plugins such as Kaniko. Skaffold replaced our bespoke collection of utilities and scripts with a streamlined tool that is easy to use.”

A Kubernetes developer’s best friend

Skaffold is a command line tool that saves developers time by automating most of the development workflow from source to deployment in an extensible way. It natively supports the most common image-building and application deployment strategies, making it compatible with a wide variety of both new and pre-existing projects. Skaffold also operates completely on the client-side, with no required components on your cluster, making it super lightweight and high-performance.

Skaffold’s inner development loop

By taking care of the operational tasks of iterative development, Skaffold removes a large burden from application developers and substantially improves productivity.

Over the last two years, there have been more than 5,000 commits from nearly 150 contributors to the Skaffold project, resulting in 40 releases, and we’re confident that Skaffold’s core functionality is mature. To commemorate this, let’s take a closer look at some of Skaffold’s core features.

Fast iterative development
When it comes to development, skaffold dev is your personal ops assistant: it knows about the source files that comprise your application, watches them while you work, and rebuilds and redeploys only what’s necessary. Skaffold comes with highly optimized workflows for local and remote deployment, giving you the flexibility to develop against local Kubernetes clusters like Minikube or Kind, as well as any remote Kubernetes cluster.

“Skaffold is an amazing tool that simplified development and delivery for us,” says Martin Höfling, Principal Consultant at TNG Technology Consulting GmbH. “Skaffold hit our sweet spot by covering two dimensions: First, the entire development cycle from local development, integration testing to delivery. Second, Skaffold enabled us to develop independently of the platform on Linux, OSX, and Windows, with no platform-specific logic required.”

Skaffold’s dev loop also automates typical developer tasks. It automatically tails logs from your deployed workloads, and port-forwards the remote application to your machine, so you can iterate directly against your service endpoints. Using Skaffold’s built-in utilities, you can do true cloud-native development, all while using a lightweight, client-side tool.
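In practice, that whole loop is one command. A sketch, assuming a local Minikube cluster (the context name is an assumption) and current Skaffold flag spellings:

```shell
# Hypothetical sketch of the iterative loop: watch sources, rebuild and
# redeploy on change, tail workload logs, and forward service ports locally.
start_dev_loop() {
  skaffold dev --kube-context=minikube --port-forward
}
```

Interrupting the command tears the session down, so nothing lingers on the cluster between iterations.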

Production-ready CI/CD pipelines
Skaffold can be used as a building block for your production-level CI/CD pipelines. Taylor Barrella, Software Engineer at Quora, says that “Skaffold stood out as a tool we’d want for both development and deployment. It gives us a common entry point across applications that we can also reuse for CI/CD. Right now, all of our CI/CD pipelines for Kubernetes applications use Skaffold when building and deploying.”

Skaffold can be used to build images and deploy applications safely to production, reusing most of the same tooling that you use to run your applications locally. skaffold run runs an entire pipeline from build to deploy in one simple command, and can be decomposed into skaffold build and skaffold deploy for more fine-tuned control over the process. skaffold render can be used to build your application images, and output templated Kubernetes manifests instead of actually deploying to your cluster, making it easy to integrate with GitOps workflows.
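A hedged sketch of that decomposition in a CI script; the artifact and manifest file names are assumptions, and the flag spellings follow current Skaffold releases:

```shell
# Hypothetical CI sketch: build once, render manifests for GitOps review,
# and deploy the same artifacts, instead of the all-in-one `skaffold run`.
ci_pipeline() {
  skaffold build --file-output=artifacts.json
  skaffold render --build-artifacts=artifacts.json --output=manifests.yaml
  skaffold deploy --build-artifacts=artifacts.json
}
```

Because the build step records exactly which image tags it produced, the render and deploy steps operate on the same artifacts, keeping the pipeline reproducible.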

Profiles let you use the same Skaffold configuration across multiple environments, express the differences via a Skaffold profile for each environment, and activate a specific profile using the current Kubernetes context. This means you can push images and deploy applications to completely different environments without ever having to modify the Skaffold configuration. This makes it easy for all members of a team to share the same Skaffold project configuration, while still being able to develop against their own personal development environments, and even use that same configuration to do deployments to staging and production environments.
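For example (the profile names here are assumptions, not from the post), switching environments becomes a one-flag change:

```shell
# Hypothetical sketch: one skaffold.yaml, different target environments.
release_to() {
  profile="$1"                # e.g. "staging" or "prod" (assumed profile names)
  skaffold run -p "$profile"  # build, tag, and deploy with that profile's overrides
}
# A profile can also auto-activate from the current Kubernetes context via an
# `activation:` stanza in skaffold.yaml, with no flag needed at all.
```

Usage would be `release_to staging` from a laptop and `release_to prod` from the release pipeline, with the shared configuration untouched in both cases.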

On-cluster application debugging
Skaffold can help with a whole lot more than application deployment, not least of which is debugging. Skaffold natively supports direct debugging of Go, Node.js, Java, and Python code running on your cluster!

The skaffold debug command runs your application with a continuous build and deploy loop, and forwards any required debugging ports to your local machine. This allows Skaffold to automatically attach a debugger to your running application. Skaffold also takes care of any configuration changes dynamically, giving you a simple yet powerful tool for developing Kubernetes-native applications. skaffold debug powers the debugging features in Cloud Code for IntelliJ and Cloud Code for Visual Studio Code.
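A minimal sketch of such a session, using standard Skaffold flags:

```shell
# Hypothetical sketch: run the app with debug instrumentation and forward
# the debugger ports (plus service ports) back to the local machine.
start_debug_session() {
  skaffold debug --port-forward
}
```

From there, an IDE debugger (such as the ones in Cloud Code) attaches to the forwarded port as if the process were running locally.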


Cloud Code: Kubernetes development in the IDE

Cloud Code comes with tools to help you write, deploy, and debug cloud-native applications quickly and easily. It provides extensions to IDEs such as Visual Studio Code and IntelliJ to let you rapidly iterate, debug, and deploy code to Kubernetes. If that sounds similar to Skaffold, that’s because it is—Skaffold powers many of the core features that make Cloud Code so great! Things like local debugging of applications deployed to Kubernetes and continuous deployment are baked right into the Cloud Code extensions with the help of Skaffold.

To get the best IDE experience with Skaffold, try Cloud Code for Visual Studio Code or IntelliJ IDEA!

What’s next?

Our goal with Skaffold and Cloud Code is to offer industry-leading tools for Kubernetes development, and since Skaffold’s inception, we’ve engaged the broader community to ensure that Skaffold evolves in line with what users want. There are some amazing ideas from external contributors that we’d love to see come to fruition, and with the Kubernetes development ecosystem still in a state of flux, we’ll prioritize features that will have the most impact on Skaffold’s usefulness and usability. We’re also working closely with the Cloud Code team to surface Skaffold’s capabilities inside your IDE.

With the move to general availability, there’s never been a better time to start using (or continue using) Skaffold, trusting that it will provide an excellent, production-ready development experience that you can rely on.

For more detailed information and docs, check out the Skaffold webpage, and as always, you can reach out to us on GitHub and Slack.

Special thanks to all of our contributors (you know who you are) who helped make Skaffold the awesome tool it is today!