Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach the hundreds of millions of people who are looking to buy products like theirs each day.

To upload their products, merchants can make use of feeds, that is, files with a list of products in a specific format. These can be shared with Merchant Center in different ways: using Google Sheets, SFTP or FTP shares, Google Cloud Storage, or manually through the user interface. These methods work well for most cases, but if a merchant’s product list grows over time, they might reach the usage limits of feeds. Depending on the case, quota extensions may be granted, but if the list keeps growing, it can reach a point where feeds no longer support that scale, and the Content API for Shopping becomes the recommended way forward.

The main issue is that if a merchant is advised to stop using feeds and switch to the Content API because of scale, their product catalog is already massive, and calling the Content API directly will produce usage and quota errors, as the QPS and products-per-call limits will be exceeded.

For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.

Centimani is a configurable massive file processor able to split text files into chunks, process them following a strategy pattern, and store the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and takes care of exponential backoff to ensure all requests have enough retries to overcome potential temporary issues or errors. Centimani comes with two operators, the Google Ads Offline Conversions Uploader and the Merchant Center Products Uploader, but it can easily be extended to other uses.
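
The retry behavior described above can be sketched in a few lines. This is a minimal illustration of exponential backoff with jitter, not Centimani's actual code; the function name and retry parameters are hypothetical:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn with exponentially growing waits plus jitter.

    request_fn is any callable that raises on a transient failure
    (e.g. an HTTP 429/5xx from the Content API).
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # Wait base, 2*base, 4*base, ... plus a little random jitter
            # so many parallel workers do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

A helper like this wraps every API call, so temporary quota errors are absorbed instead of failing the whole upload.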

Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud project to be used (including the names of its elements), the credentials to access the Merchant Center accounts, and how the load will be distributed (e.g., parallel executions, number of products per call). The deployment is then done automatically using a deployment script provided by the tool.

After the tool is deployed, a Cloud Function monitors the input bucket in Cloud Storage, and every file uploaded there is processed. The tool uses the name of the file to select the operator to run (“MC” indicates the Merchant Center Products Uploader) and the particular configuration to use (multiple configurations allow connecting to Merchant Center accounts with different access credentials).
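
The filename-based routing might be sketched as follows. The exact naming scheme here is an assumption for illustration; see the repository for the real format:

```python
def parse_input_filename(filename):
    """Split an input filename like 'MC_store1_products.csv' into the
    operator prefix ('MC' selects the Merchant Center Products Uploader)
    and the configuration name used to pick the right credentials."""
    parts = filename.split("_")
    if len(parts) < 2:
        raise ValueError("expected OPERATOR_CONFIG_... in filename")
    operator, config = parts[0], parts[1]
    return operator, config
```

With a convention like this, adding a new operator or a new set of credentials is just a matter of uploading files under a different prefix.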

Whenever a file is uploaded, it is sliced into parts if it contains more products than are allowed per call; the slices are stored in the output bucket in Cloud Storage, and Cloud Tasks starts launching the API calls until all slices are processed. Any slice with errors is stored in a folder called “slices_failed” to help troubleshoot issues found during the process. Also, all the information about the executions is stored temporarily in Datastore and then moved to BigQuery, where it can be used to monitor the whole process from a centralized place.
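
The slicing step amounts to chunking the product list by the per-call limit, which can be sketched as (a simplified stand-in for Centimani's file-based slicing):

```python
def slice_products(products, max_per_call):
    """Split a product list into chunks no larger than the per-call limit,
    so each chunk can become one Content API request."""
    return [products[i:i + max_per_call]
            for i in range(0, len(products), max_per_call)]
```

Each resulting chunk maps to one file in the output bucket and one Cloud Tasks-driven API call.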

Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.

For more information, you can visit the Centimani repository on GitHub.

Machine Learning Communities: Q3 ‘21 highlights and achievements

Posted by HyeJung Lee, DevRel Community Manager and Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and achievements of Google’s vast machine learning communities, by region, for the last quarter. Activities of experts (GDEs, professional individuals), communities (TFUGs, TensorFlow User Groups), students (GDSCs, student clubs), and developer groups (GDGs) are presented here.

Key highlights

Image shows a banner for 30 days of ML with Kaggle

30 Days of ML with Kaggle is designed to help beginners study ML using Kaggle Learn courses, together with a competition created specifically for the participants of this program. We collaborated with the Kaggle team so that more than 30 ML GDEs and TFUG organizers took part as volunteers, serving as online mentors and speakers for this initiative.

A total of 16 GDEs/GDSCs/TFUGs ran community-organized programs by following the shared community organizer guide. Houston TensorFlow & Applied AI/ML placed 6th out of 7,573 teams, the only Americans in the Top 10 of the competition. TFUG Santiago (Chile) organizers participated as well, reaching number 17 on the public leaderboard.

Asia Pacific

Image shows Google Cloud and Coca-Cola logos

GDE Minori MATSUDA (Japan)’s project for Coca-Cola Bottlers Japan was published on the Google Cloud Japan Blog, covering how an ML pipeline was built with Vertex AI and deployed into a real business within two months. It is also available on the GCP blog in English.

GDE Chansung Park (Korea) and Sayak Paul (India) published several articles on the GCP Blog. First, “Image search with natural language queries” explains how to build a simple image parser from natural language inputs using OpenAI’s CLIP model. Second, “Model training as a CI/CD system” (Part I, Part II) covers why a resilient CI/CD system for your ML application is crucial for success. Last, “Dual deployments on Vertex AI” walks through an end-to-end workflow using Vertex AI, TFX, and Kubeflow.

In China, GDE Junpeng Ye used TensorFlow 2.x to significantly reduce the codebase (15k → 2k lines) of WeChat Finder, a TikTok alternative inside WeChat. GDE Dan Lee wrote a series of articles, Understanding TensorFlow: Part 1, Part 2, Part 3-1, Part 3-2, and Part 4.

GDE Ngoc Ba from Vietnam contributed the AI Papers Reading and Coding series, implementing ML/DL papers in TensorFlow and creating slides and videos every two weeks. (videos: ViT Transformer, MLP-Mixer, and Transformer)

Beginner-friendly codelabs (Get started with audio classification, Go further with audio classification) by GDSC Sookmyung (Korea) teach how to customize pre-trained audio classification models to your needs and deploy them to your apps using the TFLite Model Maker.

Cover image for Mat Kelcey's talk on JAX at the PyConAU event

GDE Matthew Kelcey from Australia gave a talk on JAX at the PyConAU event. Mat gave an overview of the fundamentals of JAX and an intro to some of the libraries being developed on top of it.

Image shows overview for the released PerceiverIO code

In Singapore, TFUG Singapore dived back into some of the latest papers, techniques, and fields of research that are delivering state-of-the-art results. GDE Martin Andrews included a brief code walkthrough of the released PerceiverIO code, highlighting what JAX looks like, how Haiku relates to Sonnet, and also the data loading, which is done via tf.data.

Machine Learning Experimentation with TensorBoard book cover

GDE Imran us Salam Mian from Pakistan published a book, “Machine Learning Experimentation with TensorBoard”.


GDE Aakash Nain has published Parts 4 through 8 of the TF-JAX tutorial series. Part 4 gives a brief introduction to JAX (what and why) and DeviceArray. Part 5 covers why pure functions are good and why JAX prefers them. Part 6 focuses on pseudo-random number generation (PRNG) in NumPy and JAX. Part 7 focuses on Just-In-Time compilation (JIT) in JAX. And Part 8 covers vmap and pmap.

Image of Bhavesh's Google Cloud certificate

GDE Bhavesh Bhatt published a video about his experience on the Google Cloud Professional Data Engineer certification exam.

Image shows phase 1 and 2 of the Climate Change project using Vertex AI

ML GDE Sayak Paul and Siddha Ganju (NVIDIA) built a climate change project using Vertex AI. They published a paper (Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning) and open-sourced the project as part of NASA Impact’s ETCI competition. The work was presented at four NeurIPS workshops: AI for Science: Mind the Gaps; Tackling Climate Change with Machine Learning; Women in ML; and Machine Learning and the Physical Sciences. They finished as first runners-up (see Test Phase 2).

Image shows example of handwriting recognition tutorial

A tutorial on handwriting recognition was contributed to the Keras examples by GDE Sayak Paul and Aakash Kumar Nain.

Graph regularization for image classification using synthesized graphs by GDE Sayak Paul was added to the official examples of Neural Structured Learning in TensorFlow.

GDE Sayak Paul and Soumik Rakshit shared a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv.

North America

Banner image shows students participating in Google Summer of Code

During GSoC (Google Summer of Code), some GDEs mentored or co-mentored students. GDE Margaret Maynard-Reid (USA) mentored students working on TF-GAN, Model Garden, TF Hub, and TFLite. You can read about her experience and tips on the GDE Blog. You can also find GDE Sayak Paul (India) and Googler Morgan Roff’s GSoC experience (co-)mentoring for TensorFlow and TF Hub.

A beginner-friendly workshop on TensorFlow with ML GDE Henry Ruiz (USA) was hosted by GDSC Texas A&M University for its students.

Screenshot from Youtube video on how transformers work

In the YouTube video Self-Attention Explained: How do Transformers work?, GDE Tanmay Bakshi from Canada explains how to build a Transformer encoder-based neural network that classifies code into eight different programming languages, using TPUs and Colab with Keras.


Europe

GDG/GDSC Turkey hosted an AI Summer Camp in cooperation with Global AI Hub. 7,100 participants learned about ML, TensorFlow, CV, and NLP.

Screenshot from slide presentation titled Why Jax?

In the TechTalk Speech Processing with Deep Learning and JAX/Trax, GDE Sergii Khomenko (Germany) and M. Yusuf Sarıgöz (Turkey) reviewed technologies such as JAX, TensorFlow, and Trax that can help boost research in speech processing.

South/Central America

Image shows Custom object detection in the browser using TensorFlow.js

On the other side of the world, in Brazil, GDE Hugo Zanini Gomes wrote an article, “Custom object detection in the browser using TensorFlow.js”, using the TensorFlow 2 Object Detection API and Colab, which was posted on the TensorFlow blog.

Screenshot from a talk about Real-time semantic segmentation in the browser - Made with TensorFlow.js

And Hugo gave a talk, Real-time semantic segmentation in the browser – Made with TensorFlow.js, covering how to use SavedModels efficiently in JavaScript directly, enabling you to get the reach and scale of the web for your new research.

In her talk Data Pipelines for ML, GDE Nathaly Alarcon Torrico from Bolivia explained all the phases involved in the creation of ML and data science products, from data collection through transformation and storage to the creation of ML models.

Screenshot from TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video)

The TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video) was hosted by TFUG Santiago (Chile). In this talk, the speaker gave a tour of the steps to follow to build a model capable of reaching the top 1% of the Kaggle leaderboard. The focus was on the libraries and “tricks” used to test many ideas quickly, both in implementation and in execution, and how to use them in production environments.


Middle East and North Africa

Screenshot from workshop about Recurrent Neural Networks

GDE Ruqiya Bin Safi (Saudi Arabia) held a workshop on Recurrent Neural Networks: Part 1 (GitHub / Slides) at GDG MENA. And Ruqiya gave a talk on Recurrent Neural Networks: Part 2 at GDG Cloud Saudi (Saudi Arabia).

AI Training with Kaggle was run by GDSC Islamic University of Gaza in Palestine. It is a two-month training covering data processing, image processing, and NLP with Kaggle.

Sub-Saharan Africa

TFUG Ibadan held two TensorFlow events: Basic Sentiment Analysis with TensorFlow and Introduction to Recommender Systems with TensorFlow.

Image of Yannick Serge Obam Akou's TensorFlow Certificate

An article in French by ML GDE Yannick Serge Obam Akou (Cameroon) covered tips to study for, prepare for, and pass the TensorFlow Developer exam.

Extend Google Apps Script with your API library to empower users

Posted by Keith Einstein, Product Manager

Banner image that shows the Cloud Task logo

Google is proud to announce the availability of the DocuSign API library for Google Apps Script. This newly created library gives all Apps Script users access to the more than 400 endpoints DocuSign has to offer so they can build digital signatures into their custom solutions and workflows within Google Workspace.

The Google Workspace Ecosystem

Last week at Google Cloud Next ‘21, in the session “How Miro, DocuSign, Adobe and Atlassian are helping organizations centralize their work”, we showcased a few partner integrations called add-ons, found on the Google Workspace Marketplace. The Google Workspace Marketplace helps developers connect with the more than 3 billion people who use Google Workspace, with a stunning 4.8 billion apps installed to date. That incredible demand is fueling innovation in the ecosystem, and we now have more than 5,300 public apps available in the Google Workspace Marketplace, plus thousands more private apps that customers have built for themselves. As a developer, one of the benefits of an add-on is that it surfaces your application in a user-friendly manner that helps people reclaim their time and work more efficiently, and it adds another touchpoint for them to engage with your product. While building an add-on enables users to frictionlessly engage with your product from within Google Workspace, to truly unlock limitless potential, innovative companies like DocuSign are beginning to empower users to build the unique solutions they need by providing them with a Google Apps Script library.

Apps Script enables Google Workspace customization

Many users are currently unlocking the power of Google Apps Script by creating the solutions and automations they need to help them reclaim precious time. Publishing a Google Apps Script Library is another great opportunity to bring a product into Google Workspace and gain access to those creators. It gives your users more choices in how they integrate your product into Google Workspace, which in turn empowers them with the flexibility to solve more business challenges with your product’s unique value.

Apps Script libraries can make the development and maintenance of a script more convenient by enabling users to take advantage of pre-built functionality and focus on the aspects that unlock unique value. This allows innovative companies to make available a variety of functionality that Apps Script users can use to create custom solutions and workflows with the features not found in an off-the-shelf app integration like a Google Workspace Add-on or Google Chat application.

The DocuSign API Library for Apps Script

One of the partners we showcased at Google Cloud Next ‘21 was DocuSign. The DocuSign eSignature for Google Workspace add-on has been installed almost two million times. The add-on lets you collect signatures or sign agreements from inside Gmail, Google Drive, or Google Docs. While collecting signatures and signing agreements are some of the most common ways to use DocuSign eSignature inside Google Workspace, DocuSign’s eSignature product offers many more features; in fact, its eSignature API has over 400 endpoints. Going beyond the top features normally found in an add-on and into the rest of DocuSign eSignature’s functionality is where an Apps Script library can be leveraged.

And that’s exactly what we’re partnering to do. Recently, DocuSign’s Lead API Product Manager, Jeremy Glassenberg (a Google Developer Expert for Google Workspace) joined us on the Totally Unscripted podcast to talk about DocuSign’s path to creating an Apps Script Library. At the DocuSign Developer Conference, on October 27th, Jeremy will be teaming up with Christian Schalk from our Google Cloud Developer Relations team to launch the DocuSign Apps Script Library and showcase how it can be used.

With the DocuSign Apps Script Library, users around the world who lean on Apps Script to build their workplace automations can create customized DocuSign eSignature processes. Leveraging the Apps Script Library in addition to the DocuSign add-on empowers companies who use both DocuSign and Google Workspace to have a more seamless workflow, increasing efficiency and productivity. The add-on allows customers to integrate the solution instantly into their Google apps, and solve for the most common use cases. The Apps Script Library allows users to go deep and solve for the specialized use cases where a single team (or knowledge worker) may need to tap into a less commonly used feature to create a unique solution.

See us at the DocuSign Developer Conference

The DocuSign Apps Script Library is now available in beta, and if you’d like to know more about it, drop a message to [email protected]. And be sure to register for the session “Building a DocuSign Apps Script Library with Google Cloud”, Oct 27th @ 10:00 AM. For updates and news like this about the Google Workspace platform, please subscribe to our developer newsletter.

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Tasks logo


The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and up to Go 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app’s configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following “diff”s for the main application file:
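
As a rough illustration of the switch described in update #2, a push task that was previously enqueued with the bundled taskqueue API is instead handed to Cloud Tasks. The sketch below only builds the task body; in the real app, it would be passed to `CloudTasksClient.create_task()` from the google-cloud-tasks client library. The queue path, `/trim` handler URL, and payload shape are illustrative placeholders, not the sample's exact code:

```python
import json

def build_trim_task(oldest_timestamp):
    """Build a Cloud Tasks task targeting an App Engine handler.

    The dict mirrors the Task message expected by
    CloudTasksClient.create_task(parent=QUEUE_PATH, task=...).
    '/trim' and the JSON payload are placeholders for illustration.
    """
    return {
        "app_engine_http_request": {
            "http_method": "POST",
            "relative_uri": "/trim",
            "body": json.dumps({"oldest": oldest_timestamp}).encode("utf-8"),
        }
    }
```

Compared to `taskqueue.add()`, the notable difference is that the queue, target, and payload are all explicit in the request rather than implied by the App Engine runtime.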

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push tasks migration guide for more information. Arriving at a fully-functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We’ve accomplished the most important step here, that is, getting off of the original App Engine legacy bundled services/APIs. The Module 9 migration from Python 2 to 3 and Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library are all fairly optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

Next ‘21: Must-see Google Workspace sessions for developers and creators

Posted by Charles Maxson, Developer Advocate

Banner image that shows the Google Workspace logo

Google Workspace offers a broad set of tools and capabilities that empowers creators and developers of all experience levels to build a wide range of custom productivity solutions. For professional developers looking to integrate their own app experiences into Workspace, the platform enables deep integrations with frameworks like Google Workspace Add-ons and Chat apps, as well as deep access to the full suite of Google Workspace apps via numerous REST APIs. And for citizen developers on the business side or developers looking to build solutions quickly and easily, tools like Apps Script and AppSheet make it simple to customize, extend, and automate workflows directly within Google Workspace.

At Next ‘21 we have 7 sessions you won’t want to miss that cover the breadth of the platform. From no-code and low-code solutions to content for developers looking to publish in the Google Workspace Marketplace and reach the more than 3 billion users in Workspace, Next ‘21 has something for everyone.

1. See what’s new in Google Workspace

Matthew Izatt, Product Manager, Google Cloud

Erika Trautman, Director Product Management, Google Cloud

Join us for an interactive demo and see the latest Google Workspace innovations in action. As the needs of our users shifted over the past year, we’ve delivered entirely new experiences to help people connect, create, and collaborate across Gmail, Drive, Meet, Docs, and the rest of the apps. You’ll see how Google Workspace meets the needs of different types of users with thoughtfully designed experiences that are easy to use and easy to love. Then, we’ll go under the hood to show you the range of ways to build powerful integrations and apps for Google Workspace using tools that span from no-code to professional grade.

2. Developer Platform State of the Union: Google Workspace

Charles Maxson, Developer Advocate, Google Cloud

Steven Bazyl, Developer Relations Engineer, Google Cloud

Google Workspace offers a comprehensive developer platform to support every developer who’s on a journey to customize and enhance Google Workspace. In this session, take a deeper dive into the new tools, technologies, and advances across the Google Workspace developer platform that can help you create even better integrations, extensions, and workflows. We’ll focus on updates for Google Apps Script, Google Workspace Add-ons, Chat apps, APIs, AppSheet, and Google Workspace Marketplace.

3. How Miro, Docusign, Adobe and Atlassian are helping organizations centralize their work

Matt Izatt, Group Product Manager, Google Cloud

David Grabner, Product Lead, Apps & Integrations, Miro

Integrations make Google Workspace the hub for your work and give users more value by bringing all their tools into one space. Our ecosystem allows users to connect industry-leading software and custom-built applications with Google Workspace to centralize important information from the tools you use every day. And integrations are not limited to Gmail, Docs, or your favorite Google apps – they’re also available for Chat. With Chat apps, users can seamlessly blend conversations with automation and timely information to accelerate teamwork directly from within a core communication tool.

In this session, we’ll briefly review the Google Workspace platform and how Miro and Atlassian are helping organizations centralize their work and keep important information a mouse click or a tap away.

4. Learn how customers are empowering their workforce to customize Google Workspace

Charles Maxson, Developer Advocate, Google Cloud

Aspi Havewala, Global Head of Digital Workplace, Verizon

Organizations small and large are seeing their needs grow increasingly diverse as they pursue digital transformation projects. Many of our customers are empowering their workforces by allowing them to build advanced workflows and customizations using Google Apps Script. It’s a powerful low-code development platform included with Google Workspace that makes it fast and easy to build custom business solutions for your favorite Google Workspace applications – from macro automations to custom functions and menus. In this session, we’ll do a quick overview of the Apps Script platform and hear from customers who are using it to enable their organizations.

5. Transform your business operations with no-code apps

Arthur Rallu, Product Manager, Google Cloud

Paula Bell, Business Process Analyst, Kentucky Power Company, American Electric Power

Building business apps has become something anyone can do. Don’t believe us? Join this session to learn how Paula Bell, who describes herself as a person with “zero coding experience”, built a series of mission-critical apps on AppSheet that revolutionized how Kentucky Power, a branch of American Electric Power, runs its field operations.

6. How AppSheet helps you work smarter with Google Workspace

Mike Procopio, Senior Staff Software Engineer, Google Cloud

Millions of Google Workspace users are looking for new ways to reclaim time and work smarter within Google Workspace. AppSheet, Google Workspace’s first-party extensibility platform, will be announcing several new features that will allow people to automate and customize their work within their Google Workspace environment – all without having to write a line of code.

Join this session to learn how you can use these new features to work smarter in Google Workspace.

7. How to govern an innovative workforce and reduce Shadow IT

Kamila Klimek, Product Manager, Google Cloud

Jacinto Pelayo, Chief Executive Officer, Evenbytes

For organizations focused on growth, finding new ways that employees can use technology to work smarter and innovate is key to their success. But enabling employees to create their own solutions comes at a cost that IT is keenly aware of. The threats of external hacks, data leaks, and shadow IT make it difficult for IT to find a solution that gives them the control and visibility they need, while still empowering their workforce. AppSheet was built with these challenges in mind.

Join our session to learn how you can use AppSheet to effectively govern your workforce and reduce security threats, all while giving employees the tools to make robust, enterprise-grade applications.

To learn more about these sessions and to register, visit the Next ‘21 website and also check out my playlist of Next ‘21 content.

How to use App Engine push queues in Flask apps

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Tasks logo


Since its original launch in 2008, many of the core Google App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products: for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues with Cloud Tasks. Today’s Module 7 episode of Serverless Migration Station reviews how App Engine push tasks work, by adding this feature to an existing App Engine ndb Flask app.

App Engine push queues in Flask apps video

That app is where we left off at the end of Module 1, migrating its web framework from App Engine webapp2 to Flask. The app registers web page visits, creating a Datastore Entity for each. After a new record is created, the ten most recent visits are displayed to the end-user. If the app only shows the latest visits, there is no reason to keep older visits, so the Module 7 exercise adds a push task that deletes all visits older than the oldest one shown. Tasks execute asynchronously outside the normal application flow.

Key updates

The following are the changes being made to the application:

  1. Add use of App Engine Task Queues (taskqueue) API
  2. Determine oldest visit displayed, logging and saving that timestamp
  3. Create task to delete old visits
  4. Update web page template to display timestamp threshold
  5. Log how many and which visits (by Entity ID) are deleted

Except for #4, which occurs in the HTML template file, these updates are reflected in the “diff”s for the main application file:
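
Steps #2 and #3 above can be sketched in plain Python. The real app works with ndb entities and enqueues the task with the taskqueue API; the timestamp list, the `/trim` handler, and the params shape here are simplified placeholders for illustration:

```python
def oldest_displayed(visit_timestamps, limit=10):
    """Given visit timestamps, return the oldest of the most recent
    `limit` visits: anything older than this can safely be deleted."""
    recent = sorted(visit_timestamps, reverse=True)[:limit]
    return min(recent)

def trim_task_params(visit_timestamps, limit=10):
    """Params for the push task that deletes visits older than the
    oldest one shown (in the real app, these would be passed to
    taskqueue.add(url='/trim', params=...))."""
    return {"oldest": oldest_displayed(visit_timestamps, limit)}
```

The task handler then deletes every visit with a timestamp below the `oldest` threshold, logging how many were removed.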

Screenshot of App Engine push tasks application source code differences

Adding App Engine push tasks application source code differences

With these changes implemented, the web app now shows the end-user which visits will be deleted by the new push task:

Screenshot of VisitMe example showing the last ten site visits, with a red circle around the older visits being deleted

Sample application output

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push queue documentation for more information. Arriving at a fully-functioning Module 7 app featuring App Engine push tasks sets the stage for migrating it to Cloud Tasks (and Cloud NDB) ahead in Module 8.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

Exploring serverless with a nebulous app: Deploy the same app to App Engine, Cloud Functions, or Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the App Engine, Cloud Functions, and Cloud Run logos


Google Cloud offers three distinct ways of running your code or application in a serverless way, each serving different use cases. Google App Engine, our first Cloud product, was created to give users the ability to deploy source-based web applications or mobile backends directly to the cloud without needing to think about servers or scaling. Cloud Functions came later for scenarios where you may not have an entire app, great for one-off utility functions or event-driven microservices. Cloud Run is our latest fully-managed serverless product that gives developers the flexibility of containers along with the convenience of serverless.

As all are serverless compute platforms, users recognize they share some similarities along with clear differences, and often, they ask:

  1. How different is deploying code to App Engine, Cloud Functions, or Cloud Run?
  2. Is it challenging to move from one to another if I feel the other may better fit my needs?

We’re going to answer these questions today by sharing a unique application with you, one that can be deployed to all three platforms without changing any application code. All of the necessary changes are done in configuration.
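
To make the contrast concrete, here is a hedged sketch of what deploying the same unchanged source tree to each platform might look like from the command line. The service name and flags are illustrative; exact invocations depend on the runtime and project setup:

```shell
# App Engine: deployment is driven by app.yaml in the source tree
gcloud app deploy

# Cloud Functions: the entry point is a single HTTP-triggered function
gcloud functions deploy translate --runtime python39 --trigger-http

# Cloud Run: the source is built into a container and deployed
gcloud run deploy translate --source .
```

In each case the application code stays the same; only the accompanying configuration (app.yaml, the function entry point, or the container build) differs.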

More motivation

Another challenge developers face is learning how to use another Cloud product, as in this request, paraphrased from a user:

  1. I have a Google App Engine app
  2. I want to call the Cloud Translation API from that app

Sounds simple enough. This user went straight to the App Engine and Translation API documentation, where they were able to use the App Engine Quickstart to get their app up and running, then found the Translation API setup page and started looking into the permissions needed to access the API. However, they got stuck at the Identity and Access Management (IAM) page on roles, overwhelmed by all the options with no clear path forward. In light of this, let’s add a third question to the pair outlined earlier:

  1. How do you access Cloud APIs from a Cloud serverless platform?

Without knowing what that user was going to build, let’s just implement a barebones translator, an “MVP” (minimum viable product) version of a simple “My Google Translate” Python Flask app using the Translation API, one of Google Cloud’s AI/ML “building block” APIs. These APIs are backed by pre-trained machine learning models, giving developers with little or no background in AI/ML the ability to leverage the benefits of machine learning with only API calls.

The application

The app consists of a simple web page prompting the user for a phrase to translate from English to Spanish. The translated results along with the original phrase are presented along with an empty form for a follow-up translation if desired. While the majority of this app’s deployments are in Python 3, there are still many users working on upgrading from Python 2, so some of those deployments are available to help with migration planning. Taking this into account, this app can be deployed (at least) eight different ways:

  1. Local (or hosted) Flask server (Python 2)
  2. Local (or hosted) Flask server (Python 3)
  3. Google App Engine (Python 2)
  4. Google App Engine (Python 3)
  5. Google Cloud Functions (Python 3)
  6. Google Cloud Run (Python 2 via Docker)
  7. Google Cloud Run (Python 3 via Docker)
  8. Google Cloud Run (Python 3 via Cloud Buildpacks)

The following is a brief glance at the files and which configurations they’re for: Screenshot of Nebulous serverless sample app files

Nebulous serverless sample app files

Diving straight into the application, let’s look at its primary function, translate():

@app.route('/', methods=['GET', 'POST'])
def translate(gcf_request=None):
    local_request = gcf_request if gcf_request else request
    text = translated = None
    if local_request.method == 'POST':
        text = local_request.form['text'].strip()
        if text:
            data = {
                'contents': [text],
                'parent': PARENT,
                'target_language_code': TARGET[0],
            }
            rsp = TRANSLATE.translate_text(request=data)
            translated = rsp.translations[0].translated_text
    context = {
        'orig': {'text': text, 'lc': SOURCE},
        'trans': {'text': translated, 'lc': TARGET},
    }
    return render_template('index.html', **context)

Core component (translate()) of sample application

Some key app components:

  • Upon an initial request (GET), an HTML template is rendered featuring a simple form with an empty text field for the text to translate.
  • The form POSTs back to the app, and in this case, grabs the text to translate, sends the request to the Translation API, receives and displays the results to the user along with an empty form for another translation.
  • There is a special “ifdef” near the top for receiving a request object from Cloud Functions: unlike App Engine or Cloud Run, Cloud Functions doesn’t use a web framework, so the platform provides the request object itself.
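
The pattern behind that last bullet can be sketched in isolation (the names here are hypothetical: FakeRequest stands in for a real framework request object, and the real handler renders a template rather than returning plain text):

```python
# Sketch of the request-object pattern described above. On App Engine or
# Cloud Run, a web framework supplies a global request object; on Cloud
# Functions there is no framework, so the platform passes the request in
# and the shared handler accepts it as an optional parameter.

class FakeRequest:
    """Minimal stand-in for a framework request object."""
    def __init__(self, method='GET', form=None):
        self.method = method
        self.form = form or {}

framework_request = FakeRequest()  # stand-in for Flask's global `request`

def translate(gcf_request=None):
    # Prefer the Cloud Functions-supplied request when one is passed in.
    local_request = gcf_request if gcf_request else framework_request
    if local_request.method == 'POST':
        return local_request.form.get('text', '').strip()
    return ''

def translate_gcf(request):
    # Hypothetical Cloud Functions entry point: forward the platform's
    # request object to the shared handler.
    return translate(gcf_request=request)

print(translate_gcf(FakeRequest('POST', {'text': ' hola '})))  # → hola
```

Because the handler only ever touches local_request, the same function body serves all three platforms unchanged.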

The app runs identically whether running locally or deployed to App Engine, Cloud Functions, or Cloud Run. The magic is all in the configuration. The requirements.txt file is used in all configurations, whether to install third-party packages locally or to direct the Cloud Build system to automatically install those libraries during deployment. Beyond requirements.txt, things start to differ:

  1. App Engine has an app.yaml file and possibly an appengine_config.py file.
  2. Cloud Run has either a Dockerfile (Docker) or Procfile (Cloud Buildpacks), and possibly a service.yaml file.
  3. Cloud Functions, the “simplest” of the three, has no configuration outside of a package requirements file (requirements.txt, package.json, etc.).
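
To illustrate how small these configuration files can be, a minimal app.yaml for the Python 3 App Engine deployment is a single line (a sketch; the sample’s actual file may carry additional settings, and the Python 2 version requires more fields such as handlers):

```yaml
runtime: python39  # App Engine standard environment, Python 3 runtime
```

Everything else, including installing packages from requirements.txt and starting the default HTTP server, is handled by the platform.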

The following is what you should expect to see after completing one translation request: Screenshot of My Google Translate (1990s Edition) in Incognito Window

“My Google Translate” MVP app (Cloud Run edition)

Next steps

The sample app can be run locally or on your own hosting server, but now you also know how to deploy it to each of Cloud’s serverless platforms and what those subtle differences are. You also have a sense of the differences between each platform as well as what it takes to switch from one to another. Lastly, you now know how to access Cloud APIs from these platforms.

The user described earlier was overwhelmed by all the IAM roles and options available because this level of detail is what enables fine-grained security for accessing Cloud services. When prototyping, however, the fastest on-ramp is to use the default service account that comes with Cloud serverless platforms. It helps you get that prototype working while you learn more about IAM roles and required permissions. Once you’ve progressed far enough to consider deploying to production, you can then follow the best practice of “least privilege” and create your own (user-managed) service accounts with the minimal permissions your application needs to function properly.

To dive in, the code and codelabs (free, self-paced, hands-on tutorials) for each deployment are available in its open source repository. An active Google Cloud billing account is required to deploy this application to each of our serverless platforms, even though you can complete all of them without incurring charges. More information can be found in the “Cost” section of the repo’s README. We hope this sample app teaches you more about the similarities and differences between our platforms, shows you how you can “shift” applications comfortably between them, and provides a light introduction to another Cloud API. Also check out my colleague’s post featuring similar content for Node.js.

GDG NYC members apply their skills to help a local nonprofit reach higher

Posted by Kübra Zengin, Program Manager, Developer Relations

Image of Anna Nerezova and GDG NYC meetup on blog header image that reads GDG NYC members apply their skills to help a local nonprofit reach higher

Google Developer Group (GDG) chapters are in a unique position to help make an impact during a time where many companies and businesses are trying to shift to a digital first world. Perhaps no one knows this better than GDG NYC Lead, Anna Nerezova. Over the past year, she’s seen firsthand just how powerful the GDG NYC community can be when the right opportunity presents itself.

GDG NYC levels up their Google Cloud skills

In the past few years, Anna and other GDG NYC organizers have hosted a number of events focused on learning and sharing Cloud technologies with community members, including Cloud Study Jams and in-person workshops on Machine Learning Cloud-Speech-to-Text, Natural Language Processing, and more. Last year, GDG NYC took Google Cloud learning to the next level with a series of virtual Google Cloud tech talks on understanding BigQuery, Serverless best practices, and Anthos, with speakers from the Google Cloud team.

Image of GDG NYC members watching a speaker give a talk

A GDG NYC speaker session

Thanks to these hands-on workshops, speaker sessions, and technical resources provided by Google, GDG NYC community members are able to upskill in a wide variety of technologies at an accelerated pace, all the while gaining the confidence to put those skills into practice. Beyond gaining new skills, Google Developer Group members are often able to unlock opportunities to make positive impacts in ways they never thought possible. As a GDG Lead, Anna is always on the lookout for opportunities that give community members the chance to apply their skills for a higher purpose.

Building a Positive Planet

Anna identified one such opportunity for her community via Positive Planet US, a local nonprofit dedicated to alleviating global and local poverty through positive entrepreneurship. Positive Planet International, originally formed in France, has helped 11 million people escape poverty across 42 countries in Europe, the Middle East, and Africa since its inception in 1998. Just last year, Positive Planet US was launched in New York City, with a mission to create local and global economic growth in underprivileged communities in the wake of the pandemic.

Anna recognized how the past few years’ emphasis on learning and leveraging Google Cloud technology in her GDG chapter could help make a transformative impact on the nonprofit. A partnership wouldn’t just benefit Positive Planet US, it would give community members a chance to apply what they’ve learned, build experience, and give back. Anna and fellow GDG NYC Lead, Ralph Yozzo, worked with Positive Planet US to identify areas of opportunity where GDG NYC members could best apply their skills. With Positive Planet US still needing to build the infrastructure necessary to get up and running, it seemed that there were limitless opportunities for GDG NYC community members to step in and help out.

Volunteers from GDG NYC quickly got to work, building Positive Planet US’ website from the ground up. Google Cloud Platform was used to build out the site’s infrastructure, set up secure payments for donations, launch email campaigns, and more. Applying learnings from a series of AMP Study Jams held by GDG NYC, volunteers implemented the AMP plugin for WordPress to improve user experience and keep the website optimized, all according to Google’s Core Web Vitals and page experience guidelines. Volunteers from GDG NYC have also helped with program management, video creation, social media, and more. No matter the job, the work that volunteers put in makes a real impact and helps drive Positive Planet US’ efforts to make a difference in marginalized communities.

Positive Planet drives community impact

Positive Planet US volunteers are currently working hard to support the nonprofit’s flagship project, the Accelerator Hub for Minority Women Entrepreneurs, launched last year. As part of the program, participants receive personalized coaching from senior executives at Genpact and Capgemini, helping them turn their amazing ideas into thriving businesses. From learning how to grow a business to applying for a business loan, participating women from disadvantaged communities get the tools they need to flourish as entrepreneurs. The 10-week program is running its second cohort now, and aims to support 1,000 women by next year.

Screenshot of participants of Positive Planet US’ second Accelerator Hub Program in a virtual meeting

Some participants of Positive Planet US’ second Accelerator Hub Program

With Positive Planet US’ next cohort for 50 women entrepreneurs starting soon, Anna is working to find coaches of all different skill levels directly from the GDG community. If you’re interested in volunteering with Positive Planet US, click here.

Anna is excited about the ongoing collaboration between Positive Planet US and GDG NYC, and is continuing to identify opportunities for GDG members to give back. And with a new series of Android and Cloud Study Jams on the horizon and DevFest 2021 right around the corner, GDG NYC organizers hope to welcome even more developers into the Google Developer Group community. For more info about GDG NYC’s upcoming events, click here.

Join a Google Developer Group chapter near you here.

Skip the setup— Run code directly from Google Cloud’s documentation

Posted by Abby Carey, Developer Advocate

Blog header

Long gone are the days of looking for documentation, finding a how-to guide, and questioning whether the commands and code samples actually work.

Google Cloud recently added a Cloud Shell integration within each and every documentation page.

This new functionality lets you test code in a preprovisioned virtual machine instance while learning about Google Cloud services. Running commands and code from the documentation cuts down on context switching between the documentation and a terminal window to run the commands in a tutorial.

This gif shows how Google Cloud’s documentation uses Cloud Shell, letting you run commands in a quickstart within your Cloud Shell environment.

gif showing how Google Cloud’s documentation uses Cloud Shell, letting you run commands in a quickstart within your Cloud Shell environment.

If you’re new to developing on Google Cloud, this creates a low barrier to entry for trying Google Cloud services and APIs. Once billing is verified on your Google Cloud account, you can test services that have a free tier at no charge, like Pub/Sub and Cloud Vision.

  1. Open a Google Cloud documentation page (like this Pub/Sub quickstart).
  2. Sign into your Google account.
  3. In the top navigation, click Activate Cloud Shell.
  4. Select your project or create one if you don’t already have one. You can select a project by running the gcloud config set project command or by using this drop-down menu:
    image showing how to select a project
  5. Copy, paste, and run your commands.
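
For example, step 4’s command looks like this at the Cloud Shell prompt (PROJECT_ID is a placeholder for your own project’s ID):

```shell
gcloud config set project PROJECT_ID
```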

If you want to test something a bit more adventurous, try to deploy a containerized web application, or get started with BigQuery.

A bit about Cloud Shell

If you’ve been developing on Google Cloud, chances are you’ve already interacted with Cloud Shell in the Cloud Console. Cloud Shell is a ready-to-go, online development and operations environment. It comes preinstalled with common command-line tools, programming languages, and the Cloud SDK.

Just like in the Cloud Console, your Cloud Shell terminal stays open as you navigate the site. As you work through tutorials within Google Cloud’s documentation, the Cloud Shell terminal stays on your screen. This helps when moving between two connected tutorials, like the Pub/Sub quickstart and setting up a Pub/Sub Proxy.

Having a preprovisioned environment set up by Google eliminates the age-old question of “Is my machine the problem?” when you eventually try to run these commands locally.

What about code samples?

While Cloud Shell is useful for managing your Google Cloud resources, it also lets you test code samples. If you’re using Cloud Client Libraries, you can customize and run sample code in Cloud Shell’s built-in code editor: Cloud Shell Editor.

Cloud Shell Editor is Cloud Shell’s built-in, browser-based code editor, powered by the Eclipse Theia IDE platform. To open it, click the Open Editor button from your Cloud Shell terminal:

Image showing how to open Cloud Shell Editor

Cloud Shell Editor has rich language support and debuggers for Go, Java, .Net, Python, NodeJS and more languages, integrated source control, local emulators for Kubernetes, and more features. With the Cloud Shell Editor open, you can then walk through a client library tutorial like Cloud Vision’s Detect labels guide, running terminal commands and code from one browser tab.

Open up a Google Cloud quickstart and give it a try! This could be a game-changer for your learning experience.

An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Blue header

An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow; some prefer “containerless” solutions yet know that containers can be beneficial. Well, today’s video is just for you, showing how you can still get your apps onto Cloud Run even if you don’t have much experience with Docker, containers, or Dockerfiles.

App Engine isn’t going away, as Google has expressed long-term support for legacy runtimes on the platform, so this is an optional migration: those who prefer source-based deployments can stay where they are. Moving to Cloud Run is for those who want to explicitly move to containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine’s ndb or Cloud NDB to connect with their app’s Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we’re targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that a default HTTP server is provided and started automatically; however, if special launch instructions are needed, users can add an entrypoint directive to their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, which is the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)

In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy-lifting, determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.
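
As a hedged example, a Procfile for a gunicorn-served Flask app like the ones in this series can be a single line (main:app assumes the Flask object is named `app` in `main.py`; the sample’s actual file may differ):

```
web: gunicorn -b :$PORT main:app
```

Buildpacks reads this `web` process entry and uses it as the container’s start command, playing the same role as app.yaml’s entrypoint or a Dockerfile’s ENTRYPOINT.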

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run’s build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.

Get started now

If you’re considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before considering it for yours. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes so stay tuned. Containerization may seem foreboding, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!

The Google Cloud Startup Summit is coming on September 9, 2021

Posted by Chris Curtis, Startup Marketing Manager at Google Cloud

Startup Summit logo

We’re excited to announce our first-ever Google Cloud Startup Summit will be taking place on September 9, 2021.

We hope you will join us as we bring together our startup community, including startup founders, CTOs, VCs and Google experts to provide behind-the-scenes insights and inspiring stories of innovation. To kick off the event, we’ll be bringing in X’s Captain of Moonshots, Astro Teller, for a keynote focused on innovation. We’ll also have exciting technical and business sessions, with Google leaders, industry experts, venture investors and startup leaders. You can see the full agenda here to get more details on the sessions.

We can’t wait to see you at the Google Cloud Startup Summit at 10am PT on September 9! Register to secure your spot today.

Containerizing Google App Engine apps for Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google App Engine header

An optional migration

Serverless Migration Station is a video mini-series from Serverless Expeditions focused on helping developers modernize their applications running on a serverless compute platform from Google Cloud. Previous episodes demonstrated how to migrate away from the older, legacy App Engine (standard environment) services to newer Google Cloud standalone equivalents like Cloud Datastore. Today’s product crossover episode differs slightly from that by migrating away from App Engine altogether, containerizing those apps for Cloud Run.

There’s little question the industry has been moving towards containerization as an application deployment mechanism over the past decade. However, Docker and use of containers weren’t available to early App Engine developers until its flexible environment became available years later. Fast-forward to today, and developers have many more options to choose from an increasingly open Google Cloud. Google has expressed long-term support for App Engine, and users do not need to containerize their apps, so this is an optional migration. It is primarily for those who have decided to add containerization to their application deployment strategy and want to explicitly migrate to Cloud Run.

If you’re thinking about app containerization, the video covers some of the key reasons why you would consider it: you’re not subject to traditional serverless restrictions like development language or use of binaries (flexibility); if your code, dependencies, and container build & deploy steps haven’t changed, you can recreate the same image with confidence (reproducibility); your application can be deployed elsewhere or be rolled back to a previous working image if necessary (reusable); and you have plenty more options on where to host your app (portability).

Migration and containerization

Legacy App Engine services are available through a set of proprietary, bundled APIs. As you can surmise, those services are not available on Cloud Run. So if you want to containerize your app for Cloud Run, it must be “ready to go,” meaning it has migrated to either Google Cloud standalone equivalents or other third-party alternatives. For example, in a recent episode, we demonstrated how to migrate from App Engine ndb to Cloud NDB for Datastore access.

While we’ve recently begun to produce videos for such migrations, developers can already access code samples and codelab tutorials leading them through a variety of migrations. In today’s video, we have both Python 2 and 3 sample apps that have divested from legacy services, thus ready to containerize for Cloud Run. Python 2 App Engine apps accessing Datastore are most likely to be using Cloud NDB whereas it would be Cloud Datastore for Python 3 users, so this is the starting point for this migration.

Because we’re “only” switching execution platforms, there are no changes at all to the application code itself. This entire migration is completely based on changing the apps’ configurations from App Engine to Cloud Run. In particular, App Engine artifacts such as app.yaml, appengine_config.py, and the lib folder are not used in Cloud Run and will be removed. A Dockerfile will be implemented to build your container. Apps with more complex configurations in their app.yaml files will likely need an equivalent service.yaml file for Cloud Run — if so, you’ll find this app.yaml to service.yaml conversion tool handy. Following best practices means there’ll also be a .dockerignore file.
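
For instance, a minimal .dockerignore for this migration might simply exclude the App Engine-only artifacts just mentioned (a sketch, not the repo’s exact file):

```
app.yaml
appengine_config.py
lib/
__pycache__/
*.pyc
```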

App Engine and Cloud Functions are source-based, where Google Cloud automatically provides a default HTTP server like gunicorn. Cloud Run is a bit more “DIY” because users have to provide a container image, meaning they must bundle their own server. In this case, we’ll pick gunicorn explicitly, adding it to the top of the existing requirements.txt required-packages file(s), as you can see in the screenshot below. Also illustrated is the Dockerfile, where gunicorn is started to serve your app as the final step. The only differences for the Python 2 equivalent Dockerfile are: a) it requires the Cloud NDB package (google-cloud-ndb) instead of Cloud Datastore, and b) it starts with a Python 2 base image.

Image of The Python 3 requirements.txt and Dockerfile

The Python 3 requirements.txt and Dockerfile
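
Based on the description above, a Dockerfile along these lines would fit (a sketch under stated assumptions: the Flask object is named `app` in `main.py` and gunicorn heads requirements.txt; the sample’s actual file may differ):

```dockerfile
# Hypothetical Python 3 Dockerfile mirroring the steps described above.
FROM python:3-slim
WORKDIR /app
COPY . .
# Installs everything in requirements.txt, including gunicorn
RUN pip install -r requirements.txt
# Final step: start gunicorn to serve the app on Cloud Run's $PORT
CMD gunicorn -b :$PORT main:app
```

Per the text, the Python 2 variant would swap in a Python 2 base image and list google-cloud-ndb in its requirements.txt instead of Cloud Datastore.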

Next steps

To walk developers through migrations, we always “START” with a working app then make the necessary updates that culminate in a working “FINISH” app. For this migration, the Python 2 sample app STARTs with the Module 2a code and FINISHes with the Module 4a code. Similarly, the Python 3 app STARTs with the Module 3b code and FINISHes with the Module 4b code. This way, if something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. If you are considering this migration for your own applications, we recommend trying it on a sample app like ours first. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video, which you can use for guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 so stay tuned. We’ll continue with our journey from App Engine to Cloud Run ahead in Module 5 but will do so without explicit knowledge of containers, Docker, or Dockerfiles. Modernizing your development workflow to using containers and best practices like crafting a CI/CD pipeline isn’t always straightforward; we hope content like this helps you progress in that direction!