Migrating from App Engine Blobstore to Cloud Storage (Module 16)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The most recent Serverless Migration Station video demonstrated how to add use of the App Engine Blobstore service to a sample Python 2 App Engine app, kicking off the first of a 2-part series on migrating away from Blobstore. In today's Module 16 video, we complete this journey, arriving at Cloud Storage. Moving away from proprietary App Engine services like Blobstore makes apps more portable and gives them greater flexibility.

Showing App Engine users how to migrate to Cloud Storage

As described previously, Blobstore's dependency on webapp for Python 2 made the Module 15 content more straightforward to implement by keeping the sample app on webapp2. To completely modernize the app here in Module 16, the following migrations should be carried out:

  • Migrate from webapp2 (and webapp) to Flask
  • Migrate from App Engine NDB to Cloud NDB
  • Migrate from App Engine Blobstore to Cloud Storage
  • Migrate from Python 2 to Python (2 and) 3

Performing the migrations

Prior to modifying the application code, a variety of configuration updates need to be made. Updates applying only to Python 2 feature a “Py2” designation while those migrating to Python 3 will see “Py3” annotations.

  1. Remove the built-in Jinja2 library from app.yaml—Jinja2 already comes with Flask, so remove use of the older built-in version which may possibly conflict with the contemporary Flask version you’re using. (Py2)
  2. Cloud client libraries (such as those for Cloud NDB and Cloud Storage) require a pair of built-in libraries, grpcio and setuptools, so add those to app.yaml (Py2)
  3. Remove everything in app.yaml except for a valid runtime (Py3)
  4. Add Cloud NDB and Cloud Storage client libraries to requirements.txt (Py2 & Py3)
  5. Create an appengine_config.py supporting both the built-in libraries (those in app.yaml) and the non built-in libraries (those in requirements.txt) the app uses; a minimal sketch follows this list (Py2)
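For reference, here is a minimal sketch of the common appengine_config.py pattern (assuming the requirements.txt packages are pip-installed into a local lib folder); it illustrates the idea rather than reproducing the sample's exact file:

    # appengine_config.py (Py2) -- minimal sketch of the common pattern
    import pkg_resources
    from google.appengine.ext import vendor

    # Make non built-in libraries (installed with `pip install -t lib -r
    # requirements.txt`) importable by the app.
    vendor.add('lib')
    # Help built-in libraries declared in app.yaml (e.g., grpcio, setuptools)
    # resolve their dependencies from the same folder.
    pkg_resources.working_set.add_entry('lib')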

The Module 15 app already migrated away from webapp2's (Django) templating system to Jinja2. This is useful when migrating to Flask because Jinja2 is Flask's default template system. Switching from App Engine NDB to Cloud NDB is fairly straightforward as the latter was designed to be mostly compatible with the original. The only change visible in this sample app is wrapping Datastore calls in Python "with" blocks, using the Cloud NDB client's context manager.
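For example, a minimal sketch of what that looks like (the model and property names here are illustrative, not necessarily the sample's exact code):

    from google.cloud import ndb

    class Visit(ndb.Model):
        visitor = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    ds_client = ndb.Client()

    def fetch_visits(limit):
        # All Datastore access now happens inside a Cloud NDB context.
        with ds_client.context():
            return Visit.query().order(-Visit.timestamp).fetch(limit)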

The most significant changes occur when moving the upload and download handlers from webapp to Cloud Storage. The video and corresponding codelab go more in-depth into the necessary changes, but in summary, these are the updates required in the main application (an illustrative sketch follows the list):

  1. webapp2 is replaced by Flask. Instead of using the older built-in version of Jinja2, use the version that comes with Flask.
  2. App Engine Blobstore and NDB are replaced by Cloud NDB and Cloud Storage, respectively.
  3. The webapp Blobstore handler functionality is replaced by a combination of the io standard library module plus components from Flask and Werkzeug. Furthermore, the handler classes and methods are replaced by Flask functions.
  4. The main handler class and corresponding GET and POST methods are all replaced by a single Flask function.
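To make that more concrete, here is a heavily trimmed sketch of what such Flask handlers can look like after the migration. It is illustrative only: the bucket name, route names, and response handling are assumptions, not the exact Module 16 code.

    import io
    from flask import Flask, request, send_file
    from google.cloud import storage

    app = Flask(__name__)
    gcs_client = storage.Client()
    BUCKET = 'PROJECT_ID.appspot.com'  # assumption: the app's default GCS bucket

    @app.route('/upload', methods=['POST'])
    def upload():
        # Flask exposes the uploaded file as a Werkzeug FileStorage object.
        file_obj = request.files['file']
        blob = gcs_client.bucket(BUCKET).blob(file_obj.filename)
        blob.upload_from_file(file_obj)   # replaces the Blobstore upload handler
        return 'uploaded %s' % file_obj.filename

    @app.route('/view/<path:filename>')
    def view(filename):
        # Download the object into memory and stream it back to the client,
        # replacing the webapp BlobstoreDownloadHandler.
        blob = gcs_client.bucket(BUCKET).blob(filename)
        return send_file(io.BytesIO(blob.download_as_bytes()),
                         mimetype='application/octet-stream')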

The results

With all the changes implemented, the original Module 15 app still operates identically in Module 16, starting with a form requesting a visit artifact followed by the most recent visits page:
The sample app’s artifact prompt page


The sample app’s most recent visits page.

The only difference is that four migrations have been completed, and all of that "infrastructure" is now handled by standalone Cloud services rather than App Engine legacy services. Furthermore, the Module 16 app can be either a Python 2 or 3 app. As far as the end-user is concerned, "nothing happened."

Migrating sample app from App Engine Blobstore to Cloud Storage

Wrap-up

Module 16 featured four different migrations, modernizing the Module 15 app from using App Engine legacy services like NDB and Blobstore to Cloud NDB and Cloud Storage, respectively. While we recommend users move to the latest offerings from Google Cloud, migrating from Blobstore to Cloud Storage isn't required, and should you opt to do so, you can do it on your own timeline. In addition to today's video, be sure to check out the Module 16 codelab which leads you step-by-step through the migrations discussed.

In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate to Cloud Storage when porting your app to Python 3. You can continue using Blobstore in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
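As a rough illustration of what that retrofit involves (based on the separately documented bundled-services support for Python 3, not on this module's code), the app opts in via app.yaml and wraps its WSGI app; the route below is a hypothetical example:

    # Assumes the appengine-python-standard package is listed in
    # requirements.txt and app.yaml contains `app_engine_apis: true`.
    from flask import Flask
    from google.appengine.api import wrap_wsgi_app
    from google.appengine.ext import blobstore

    app = Flask(__name__)
    app.wsgi_app = wrap_wsgi_app(app.wsgi_app)  # enables bundled-services APIs

    @app.route('/upload-url')
    def upload_url():
        # Blobstore calls keep working in Python 3 once the app is wrapped.
        return blobstore.create_upload_url('/upload')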

If you’re using other App Engine legacy services be sure to check out the other Migration Modules in this series. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Google Dev Library Letters — 12th Issue

Posted by Garima Mehra, Program Manager

‘Google Dev Library Letters’ is curated to bring you some of the latest projects developed with Google tech and submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!


Android

Shape your Image: Circle, Rounded Square, or Cuts at the corner in Android by Sriyank Siddhartha

Using the MDC library, shape images in just a few lines of code by using ShapeableImageView.

Foso/Ktorfit by Jens Klingenberg

HTTP client / Kotlin Symbol Processor for Kotlin Multiplatform (Js, Jvm, Android, Native, iOS) using KSP and Ktor clients inspired by Retrofit.

Machine Learning Communities: Q2 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore highlights and accomplishments of vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic and grateful about all the activities by the global network of ML communities. Here are the highlights!

TensorFlow/Keras

TFUG Agadir hosted #MLReady phase as a part of #30DaysOfML. #MLReady aimed to prepare the attendees with the knowledge required to understand the different types of problems which deep learning can solve, and helped attendees be prepared for the TensorFlow Certificate.

TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. The aim of these events is to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.

TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included Volume Rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. In the event, ML GDE Aritra Roy Gosthipaty (India) had a talk focusing on breaking the concepts of the academic paper, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis into simpler and more ingestible snippets.

TFUG Turkey, GDG Edirne and GDG Mersin organized TensorFlow Bootcamp 22, and ML GDE M. Yusuf Sarıgöz (Turkey) participated as a speaker with the session TensorFlow Ecosystem: Get the most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow, how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.

TFUG Mumbai hosted its June Meetup, where 110 folks gathered. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared knowledge through sessions, and ML workshops for beginners let participants build machine learning models without writing a single line of code.

ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.
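As a generic illustration of the idea (not Gad's exact code), a couple of Keras preprocessing layers placed in front of a model can perform on-the-fly data augmentation during training:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Augmentation layers that are only active during training.
    data_augmentation = tf.keras.Sequential([
        layers.RandomFlip('horizontal'),
        layers.RandomRotation(0.1),
    ])

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = data_augmentation(inputs)            # augmented views of each image
    x = layers.Rescaling(1.0 / 255)(x)
    x = layers.Conv2D(32, 3, activation='relu')(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(10, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)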

ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in 5 steps how to verify that the model is focusing on the right tokens to classify text. He also wrote about how to finetune a HuggingFace model for text classification using Tensorflow 2.0.

ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)

ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provide tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)

JAX/Flax

ML GDE Aakash Nain (India) had a special talk, Introduction to JAX for ML GDEs, TFUG organizers and ML community network organizers. He covered the fundamentals of JAX/Flax so that more and more people try out JAX in the near future.

ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.

ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the difference between JAX and TensorFlow.

ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.

Kaggle

ML GDEs’ Kaggle notebooks were announced as winners of the Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations; Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May; and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and the co-author talked about how to compete on Kaggle.

ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up covering bias-variance tradeoff, validation set, and cross validation approach. In the second post of the series, he showed more techniques using analogies and case studies.

TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the interested participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR – Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

TFX

ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and Github actions, with two promising ways – FastAPI (for REST) and TF Serving (for gRPC).

ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

ML GDE Jun Jiang (China) had a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build the search and recommendation system & scientific research platform for large-scale machine learning training.

ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally and how to move a TFX pipeline from local environment to Vertex AI; and provided code samples to adapt and get started with TFX.

TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put the models into production. They talked about the data structures in TFX and implementation of the first pipeline in TFX: ingesting and validating data.

TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts and had a look at how TFX can be used for designing and deploying ML pipelines.

Cloud AI

ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in the MLOps system.

He also shared the Github repository, Continuous Adaptation with VertexAI’s AutoML and Pipeline. It contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch had a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

Research

ML GDE Qinghua Duan (China) released a video which introduces Google’s latest 540 billion parameter model. He introduced the paper PaLM, and described the basic training process and innovations.

ML GDE Rumei LI (China) wrote blog postings reviewing papers, DeepMind’s Flamingo and Google’s PaLM.

The Google Cloud Startup Summit is coming on June 2, 2022

Posted by Chris Curtis, Startup Marketing Manager at Google Cloud

We’re excited to announce our annual Google Cloud Startup Summit will be taking place on June 2nd, 2022.

We hope you will join us as we bring together our startup & VC communities. Join us to dive into topics relevant to startups and enjoy sessions such as:

The future of web3

  • Hear from Google Cloud CEO, Thomas Kurian and Dapper Labs Co-founder and CEO, Roham Gharegozlou, as they discuss web3 and how startups can prepare for the paradigm changes it brings.

VC AMA: Startup Summit Edition

  • Join us for a very special edition of the VC AMA series where we’ll have a discussion with Derek Zanutto from CapitalG, Alison Lange Engel from Greycroft and Matt Turck from FirstMark to discuss investment trends and advice for founders around cloud, data, and the future of disruption in legacy industries.

What’s new for the Google for Startups Cloud Program

  • Exciting announcements from Ryan Kiskis, Director of the Startup Ecosystem at Google Cloud, on how Google Cloud is investing in the startup ecosystem with tailored programs and offers.

Technical leaders & business sessions

  • Insights from top startups Discord, Swit, and Streak on how their tech stacks helped propel their growth.

Additionally, startups will have an opportunity to join ‘Ask me Anything’ live sessions after the event to interact with Google Cloud startup experts and technical teams to discuss questions that may come up throughout the event.

You can see the full agenda here to get more details on the sessions.

We can’t wait to see you at the Google Cloud Startup Summit. Register to secure your spot today.

Machine Learning Communities: Q1 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore highlights and accomplishments of vast Google Machine Learning communities over the first quarter of the year! We are enthusiastic and grateful about all the activities that the communities across the globe do. Here are the highlights!

ML Ecosystem Campaign Highlights

ML Olympiad is a series of associated Kaggle Community Competitions hosted by Machine Learning Google Developer Experts (ML GDEs) or TensorFlow User Groups (TFUGs) and sponsored by Google. The first round ran from January to March, with challenges aimed at solving critical problems of our time. Competition highlights include Autism Prediction Challenge, Arabic_Poems, Hausa Sentiment Analysis, Quality Education, Good Health and Well Being. Thank you TFUG Saudi, New York, Guatemala, São Paulo, Pune, Mysuru, Chennai, Bauchi, Casablanca, Agadir, Ibadan, Abidjan, Malaysia and ML GDE Ruqiya Bin Safi, Vinicius Fernandes Caridá, Yogesh Kulkarni, Mohammed buallay, Sayed Ali Alkamel, Yannick Serge Obam, Elyes Manai, Thierno Ibrahima DIOP, Poo Kuan Hoong for hosting ML Olympiad!

Highlights and Achievements of ML Communities

TFUG organizer Ali Mustufa Shaikh (TFUG Mumbai) and Rishit Dagli won the TensorFlow Community Spotlight award (paper and code). This project was supported by Google Cloud credits.

ML GDE Sachin Kumar (Qatar) posted Build a retail virtual agent from scratch with Dialogflow CX – Ultimate Chatbot Tutorials. In this tutorial, you will learn how to build a chatbot and voice bot from scratch using Dialogflow CX, a Conversational AI Platform (CAIP) for building conversational UIs.

ML GDE Ngoc Ba (Vietnam) posted MTet: Multi-domain Translation for English and Vietnamese. This project is about how to collect high quality data and train a state-of-the-art neural machine translation model for Vietnamese. And it utilized Google Cloud TPU, Cloud Storage and related GCP products for faster training.

Kaggle announced the Google Open Source Prize early this year (Winners announcement page). In January, ML GDE Aakash Kumar Nain (India)’s Building models in JAX – Part1 (Stax) was awarded.

In February, ML GDE Victor Dibia (USA)’s notebook Signature Image Cleaning with Tensorflow 2.0 and ML GDE Sayak Paul (India) & Soumik Rakshit’s notebook gaugan-keras were awarded.

TFUG organizer Usha Rengaraju posted Variable Selection Networks (AI for Climate Change) and Probabilistic Bayesian Neural Networks using TensorFlow Probability notebooks on Kaggle. They both got gold medals, and she has become a Triple GrandMaster!

TFUG Chennai hosted two events, Transformers – A Journey into attention and Intro to Deep Reinforcement Learning. Both events were planned for beginners and included introductory sessions explaining the Transformer research papers and the basic concepts of reinforcement learning.

ML GDE Margaret Maynard-Reid (USA), Nived P A, and Joel Shor posted Our Summer of Code Project on TF-GAN. This article describes enhancements made to the TensorFlow GAN library (TF-GAN) last summer.

ML GDE Aakash Nain (India) released a series of tutorials about building models in JAX. In the second tutorial, Aakash uses one of the most famous and widely used high-level libraries for JAX to build a classifier. The notebook also takes a deep dive into Flax.

ML GDE Bhavesh Bhatt (India) built a model for braille to audio with 95% accuracy. He created a model that translates braille to text and audio, lending a helping hand to people with visual disabilities.

ML GDE Sayak Paul (India) recently wrote Publishing ConvNeXt Models on TensorFlow Hub. The contribution comprises 30 versions of the model, ready for inference and transfer learning, with documentation and sample code. He also posted First Steps in GSoC to encourage fellow ML GDEs’ participation in Google Summer of Code (GSoC).

ML GDE Merve Noyan (Turkey) trained 40 models on keras.io/examples and built demos for them with Streamlit and Gradio; those are currently hosted here. She also held workshops entitled NLP workshop with TensorFlow for TFUG Delhi, TFUG Chennai, TFUG Hyderabad and TFUG Casablanca, covering basic to advanced NLP topics, from Transformers through model hosting on Hugging Face, using TFX and TF Serving.

How is Dev Library useful to the open-source community?

Posted by Ankita Tripathi, Community Manager (Dev Library)


Witnessing a plethora of open-source enthusiasts in the developer ecosystem in recent years gave birth to the idea of Google’s Dev Library. The inception of the platform happened in June 2021 with the only objective of giving visibility to developers who have been creating and building projects relentlessly using Google technologies. But why the Dev Library?

Why Dev Library?

Open-source communities are currently booming. The past 3 years have seen a surge of folks constantly building in public, talking about open-source contributions, digging into opportunities, and carving out a valuable portfolio for themselves. The idea behind the Dev Library as a whole was also to capture these open-source projects and leverage them for the benefit of other developers.

The platform acts as a gold mine for projects created using Google technologies (Android, Angular, Flutter, Firebase, Machine Learning, Google Assistant, Google Cloud).

With the platform, we also catered to the burning issue – creating a central place for the huge number of projects and articles scattered across various platforms. Therefore, the Dev Library became a one-source platform for all the open source projects and articles for Google technologies.

How can you use the Dev Library?

“It is a library full of quality projects and articles.”

Dev Library isn’t meant to be simply the first platform where external developers post their blog posts or projects; the vision is bigger than being a mere platform for the display of content. It envisages the growth of developers along with tech content creation. The uniqueness of the platform lies in the curation of its submissions. Unlike other platforms, you don’t get your submitted work on the site by just clicking ‘Submit’. Behind the scenes, Dev Library has internal Google engineers for each product area who:

  • thoroughly assess each submission,
  • check for relevancy, freshness, and quality,
  • approve the ones that pass the check, and reject the others with a note.

It is a painstaking process, and Dev Library requires a 4-6 week turnaround time to complete the entire curation procedure and get your work on the site.

What we aim to do with the platform:

  • Provide visibility: Developers create open-source projects and write articles on platforms to bring visibility to their work and attract more contributions. Dev Library’s intention is to continue to provide this amplification for the efforts and time spent by external contributors.
  • Kickstart a beginner’s open-source contribution journey: The biggest challenge for a beginner to start applying their learnings to build Android or Flutter applications is ‘Where do I start my contributions from’? While we see an open-source placard unfurled everywhere, beginners still struggle to find their right place. With the Dev Library, you get a stack of quality projects hand-picked for you keeping the freshness of the tech and content quality intact. For example, Tomas Trajan, a Dev Library contributor created an Angular material starter project where they have ‘good first issues’ to start your contributions with.
  • Recognition: Having your content selected for the Dev Library acts as recognition of the long hours you’ve put in to build a running open-source project and explain it well. Dev Library also delivers hero content in its monthly newsletter, features top contributors, and is in the process of gamifying developer efforts. As an example, one of our contributors created a Weather application using Android and added a badge ‘Part of Dev Library’.

    With your contributions at one place under the Author page, you can use it as a portfolio for your work while simultaneously increasing your chances to become the next Google Developer Expert (GDE).

Features on the platform

Keeping developers in mind, we’ve updated features on the platform as follows:

  • Added a new product category, Google Assistant – all Google Assistant and Smart Home projects now have a designated category on the Dev Library.
  • Integrated a new way to make submissions across product areas via the Advocu form.
  • Introduced a special section to submit Cloud Champion articles on Google Cloud.
  • Included displays on each Author page indicating the expertise of individual contributors.
  • Upcoming: An expertise filter to help you segment out content based on Beginner, Intermediate, or Expert levels.

To submit your idea or suggestion, refer to this form, and put down your suggestions.

Contributor Love

Dev Library as a platform is more about the contributors who lie on the cusp of creation and consumption of the available content. Here are some contributors who have used the platform in their own way, and how the Dev Library has helped them along their journey:

Roaa Khaddam: Roaa is a Senior Flutter Mobile Developer and Co-Founder at MultiCaret Inc.

How has the Dev Library helped you?

“It gave me the opportunity to share what I created with an incredible community and look at the projects my fellow Flutter mates have created. It acts as a great learning resource.”

Somkiat Khitwongwattana: Somkiat is an Android GDE and a consistent user of Android technology from Thailand.

How has the Dev Library helped you?

“I used to discover new open source libraries and helpful articles for Android development in many places and it took me longer than necessary. But the Dev Library allows me to explore these useful resources in one place.”

Kevin Kreuzer: Kevin is an Angular developer and contributes to the community in various ways.

How has the Dev Library helped you?

“Dev Library is a great tool to find excellent Angular articles or open source projects. Dev Library offers a great filtering function and therefore makes it much easier to find the right open source library for your use case.”

What started as a platform to highlight and showcase some open-source projects has grown into a product where developers can share their learnings, inspire others, and contribute to the ecosystem at large.

Do you have an Open Source learning or project in the form of a blog or GitHub repo you’d like to share? Please submit it to the Dev Library platform. We’d love to add you to our ever growing list of developer contributors!

Year in review: the Google Workspace Platform 2021

Posted by Charles Maxson, Developer Advocate

In 2021, we saw many changes and improvements to the Google Workspace Platform geared at helping developers build new solutions to keep up with the challenges of how we worked, like hybrid and fully remote office work. More than ever, we needed tools for virtual collaboration and digital processes to keep our work going. As paper processes in the office became less viable and digital transformation became necessary, many new custom solutions like desk reservation systems and automated test logging evolved.

2021 was also a year of Platform milestones: Google Workspace grew to more than 3 billion users globally, we reached more than 5,300 public apps in the Google Workspace Marketplace, and we crossed over 4.8 billion apps installed (up from 1 billion in 2020)! We were also busy bringing Platform innovations and improving our developer experience to make building for Google Workspace easier and faster. Here’s a look at some of the key enhancements the Google Workspace Platform brought to the developer community.

Google Cloud Champion Innovators program

Community building is one of the most effective ways to support developers, which is why we created Google Cloud Innovators. This new community program was designed for developers and technical practitioners using Google Cloud, and we welcome everyone.

And when we say everyone, it’s not just professional developers, data scientists, or student developers and hobbyists, we also mean non-technical end users. The growing Google community has something for everyone.

GWAO Alternate Runtimes goes GA

Google Workspace Add-ons are customized applications that tightly integrate with Google Workspace applications, and can be found in the Google Workspace Marketplace or built specifically for your own domain. The development of these applications was previously limited to Apps Script, our native scripting language for the Google Workspace Platform. With the launch of Alternate Runtimes you can now develop add-ons with your preferred hosting infrastructure, development tool chain, source control system, coding language, and code libraries; it was a highly requested update from the developer community, opening up the Platform to many new developer scenarios.

Card Builder UI Application

The GWAO Card Builder tool allows you to visually design the user interfaces for your Google Workspace Add-ons and Google Chat app projects. It is a must-have for Google Workspace developers using either Apps Script or Alternate Runtimes, enabling you to prototype and design Card UIs quickly without the hassle and errors of hand-coding JSON or Apps Script on your own.

Card Builder tool for building Google Workspace Add-ons and Chat Apps

Recommended for Google Workspace

This program showcases a selection of market-leading applications built by software vendors across a wide range of categories, including project management, customer support, and finance in our Google Workspace Marketplace. These apps undergo rigorous usability and security testing to make sure they meet our requirements for high quality integrations. They must also have an exemplary track record of user satisfaction, reliability, and privacy.

Recommended for Google Workspace program showcases high quality applications

Chat Slash Commands and Dialogs

Slash commands simplify the way users interact with your Chat bot, offering them a visual, guided way to discover and execute your bot’s primary features. As a developer, slash commands are straightforward to implement and essential for offering a better bot experience. In addition to slash commands, Dialogs are a new capability introduced to the Chat app framework that allows developers to build user interfaces to capture inputs and parameters in a structured, reliable way. This was a tremendous step forward for bot usability because it simplified and streamlined the process of users interacting with bot commands. Now with dialogs, users can be led visually to supply inputs via prompts, instead of having to rely on wrapping bot commands with natural language inputs.

Forms API beta

Google Forms enables easy creation and distribution of forms, surveys, and quizzes. Forms is used for a wide variety of use cases across business operations, customer management, event planning and logistics, education, and more. With the Google Forms API Beta announcement, developers gained programmatic access for managing forms and acting on responses, empowering them to build powerful integrations on top of Forms.
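As a minimal sketch of what that programmatic access can look like from Python (assuming the google-api-python-client library and previously obtained OAuth credentials held in a hypothetical creds variable):

    from googleapiclient.discovery import build

    # creds: OAuth 2.0 credentials obtained earlier (assumption)
    forms_service = build('forms', 'v1', credentials=creds)

    # Create a new form, then read back any submitted responses.
    form = forms_service.forms().create(
        body={'info': {'title': 'Event feedback'}}).execute()
    responses = forms_service.forms().responses().list(
        formId=form['formId']).execute()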

Google Workspace Marketplace updates

We made many updates to the Google Workspace Marketplace to improve both the user and developer experience. We added updates to the application detail page that included pricing and when the listing was last updated. The homepage also saw improvements with various curated categories by the Google team under Editor’s Choice. Finally, we launched the marketplace badges for developers to promote their published applications on websites and marketing channels. Oh, and we also had a logo update if you hadn’t noticed.

Google Workspace Marketplace Badges for application promotion

Farewell 2021 and here’s to welcoming in 2022

2021 brought us many innovations to the Google Workspace Platform to help developers address the needs of their users and it also brought more empowerment to knowledge workers to build the solutions they needed with our no-code and low-code platforms. These are just the highlights for the Google Workspace Platform and we look forward to more innovation in 2022. To keep up with all the news about the Platform, please subscribe to our newsletter.

How to get started in cloud computing

Posted by Google Cloud training & certifications team

Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.

You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.

Get hands-on experience with cloud computing

Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.

There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, including networking, cloud security, database management, and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for particular job roles like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.

Learn live from cloud experts

Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.

If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.

You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine and how to create VM instances from scratch and from existing templates, how to connect them together, and end with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.

Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.

Explore Google Cloud infrastructure

Cloud infrastructure is the backbone of the internet. Understanding cloud infrastructure is a good starting point to start digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing to figure out what you like best, whether it’s networking, security, or application development.

Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.

Show off your skills

Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.

Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.

You can also use them to start preparing for Google Cloud certifications which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend having at least 6 months or up to several years of industry experience depending on the material.

Ready to get started in the cloud? Visit the Google Cloud training page to see all your options from in-person classes, online courses, special events, and more.

Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach the hundreds of millions of people looking to buy products like theirs each day.

To upload their products, merchants can make use of feeds, that is, files with a list of products in a specific format. These can be shared with Merchant Center in different ways: using Google Sheets, SFTP or FTP shares, Google Cloud Storage or manually through the user interface. These methods work great for the majority of cases. But, if a merchant’s product list grows over time, they might reach the usage limits of the feeds. Depending on the case, quota extensions could be granted, but if the list continues to grow, it might reach a point where feeds no longer support that scale, and the Content API for Shopping would become the recommended way to go forward.

The main issue is, if a merchant is recommended to stop using feeds and start using the Content API due to scale problems, it means that the number of products is massive, and trying to use the Content API directly will give them usage and quota errors, as the QPS and products per call limits will be exceeded.

For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.

Centimani is a configurable massive file processor able to split text files in chunks, process them following a strategic pattern and store the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and takes care of exponential backoff to ensure all requests have enough retries to overcome potential temporary issues or errors. Centimani comes with two operators: Google Ads Offline Conversions Uploader, and Merchant Center Products Uploader, but it can be extended to other uses easily.
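The actual implementation lives in the Centimani repository; purely to illustrate the chunking-plus-retry pattern described above (this is a generic sketch, not Centimani's real code), the core idea looks roughly like:

    import random
    import time

    def split_into_chunks(rows, chunk_size):
        """Yield successive slices of at most chunk_size product rows."""
        for start in range(0, len(rows), chunk_size):
            yield rows[start:start + chunk_size]

    def send_with_backoff(send_fn, chunk, max_retries=5):
        """Call send_fn(chunk), retrying with exponential backoff on failure."""
        for attempt in range(max_retries):
            try:
                return send_fn(chunk)
            except Exception:
                if attempt == max_retries - 1:
                    raise
                # Exponential backoff with jitter before the next retry.
                time.sleep((2 ** attempt) + random.random())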

Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud Project to be used (including the element names), the credentials to access the Merchant Center accounts and how the load will be distributed (e.g., parallel executions, number of products per call). Then, the deployment is done automatically using a deployment script provided by the tool.

After the tool is deployed, a cloud function will be monitoring the input bucket in Cloud Storage, and every time a file is uploaded there, it will be processed. The tool uses the name of the file to select the operator that is going to be used (“MC” indicates Merchant Center Products Uploader), and the particular configuration to use (multiple configurations can be used to connect to Merchant Center accounts with different access credentials).

Whenever a file is uploaded, it will be sliced into parts if it contains more products than are allowed per call; the slices will be stored in the output bucket in Cloud Storage, and Cloud Tasks will start launching the API calls until all files are processed. Any file with errors will be stored in a folder called “slices_failed” to help troubleshoot any issues found in the process. Also, all the information about the executions will be stored temporarily in Datastore and then moved to BigQuery, where it can be used for monitoring the whole process from a centralized place.

Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.

For more information you can visit the Centimani repository on Github.

Machine Learning Communities: Q3 ‘21 highlights and achievements

Posted by HyeJung Lee, DevRel Community Manager and Soonson Kwon, DevRel Program Manager

Let’s explore highlights and achievements of vast Google Machine Learning communities by region for the last quarter. Activities of experts (GDE, professional individuals), communities (TFUG, TensorFlow user groups), students (GDSC, student clubs), and developers groups (GDG) are presented here.

Key highlights

Image shows a banner for 30 days of ML with Kaggle

30 days of ML with Kaggle is designed to help beginners study ML using Kaggle Learn courses as well as a competition specifically for the participants of this program. We collaborated with the Kaggle team so that more than 30 ML GDEs and TFUG organizers participated as volunteers, serving as online mentors and speakers for this initiative.

In total, 16 GDEs/GDSCs/TFUGs ran community-organized programs by referring to the shared community organizer guide. Houston TensorFlow & Applied AI/ML placed 6th out of 7573 teams — the only Americans in the Top 10 in the competition. TFUG Santiago (Chile) organizers participated as well and placed 17th on the public leaderboard.

Asia Pacific

Image shows Google Cloud and Coca-Cola logos

GDE Minori MATSUDA (Japan)’s project for Coca-Cola Bottlers Japan was published on the Google Cloud Japan Blog, covering the creation of an ML pipeline deployed into a real business within 2 months using Vertex AI. It was also published on the GCP blog in English.

GDE Chansung Park (Korea) and Sayak Paul (India) published many articles on the GCP Blog. First, “Image search with natural language queries” explained how to build a simple image parser from natural language inputs using OpenAI’s CLIP model. Second, the “Model training as a CI/CD system” posts (Part I, Part II) explain why having a resilient CI/CD system for your ML application is crucial for success. Last, “Dual deployments on Vertex AI” talks about an end-to-end workflow using Vertex AI, TFX and Kubeflow.

In China, GDE Junpeng Ye used TensorFlow 2.x to significantly reduce the codebase (15k → 2k) on WeChat Finder which is a TikTok alternative in WeChat. GDE Dan lee wrote an article on Understanding TensorFlow Series: Part 1, Part 2, Part 3-1, Part 3-2, Part 4

GDE Ngoc Ba from Vietnam has contributed the AI Papers Reading and Coding series, implementing ML/DL papers in TensorFlow and creating slides/videos every two weeks (videos: Vit Transformer, MLP-Mixer and Transformer).

Beginner-friendly codelabs (Get started with audio classification, Go further with audio classification) by GDSC Sookmyung (Korea) teach how to customize pre-trained audio classification models to your needs and deploy them to your apps, using the TFLite Model Maker.

Cover image for Mat Kelcey's talk on JAX at the PyConAU event

GDE Matthew Kelcey from Australia gave a talk on JAX at the PyConAU event. Mat gave an overview of the fundamentals of JAX and an intro to some of the libraries being developed on top of it.

Image shows overview for the released PerceiverIO code

In Singapore, TFUG Singapore dived back into some of the latest papers, techniques, and fields of research that are delivering state-of-the-art results in a number of fields. GDE Martin Andrews included a brief code walkthrough for the released PerceiverIO code at perceiver – highlighting what JAX looks like, how Haiku relates to Sonnet, and also the data loading, which is done via tf.data.

Machine Learning Experimentation with TensorBoard book cover

GDE Imran us Salam Mian from Pakistan published a book “Machine Learning Experimentation with TensorBoard“.

India

GDE Aakash Nain has published the TF-JAX tutorial series from Part 4 to Part 8. Part 4 gives a brief introduction about JAX (What/Why), and DeviceArray. Part 5 covers why pure functions are good and why JAX prefers them. Part 6 focuses on Pseudo Random Number Generation (PRNG) in Numpy and JAX. Part 7 focuses on Just In Time Compilation (JIT) in JAX. And Part 8 covers vmap and pmap.
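For readers new to those topics, here is a tiny generic JAX snippet (not taken from the tutorials) that touches explicit PRNG keys, jit, and vmap:

    import jax
    import jax.numpy as jnp

    key = jax.random.PRNGKey(42)          # explicit PRNG state (see Part 6)
    x = jax.random.normal(key, (8, 3))

    @jax.jit                              # Just In Time compilation (see Part 7)
    def predict(w, xi):
        return jnp.dot(xi, w)

    w = jnp.ones((3,))
    # vmap maps predict over the batch dimension of x (see Part 8).
    batched = jax.vmap(predict, in_axes=(None, 0))(w, x)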

Image of Bhavesh's Google Cloud certificate

GDE Bhavesh Bhatt published a video about his experience on the Google Cloud Professional Data Engineer certification exam.

Image shows phase 1 and 2 of the Climate Change project using Vertex AI

ML GDE Sayak Paul and Siddha Ganju (NVIDIA) built a Climate Change project using Vertex AI. They published a paper (Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning) and open-sourced the project as part of NASA Impact’s ETCI competition. The project made it into four NeurIPS workshops: AI for Science: Mind the Gaps, Tackling Climate Change with Machine Learning, Women in ML, and Machine Learning and the Physical Sciences. And they finished as the first runners-up (see Test Phase 2).

Image shows example of handwriting recognition tutorial

A tutorial on handwriting recognition was contributed to the Keras examples by GDE Sayak Paul and Aakash Kumar Nain.

Graph regularization for image classification using synthesized graphs by GDE Sayak Paul was added to the official examples of Neural Structured Learning in TensorFlow.

GDE Sayak Paul and Soumik Rakshit shared a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv.

North America

Banner image shows students participating in Google Summer of Code

During the GSoC (Google Summer of Code), some GDEs mentored or co-mentored students. GDE Margaret Maynard-Reid (USA) mentored TF-GAN, Model Garden, TF Hub and TFLite products. You can get some of her experience and tips from the GDE Blog. And you can find GDE Sayak Paul (India) and Googler Morgan Roff’s GSoC experience in (co-)mentoring TensorFlow and TF Hub as well.

A beginner friendly workshop on TensorFlow with ML GDE Henry Ruiz (USA) was hosted by GDSC Texas A&M University (USA) for the students.

Screenshot from Youtube video on how transformers work

In the YouTube video Self-Attention Explained: How do Transformers work?, GDE Tanmay Bakshi from Canada explained how you can build a Transformer encoder-based neural network to classify code into 8 different programming languages using a TPU and Colab with Keras.

Europe

GDG / GDSC Turkey hosted AI Summer Camp in cooperation with Global AI Hub. 7100 participants learned about ML, TensorFlow, CV and NLP.

Screenshot from slide presentation titled Why Jax?

GDE Sergii Khomenko (Germany) and M. Yusuf Sarıgöz (Turkey) gave a TechTalk, Speech Processing with Deep Learning and JAX/Trax. They reviewed technologies such as JAX, TensorFlow, Trax, and others that can help boost research in speech processing.

South/Central America

Image shows Custom object detection in the browser using TensorFlow.js

On the other side of the world, in Brazil, GDE Hugo Zanini Gomes wrote an article, “Custom object detection in the browser using TensorFlow.js”, using the TensorFlow 2 Object Detection API and Colab; it was posted on the TensorFlow blog.

Screenshot from a talk about Real-time semantic segmentation in the browser - Made with TensorFlow.js

And Hugo gave a talk, Real-time semantic segmentation in the browser – Made with TensorFlow.js, which covered using SavedModels in an efficient way in JavaScript directly, enabling you to get the reach and scale of the web for your new research.

GDE Nathaly Alarcon Torrico from Bolivia gave a talk, Data Pipelines for ML, explaining all the phases involved in the creation of ML and Data Science products, starting with data collection, transformation, and storage, through to the creation of ML models.

Screenshot from TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video)

The TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video) was hosted by TFUG Santiago (Chile). In this talk the speaker gave a tour of the steps to follow to generate a model capable of being in the top 1% of the Kaggle Leaderboard. The focus was on showing the libraries and “tricks” used to test many ideas quickly, both in implementation and in execution, and how to use them in production environments.

MENA

Screenshot from workshop about Recurrent Neural Networks

GDE Ruqiya Bin Safi (Saudi Arabia) had a workshop about Recurrent Neural Networks: part 1 (Github / Slide) at GDG Mena. And Ruqiya gave a talk about Recurrent Neural Networks: part 2 at GDG Cloud Saudi (Saudi Arabia).

GDSC Islamic University of Gaza (Palestine) ran AI Training with Kaggle, a two-month training covering Data Processing, Image Processing, and NLP with Kaggle.

Sub-Saharan Africa

TFUG Ibadan had two TensorFlow events: Basic Sentiment analysis with Tensorflow and Introduction to Recommenders Systems with TensorFlow.

Image of Yannick Serge Obam Akou's TensorFlow Certificate

ML GDE Yannick Serge Obam Akou (Cameroon) wrote an article (in French) covering tips to study for, prepare for, and pass the TensorFlow Developer exam.

Extend Google Apps Script with your API library to empower users

Posted by Keith Einstein, Product Manager


Google is proud to announce the availability of the DocuSign API library for Google Apps Script. This newly created library gives all Apps Script users access to the more than 400 endpoints DocuSign has to offer so they can build digital signatures into their custom solutions and workflows within Google Workspace.

The Google Workspace Ecosystem

Last week at Google Cloud Next ‘21, in the session “How Miro, DocuSign, Adobe and Atlassian are helping organizations centralize their work”, we showcased a few partner integrations called add-ons, found on the Google Workspace Marketplace. The Google Workspace Marketplace helps developers connect with the more than 3 billion people who use Google Workspace—with a stunning 4.8 billion apps installed to date. That incredible demand is fueling innovation in the ecosystem, and we now have more than 5,300 public apps available in the Google Workspace Marketplace, plus thousands more private apps that customers have built for themselves. As a developer, one of the benefits of an add-on is that it allows you to surface your application in a user-friendly manner that helps people reclaim their time, work more efficiently, and adds another touchpoint for them to engage with your product. While building an add-on enables users to frictionlessly engage with your product from within Google Workspace, to truly unlock limitless potential, innovative companies like DocuSign are beginning to empower users to build the unique solutions they need by providing them with a Google Apps Script library.

Apps Script enables Google Workspace customization

Many users are currently unlocking the power of Google Apps Script by creating the solutions and automations they need to help them reclaim precious time. Publishing a Google Apps Script Library is another great opportunity to bring a product into Google Workspace and gain access to those creators. It gives your users more choices in how they integrate your product into Google Workspace, which in turn empowers them with the flexibility to solve more business challenges with your product’s unique value.

Apps Script libraries can make the development and maintenance of a script more convenient by enabling users to take advantage of pre-built functionality and focus on the aspects that unlock unique value. This allows innovative companies to make available a variety of functionality that Apps Script users can use to create custom solutions and workflows with the features not found in an off-the-shelf app integration like a Google Workspace Add-on or Google Chat application.

The DocuSign API Library for Apps Script

One of the partners we showcased at Google Cloud Next ‘21 was DocuSign. The DocuSign eSignature for Google Workspace add-on has been installed almost two-million times. The add-on allows you to collect signatures or sign agreements from inside Gmail, Google Drive or Google Docs. While collecting signatures and signing agreements are some of the most common areas in which a user would use DocuSign eSignature inside Google Workspace, there are many more features to DocuSign’s eSignature product. In fact, their eSignature API has over 400 endpoints. Being able to go beyond those top features normally found in an add-on and into the rest of the functionality of DocuSign eSignature is where an Apps Script Library can be leveraged.

And that’s exactly what we’re partnering to do. Recently, DocuSign’s Lead API Product Manager, Jeremy Glassenberg (a Google Developer Expert for Google Workspace) joined us on the Totally Unscripted podcast to talk about DocuSign’s path to creating an Apps Script Library. At the DocuSign Developer Conference, on October 27th, Jeremy will be teaming up with Christian Schalk from our Google Cloud Developer Relations team to launch the DocuSign Apps Script Library and showcase how it can be used.

With the DocuSign Apps Script Library, users around the world who lean on Apps Script to build their workplace automations can create customized DocuSign eSignature processes. Leveraging the Apps Script Library in addition to the DocuSign add-on empowers companies who use both DocuSign and Google Workspace to have a more seamless workflow, increasing efficiency and productivity. The add-on allows customers to integrate the solution instantly into their Google apps, and solve for the most common use cases. The Apps Script Library allows users to go deep and solve for the specialized use cases where a single team (or knowledge worker) may need to tap into a less commonly used feature to create a unique solution.

See us at the DocuSign Developer Conference

The DocuSign Apps Script Library is now available in beta and if you’d like to know more about it drop a message to [email protected]. And be sure to register for the session on “Building a DocuSign Apps Script Library with Google Cloud”, Oct 27th @ 10:00 AM. For updates and news like this about the Google Workspace platform, please subscribe to our developer newsletter.

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and up to Go 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app’s configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following “diff”s for the main application file:

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks
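As a minimal illustration of the task-enqueueing side of those diffs (a sketch using assumed placeholder values, not the literal sample code), the bundled taskqueue call gives way to the standalone Cloud Tasks client:

    # Before (Py2, bundled service):
    #   from google.appengine.api import taskqueue
    #   taskqueue.add(url='/trim', params={'oldest': oldest})

    # After (standalone Cloud Tasks client); PROJECT_ID, REGION_ID and the
    # payload value below are illustrative placeholders:
    import json
    from google.cloud import tasks_v2

    ts_client = tasks_v2.CloudTasksClient()
    queue_path = ts_client.queue_path('PROJECT_ID', 'REGION_ID', 'default')

    oldest = '2022-01-01'  # hypothetical payload value
    task = {
        'app_engine_http_request': {
            'relative_uri': '/trim',
            'body': json.dumps({'oldest': oldest}).encode(),
            'headers': {'Content-Type': 'application/json'},
        },
    }
    ts_client.create_task(parent=queue_path, task=task)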

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push tasks migration guide for more information. Arriving at a fully-functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We’ve accomplished the most important step here, that is, getting off of the original App Engine legacy bundled services/APIs. The Module 9 migration from Python 2 to 3 and Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library are all fairly optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.