How to use App Engine pull tasks (Module 18)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The Serverless Migration Station mini-series helps App Engine developers modernize their apps to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17, or to sister serverless platforms Cloud Functions and Cloud Run. Another goal of this series is to demonstrate how to move away from App Engine’s original APIs (now referred to as legacy bundled services) to Cloud standalone replacement services. Once no longer dependent on these proprietary services, apps become much more portable, giving them the flexibility to run beyond App Engine, whether on sister serverless platforms or elsewhere.

App Engine’s Task Queue service provides infrastructure for executing tasks outside of the standard request-response workflow. Tasks may consist of workloads exceeding request timeouts or periodic tangential work. The Task Queue service provides two different queue types, push and pull, for developers to perform auxiliary work.

Push queues are covered in Migration Modules 7-9, demonstrating how to add use of push tasks to an existing baseline app followed by steps to migrate that functionality to Cloud Tasks, the standalone successor to the Task Queues push service. We turn to pull queues in today’s video where Module 18 demonstrates how to add use of pull tasks to the same baseline sample app. Module 19 follows, showing how to migrate that usage to Cloud Pub/Sub.

Adding use of pull queues

In addition to registering page visits, the sample app needs to be modified to track visitors. Visits consist of a timestamp and visitor information such as the IP address and user agent. We’ll modify the app to use the IP address and track how many visits come from each address seen. The home page is modified to show the top visitors in addition to the most recent visits:

The sample app’s updated home page tracking visits and visitors

When visits are registered, pull tasks are created to track the visitors. The pull tasks sit patiently in the queue until they are processed in aggregate periodically. Until that happens, the top visitors table stays static. These tasks can be processed in a number of ways: periodically by a cron or Cloud Scheduler job, a separate App Engine backend service, explicitly by a user (via browser or command-line HTTP request), event-triggered Cloud Function, etc. In the tutorial, we issue a curl request to the app’s endpoint to process the enqueued tasks. When all tasks have completed, the table then reflects any changes to the current top visitors and their visit counts:

Processed pull tasks update the top visitors table

Below is some pseudocode representing the core part of the app that was altered to add Task Queue pull task usage, namely a new data model class, VisitorCount, to track visitor counts, enqueuing a (pull) task to update visitor counts when registering individual visits in store_visit(), and most importantly, a new function fetch_counts(), accessible via /log, to process enqueued tasks and update overall visitor counts. The bolded lines represent the new or altered code.

Adding App Engine Task Queue pull task usage to the sample app: 'Before' (Module 1) on the left, 'After' (Module 18) with the altered code on the right
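
For readers who want a text rendering of those changes, here is a minimal sketch (not the exact Module 18 code; see the codelab and repo for that), assuming a pull queue named pullq is defined in queue.yaml and that Visit is the existing model from the Module 1 baseline app:

    from google.appengine.api import taskqueue
    from google.appengine.ext import ndb

    class Visit(ndb.Model):
        'existing Visit entity from the Module 1 baseline app'
        visitor   = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    class VisitorCount(ndb.Model):
        'new data model class tracking how many visits come from each IP address'
        visitor = ndb.StringProperty(required=True)
        counter = ndb.IntegerProperty(default=0)

    def store_visit(remote_addr, user_agent):
        'register one visit and enqueue a pull task to count this visitor later'
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
        taskqueue.add(queue_name='pullq', method='PULL', payload=remote_addr)

    def fetch_counts():
        'process enqueued pull tasks in aggregate and update visitor counts (/log)'
        queue = taskqueue.Queue('pullq')
        tasks = queue.lease_tasks(60, 100)    # lease up to 100 tasks for 60 seconds
        counts = {}
        for task in tasks:
            counts[task.payload] = counts.get(task.payload, 0) + 1
        for ip, n in counts.items():
            visitor = VisitorCount.query(VisitorCount.visitor == ip).get()
            if not visitor:
                visitor = VisitorCount(visitor=ip)
            visitor.counter += n
            visitor.put()
        if tasks:
            queue.delete_tasks(tasks)         # remove processed tasks from the queue

A periodic cron or Cloud Scheduler job, or a manual curl request against the /log endpoint, would then call fetch_counts() to drain the queue and refresh the top visitors table.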

Wrap-up

This “migration” consists of adding Task Queue pull task usage to support tracking visitor counts in the Module 1 baseline app and arrives at the finish line with the Module 18 app. To get hands-on experience doing it yourself, do the codelab by hand and follow along with the video. Then you’ll be ready to upgrade to Cloud Pub/Sub should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate pull tasks to Pub/Sub when porting your app to Python 3. You can continue using Task Queue in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Pub/Sub, see Module 19, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code) is available at its open source repo. While we’re initially focusing on Python users, the Cloud team will be covering other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

Machine Learning Communities: Q3 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!

TensorFlow/Keras

Load-testing TensorFlow Serving’s REST Interface

Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares the lessons and findings they learned from conducting load tests for an image classification model across numerous deployment configurations.

TFUG Taipei hosted events (Python + Hugging Face-Translation + tf.keras.losses, Python + Object detection, Python + Hugging Face-Token Classification + tf.keras.initializers) in September and helped community members learn how to use TF and Hugging Face to implement machine learning models to solve problems.

Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.

Serving a TensorFlow image classification model as RESTful and gRPC based services with TFServing, Docker, and Kubernetes

Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Actions.

Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.

Screenshot of the TensorFlow Forum in the Chinese Language run by the tf.wiki team

Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering the questions from Chinese developers on the forum.

Photo of a phone with the Hindi letter 'Ohm' drawn on the top half of the screen. Hindi Character Recognition shows the letter Ohm as the predicted result below.

Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. At a TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.

Using TFLite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet and how to import and use the YAMNet-based custom models in Android projects.

SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea) covers state-of-the-art semantic segmentation in TensorFlow with 🤗 Transformers; the SegFormer model had not previously been available in TensorFlow.

Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.

The largest vision model checkpoint (public) in TF (10 Billion params) through 🤗 transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.

A simple TensorFlow implementation of a DCGAN to generate CryptoPunks

CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.

TFX

TFX Machine Learning Pipeline from data ingestion in TFRecord to pushing out to Vertex AI

MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.

First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source). Google and other developers maintain it through bi-weekly meetings. Google’s Open Source Peer Award has recognized the work.

TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode covers Pusher: publishing your models with TFX.

Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a semantic segmentation task with TFX and various GCP products such as Vertex Pipelines, Training, and Endpoints.

JAX/Flax

Screenshot of Tutorial 2 (JAX): Introduction to JAX+Flax with GitHub Repo and Codelab via the University of Amsterdam

JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.

TFUG Malaysia hosted Introduction to JAX for Machine Learning (video) and Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use when executing deep learning workloads. After that, they started training their first JAX-powered deep learning model.

TFUG Taipei hosted Python + JAX + Image classification and helped people learn JAX and how to use it in Colab. They shared knowledge about the differences between JAX and NumPy and the advantages of JAX.

Introduction to JAX by ML GDE João Araújo (Brazil) shared the basics of JAX at Deep Learning Indaba 2022.

A comparison of the performance and overview of issues resulting from changing from NumPy to JAX

Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares performance and gives an overview of the issues that may result from changing from NumPy to JAX.

Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained pure functions and PRNG handling, which make JAX explicit and reproducible, and XLA and the mapping functions, which make JAX fast and easy to parallelize.
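
For readers new to those ideas, here is a tiny illustrative snippet (not from the talk) showing the explicit PRNG handling plus the jit and vmap transforms the summary refers to:

    import jax
    import jax.numpy as jnp

    # Randomness is explicit: the same key always yields the same numbers,
    # which is what makes JAX programs reproducible.
    key = jax.random.PRNGKey(42)
    key, subkey = jax.random.split(key)
    x = jax.random.normal(subkey, (4, 3))

    def affine(w, b, x):
        # a pure function: the output depends only on the inputs
        return jnp.dot(x, w) + b

    w, b = jnp.ones((3, 2)), jnp.zeros(2)

    fast_affine = jax.jit(affine)                        # XLA-compile for speed
    batched = jax.vmap(affine, in_axes=(None, None, 0))  # map over a batch of x

    print(fast_affine(w, b, x).shape)   # (4, 2)
    print(batched(w, b, x).shape)       # (4, 2)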

Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial demonstrating how to pre-train Data2Vec using the JAX/Flax version of HuggingFace Transformers.

Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.

Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. He also wrote several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, etc.

Kaggle

DDPMs – Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.

Forward process in DDPMs from Timestep 0 to 100

Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.

In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.

Cloud AI

How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).

ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google

In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.

TFUG Chennai hosted A tour of Vertex AI for ML, cloud, and DevOps engineers working in MLOps. The session introduced Vertex AI and covered handling datasets and models in Vertex AI, deployment & prediction, and MLOps.

TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.

Flow chart showing how to deploy a ViT B/16 model on Vertex AI

Deploying 🤗 ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.

Photo collage of AI generated images

TFUG Singapore hosted The World of Diffusion – DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks named “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up-to-date with what has been going on in the world of image generation.

ML GDE Martin Andrews (Singapore) has done three projects: GCP VM with Nvidia set-up and Convenience Scripts; Containers within a GCP host server, with Nvidia pass-through; and Installing MineRL using Containers, with linked code.

Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.

Google Cloud's Two Towers Recommender and TensorFlow

Train and Deploy Google Cloud’s Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.

Research & Ecosystem

Women in Data Science La Paz: Machine Learning paper reading club (#MLPaperReadingClubs), led by Nathaly Alarcón (@WIDS_LaPaz)

The first session of #MLPaperReadingClubs (video) by ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz. Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”

In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session on “Zero-shot learning through cross-modal transfer.”

Screenshot of a screenshare of Zero-shot learning through cross-modal transfer to 7 participants in a virtual call

ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.

Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.

In EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video), hosted by TFUG Mysuru & TFUG Chandigarh, organizer Aashi Dutt walked through exploratory data analysis on the SpaceX Falcon 9 launches dataset from Kaggle.

Screenshot of ML GDE Qinghua Duan (China) showing how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Plant disease classification using Deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) covered plant disease classification using a deep learning model: an end-to-end Android app (open source project) that diagnoses plant diseases.

TensorFlow/Keras implementation of Nystromformer

Nystromformer Github repository by Rishit Dagli provides a TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity, which allows for better scalability.

Extending support for App Engine bundled services (Module 17)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Background

App Engine initially launched in 2008, providing a suite of bundled services that made it convenient for applications to access a database (Datastore), caching service (Memcache), independent task execution (Task Queue), Google Sign-In authentication (Users), large “blob” storage (Blobstore), and other companion services. However, apps leveraging those services can run only on App Engine.

To increase app portability and help Google move towards its goal of having the most open cloud on the market, App Engine launched its 2nd-generation service in 2018, initially removing those legacy services. The newer platform allows developers to upgrade apps to the latest language runtimes, such as moving from Python 2 to 3 or Java 8 to 11 (and today, Java 17). One of the major drawbacks to the 1st-generation runtimes is that they’re customized, proprietary, and restrictive in what you can and can’t use.

Instead, the 2nd-generation platform uses open source runtimes, meaning the ability to follow standard development practices, use common/known idioms, and face fewer restrictions on 3rd-party libraries, obviating the need to copy or “vendor” them with your code. Unfortunately, to use these newer runtimes, migrating away from App Engine services was required because while you could upgrade language releases, there was no access to bundled services, breaking apps or requiring complete rewrites, making it a showstopper for many users.

Due to their popularity and the desire to ease the upgrade process for customers, the App Engine team restored access to most (but not all) of those services in Fall 2021. Today’s Serverless Migration Station video demonstrates how to continue usage of bundled services available to Python 3 developers.

Showing App Engine users how to use bundled services on Python 3

Performing the upgrade

Modernizing the typical Python 2 App Engine app looks something like this:
  1. Migrate from the webapp2 framework (not available in Python 3)
  2. Port from Python 2 to 3, preserving use of bundled services
  3. Optional migration to Cloud standalone or similar 3rd-party services

The first step is to move to a standard Python web framework like Flask, Django, Pyramid, etc. Below is some pseudocode from Migration Module 1 demonstrating how to migrate from webapp2 to Flask:

Step 1: Port the Python 2 sample app from webapp2 to Flask
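
As a rough text rendering of the migrated side, a minimal sketch of the Flask version might look like the following (the actual sample is in the Module 1 repo folder; the template name is illustrative):

    from flask import Flask, render_template, request
    from google.appengine.ext import ndb

    app = Flask(__name__)

    class Visit(ndb.Model):
        'Visit entity registers visitor IP address & timestamp'
        visitor   = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    def store_visit(remote_addr, user_agent):
        'create new Visit entity in Datastore'
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

    def fetch_visits(limit):
        'get most recent visits'
        return Visit.query().order(-Visit.timestamp).fetch(limit)

    @app.route('/')
    def root():
        'main application (Flask) handler, replacing the webapp2 handler class'
        store_visit(request.remote_addr, request.user_agent)
        visits = fetch_visits(10)
        return render_template('index.html', visits=visits)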

The key changes are bolded in the above code snippets. Notice how the App Engine NDB code [the Visit class definition plus the store_visit() and fetch_visits() functions] is unaffected by this web framework migration. The full webapp2 code sample can be found in the Module 0 repo folder while the completed migration to Flask sample is located in the Module 1 repo folder.

After your app has ported frameworks, you’re free to upgrade to Python 3 while preserving access to the bundled services if your app uses any. Below is pseudocode demonstrating how to upgrade the same sample app to Python 3 as well as the code changes needed to continue to use App Engine NDB:

Step 2: Port the sample app to Python 3, preserving use of the NDB bundled service

The original app was designed to work under both Python 2 and 3 interpreters, so no language changes were required in this case. We added an import of the new App Engine SDK followed by the key update wrapping the WSGI object so the app can access the bundled services. As before, the key updates are bolded. Some updates to configuration are also required, and those are outlined in the documentation and the (Module 17) codelab.
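
In code terms, that key update looks roughly like this sketch, using the wrap_wsgi_app() helper from the new appengine-python-standard SDK (see the Module 1b folder for the exact code):

    from flask import Flask
    from google.appengine.api import wrap_wsgi_app   # new App Engine SDK import
    from google.appengine.ext import ndb

    app = Flask(__name__)
    # Key update: wrap the WSGI object so the app can reach the bundled services.
    app.wsgi_app = wrap_wsgi_app(app.wsgi_app)

    # ...the NDB model and handlers are unchanged from the Python 2 version...

On the configuration side, app.yaml also needs the app_engine_apis: true setting, as described in the documentation and codelab.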

The NDB code is also left untouched in this migration. Not all of the bundled services feature such a hands-free migration, and we hope to cover some of the more complex ones ahead in Module 22. Java, PHP, and Go users have it even better, requiring fewer or no code changes at all. The Python 2 Flask sample is located in the Module 1 repo folder, and the resulting Python 3 app can be found in the Module 1b repo folder.

The immediate benefit of step two is the ability to upgrade to a more current version of the language runtime. This leaves the third step of migrating off the bundled services as optional, especially if you plan on staying on App Engine for the long term.

Additional options

If you decide to migrate off the bundled services, you can do so on your own timeline. It becomes a consideration if you ever want to move to modern serverless platforms such as Cloud Functions or Cloud Run, or to lower-level platforms where you want more control, like GKE (our managed Kubernetes service) or Compute Engine VMs.

Step three is also where the rest of the Serverless Migration Station content may be useful:

*code samples and codelabs available; videos forthcoming

As far as moving to modern serverless platforms, if you want to break apart a large App Engine app into multiple microservices, consider Cloud Functions. If your organization has added containerization as part of your software development workflow, consider Cloud Run. It’s suitable for apps if you’re familiar with containers and Docker, but even if you or your team don’t have that experience, Cloud Buildpacks can do the heavy lifting for you. Here are the relevant migration modules to explore:

    Wrap-up

    Early App Engine users appreciate the convenience of the platform’s bundled services, and after listening to user feedback, adding them back to 2nd-generation runtimes is another way we can help developers modernize their apps. Whether upgrading to newer language runtimes to stay on App Engine and continue to use its bundled services, migrating to Cloud standalone products, or shifting to other serverless platforms, the Google Cloud team aims to provide the tools to help streamline your modernization efforts.

    All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. Today’s video features a special guest to provide a teaser of what to expect for Java. For additional video content, check out the broader Serverless Expeditions series.

    Dev Library Letters: 14th Issue

    Posted by Garima Mehra, Program Manager

    ‘Google Dev Library letters’ is curated to bring you some of the best projects developed with Google tech that have been submitted to the Dev Library platform. We hope this brings you the inspiration you need for your next project!

    Android

    Image-compressor 
    by Vinod Baste

    Check out Vinod’s Android image compression library that helps reduce image size by 90% without losing any pixels.

    SealedX 
    by Jaewoong Eum

    Learn how to auto-generate extensive sealed classes and interfaces for Android and Kotlin.

    Flutter

    GitHub Actions to deploy Flutter Web to gh-pages
    by Sai Rajendra Immadi

    Tired of manually deploying the app every time? Or do you want to deploy your Flutter web applications to gh-pages? Use this blog as your guide.

    Double And Triple Dots in Flutter 
    by Lakshydeep Vikram

    Learn the reasons for using double and triple dots in Flutter and where to use them.

    Machine Learning

    Nystromformer 
    by Rishit Dagli

    Learn how to use the Nyström method to approximate standard self-attention.


    Google Cloud

    by Ezekias Bokove

    Learn how to set up a notification system for Cloud Run services. 

    Switch to GCP for cost savings and better performance
    by Gaurav Madan

    Learn why architects who deal with complex application design and use well-known Google services should consider the Google Cloud Platform.


    “The Google community includes people with diverse backgrounds. No matter what an individual circumstance is, the platform should support anyone to explore and be creative. We encourage authors to boldly consider diverse backgrounds and to be inclusive when authoring.”

    Vinesh Prasanna M

    Customer Engineer | Google Cloud 



    “Authoring a good code sample is hard. The difficulty comes from the additional pieces you need to add to your repository to keep the code sample fresh and appealing to your developers.”

    Brett Morgan

    Developer Relations Engineer | Flutter





    Want to read more? 
    Check out the latest projects and community-authored content by visiting Google Dev Library
    Submit your projects to showcase your work and inspire developers!

    Google Cloud & Kotlin GDE Kevin Davin helps others learn in the face of challenges

    Posted by Kevin Hernandez, Developer Relations Community Manager

    Kevin Davin speaking at the SnowCamp Conference in 2019
    Kevin Davin has always had a passion for learning and helping others learn, no matter their background or the unique challenges they may face. He explains, “I want to learn something new every day, I want to help others learn, and I’m addicted to learning.” This mantra is evident in everything he does, from giving talks at numerous conferences to helping people from underrepresented groups overcome imposter syndrome and even helping them become GDEs. In addition to learning, Kevin is also passionate about diversity and inclusion efforts, partly inspired by navigating the world with partial blindness.
    Kevin has been a professional programmer for 10 years now and has been in the field of Computer Science for about 20 years. Through the years, he has emphasized the importance of learning how and where to learn. For example, while he learned a lot while he was studying at a university, he was able to learn just as much through his colleagues. In fact, it was through his colleagues that he picked up lessons in teamwork and the ability to learn from people with different points of view and experience. Since he was able to learn so much from those around him, Kevin also wanted to pay it forward and started volunteering at a school for people with disabilities. Guided by the Departmental Centers for People with Disabilities, the aim of the program is to teach coding languages and reintegrate students into a technical profession. During his time at this center, Kevin helped students practice what they learned and ultimately successfully transition into a new career.

    During these experiences, Kevin was always involved in the developer community through open-source projects. It was through these projects that he learned about the GDE program and was connected to Google Developer advocates. Kevin was drawn to the GDE program because he wanted to share his knowledge with others and have direct access to Google in order to become an advocate on behalf of developers. In 2016, he discovered Kubernetes and helped his company at the time move to Google Cloud. He always felt like this model was the right solution and invested a lot of time to learn it and practice it. “Google Cloud is made for developers. It’s like a Lego set because you can take the parts you want and put it together,” he remarked.

    The GDE program has given him access to the things he values most: being a part of a developer community, being an advocate for developers, helping people from all backgrounds feel included, and above all, an opportunity to learn something new every day. Kevin’s parting advice for hopeful GDEs is: “Even if you can’t reach the goal of being a GDE now, you can always get accepted in the future. Don’t be afraid to fail because without failure, you won’t learn anything.” With his involvement in the program, Kevin hopes to continue connecting with the developer community and learning while supporting diversity efforts.

    Learn more about Kevin on Twitter & LinkedIn.

    The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

    Migrating from App Engine Blobstore to Cloud Storage (Module 16)

    Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

    Introduction and background

    The most recent Serverless Migration Station video demonstrated how to add use of App Engine’s Blobstore service to a sample Python 2 App Engine app, kicking off the first of a 2-part series on migrating away from Blobstore. In today’s Module 16 video, we complete this journey, arriving at Cloud Storage. Moving away from proprietary App Engine services like Blobstore makes apps more portable, giving them the flexibility to run beyond App Engine, whether on sister serverless platforms or elsewhere.

    Showing App Engine users how to migrate to Cloud Storage

    As described previously, Blobstore for Python 2 depends on webapp, which made the Module 15 content more straightforward to implement while the app was still using webapp2. To completely modernize this app here in Module 16, the following migrations should be carried out:

    • Migrate from webapp2 (and webapp) to Flask
    • Migrate from App Engine NDB to Cloud NDB
    • Migrate from App Engine Blobstore to Cloud Storage
    • Migrate from Python 2 to Python (2 and) 3

    Performing the migrations

    Prior to modifying the application code, a variety of configuration updates need to be made. Updates applying only to Python 2 feature a “Py2” designation while those migrating to Python 3 will see “Py3” annotations.

    1. Remove the built-in Jinja2 library from app.yaml—Jinja2 already comes with Flask, so remove use of the older built-in version which may possibly conflict with the contemporary Flask version you’re using. (Py2)
    2. Use of Cloud client libraries (such as those for Cloud NDB and Cloud Storage) requires a pair of built-in libraries, grpcio and setuptools, so add those to app.yaml (Py2)
    3. Remove everything in app.yaml except for a valid runtime (Py3)
    4. Add Cloud NDB and Cloud Storage client libraries to requirements.txt (Py2 & Py3)
    5. Create an appengine_config.py supporting both built-in (those in app.yaml) and non built-in (those in requirements.txt) libraries used (Py2)
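
    For the last step, the documented pattern for appengine_config.py looks roughly like this sketch (Py2 only), assuming pip-installed libraries live in a local lib folder:

        import pkg_resources
        from google.appengine.ext import vendor

        path = 'lib'                               # folder holding pip-installed libraries
        vendor.add(path)                           # add it to the import path
        pkg_resources.working_set.add_entry(path)  # let built-ins like grpcio/setuptools find it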

    The Module 15 app already migrated away from webapp2's (Django) templating system to Jinja2. This is useful when migrating to Flask because Jinja2 is Flask's default template system. Switching from App Engine NDB to Cloud NDB is fairly straightforward as the latter was designed to be mostly compatible with the original. The only change visible in this sample app is to move Datastore calls into Python with blocks.
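
    That change looks roughly like this sketch: the import moves from google.appengine.ext.ndb to google.cloud.ndb, and each group of Datastore calls runs inside a client context:

        from google.cloud import ndb

        client = ndb.Client()

        class Visit(ndb.Model):
            'same model fields as before'
            visitor   = ndb.StringProperty()
            timestamp = ndb.DateTimeProperty(auto_now_add=True)

        def fetch_visits(limit):
            'get most recent visits, now wrapped in a "with" block'
            with client.context():
                return Visit.query().order(-Visit.timestamp).fetch(limit)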

    The most significant changes occur when moving the upload and download handlers from webapp to Cloud Storage. The video and corresponding codelab go more in-depth into the necessary changes, but in summary, these are the updates required in the main application:

    1. webapp2 is replaced by Flask. Instead of using the older built-in version of Jinja2, use the version that comes with Flask.
    2. App Engine Blobstore and NDB are replaced by Cloud NDB and Cloud Storage, respectively.
    3. The webapp Blobstore handler functionality is replaced by a combination of the io standard library module plus components from Flask and Werkzeug. Furthermore, the handler classes and methods are replaced by Flask functions.
    4. The main handler class and corresponding GET and POST methods are all replaced by a single Flask function.
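
    Putting items 2-4 together, the new handlers might look roughly like this sketch (not the exact Module 16 code; the bucket name and form field name are placeholders):

        import io
        from flask import Flask, request, send_file
        from google.cloud import storage
        from werkzeug.utils import secure_filename

        app = Flask(__name__)
        gcs_client = storage.Client()
        BUCKET = 'my-project.appspot.com'   # placeholder bucket name

        @app.route('/upload', methods=['POST'])
        def upload():
            'single Flask function replacing the webapp Blobstore upload handler'
            uploaded = request.files['file']              # placeholder form field name
            fname = secure_filename(uploaded.filename)
            blob = gcs_client.bucket(BUCKET).blob(fname)
            blob.upload_from_file(uploaded)               # stream the upload to Cloud Storage
            return 'uploaded %r' % fname

        @app.route('/view/<fname>')
        def view(fname):
            'single Flask function replacing the webapp Blobstore download handler'
            blob = gcs_client.bucket(BUCKET).blob(secure_filename(fname))
            contents = blob.download_as_bytes()           # fetch the object contents
            return send_file(io.BytesIO(contents), mimetype='application/octet-stream')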

    The results

    With all the changes implemented, the original Module 15 app still operates identically in Module 16, starting with a form requesting a visit artifact followed by the most recent visits page:
    The sample app’s artifact prompt page


    The sample app’s most recent visits page.

    The only difference is that four migrations have been completed, and all of the “infrastructure” is now handled by services other than the App Engine legacy ones. Furthermore, the Module 16 app could be either a Python 2 or 3 app. As far as the end-user is concerned, “nothing happened.”

    Migrating sample app from App Engine Blobstore to Cloud Storage

    Wrap-up

    Module 16 featured four different migrations, modernizing the Module 15 app from using App Engine legacy services like NDB and Blobstore to Cloud NDB and Cloud Storage, respectively. While we recommend users move to the latest offerings from Google Cloud, migrating from Blobstore to Cloud Storage isn’t required, and should you opt to do so, you can do it on your own timeline. In addition to today’s video, be sure to check out the Module 16 codelab which leads you step-by-step through the migrations discussed.

    In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate to Cloud Storage when porting your app to Python 3. You can continue using Blobstore in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

    If you’re using other App Engine legacy services be sure to check out the other Migration Modules in this series. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

    Google Dev Library Letters — 12th Issue

    Posted by Garima Mehra, Program Manager

    ‘Google Dev Library Letters’ is curated to bring you some of the latest projects developed with Google tech that have been submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!


    Android

    Shape your Image: Circle, Rounded Square, or Cuts at the corner in Android by Sriyank Siddhartha

    Using the MDC library, shape images in just a few lines of code by using ShapeableImageView.

    Foso/Ktorfit by Jens Klingenberg

    HTTP client / Kotlin Symbol Processor for Kotlin Multiplatform (Js, Jvm, Android, Native, iOS) using KSP and Ktor clients inspired by Retrofit.

    Machine Learning Communities: Q2 ‘22 highlights and achievements

    Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

    Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!

    TensorFlow/Keras

    TFUG Agadir hosted #MLReady phase as a part of #30DaysOfML. #MLReady aimed to prepare the attendees with the knowledge required to understand the different types of problems which deep learning can solve, and helped attendees be prepared for the TensorFlow Certificate.

    TFUG Taipei hosted the basic Python and TensorFlow courses named From Python to TensorFlow. The aim of these events is to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.

    TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included Volume Rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. In the event, ML GDE Aritra Roy Gosthipaty (India) had a talk focusing on breaking the concepts of the academic paper, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis into simpler and more ingestible snippets.

    TFUG Turkey, GDG Edirne and GDG Mersin organized TensorFlow Bootcamp 22, and ML GDE M. Yusuf Sarıgöz (Turkey) participated as a speaker with the talk TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow, how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.

    TFUG Mumbai hosted the June Meetup and 110 folks gathered. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared knowledge through sessions. ML workshops for beginners were also held, where participants built machine learning models without writing a single line of code.

    ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

    ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.

    ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in 5 steps how to verify that the model is focusing on the right tokens to classify text. He also wrote how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.

    ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

    ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)

    ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provide tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)

    JAX/Flax

    ML GDE Aakash Nain (India) gave a special talk, Introduction to JAX, for ML GDEs, TFUG organizers and ML community network organizers. He covered the fundamentals of JAX/Flax so that more people will try out JAX in the near future.

    ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.

    ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the differences between JAX and TensorFlow.

    ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

    ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.

    Kaggle

    ML GDEs’ Kaggle notebooks were announced as the winner of Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations; Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

    ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and the co-author talked about how to compete on Kaggle.

    ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up, covering the bias-variance tradeoff, validation sets, and cross-validation approaches. In the second post of the series, he showed more techniques using analogies and case studies.

    TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the interested participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

    TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR – Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

    TFX

    ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and Github actions, with two promising ways – FastAPI (for REST) and TF Serving (for gRPC).

    ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

    ML GDE Jun Jiang (China) had a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build the search and recommendation system & scientific research platform for large-scale machine learning training.

    ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally, how to move a TFX pipeline from a local environment to Vertex AI, and provided code samples to adapt and get started with TFX.

    TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put the models into production. They talked about the data structures in TFX and implementation of the first pipeline in TFX: ingesting and validating data.

    TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts and had a look at how TFX can be used for designing and deploying ML pipelines.

    Cloud AI

    ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in the MLOps system.

    He also shared the Github repository, Continuous Adaptation with VertexAI’s AutoML and Pipeline. It contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

    TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch had a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

    Research

    ML GDE Qinghua Duan (China) released a video which introduces Google’s latest 540 billion parameter model. He introduced the paper PaLM, and described the basic training process and innovations.

    ML GDE Rumei LI (China) wrote blog postings reviewing papers, DeepMind’s Flamingo and Google’s PaLM.

    The Google Cloud Startup Summit is coming on June 2, 2022

    Posted by Chris Curtis, Startup Marketing Manager at Google Cloud

    We’re excited to announce our annual Google Cloud Startup Summit will be taking place on June 2nd, 2022.

    We hope you will join us as we bring together our startup & VC communities to dive into topics relevant to startups and enjoy sessions such as:

    The future of web3

    • Hear from Google Cloud CEO, Thomas Kurian and Dapper Labs Co-founder and CEO, Roham Gharegozlou, as they discuss web3 and how startups can prepare for the paradigm changes it brings.

    VC AMA: Startup Summit Edition

    • Join us for a very special edition of the VC AMA series where we’ll have a discussion with Derek Zanutto from CapitalG, Alison Lange Engel from Greycroft and Matt Turck from FirstMark to discuss investment trends and advice for founders around cloud, data, and the future of disruption in legacy industries.

    What’s new for the Google for Startups Cloud Program

    • Exciting announcements from Ryan Kiskis, Director of the Startup Ecosystem at Google Cloud, on how Google Cloud is investing in the startup ecosystem with tailored programs and offers.

    Technical leaders & business sessions

    • Insights from top startups Discord, Swit, and Streak on how their tech stacks helped propel their growth.

    Additionally, startups will have an opportunity to join ‘Ask me Anything’ live sessions after the event to interact with Google Cloud startup experts and technical teams to discuss questions that may come up throughout the event.

    You can see the full agenda here to get more details on the sessions.

    We can’t wait to see you at the Google Cloud Startup Summit. Register to secure your spot today.

    Machine Learning Communities: Q1 ‘22 highlights and achievements

    Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

    Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the first quarter of the year! We are enthusiastic about and grateful for all the activities that the communities across the globe do. Here are the highlights!

    ML Ecosystem Campaign Highlights

    ML Olympiad is a set of associated Kaggle Community Competitions hosted by Machine Learning Google Developer Experts (ML GDEs) or TensorFlow User Groups (TFUGs) and sponsored by Google. The first round ran from January to March, with competitions aimed at solving critical problems of our time. Competition highlights include Autism Prediction Challenge, Arabic_Poems, Hausa Sentiment Analysis, Quality Education, Good Health and Well Being. Thank you TFUG Saudi, New York, Guatemala, São Paulo, Pune, Mysuru, Chennai, Bauchi, Casablanca, Agadir, Ibadan, Abidjan, Malaysia and ML GDE Ruqiya Bin Safi, Vinicius Fernandes Caridá, Yogesh Kulkarni, Mohammed buallay, Sayed Ali Alkamel, Yannick Serge Obam, Elyes Manai, Thierno Ibrahima DIOP, and Poo Kuan Hoong for hosting ML Olympiad!

    Highlights and Achievements of ML Communities

    TFUG organizer Ali Mustufa Shaikh (TFUG Mumbai) and Rishit Dagli won the TensorFlow Community Spotlight award (paper and code). This project was supported by Google Cloud credits.

    ML GDE Sachin Kumar (Qatar) posted Build a retail virtual agent from scratch with Dialogflow CX – Ultimate Chatbot Tutorials. In this tutorial, you will learn how to build a chatbot and voice bot from scratch using Dialogflow CX, a Conversational AI Platform (CAIP) for building conversational UIs.

    ML GDE Ngoc Ba (Vietnam) posted MTet: Multi-domain Translation for English and Vietnamese. This project is about how to collect high quality data and train a state-of-the-art neural machine translation model for Vietnamese. And it utilized Google Cloud TPU, Cloud Storage and related GCP products for faster training.

    Kaggle announced the Google Open Source Prize early this year (Winners announcement page). In January, ML GDE Aakash Kumar Nain (India)’s Building models in JAX – Part1 (Stax) was awarded.

    In February, ML GDE Victor Dibia (USA)’s notebook Signature Image Cleaning with Tensorflow 2.0 and ML GDE Sayak Paul (India) & Soumik Rakshit’s notebook gaugan-keras were awarded.

    TFUG organizer Usha Rengaraju posted Variable Selection Networks (AI for Climate Change) and Probabilistic Bayesian Neural Networks using TensorFlow Probability notebooks on Kaggle. They both got gold medals, and she has become a Triple GrandMaster!

    TFUG Chennai hosted two events, Transformers – A Journey into attention and Intro to Deep Reinforcement Learning. Both events were planned for beginners and included introductory sessions explaining the transformers research papers and the basic concepts of reinforcement learning.

    ML GDE Margaret Maynard-Reid (USA), Nived P A, and Joel Shor posted Our Summer of Code Project on TF-GAN. This article describes enhancements made to the TensorFlow GAN library (TF-GAN) last summer.

    ML GDE Aakash Nain (India) released a series of tutorials about building models in JAX. In the second tutorial, Aakash uses one of the most famous and most widely used high-level libraries for JAX to build a classifier. In the notebook, you will be taking a deep dive into Flax, too.

    ML GDE Bhavesh Bhatt (India) built a model for braille to audio with 95% accuracy. He created a model that translates braille to text and audio, lending a helping hand to people with visual disabilities.

    ML GDE Sayak Paul (India) recently wrote Publishing ConvNeXt Models on TensorFlow Hub. The contribution includes 30 versions of the model, ready for inference and transfer learning, with documentation and sample code. He also posted First Steps in GSoC to encourage fellow ML GDEs’ participation in Google Summer of Code (GSoC).

    ML GDE Merve Noyan (Turkey) trained 40 models on keras.io/examples and built demos for them with Streamlit and Gradio; those are currently hosted here. She also held workshops entitled NLP workshop with TensorFlow for TFUG Delhi, TFUG Chennai, TFUG Hyderabad and TFUG Casablanca, covering basic to advanced topics in NLP, from Transformers through to model hosting on Hugging Face, using TFX and TF Serving.

    How is Dev Library useful to the open-source community?

    Posted by Ankita Tripathi, Community Manager (Dev Library)


    Witnessing a plethora of open-source enthusiasts in the developer ecosystem in recent years gave birth to the idea of Google’s Dev Library. The inception of the platform happened in June 2021 with the only objective of giving visibility to developers who have been creating and building projects relentlessly using Google technologies. But why the Dev Library?

    Why Dev Library?

    Open-source communities are currently booming. The past 3 years have seen a surge of folks constantly building in public, talking about open-source contributions, digging into opportunities, and carving out a valuable portfolio for themselves. The idea behind the Dev Library as a whole was also to capture these open-source projects and leverage them for the benefit of other developers.

    This platform acted as a gold mine for projects created using Google technologies (Android, Angular, Flutter, Firebase, Machine Learning, Google Assistant, Google Cloud).

    With the platform, we also catered to the burning issue – creating a central place for the huge number of projects and articles scattered across various platforms. Therefore, the Dev Library became a one-source platform for all the open source projects and articles for Google technologies.

    How can you use the Dev Library?

    “It is a library full of quality projects and articles.”

    Dev Library is not meant to be construed merely as another place for developers to publish blog posts or projects; the vision is bigger than being a platform for the display of content. It envisages the growth of developers along with tech content creation. The uniqueness of the platform lies in the curation of its submissions. Unlike other platforms, you don’t get your submitted work on the site by just clicking ‘Submit’. Behind the scenes, Dev Library has internal Google engineers for each product area who:

    • thoroughly assess each submission,
    • check for relevancy, freshness, and quality,
    • approve the ones that pass the check, and reject the others with a note.

    It is a painstaking process, and Dev Library requires a 4-6 week turnaround time to complete the entire curation procedure and get your work on the site.

    What we aim to do with the platform:

    • Provide visibility: Developers create open-source projects and write articles on platforms to bring visibility to their work and attract more contributions. Dev Library’s intention is to continue to provide this amplification for the efforts and time spent by external contributors.
    • Kickstart a beginner’s open-source contribution journey: The biggest challenge for a beginner applying their learnings to build Android or Flutter applications is ‘Where do I start my contributions?’ While we see an open-source placard unfurled everywhere, beginners still struggle to find their right place. With the Dev Library, you get a stack of quality projects hand-picked for you, keeping the freshness of the tech and content quality intact. For example, Tomas Trajan, a Dev Library contributor, created an Angular material starter project where they have ‘good first issues’ to start your contributions with.
    • Recognition: Having your content selected for the Dev Library acts as recognition of the long hours you’ve put in to build a running open-source project and explain it well. Dev Library also delivers hero content in its monthly newsletter, features top contributors, and is in the process of gamifying developer efforts. As an example, one of our contributors created a Weather application using Android and added a badge ‘Part of Dev Library’.

      With your contributions at one place under the Author page, you can use it as a portfolio for your work while simultaneously increasing your chances to become the next Google Developer Expert (GDE).

    Features on the platform

    Keeping developers in mind, we’ve updated features on the platform as follows:

    • Added a new product category, Google Assistant – all Google Assistant and Smart Home projects now have a designated category on the Dev Library.
    • Integrated a new way to make submissions across product areas via the Advocu form.
    • Introduced a special section to submit Cloud Champion articles on Google Cloud.
    • Included displays on each Author page indicating the expertise of individual contributors.
    • Upcoming: An expertise filter to help you segment out content based on Beginner, Intermediate, or Expert levels.

    To submit your ideas or suggestions, refer to this form.

    Contributor Love

    Dev Library as a platform is more about the contributors who lie on the cusp of creation and consumption of the available content. Here are some contributors who have utilized the platform in their own way, and how the Dev Library has helped them along their journey:

    Roaa Khaddam: Roaa is a Senior Flutter Mobile Developer and Co-Founder at MultiCaret Inc.

    How has the Dev Library helped you?

    “It gave me the opportunity to share what I created with an incredible community and look at the projects my fellow Flutter mates have created. It acts as a great learning resource.”

    Somkiat Khitwongwattana: Somkiat is an Android GDE and a consistent user of Android technology from Thailand.

    How has the Dev Library helped you?

    “I used to discover new open source libraries and helpful articles for Android development in many places and it took me longer than necessary. But the Dev Library allows me to explore these useful resources in one place.”

    Kevin Kreuzer: Kevin is an Angular developer and contributes to the community in various ways.

    How has the Dev Library helped you?

    “Dev Library is a great tool to find excellent Angular articles or open source projects. Dev Library offers a great filtering function and therefore makes it much easier to find the right open source library for your use case.”

    What started as a platform to highlight and showcase some open-source projects has grown into a product where developers can share their learnings, inspire others, and contribute to the ecosystem at large.

    Do you have an Open Source learning or project in the form of a blog or GitHub repo you’d like to share? Please submit it to the Dev Library platform. We’d love to add you to our ever growing list of developer contributors!

    Year in review: the Google Workspace Platform 2021

    Posted by Charles Maxson, Developer Advocate

    In 2021, we saw many changes and improvements to the Google Workspace Platform geared at helping developers build new solutions to keep up with the challenges of how we worked, like hybrid and fully remote office work. More than ever, we needed tools for virtual collaboration and digital processes to keep our work going. As paper processes in the office became less viable and we continued to see digital transformation become necessary, many new custom solutions like desk reservation systems and automated test logging evolved.

    2021 was also a year of Platform milestones: Google Workspace grew to more than 3 billion users globally, we reached more than 5,300 public apps in the Google Workspace Marketplace, and we crossed over 4.8 billion apps installed (up from 1 billion in 2020)! We were also busy bringing Platform innovations and improving our developer experience to make building for Google Workspace easier and faster. Here’s a look at some of the key enhancements the Google Workspace Platform brought to the developer community.

    Google Cloud Champion Innovators program

    Community building is one of the most effective ways to support developers, which is why we created Google Cloud Innovators. This new community program was designed for developers and technical practitioners using Google Cloud, and we welcome everyone.

    And when we say everyone, it’s not just professional developers, data scientists, or student developers and hobbyists, we also mean non-technical end users. The growing Google community has something for everyone.

    GWAO Alternate Runtimes goes GA

    Google Workspace Add-ons are customized applications that tightly integrate with Google Workspace applications, and can be found in the Google Workspace Marketplace or built specifically for your own domain. The development of these applications was limited to using Apps Script, our native scripting language for the Google Workspace Platform. With the launch of Alternate Runtimes you can now develop add-ons with your preferred hosting infrastructure, development tool chain, source control system, coding language, and code libraries; it was a highly requested update from the developer community, opening up the Platform to many new developer scenarios.

    Card Builder UI Application

    The GWAO Card Builder tool allows you to visually design the user interfaces for your Google Workspace Add-ons and Google Chat apps projects. It is a must-have for Google Workspace developers using either Apps Script or Alternate Runtimes, enabling you to prototype and design Card UIs super fast without the hassle and errors of hand coding JSON or Apps Script on your own.

    Card Builder tool for building Google Workspace Add-ons and Chat Apps

    Recommended for Google Workspace

    This program showcases a selection of market-leading applications built by software vendors across a wide range of categories, including project management, customer support, and finance in our Google Workspace Marketplace. These apps undergo rigorous usability and security testing to make sure they meet our requirements for high quality integrations. They must also have an exemplary track record of user satisfaction, reliability, and privacy.

    Recommended for Google Workspace program showcases high quality applications

    Chat Slash Commands and Dialogs

    Slash commands simplify the way users interact with your Chat bot, offering them a visually guided way to discover and execute your bot’s primary features. As a developer, slash commands are straightforward to implement, and essential in offering a better bot experience. In addition to slash commands, Dialogs were a new capability introduced to the Chat app framework that allows developers to build user interfaces to capture inputs and parameters in a structured, reliable way. This was a tremendous step forward for bot usability because it simplified and streamlined the process of users interacting with bot commands. Now with dialogs, users can be led visually to supply inputs via prompts, versus having to rely on wrapping bot commands with natural language inputs.
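
    As an illustration only (not code from the launch), a Python webhook that recognizes a slash command and opens a dialog might look roughly like the sketch below; the command ID, the /poll name, and the dialog card fields are assumptions, so check the current Chat API reference for the exact payloads:

        from flask import Flask, request, jsonify

        app = Flask(__name__)

        @app.route('/', methods=['POST'])
        def on_chat_event():
            event = request.get_json()
            message = event.get('message', {})
            slash = message.get('slashCommand')               # present when a slash command was used
            if slash and str(slash.get('commandId')) == '1':  # e.g. a hypothetical /poll command
                # A command configured to open a dialog replies with an actionResponse
                # of type DIALOG carrying the card to render (structure abridged here).
                return jsonify({
                    'actionResponse': {
                        'type': 'DIALOG',
                        'dialogAction': {'dialog': {'body': {'sections': [{'widgets': [
                            {'textInput': {'label': 'Question', 'name': 'question'}}
                        ]}]}}}
                    }
                })
            return jsonify({'text': 'Try the /poll slash command.'})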

    Forms API beta

    Google Forms enables easy creation and distribution of forms, surveys, and quizzes. Forms is used for a wide variety of use cases across business operations, customer management, event planning and logistics, education, and more. With the Google Forms API Beta announcement, developers gained programmatic access for managing forms and acting on responses, empowering them to build powerful integrations on top of Forms.
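
    For instance, a minimal sketch of that programmatic access with the Google API Python client might look like the following, assuming you already hold OAuth credentials in creds with the appropriate Forms scopes:

        from googleapiclient.discovery import build

        def forms_demo(creds):
            service = build('forms', 'v1', credentials=creds)

            # create a new form programmatically
            form = service.forms().create(
                body={'info': {'title': 'Event feedback'}}).execute()
            form_id = form['formId']

            # read the responses submitted so far and act on them
            responses = service.forms().responses().list(formId=form_id).execute()
            return form_id, responses.get('responses', [])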

    Google Workspace Marketplace updates

    We made many updates to the Google Workspace Marketplace to improve both the user and developer experience. We added updates to the application detail page that included pricing and when the listing was last updated. The homepage also saw improvements with various curated categories by the Google team under Editor’s Choice. Finally, we launched the marketplace badges for developers to promote their published applications on websites and marketing channels. Oh, and we also had a logo update if you hadn’t noticed.

    Google Workspace Marketplace Badges for application promotion

    Farewell 2021 and here’s to welcoming in 2022

    2021 brought us many innovations to the Google Workspace Platform to help developers address the needs of their users and it also brought more empowerment to knowledge workers to build the solutions they needed with our no-code and low-code platforms. These are just the highlights for the Google Workspace Platform and we look forward to more innovation in 2022. To keep up with all the news about the Platform, please subscribe to our newsletter.