Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we’re introducing the first video showing long-time App Engine developers how to migrate from the App Engine ndb client library to Cloud NDB for Datastore access. While the legacy App Engine ndb service is still available, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and give them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding “migration module” video, we transitioned that app from App Engine’s original webapp2 framework to Flask, a popular framework in the Python community. Today’s Module 2 content picks up where that Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine’s Datastore matured to become its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed to let App Engine ndb users preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second-generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement for Datastore access, much as you would when opening files. However, aside from moving code inside with blocks, no other changes are required of the original App Engine ndb code that accesses Datastore. Of course, “YMMV” (your mileage may vary) depending on the complexity of your code, but the team’s goal is to provide as seamless a transition as possible and to preserve “ndb”-style access.

The “diffs” between the App Engine ndb and Cloud NDB versions of the sample app
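
To illustrate, here’s a minimal sketch (not the full sample app) of what NDB-style access looks like after the switch: the model and query code carry over unchanged, and the new client context wraps the Datastore calls.

# Cloud NDB sketch: the ndb package now comes from google.cloud,
# and Datastore access runs inside a client context.
from google.cloud import ndb

client = ndb.Client()

class Visit(ndb.Model):
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def fetch_visits(limit):
    with client.context():  # the context manager described above
        return Visit.query().order(-Visit.timestamp).fetch(limit)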

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample “STARTs” with the Module 1 code completed in the previous codelab (and video). You can use your own solution or grab ours in the Module 1 repo folder. The goal is to arrive at an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this “FINISH” code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. Bonus content on migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH: the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8, so stay tuned! Developers should also check out the official Cloud NDB migration guide, which provides more migration details, including key differences between the two client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can’t wait, try out its codelab found in the table at the repo above. Migrations aren’t always easy; we hope this content helps you modernize your apps and shows we’re focused on helping existing users as much as new ones.

Migrating from App Engine webapp2 to Flask

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


Migrating web framework

The Google Cloud team recently introduced a series of codelabs (free, self-paced, hands-on tutorials) and corresponding videos designed to help users on one of our serverless compute platforms modernize their apps, with an initial focus on our earliest users running their apps on Google App Engine. We kick off this content by showing users how to migrate from App Engine’s webapp2 web framework to Flask, a popular framework in the Python community.

While users have always been able to use other frameworks with App Engine, webapp2 comes bundled with App Engine, making it the default choice for many developers. One new requirement of App Engine’s next-generation platform (which launched in 2018) is that web frameworks must do their own routing, which, unfortunately, means that webapp2 is no longer supported, so here we are. The good news is that, as a result, modern App Engine is more flexible, lets users develop in a more idiomatic fashion, and makes their apps more portable.

For example, while webapp2 apps can run on App Engine, Flask apps can run on App Engine, your servers, your data centers, or even other clouds! Furthermore, Flask has more users, more published resources, and is better supported. If Flask isn’t right for you, you can select from other WSGI-compliant frameworks such as Django and Pyramid.

Video and codelab content

In this “Module 1” episode of Serverless Migration Station (part of the Serverless Expeditions series), Google engineer Martin Omander and I explore this migration and walk developers through it step by step.

In the previous video, we introduced developers to the baseline Python 2 App Engine NDB webapp2 sample app that we’re taking through each of the migrations. In the video above, users see that the majority of the changes are in the main application handler, MainHandler:

The “diffs” between the webapp2 and Flask versions of the sample app
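
If you’re reading along without the video, here’s a condensed sketch of that change, drawn from the sample app’s own code (the codelab walks through the full diff): the webapp2 handler class becomes a plain function registered with a Flask route decorator.

# Before (webapp2): routing and requests via a RequestHandler class
class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(10)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

# After (Flask): a plain function registered with a route decorator
# (assumes 'from flask import Flask, render_template, request')
@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(10)
    return render_template('index.html', visits=visits)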

Upon (re)deploying the app, users should see no visible changes to the output from the original version:

VisitMe application sample output

Next steps

Today’s video picks up from where we left off: the Python 2 baseline app in its Module 0 repo folder. We call this the “START”. By the time the migration is complete, the resulting source code, called “FINISH”, can be found in the Module 1 repo folder. If you mess up partway through, you can rewind to the START, or compare your solution with ours, the FINISH. We also hope to one day provide a Python 3 version as well as cover other legacy runtimes like Java 8, PHP 5, and Go 1.11 and earlier, so stay tuned!

All of the migration learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. The next video (Module 2) will cover migrating from App Engine’s ndb library for Datastore to Cloud NDB. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Introducing “Serverless Migration Station” Learning Modules

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


Helping users modernize their serverless apps

Earlier this year, the Google Cloud team introduced a series of codelabs (free, online, self-paced, hands-on tutorials) designed for technical practitioners modernizing their serverless applications. Today, we’re excited to announce companion videos, forming a set of “learning modules” made up of these videos and their corresponding codelab tutorials. Modernizing your applications allows you to access continuing product innovation and experience a more open Google Cloud. The initial content is designed with App Engine developers, our earliest users, in mind, to help you take advantage of the latest features in Google Cloud. Here are some of the key migrations and why they benefit you:

  • Migrate to Cloud NDB: App Engine’s legacy ndb library used to access Datastore is tied to Python 2 (which has been sunset by its community). Cloud NDB gives developers the same NDB-style Datastore access but is Python 2-3 compatible and allows Datastore to be used outside of App Engine.
  • Migrate to Cloud Run: There has been a continuing shift towards containerization, an app modernization process making apps more portable and deployments more easily reproducible. If you appreciate App Engine’s easy deployment and autoscaling capabilities, you can get the same by containerizing your App Engine apps for Cloud Run.
  • Migrate to Cloud Tasks: While the legacy App Engine taskqueue service is still available, new features and continuing innovation are going into Cloud Tasks, its standalone equivalent that lets users create and execute App Engine and non-App Engine tasks (see the sketch after this list).
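
As a taste of that last bullet, here’s a minimal sketch of how enqueuing a task changes; the project, region, queue, and worker URL below are placeholders, and the sketch assumes a recent google-cloud-tasks release (the codelab covers the real steps):

# Before (bundled App Engine service):
#   from google.appengine.api import taskqueue
#   taskqueue.add(url='/worker')

# After (standalone Cloud Tasks client library; assumes a recent
# google-cloud-tasks release and placeholder project/region/queue):
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path('my-project', 'us-central1', 'default')
task = {
    'app_engine_http_request': {
        'http_method': tasks_v2.HttpMethod.POST,
        'relative_uri': '/worker',
    }
}
client.create_task(parent=parent, task=task)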

The “Serverless Migration Station” videos are part of the long-running Serverless Expeditions series you may already be familiar with. In each video, Google engineer Martin Omander and I explore a variety of modernization techniques. Viewers get an overview of the task at hand, then a deeper-dive screencast takes a closer look at the code and configuration files and, most importantly, illustrates the migration steps necessary to transform the same sample app across each migration.

Sample app

The baseline sample app is a simple Python 2 App Engine NDB and webapp2 application. It registers every web page visit (saving the visiting IP address and browser/client type) and displays the most recent visits. The entire application is shown below, featuring Visit as the data Kind, the store_visit() and fetch_visits() functions, and the main application handler, MainHandler.


import os
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.webapp import template

class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit entity in Datastore'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

def fetch_visits(limit):
    'get most recent visits'
    return (v.to_dict() for v in Visit.query().order(
        -Visit.timestamp).fetch(limit))

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(10)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

app = webapp2.WSGIApplication([
    ('/', MainHandler),
], debug=True)

Baseline sample application code

Upon deploying this application to App Engine, users will get output similar to the following:

VisitMe application sample output

This application is the subject of today’s launch video, and the main.py file above along with other application and configuration files can be found in the Module 0 repo folder.

Next steps

Each migration learning module covers one modernization technique. A video outlines the migration while the codelab leads developers through it. Developers always get a starting codebase (“START”) and learn how to do a specific migration, resulting in a completed codebase (“FINISH”). Developers can hit the reset button (back to START) if something goes wrong, or compare their solutions to ours (FINISH). The hands-on experience helps users build muscle memory for when they’re ready to do their own migrations.

All of the migration learning modules, corresponding Serverless Migration Station videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. While there’s an initial focus on Python 2 and App Engine, you’ll also find content for Python 3 users as well as non-App Engine users. We’re looking into similar content for other legacy languages as well, so stay tuned. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Machine Learning GDEs: Q1 2021 highlights, projects and achievements

Posted by HyeJung Lee and MJ You, Google ML Ecosystem Community Managers. Reviewed by Soonson Kwon, Developer Relations Program Manager.

Google Developers Experts is a community of passionate developers who love to share their knowledge with others. Many of them specialize in Machine Learning (ML). Despite many unexpected changes over the last several months and reduced opportunities for in-person activities during the ongoing pandemic, their enthusiasm has not waned.

Here are some highlights of the ML GDEs’ hard work during Q1 2021, which contributed to the global ML ecosystem.

ML GDE YouTube channel


With the initiative and leadership of US-based GDE Margaret Maynard-Reid, we launched the ML GDEs YouTube channel. It’s a great way for GDEs to reach global audiences, collaborate as a community, create unique content, and promote each other’s work. It will contain all kinds of ML-related topics: talks on technical topics, tutorials, interviews with other (ML) GDEs, Googlers, or anyone in the ML community, and more. Many videos have already been uploaded, including ML GDE intros from all over the world, tips for TensorFlow & GCP certification, and how to use Google Cloud Platform. Subscribe to the channel now!

TensorFlow Everywhere


17 ML GDEs presented at TensorFlow Everywhere (a global community-led event series for TensorFlow and Machine Learning enthusiasts and developers around the world), hosted by local TensorFlow user groups. You can watch the recorded sessions in the TensorFlow Everywhere playlist on the ML GDE YouTube channel. Most of the sessions cover new features in TensorFlow.

International Women’s Day

Many ML GDEs participated in activities to celebrate International Women’s Day (March 8th). GDE Ruqiya Bin Safi (based in Saudi Arabia) cooperated with WTM Saudi Arabia to organize “Socialthon” social development hackathons and gave a talk, “Successful Experiences in Social Development”, which reached 77K live viewers and hit 10K replays. India-based GDE Charmi Chokshi participated in GirlScript’s International Women’s Day event and gave a talk: “Women In Tech and How we can help the underrepresented in the challenging world”. If you’re looking for more inspiring materials, check out the “Women in AI” playlist on our ML GDE YouTube channel!

Mentoring

ML GDEs are also very active in mentoring community developers, students in the Google Developer Student Clubs, and startups in the Google for Startups Accelerator program. Among many, GDE Arnaldo Gualberto (Brazil) conducted mentorship sessions for startups in the Google Fast Track program, discussing how to solve challenges using Machine Learning/Deep Learning with TensorFlow.

TensorFlow


Meanwhile in Europe, GDEs Alexia Audevart (based in France) and Luca Massaron (based in Italy) released “Machine Learning using TensorFlow Cookbook”. It provides simple and effective ideas for successfully using TensorFlow 2.x in computer vision, NLP, and tabular data projects. Additionally, Luca published the second edition of the “Machine Learning For Dummies” book, first published in 2015. His latest edition is enhanced with product updates, and a larger share of its pages is devoted to Deep Learning and TensorFlow/Keras usage.


On top of her women-in-tech related activities, Ruqiya Bin Safi is also running a “Welcome to Deep Learning Course and Orientation” monthly workshop throughout 2021. The course aims to help participants gain foundational knowledge of deep learning algorithms and get practical experience in building neural networks in TensorFlow.

TensorFlow Project showcase

Nepal-based GDE Kshitiz Rimal gave a talk “TensorFlow Project Showcase: Cash Recognition for Visually Impaired” on his project which uses TensorFlow, Google Cloud AutoML and edge computing technologies to create a solution for the visually impaired community in Nepal.


On the other side of the world, in Canada, GDE Tanmay Bakshi presented a talk, “Machine Learning-powered Pipelines to Augment Human Specialists”, during TensorFlow Everywhere NA. It covered the world of NLP through Deep Learning, how it’s historically been done, the Transformer revolution, and how to use TensorFlow & Keras to implement use cases ranging from small-scale name generation to large-scale Amazon review quality ranking.

Google Cloud Platform


We have been equally busy on the GCP side. In the US, GDE Srivatsan Srinivasan created a series of videos called “Artificial Intelligence on Google Cloud Platform”, with one of the episodes, “Google Cloud Products and Professional Machine Learning Engineer Certification Deep Dive”, getting over 3,000 views.


Korean GDE Chansung Park contributed his “Machine Learning Pipeline (CI/CD for ML Products in GCP)” analysis to TensorFlow User Group Korea, focused on machine learning pipelines on Google Cloud Platform.


Last but not least, Israel-based GDE Gad Benram wrote an article, “Seven Tips for Forecasting Cloud Costs”, where he explains how to build and deploy ML models for time-series forecasting with Google Cloud Run. It ties into his solution for building a cloud-spend control system that helps users more easily analyze their cloud costs.

If you want to know more about the Google Experts community and all their global open-source ML contributions, visit the GDE Directory and connect with GDEs on Twitter and LinkedIn. You can also meet them virtually on the ML GDE’s YouTube Channel!

Google Developer Group Spotlight: A conversation with Cloud Architect, Ilias Papachristos


Posted by Jennifer Kohl, Global Program Manager, Google Developer Communities

The Google Developer Groups Spotlight series interviews inspiring leaders of community meetup groups around the world. Our goal is to learn more about what developers are working on, how they’ve grown their skills with the Google Developer Group community, and what tips they might have for us all.

We recently spoke with Ilias Papachristos, Google Developer Group Cloud Thessaloniki Lead in Greece. Check out our conversation with Ilias on Cloud architecture, reading official documentation, and suggested resources to help developers grow professionally.

Tell us a little about yourself?

I’m a family man, ex-army helicopter pilot, Kendo sensei, beta tester at Coursera, Lead of the Google Developer Group Cloud Thessaloniki community, Google Cloud Professional Architect, and a Cloud Board Moderator on the Google Developers Community Leads Platform (CLP).

I love outdoor activities, reading books, listening to music, and cooking for my family and friends!

Can you explain your work in Cloud technologies?

Over my career, I have used Compute Engine for an e-shop, AutoML Tables for an HR company, and have architected the migration of a company in Mumbai. Now I’m consulting for a company on two of their projects: one that uses Cloud Run and another that uses Kubernetes.

Both of them have Cloud SQL and the Kubernetes project will use the AI Platform. We might even end up using Dataflow with BigQuery for the streaming and Scheduler or Manager, but I’m still working out the details.

I love the chance to share knowledge with the developer community. Many days, I open my PC, read the official Google Cloud blog, and share interesting articles on the CLP Cloud Board and GDG Cloud Thessaloniki’s social media accounts. Then, I check Google Cloud’s Medium publication for extra articles. Read, comment, share, repeat!

How did the Google Developer Group community help your Cloud career?

My overall knowledge of Google Cloud has to do with my involvement with Google Developer Groups. It’s not just one thing. It’s about everything! At the first European GDG Leads Summit, I met so many people who were sharing their knowledge and offering their help. For a newbie like me, it was, and still is, something that I keep in my heart as a treasure.

I’ve also received so many informative lessons on public speaking from Google Developer Group and Google Developer Student Club Leads. They always motivate me to continue talking about the things I love!

What has been the most inspiring part of being a part of your local Google Developer Group?

Collaboration with the rest of the DevFest Hellas Team! For this event, I was part of a small group of 12 organizers, none of whom had ever hosted a large meetup before. With the help of Google Developer Groups, we had so much fun while creating a successful DevFest learning program for 360 people.

What are some technical resources you have found the most helpful for your professional development?

Besides all of the amazing tricks and tips you can learn from the Google Cloud training team and courses on the official YouTube channel, I had the chance to hear a talk by Wietse Venema on Cloud Run. I also have learned so much about AI from Dale Markovitz’s videos on Applied AI. And of course, I can’t leave out Priyanka Vergadia’s posts, articles, and comic-videos!

Official documentation has also been a super important part of my career. Here are five links that I am using right now as an Architect:

  1. Google Cloud Samples
  2. Cloud Architecture Center
  3. Solve with Google Cloud
  4. Google Cloud Solutions
  5. 13 sample architectures to kickstart your Google Cloud journey

How did you become a Google Developer Group Lead?

I am a member of the Digital Analytics community in Thessaloniki, Greece. Their organizer asked me to write articles to start motivating young people. I translated one of the blogs into English and published it on Medium. The Lead of GDG Thessaloniki read it and asked me to become a facilitator for a Cloud Study Jams (CSJ) workshop. I accepted and then traveled to Athens to train three people so that they could also become CSJ facilitators. At the end of the CSJ, I was asked if I wanted to lead a Google Developer Group chapter. I agreed. Maria Encinar and Katharina Lindenthal interviewed me, and I got it!

What would be one piece of advice you have for someone looking to learn more about a specific technology?

Learning has to be an amusing and fun process. And that’s how it’s done with Google Developer Groups all over the world. Join mine, here. It’s the best one. (Wink, wink.)

Want to start growing your career and coding knowledge with developers like Ilias? Then join a Google Developer Group near you, here.

Modernizing your Google App Engine applications

Posted by Wesley Chun, Developer Advocate, Google Cloud


Next generation service

Since its initial launch in 2008 as the first product from Google Cloud, Google App Engine, our fully managed serverless app-hosting platform, has been used by many developers worldwide. Since then, the product team has continued to innovate on the platform: introducing new services, extending quotas, supporting new languages, and adding a Flexible environment to support more runtimes, including the ability to serve containerized applications.

With many original App Engine services maturing to become their own standalone Cloud products, along with users’ desire for a more open cloud, the next-generation App Engine launched in 2018 without those bundled proprietary services, but with desired language support such as Python 3 and PHP 7 as well as the introduction of Node.js 8. As a result, users have more options, and their apps are more portable.

With the sunset of Python 2, Java 8, PHP 5, and Go 1.11 by their respective communities, Google Cloud has assured users by expressing continued long-term support of these legacy runtimes, including maintaining the Python 2 runtime. So while there is no requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.

Google Cloud has created a set of migration guides for users modernizing from Python 2 to 3, Java 8 to 11, PHP 5 to 7, and Go 1.11 to 1.12+ as well as a summary of what is available in both first and second generation runtimes. However, moving from bundled to unbundled services may not be intuitive to developers, so today we’re introducing additional resources to help users in this endeavor: App Engine “migration modules” with hands-on “codelab” tutorials and code examples, starting with Python.

Migration modules

Each module represents a single modernization technique. Some are strongly recommended, others less so, and, at the other end of the spectrum, some are quite optional. We’ll guide you on which ones are more important. Similarly, there’s no set order in which to tackle the modules, since it depends on which bundled services your apps use. Yes, some modules must be completed before others, but again, you’ll be guided on “what’s next.”

More specifically, modules focus on the code changes that need to be implemented, not on changes in new programming language releases, as those are outside the domain of Google products. The purpose of these modules is to help reduce the friction developers may encounter when adapting their apps for the next-generation platform.

Central to the migration modules are the codelabs: free, online, self-paced, hands-on tutorials. The purpose of Google codelabs is to teach developers one new skill while giving them hands-on experience, and there are codelabs just for Google Cloud users. The migration codelabs are no exception, teaching developers one specific migration technique.

Developers following the tutorials will make the appropriate updates on a sample app, giving them the “muscle memory” needed to do the same (or similar) with their applications. Each codelab begins with an initial baseline app (“START”), leads users through the necessary steps, then concludes with an ending code repo (“FINISH”) they can compare against their completed effort. Here are some of the initial modules being announced today:

  • Web framework migration from webapp2 to Flask
  • Updating from App Engine ndb to Google Cloud NDB client libraries for Datastore access
  • Upgrading from the Google Cloud NDB to Cloud Datastore client libraries
  • Moving from App Engine taskqueue to Google Cloud Tasks
  • Containerizing App Engine applications to execute on Cloud Run

Examples

What should you expect from the migration codelabs? Let’s preview a pair, starting with the web framework. Below is the main driver for a simple webapp2-based “guestbook” app that registers website visits as Datastore entities:

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(LIMIT)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

A “visit” consists of a request’s IP address and user agent. After registering the visit, the app queries for the latest LIMIT visits to display to the end user via the app’s HTML template. The tutorial leads developers through a migration to Flask, a web framework with broader support in the Python community. The Flask equivalent uses decorated functions rather than webapp2’s object model:

@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(LIMIT)
    return render_template('index.html', visits=visits)

The framework codelab walks users through this and other required code changes in its sample app. Since Flask is more broadly used, this makes your apps more portable.

The second example pertains to Datastore access. Whether you’re using App Engine’s ndb or the Cloud NDB client libraries, the code to query the Datastore for the most recent limit visits may look like this:

def fetch_visits(limit):
    'get most recent visits'
    query = Visit.query()
    visits = query.order(-Visit.timestamp).fetch(limit)
    return (v.to_dict() for v in visits)

If you decide to switch to the Cloud Datastore client library, that code would be converted to:

def fetch_visits(limit):
    'get most recent visits'
    query = DS_CLIENT.query(kind='Visit')
    query.order = ['-timestamp']
    return query.fetch(limit=limit)
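
One note on the snippet above: DS_CLIENT is assumed to be a module-level Cloud Datastore client, created once and reused across requests, along these lines:

from google.cloud import datastore

# module-level client; assumes Application Default Credentials
# are available, as they are on App Engine
DS_CLIENT = datastore.Client()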

The query styles are similar but not identical. While the sample apps are just that, samples, this kind of hands-on experience is useful when planning your own application upgrades. The goal of the migration modules is to help you separate moving to the next-generation service from making programming language updates, so you avoid doing both sets of changes simultaneously.

As mentioned above, some migrations are more optional than others. For example, moving away from the App Engine bundled ndb library to Cloud NDB is strongly recommended, but because Cloud NDB is available for both Python 2 and 3, it’s not necessary for users to migrate further to Cloud Datastore or Cloud Firestore unless they have specific reasons to do so. Moving to unbundled services is the primary step toward giving users more flexibility and choice and, ultimately, making their apps more portable.

Next steps

For those who are interested in modernizing their apps, a complete table describing each module and links to corresponding codelabs and expected START and FINISH code samples can be found in the migration module repository. We are also working on video content based on these migration modules as well as producing similar content for Java, so stay tuned.

In addition to the migration modules, our team has also set up a separate repo to support community-sourced migration samples. We hope you find all these resources helpful in your quest to modernize your App Engine apps!

India’s Google Developer Groups meet up to ace their Google Cloud Certifications


Posted by Biswajeet Mallik, Program Manager, Google Developers India.

Image from Cloud Community Days India

Earlier this year, ten Google Developer Groups in India came together to host Google Cloud Community Days India, a two-day event helping developers study for their upcoming Cloud Certification exams. To address the rising demand for professional certifications, the virtual event hosted over 63,000 developers, covered four main exam areas, and welcomed nine speakers. This was the second edition of the event series, which started in India in 2019.

By providing expert learning materials and mentorship, the event uniquely prepared developers for the Associate Cloud Engineer, Professional Data Engineer, Professional Cloud Machine Learning Engineer, and Professional Cloud Architect exams. Learn more below.

Acing the four key certifications

The Cloud Community Days event focused on helping developers study for four milestone certifications, tailored to engineers at four different stages of their career. The goal: help Google Developer Group members obtain the right credentials to improve their job prospects.

The event broke participants into breakout sessions based on which exam they were preparing to take. Since the certifications targeted professionals of all skill levels, study groups ranged from early career associates to late career executives. The learning groups were organized around the following certifications:

  1. Associate Cloud Engineer:

    This learning session was created to help early-career developers complete the first stepping-stone exam. In particular, learning materials and speakers were curated to guide participants who had little or no prior experience working on the Google Cloud Platform.

    Workshops were mainly dedicated to assisting programmers who were familiar with building different applications but wished to show employers that they could deploy them on Google Cloud Platform.

    Watch more from: Day 1, here. And day 2, here.

  2. Professional Data Engineer:

    The next group brought together data practitioners with special interests in data visualization and decision making. Workshops and learning activities helped these developers hone their large-scale data and data-driven decision-making abilities.

    Improving these skills is essential for passing the Professional Data Engineer certification and growing a programmer’s early career.

    Watch more from: Day 1, here. And day 2, here.

  3. Professional Cloud Machine Learning Engineer:

    For these sessions, the Google Developer Group Cloud community paired experienced programmers with a significant interest in ML to form their study groups. The main driver in these learning activities was to help seasoned developers gain a deeper understanding of how to utilize Google Cloud ML services.

    With significant emphasis being placed on machine learning in the ecosystem right now, Google Developer Group community leaders felt this certification could help developers make the leap into new leadership roles.

    Watch more from: Day 1, here. And day 2, here.

  4. Professional Cloud Architect:

    Lastly, this event paired experienced Cloud executives and professionals working in leading capacities at their organizations. For these sessions, speakers and activities had a specific scope: helping high-level professionals stay at the forefront of Google Cloud Platform’s innovative capabilities.

    Specifically, the Professional Cloud Architect Certification was created to help senior software engineers better design, scale and develop highly secure and robust applications.

    Watch more from: Day 1, here. And day 2, here.

Reactions from the community

Overall, the community put together these resources to help developers feel more confident in their abilities, obtain tangible credentials, and, in turn, increase access to better job opportunities. As two participants recalled:

“The session on Qwiklabs was so helpful, and taught me how to anticipate problems and then solve them. Cloud Community Days inspired me to take the next step with DevOps and Google Cloud.”

“This was the first time I attended the Google Developer Group event! It is an awesome package for learning in one place. All the fun activities were engaging and the panelist discussion was also very insightful. I feel proud to be a part of this grand GDG event.”

Start learning with Google Developer Groups

With Google Developer Groups, find a space to learn alongside a group of curious developers, all coming together to advance their careers from within a caring community of peers.

Want to know more about what Cloud Community days were like? Then watch their live recording below.


Ready to find a community event near you? Then get started at gdg.community.dev

Announcing gRPC Kotlin 1.0 for Android and Cloud


Posted by Louis Wasserman, Software Engineer and James Ward, Developer Advocate

Kotlin is now the fourth “most loved” programming language with millions of developers using it for Android, server-side / cloud backends, and various other target runtimes. At Google, we’ve been building more of our apps and backends with Kotlin to take advantage of its expressiveness, safety, and excellent support for writing asynchronous code with coroutines.

Since everything in Google runs on top of gRPC, we needed an idiomatic way to do gRPC with Kotlin. Back in April 2020 we announced the open sourcing of gRPC Kotlin, something we’d originally built for ourselves. Since then we’ve seen over 30,000 downloads and usage in Android and Cloud. The community and our engineers have been working hard polishing docs, squashing bugs, and making improvements to the project; culminating in the shiny new 1.0 release! Dive right in with the gRPC Kotlin Quickstart!

For those new to gRPC & Kotlin, let’s do a quick run-through of some of the awesomeness. gRPC builds on Protocol Buffers, aka “protos” (language-agnostic & high-performance data interchange), and adds the network protocol for efficiently communicating with protos. From a proto definition, the servers, clients, and data transfer objects can all be generated. Here is a simple gRPC proto:

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

In a Kotlin project you can then define the implementation of the Greeter’s SayHello service with something like:

object : GreeterGrpcKt.GreeterCoroutineImplBase() {
    override suspend fun sayHello(request: HelloRequest) =
        HelloReply
            .newBuilder()
            .setMessage("hello, ${request.name}")
            .build()
}

You’ll notice that the function has `suspend` on it because it uses Kotlin’s coroutines, a built-in way to handle async / reactive IO. Check out the server example project.

With gRPC the client “stubs” are generated, making it easy to connect to gRPC services. For the proto above, the client stub can be used in Kotlin with:

val stub = GreeterCoroutineStub(channel)
val request = HelloRequest.newBuilder().setName("world").build()
val response = stub.sayHello(request)
println("Received: ${response.message}")

In this example the `sayHello` method is also a `suspend` function utilizing Kotlin coroutines to make the reactive IO easier. Check out the client example project.

Kotlin also has an API for doing reactive IO on streams (as opposed to single requests), called Flow. gRPC Kotlin generates client and server stubs using the Flow API for stream inputs and outputs. The proto can define a service with one-directional or bidirectional streaming, like:

service Greeter {
  rpc SayHello (stream HelloRequest) returns (stream HelloReply) {}
}

In this example, the server’s `sayHello` can be implemented with Flows:

object : GreeterGrpcKt.GreeterCoroutineImplBase() {
    override fun sayHello(requests: Flow<HelloRequest>): Flow<HelloReply> {
        return requests.map { request ->
            println(request)
            HelloReply.newBuilder().setMessage("hello, ${request.name}").build()
        }
    }
}

This example just transforms each `HelloRequest` item on the flow to an item in the output / `HelloReply` Flow.

The bidirectional-streaming client is similar to the unary client, but instead it passes a Flow to the `sayHello` stub method and then operates on the returned Flow:

val stub = GreeterCoroutineStub(channel)
val helloFlow = flow {
    while (true) {
        delay(1000)
        emit(HelloRequest.newBuilder().setName("world").build())
    }
}

stub.sayHello(helloFlow).collect { helloResponse ->
    println(helloResponse.message)
}

In this example the client sends a `HelloRequest` to the server via Flow, once per second. When the client gets items on the output Flow, it just prints them. Check out the bidi-streaming example project.

As you’ve seen, creating data transfer objects and services around them is made elegant and easy with gRPC Kotlin. But there are a few other exciting things we can do with this…

Android Clients

Protobuf compilers can have a “lite” mode which generates smaller, higher performance classes which are more suitable for Android. Since gRPC Kotlin uses gRPC Java it inherits the benefits of gRPC Java’s lite mode. The generated code works great on Android and there is a `grpc-kotlin-stub-lite` artifact which depends on the associated `grpc-protobuf-lite`. Using the generated Kotlin stub client is just like on the JVM. Check out the stub-android example and android example.

GraalVM Native Image Clients

The gRPC lite mode is also a great fit for GraalVM Native Image which turns JVM-based applications into ahead-of-time compiled native images, i.e. they run without a JVM. These applications can be smaller, use less memory, and start much faster so they are a good fit for auto-scaling and Command Line Interface environments. Check out the native-client example project which produces a nice & small 14MB executable client app (no JVM needed) and starts, connects to the server, makes a request, handles the response, and exits in under 1/100th of a second using only 18MB of memory.

Google Cloud Ready

Backend services created with gRPC Kotlin can easily be packaged for deployment in Kubernetes, Cloud Run, or really anywhere you can run docker containers or JVM apps. Cloud Run is a cloud service that runs docker containers and scales automatically based on demand so you only pay when your service is handling requests. If you’d like to give a gRPC Kotlin service a try on Cloud Run:

  1. Deploy the app with a few clicks
  2. In Cloud Shell, run the client to connect to your app on the cloud:
    export PROJECT_ID=PUT_YOUR_PROJECT_ID_HERE
    docker run -it gcr.io/$PROJECT_ID/grpc-hello-world-mvn
    "java -cp target/classes:target/dependency/* io.grpc.examples.helloworld.HelloWorldClientKt YOUR_CLOUD_RUN_DOMAIN_NAME"

Here is a video of what that looks like:

Check out more Cloud Run gRPC Kotlin examples

Thank You!

We are super excited to have reached 1.0 for gRPC Kotlin and are incredibly grateful to everyone who filed bugs, sent pull requests, and gave the pre-releases a try! There is still more to do, so if you want to help or follow along, check out the project on GitHub.

Also huge shoutouts to Brent Shaffer, Patrice Chalin, David Winer, Ray Tsang, Tyson Henning, and Kevin Bierhoff for all their contributions to this release!

Google named a Leader in the Gartner 2020 Magic Quadrant for Cloud AI Developer Services

The enterprise applications for artificial intelligence and machine learning seem to grow by the day. To take advantage of everything AI/ML technologies have to offer, it’s important to have a platform that supports your needs fully—whether you’re a developer, a data scientist, an analyst, or just interested in AI. But with so many features and services to consider, it can be difficult to sort through it all. This is where analyst reports can provide valuable research to help you get the answers you need.

Today, Gartner named Google a Leader in the Gartner 2020 Magic Quadrant for Cloud AI Developer Services report. This designation is based on Gartner’s evaluation of Google’s language, vision, conversation, and structured data products, including AutoML, all of which we deliver through Google Cloud. Let’s take a closer look at some of Gartner’s findings.

Vision AI for every enterprise use case

You don’t need to be an ML expert to reap the benefits that our AI portfolio offers. Our vision and video APIs, along with AutoML Vision and Video products, let developers of any experience level build perception AI into their applications. These products help you understand and derive insights from your images and videos with industry-leading prediction accuracy in the cloud or at the edge.

Our Computer Vision products provide many features to help you understand your visual content and create powerful custom machine learning models: 

  • Through REST and RPC APIs, the Vision API provides access to pretrained models that are ready to use to quickly classify images (see the sketch after this list).

  • AutoML Vision automates the training of your own custom machine learning models with an easy-to-use graphical interface. It lets you optimize your models for accuracy, latency, and size, and export them to your application in the cloud, or to an array of devices at the edge.

  • The Video Intelligence API has pre-trained machine learning models that automatically recognize a vast number of objects, places, and actions in stored and streaming video. 

  • AutoML Video Intelligence lets developers quickly and easily train custom models to classify and track objects within videos, regardless of their level of ML experience. 

  • The What-If Tool, an open-source visualization tool for inspecting any machine learning model, enhances your model’s interpretability, offering insights into how it’s making decisions for AutoML Vision and our data-labeling services.
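
To make the first bullet concrete, here’s a minimal sketch of calling the Vision API’s pretrained label-detection model from Python; the bucket path is a placeholder, and the code assumes a recent google-cloud-vision release:

from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(
    source=vision.ImageSource(image_uri='gs://my-bucket/photo.jpg'))

# ask the pretrained model to label the image's contents
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)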

While powerful pre-trained APIs and custom model creation capabilities are part of meeting all of an enterprise’s ML needs, it’s equally important to be able to deploy these models wherever the business needs them. To that end, our AutoML Vision models can be deployed via container wherever it works best for you: in a virtual private cloud, on-premises, and in our public cloud. 

Easier and better custom ML models for your structured data 

AutoML Tables enables your entire team of data scientists, analysts, and developers to automatically build and deploy state-of-the-art machine learning models on structured data at a massively increased speed and scale. To create ML models, developers usually need training data that’s as complete and clean as possible. AutoML Tables provides information about and automatically handles missing data, high cardinality, and distribution for each feature in a dataset. Then, in training, it automates a range of feature engineering tasks, from normalization of numeric features and creation of one-hot encoding, to embeddings for categorical features.

In addition, AutoML Tables provides codeless GUI and Python SDK options, as well as automated data preprocessing, feature engineering, hyperparameter and neural/tree architecture search, evaluation, model explainability, and deployment functionality. All of these features significantly reduce the time it takes to bring a custom ML model to production, from months to days.
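
As a rough illustration of the Python SDK route, here’s a sketch based on the beta-era AutoML Tables client; the project, region, display names, target column, data path, and training budget are all placeholders:

from google.cloud import automl_v1beta1 as automl

client = automl.TablesClient(project='my-project', region='us-central1')

# create a dataset and import structured data from Cloud Storage
dataset = client.create_dataset(dataset_display_name='visits')
client.import_data(
    dataset=dataset,
    gcs_input_uris=['gs://my-bucket/visits.csv']).result()

# pick the column the model should learn to predict
client.set_target_column(dataset=dataset, column_spec_display_name='label')

# training automates feature engineering and architecture search;
# create_model returns a long-running operation, so wait on it
model = client.create_model(
    model_display_name='visits_model',
    dataset=dataset,
    train_budget_milli_node_hours=1000).result()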

Ready for global scale 

As business becomes more and more global, being able to serve customers wherever they are or whatever language they speak is a key differentiator. To that end, many of our products support more languages than those of other providers.

With such strong language support, Google Cloud makes it easier to grow your business globally.

As the uses for AI continue to expand, more organizations are turning to Google to help build out their AI capabilities. At Google Cloud, we’re passionate about helping developers in organizations of all sizes to build AI/ML into their workflows quickly and easily, wherever they may be on their AI journey. To learn more about how to make AI work for you, download a complimentary copy of the Gartner 2020 Magic Quadrant for Cloud AI Developer Services report.


Disclaimer: Gartner, Magic Quadrant for Cloud AI Developer Services, Van Baker, Bern Elliot, Svetlana Sicular, Anthony Mullen, Erick Brethenoux, 24 February 2020. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Dreaming big, traveling far, and expanding access to technology

Editor’s note: In honor of Black History Month, we’re talking to Cloud Googlers about what identity means to them and how their personal histories shape their work to influence the future of cloud technology. 

Albert Sanders, senior counsel for government affairs and public policy at Google Cloud, has worked in the White House, negotiated bipartisan deals in Congress, and recently addressed the United Nations General Assembly. His personal and professional travels have taken him to five continents—and he’s visited 11 (and counting!) countries in Africa. 

We sat down with Albert to hear more about his journey, some of his favorite moments, and advice on navigating career.


Why did you choose a career in public policy?

I’ve seen the real-life benefit when policymakers and government agencies get it right—and the troubling consequences when they do not. For example, I went to a high school where most students qualified for free, publicly funded meals. I didn’t fully appreciate it at the time, but that meant many of my classmates were living at or below the poverty line, so school was often the only place they’d receive balanced, hot meals on a consistent basis. 

We had some incredibly dedicated teachers and administrators, but my high school also operated at about double its maximum capacity. There were sometimes not enough seats or textbooks, so some of us had to stand in class and often we were prohibited from taking textbooks home. 

I learned early on that the decisions made in city halls, capitol buildings, and government agencies have a direct impact—sometimes positive, sometimes negative—on real people. Later in life I’d learn that this was not just true in education but all across society. So, I knew from an early age that I didn’t want to be a bystander. I wanted to have a direct impact on these decisions. 

Tell us about your path to working in government.

My entree to public service was law school. I wanted to learn how the system worked, gain some expertise, and figure out how I could add value. I started out at a corporate law firm, working long hours learning the law, advising clients, honing my written and oral communication skills, and experiencing first-hand how various laws and regulations were directly impacting my clients. It was incredibly challenging and rewarding work. But, one day the phone rang with the proverbial “offer I could not refuse.”

After a series of interviews, Senator Dick Durbin of Illinois asked me to join his Senate staff. At the time, he was the second-highest ranking U.S. Senator, who in 2004 had introduced a Senate candidate by the name of Barack Obama to the Democratic Convention. I was a twenty-something lawyer whose political “experience” basically consisted of watching each one of those conventions from the age of 8—and telling my parents how to vote thereafter. 

Taking the job was a no-brainer. Adjusting to the 60% pay cut that came with it was much harder. Looking back, I’m so glad that I pursued my passion and chose to follow the path that gave me a chance to have the most impact—even if that meant waiting until later to maximize my earning potential. Money is important and individual circumstances differ, but no amount of money could purchase the experiences, opportunities, or relationships that blossomed during my time on Capitol Hill. 

What did you learn from your time on Capitol Hill?

Television pundits, reporters, social media influencers, folks at the barber shop and others all across America were debating the things I was working on with Sen. Durbin each day. We were working incredibly hard to improve the lives of everyday Americans. And I loved every minute of it! Some days I was working on issues about which I had deep knowledge. Other days, I worked on issues that forced me out of my comfort zone, requiring me to lean on outside experts for insight. 

Both were equally valuable to my growth because they helped me build—and trust—my own instincts. I learned how to assess the character, knowledge, and motives of the external stakeholders trying to sway us one way or another on an issue. Having and exercising good judgment, especially where you have limited information or time, is a learned skill.

I also saw the power of personal stories to compel people to action. When writing policy, we would look to the facts and the figures. But when it was time to advocate and persuade, Sen. Durbin encouraged us to find and share the stories of people who would be helped or harmed by a given approach. 

We did this in 2011, when I helped him build and lead the bipartisan coalition to pass the FDA Food Safety Modernization Act—the most comprehensive reform of our nation’s food safety laws in more than 50 years. It would not have happened without the courageous kids, adults, and seniors who came to Congress to talk about the loved ones they had lost or the physical and emotional consequences they endured as a result of foodborne illnesses. Those compelling voices, combined with a well-organized coalition of bipartisan advocates and a handful of policymakers willing to tackle the problem, got that bill through both houses of Congress and to President Obama’s desk for signature.

What was it like working in the White House? 

I could talk about that experience for hours! I’ll never forget the day I received the phone call offering me the job of Associate Counsel to the President in the Office of White House Counsel. I am smiling as I reflect on it now. I was pacing in my bedroom, trying to process some bad news, when the phone rang. In an instant, that call changed my mood, and the course of my career! The opportunity to work for President Obama in the White House was literally a dream come true. 

My portfolio included oversight and investigations, cybersecurity and privacy, and high-stakes litigation. The substantive work was tough and invigorating, and offered an opportunity to apply lessons from each of my prior roles. The people on our team were some of the most brilliant and dedicated public servants I’d ever met. Their backgrounds and personal stories were so impressive, but I recall being even more impressed by their humility and work ethic. 

Working at the White House involved late nights, long weekends, and its fair share of stress. But I was reminded of the privilege I had and the gravity of my responsibility every time I parked my car on The Ellipse, chatted with Secret Service agents as I swiped my badge or gave a West Wing tour. I’ll never forget the smiles on the faces of the D.C. high school students we hosted in the basement bowling alley one weekend. Some of them came from high schools similar to mine, and I could see in their eyes just how special this moment was for them.

We heard you have a goal to visit every African country. Can you tell us more?

I do! That’s another topic I could speak on at length. I’ve been to 11 countries in Africa so far, and my goal is to spend quality time in all 54. 

My first trip was to South Africa several years ago. During that trip, we would barely scratch the surface of the culture, history, energy, challenges and opportunities of this beautiful, complicated country. But the depth of connection we felt, the openness of the people, and the overall richness of that initial experience made a lasting impression.

I’ve tried many times—often unsuccessfully—to explain the special connection that I and many other African Americans feel to the continent of Africa. Many Americans may take for granted that they can trace their family origins to places outside the United States. One of the many enduring legacies of slavery is that most African Americans don’t have that direct connection to their family history. We were the only group of people to arrive on American soil en masse against their will, and it’s often difficult to trace family history even four or five generations. This creates a void that is often uncomfortable to discuss, because it’s a stark reminder of the present-day impact of our nation’s brutal history.

Traveling through Africa is intensely personal. It’s a way to connect with a rich and textured personal history about which so many of us know so little. My visits are, in some ways, a small, personal tribute to that history and those who lived it. I may not know the names of my ancestors or the place of their birth, but I’m reminded regularly that they passed on to us a resilience, faith, and determination that could not be shackled. When they were praying for freedom in the bowels of a slave ship, nursing wounds from a vicious beating, or hoping for a better tomorrow—those prayers and hopes were for my generation and all the others that have followed. I stand on their shoulders and I can only hope that I make them proud. 

Traveling through Africa is also just incredibly fun. Every country I visit is packed with new discoveries, incredible adventures, amazing food, unforgettable people, rich culture and so much more! I’ve walked with gorillas in Rwanda’s Volcanoes National Park, scaled Sahara Desert sand dunes in Merzouga, Morocco, and I’ve run my fingertips over the hieroglyphics on Nubian Pyramids in Meroe, Sudan. I celebrated Eid al Fitr, the feast that marks the end of Ramadan, in Dakar, Senegal with a family who met me one day and welcomed me into their home the next. And I’ll never forget standing in the doorway and looking out into the expanse of the Atlantic Ocean from the Point of No Return at Cape Coast Castle in Cape Coast, Ghana—the very same doorway through which many enslaved Africans began their horrific journey to the United States 400 years ago. 

How have your experiences shaped your work at Google? 

As the lead for global infrastructure public policy, I partner with subject matter experts, attorneys, engineers, and other Googlers from all over the world. Ultimately, we strive to help more people benefit from cloud computing. 

There used to be a huge technology barrier to building a business. With cloud computing, all you need is an internet connection and you can have the same computing power, data analytics, artificial intelligence, and secure infrastructure that powers Google products like Gmail, YouTube, and Google Maps. Google Cloud tools don’t just improve business outcomes; they expand technology access—and thereby opportunity. I’m pleased to help bring our cutting-edge technology to more organizations globally and support policymakers, NGOs, and other organizations that leverage our cloud tools to drive innovation, improve local economies, and enhance digital literacy.

For someone so passionate about public service, moving into the private sector was definitely a change. But I continue to be guided by my personal mission statement: work for individuals, or in Google’s case a company, with a mission I support and values I share.

Do you have any career advice to share?

Along with following a personal mission statement, I’ve gotten other advice from mentors and colleagues. First, it’s important to embrace the uncomfortable and unprecedented. Three years ago, I was the first hire on the public policy team for Google Cloud. Since then, our team has grown exponentially and expanded around the globe. I still remember some of the early challenges, but it’s been an incredible journey and I’m happy I stepped up to the plate.

Second, don’t be afraid to advocate for yourself. Suffering in silence or being reluctantly agreeable doesn’t win allies. It only builds internal resentment and deprives your existing allies of the opportunity to help you resolve issues. 

Third, representation matters. One of the reasons I do my best every day is because I’m aware that I must excel for myself—and for other people of color who are still terribly underrepresented in our industry. I appreciate Google’s various initiatives to address this issue. I’m committed to doing my part to support those efforts, ensure accountability, and demonstrate through my own work product and work ethic what’s possible when diverse perspectives and people have a seat at the table.

Hitting the Silicon Slopes with a new Salt Lake City region, now open

Today, we’re launching our newest Google Cloud Platform region in Salt Lake City, bringing a third region to the western United States, a sixth to the country overall, and our global total to 22.


A region for the Silicon Slopes

Utah’s Silicon Slopes area is home to many digitally savvy companies. Now open to Google Cloud customers, the Salt Lake City region (us-west3) provides you with the speed and availability you need to innovate faster, build high-performing applications, and best serve local customers. Additionally, the region gives you added flexibility to distribute your workloads across the western U.S., including our existing cloud regions in Los Angeles and Oregon.

The Salt Lake City region offers immediate access to three zones for high-availability workloads, along with our standard set of products, including Compute Engine, Kubernetes Engine, Bigtable, Spanner, and BigQuery. Our private backbone connects Salt Lake City to our global network quickly and securely. In addition, you can integrate your on-premises workloads with our new region using Cloud Interconnect. This means that Salt Lake City-based customers can expand globally from their front door, and those based outside the region can easily reach their users in the Mountain West.
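
If you’d like to try the new region from code rather than the console, here is a minimal sketch that creates a small VM in us-west3 using the google-cloud-compute Python client; the project ID, instance name, image family, and machine type are illustrative assumptions, not prescribed values.

```python
# A minimal sketch, assuming the google-cloud-compute client library and the
# default VPC network; names and sizes below are hypothetical.
from google.cloud import compute_v1


def create_slc_instance(project_id: str, zone: str = "us-west3-a") -> None:
    """Create a small demo VM in the new Salt Lake City region (us-west3)."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
        ),
    )
    instance = compute_v1.Instance(
        name="slc-demo-vm",  # hypothetical instance name
        machine_type=f"zones/{zone}/machineTypes/e2-medium",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default"),
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the VM is provisioned
```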

Visit our cloud locations page for a complete list of services available in the Salt Lake City region.

What customers are saying

Industries including healthcare, financial services, and IT are investing in Salt Lake City. Organizations across these verticals have turned to Google Cloud to innovate faster and help solve their most complex challenges.

PayPal, a leading technology platform and digital payments company, is migrating key portions of its payments infrastructure to the new region. For more on PayPal’s journey with Google Cloud, read today’s press release.

Overstock, a 20-year-old tech company that provides best-in-class retail customer experiences, was in the technology space long before enterprise cloud environments became a reality.

“Our home-grown infrastructure was built in a pre-cloud world and needed upgrading. In our search for a cloud partner, we had a specific set of criteria in mind given our industry and global customer base. We were able to maintain site-wide performance while updating our legacy systems to a custom public/private cloud hybrid with Google’s systems. With this new region, we expect to achieve higher availability, lower latency, greater business continuity, and improved quality of our service going forward,” said Joel Weight, CTO, Overstock.  

Recursion, a digital biology company based in Salt Lake City that focuses on industrializing drug discovery, selected Google Cloud as its primary public cloud provider as it builds a drug discovery platform that has the potential to cut the time to discover and develop a new medicine by a factor of 10. 

“Google Cloud’s continued investment in the area is a clear indicator that Salt Lake City is a force to be reckoned with as an influential tech hub. With the new cloud region, companies like ours have access to faster, scalable computing infrastructure to better serve their customers. We look forward to the opportunities that are ahead in collaboration with Google,” said Ben Mabey, Chief Technical Officer, Recursion.

StorageCraft, a data protection and recovery provider headquartered in Draper, Utah, will deploy Google Cloud to support business growth and future-proof its portfolio of data protection and recovery cloud services.

“StorageCraft Cloud Solutions are a central part of our product offering and growth strategy. As our business expands, we will continue to deploy technology that optimizes the performance of our solutions to the benefit of our partners and our customers. Collaborating with Google Cloud close to our headquarters will help ensure that we can easily scale the capacity of our offerings with high-performing cloud services. This is a critical requirement of partners and customers who rely on StorageCraft solutions to always keep their data safe, accessible and optimized,” said Jawaad Tariq, VP of Engineering, StorageCraft. 

What’s next

We are excited to welcome you to our new cloud region in Salt Lake City, and we can’t wait to see what you build with our platform. Stay tuned for more region announcements and launches this year, starting with our next U.S. region in Las Vegas. For more information, contact sales to get started with Google Cloud today.

Introducing BigQuery Flex Slots for unparalleled flexibility and control

Organizations of all sizes look to BigQuery to meet their growing analytics needs. We hear that customers value BigQuery’s radically innovative architecture, serverless delivery model, and integrated advanced capabilities in machine learning, real-time analytics, and business intelligence. To help you balance explosive demand for analytics with the need for predictable spend, central control, and powerful workload management, we recently launched BigQuery Reservations.

Today we are introducing Flex Slots, a new way to purchase BigQuery slots for short durations, as little as 60 seconds at a time. A slot is the unit of BigQuery analytics capacity. Flex Slots let you quickly respond to rapid demand for analytics and prepare for business events such as retail holidays and app launches. Flex Slots are rolling out to all BigQuery Reservations customers in the coming days!


Flex Slots give BigQuery Reservations users immense flexibility without sacrificing cost predictability or control.

  • Flex Slots are priced at $30 per slot per month, and are available in increments of 500 slots.
  • It only takes seconds to deploy Flex Slots in BigQuery Reservations. 
  • You can cancel after just 60 seconds, and you are billed only for the seconds your Flex Slots are deployed (see the sketch below).
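
To make that lifecycle concrete, here is a minimal sketch of purchasing and then canceling a Flex Slots commitment with the google-cloud-bigquery-reservation Python client; the admin project ID and the US multi-region location are hypothetical placeholders, not values from this post.

```python
# A minimal sketch, assuming the google-cloud-bigquery-reservation client
# library; "my-admin-project" and the "US" location are hypothetical.
from google.cloud import bigquery_reservation_v1 as reservation

client = reservation.ReservationServiceClient()
parent = client.common_location_path("my-admin-project", "US")

# Purchase a Flex commitment: 500 slots, billable while the commitment exists.
flex = client.create_capacity_commitment(
    parent=parent,
    capacity_commitment=reservation.CapacityCommitment(
        plan=reservation.CapacityCommitment.CommitmentPlan.FLEX,
        slot_count=500,
    ),
)
print("Purchased:", flex.name)

# ...run the burst workload...

# Cancel any time after the first 60 seconds; billing stops at deletion.
client.delete_capacity_commitment(name=flex.name)
```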

Benefits of Flex Slots

You can seamlessly combine Flex Slots with existing annual and monthly commitments to supplement steady-state workloads with bursty analytics capacity. You may find Flex Slots especially helpful for short-term uses, including:

  • Planning for major calendar events, such as the tax season, Black Friday, popular media events, and video game launches. 
  • Meeting cyclical periods of high demand for analytics, like Monday mornings.
  • Completing your data warehouse evaluations and dialing in the optimal number of slots to use.

Major calendar events. For many businesses, specific days or weeks of the year are crucial. Retailers care about Black Friday and Cyber Monday, gaming studios focus on the first few days of launching new titles, and financial services companies worry about quarterly reporting and tax season. Flex Slots enable such organizations to scale up their analytics capacity for the few days necessary to sustain the business event, and scale down thereafter, only paying for what they consumed.

Payment technology provider Global Payments plans to add even more flexibility to their usage with this feature. “BigQuery has been a steady engine driving our Merchant Portal Platform and analytics use cases. As a complex multinational organization, we were anxious to leverage BigQuery Reservations to manage BigQuery cost and resources. We had been able to manage our resources effectively in most areas but were missing a few,” says Mark Kubik, VP BI, data and analytics, application delivery at Global Payments. “With Flex Slots, we can now better plan for automated test suites, load testing, and seasonal events and respond to rapid growth in our business. We are eager to implement this new feature in our workloads to drive efficiency, customer experience, and improved testing.”

Cyclical demand. If the majority of your users log into company systems at nine every Monday morning to check their business dashboards, you may spin up Flex Slots to rapidly respond to increased demand on your data warehouse. This is something that the team at Forbes has found helpful. 

“Moving to BigQuery Reservations enabled us to self-manage our BigQuery costs,” says David Johnson, vice president, business intelligence, Forbes. “Flex Slots will give us an additional layer of flexibility—we can now bring up slots whenever we have a large processing job to complete, and only pay for the few minutes they were needed.”

Evaluations. Whether you’re deciding on BigQuery as your cloud data warehouse or trying to understand the right number of BigQuery slots to purchase, Flex Slots provide the flexibility to quickly experiment with your environment.


The BigQuery advantage

Flex Slots are especially powerful considering BigQuery’s unique architecture and true separation of storage and compute. Because BigQuery is serverless, provisioning Flex Slots doesn’t require instantiating virtual machines. It’s a simple back-end configuration change, so acquiring Flex Slots happens very quickly. And because BigQuery doesn’t rely on local disk for performance, there is no warm-up period with poor and unpredictable performance. Flex Slots perform optimally from the moment they’re provisioned. 

Flex Slots are an essential part of our BigQuery Reservations platform. BigQuery Reservations give intelligence-hungry enterprises the control necessary to enable their organizations with a powerful tool like BigQuery while minimizing fiscal and security risks:

  • With Reservations, administrators can centrally decide who in their organization can make purchasing decisions, neutralizing the fear of shadow IT.  

  • Users can manage and predict their organizations’ BigQuery spend and conformance to fixed budgets.

  • Administrators can optionally manage how their departments, teams, and workloads get access to BigQuery in order to meet their specific analytics needs (see the sketch after this list).

  • Flex Slots offer BigQuery users an unparalleled level of flexibility—purchase slots for short bursts to complement your steady-state workloads. 
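
As one illustration of that workload management, an administrator might carve purchased slots into a reservation and route a single team’s query jobs to it. The sketch below reuses the same hypothetical client setup as the earlier example; the reservation ID and assignee project are made-up values.

```python
# A minimal sketch, assuming the google-cloud-bigquery-reservation client
# library; the reservation ID and project names are hypothetical.
from google.cloud import bigquery_reservation_v1 as reservation

client = reservation.ReservationServiceClient()
parent = client.common_location_path("my-admin-project", "US")

# Carve 300 slots into a reservation for the data science team.
ds_reservation = client.create_reservation(
    parent=parent,
    reservation_id="data-science",  # hypothetical reservation ID
    reservation=reservation.Reservation(
        slot_capacity=300,
        ignore_idle_slots=False,  # let the team borrow idle slots elsewhere
    ),
)

# Route all query jobs from one project to that reservation.
client.create_assignment(
    parent=ds_reservation.name,
    assignment=reservation.Assignment(
        assignee="projects/ds-team-project",  # hypothetical project
        job_type=reservation.Assignment.JobType.QUERY,
    ),
)
```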

Getting started with Flex Slots

Flex Slots are rolling out as we speak, and will be available in the coming days in the BigQuery Reservations UI.

You can purchase Flex Slots alongside monthly and annual commitment types, with the added benefit of being able to cancel them at any time after the first 60 seconds. To get started right away, try the BigQuery sandbox. If you are thinking about migrating to BigQuery from other data warehouses, check out our data warehouse migration offer.

Learn more about: