Migrating from App Engine Memcache to Cloud Memorystore (Module 13)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The previous Module 12 episode of the Serverless Migration Station video series demonstrated how to add App Engine Memcache usage to an existing app that has transitioned from the webapp2 framework to Flask. Today’s Module 13 episode continues its modernization by demonstrating how to migrate that app from Memcache to Cloud Memorystore. Moving from legacy APIs to standalone Cloud services makes apps more portable and provides an easier transition from Python 2 to 3. It also makes it possible to shift to other Cloud compute platforms should that be desired or advantageous. Developers benefit from upgrading to modern language releases and gain added flexibility in application-hosting options.

While App Engine Memcache provides a basic, low-overhead, serverless caching service, Cloud Memorystore “takes it to the next level” as a standalone product. Rather than a proprietary caching engine, Cloud Memorystore gives users the option to select from a pair of open source engines, Memcached or Redis, each of which provides additional features unavailable from App Engine Memcache. Cloud Memorystore is typically more cost efficient at scale, offers high availability, provides automatic backups, and more. On top of this, one Memorystore instance can be used across many applications, and the service incorporates improvements to memory handling, configuration tuning, and so on, gained from experience managing a huge fleet of Redis and Memcached instances.

While Memcached is more similar to Memcache in usage and features, Redis has a much richer set of data structures that enable powerful application functionality when utilized. Redis has also been recognized as the most loved database by developers in Stack Overflow’s annual developer survey, and it’s a great skill to pick up. For these reasons, we chose Redis as the caching engine for our sample app. However, if your app’s usage of App Engine Memcache is deeper or more complex, a migration to Cloud Memorystore for Memcached may be a better option as a closer analog to Memcache.

Migrating to Cloud Memorystore for Redis featured video


Performing the migration

The sample application registers individual web page “visits,” storing visitor information such as IP address and user agent. In the original app, the most recent visits are cached into Memcache for an hour and used for display if the same user continuously refreshes their browser during this period; caching is one way to counter this abuse. A new visitor or cache expiration results in new visits being registered as well as the cache being updated with the most recent visits. Such functionality must be preserved when migrating to Cloud Memorystore for Redis.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how the most recent visits are cached into Memcache. After completing the migration, the underlying caching infrastructure has been swapped out in favor of Memorystore (via language-specific Redis client libraries). For this migration, we chose Redis version 5.0, and we recommend the latest versions, 5.0 and 6.x at the time of this writing, as the newest releases feature additional performance benefits, fixes to improve availability, and so on. In the code snippets below, notice how the calls between both caching systems are nearly identical. The bolded lines represent the migration-affected code managing the cached data.

Switching from App Engine Memcache to Cloud Memorystore for Redis
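
To make the comparison concrete, here is a minimal, hedged sketch of the swap described above; it is not the sample app’s actual code. The fetch_visits() helper, the 'visits' key, and the REDIS_HOST/REDIS_PORT values are stand-ins you would replace with your own.

# BEFORE: App Engine Memcache (bundled service)
from google.appengine.api import memcache

def get_visits():
    visits = memcache.get('visits')           # try the cache first
    if visits is None:
        visits = fetch_visits()               # hypothetical helper: query the database
        memcache.set('visits', visits, 3600)  # cache for an hour
    return visits

# AFTER: Cloud Memorystore for Redis, via the standard Redis client library
import pickle
import redis

REDIS = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)  # your Memorystore instance

def get_visits():
    raw = REDIS.get('visits')                 # try the cache first
    if raw is not None:
        return pickle.loads(raw)              # Redis stores bytes, not objects
    visits = fetch_visits()                   # hypothetical helper: query the database
    REDIS.set('visits', pickle.dumps(visits), ex=3600)  # cache for an hour
    return visits

Note how similar the two flows are: the visible differences are creating a client object and serializing values yourself.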

Wrap-up

The migration covered begins with the Module 12 sample app (“START”). Migrating the caching system to Cloud Memorystore along with other requisite updates results in the Module 13 sample app (“FINISH”), with an optional port to Python 3. To practice this migration and prepare for your own, follow the codelab to do it by hand while following along in the video.

While the code migration demonstrated seems straightforward, the most critical change is that Cloud Memorystore requires dedicated server instances. For this reason, a Serverless VPC Access connector is also needed to connect your App Engine app to those Memorystore instances, requiring yet more dedicated servers. Furthermore, neither Cloud Memorystore nor Serverless VPC Access is a free service, and neither has an “Always Free” tier quota. Before moving forward with this migration, check the pricing documentation for Cloud Memorystore for Redis and Serverless VPC Access to determine the cost considerations before making a commitment.

One key development that may affect your decision: In Fall 2021, the App Engine team extended support of many of the legacy bundled services like Memcache to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache even when upgrading to 3.x so long as you retrofit your code to access bundled services from next-generation runtimes.
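
If you take that route, the retrofit is modest. Below is a minimal sketch based on our understanding of the bundled-services support for next-generation runtimes: you add the appengine-python-standard package to requirements.txt, set app_engine_apis: true in app.yaml, and wrap your WSGI app.

# app.yaml must include:        app_engine_apis: true
# requirements.txt must include: appengine-python-standard
from flask import Flask
from google.appengine.api import memcache, wrap_wsgi_app

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)  # enables the bundled-service APIs

@app.route('/')
def root():
    # Memcache calls work as before, now from Python 3.
    return str(memcache.get('visits'))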

A move to Cloud Memorystore and today’s migration techniques will be here if and when you decide this is the direction you want to take for your App Engine apps. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we plan to cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Helping Developers Create Meaningful Voice Interactions with Android

Posted by Rebecca Nathenson, Director, Product Management

As we recently announced at I/O, we’re investing in new ways to make Google Assistant your go-to conversational helper for everyday tasks. And we couldn’t do that without a rich community of developers. While Conversational Actions were an excellent way to experiment with voice, the ecosystem has evolved significantly over the last 5 years and we’ve heard some important feedback: users want to engage with their favorite apps using voice, and developers want to build upon their existing investments in Android.

In response to that feedback, we’ve decided to focus our efforts on making App Actions with Android the best way for developers to create deeper, more meaningful voice-forward experiences. As a result, we will turn down Conversational Actions one year from now, in June 2023.

Improving voice-forward experiences

Whether someone asks Assistant to start a workout, order food, or schedule a grocery pickup, we know users are looking for ways to get things done more naturally using voice. To allow developers to integrate those helpful voice experiences into existing Android content more easily – without having to build from scratch – we’re committed to working with them to build App Actions with Android. This will give users more ways to engage with an app’s content – like voice queries and proactive suggestions – and access the app features they already know and love.

We’re continuing to expand the reach of App Actions in the following ways:

  • Integrating voice capabilities across Android devices such as mobile, auto, wearables and other devices in the home;
  • Bringing more traffic without more development work (i.e. Assistant can now direct users to apps even when queries don’t mention an app name);
  • Driving users to the app’s Play Store page if they don’t have the app installed yet; and
  • Surfacing in ‘All Apps’ search for Pixel 6 users.

App Actions not only make your apps easier to discover; they also let you offer deeper voice experiences by allowing users to simply ask for what they need in their queries. Moreover, we’ll continue investing in all of the popular Assistant experiences users love, like Timers, Media, Home Automation, Communications, and more.

Supporting our developers

We know that these changes aren’t easy, which is why we’re giving developers a year to prepare for the turndown of Conversational Actions. We’re here to help you navigate this transition with helpful resources.

Building the future together

Looking ahead, we envision a platform that is intuitive, natural, and voice-forward – and one that allows developers to leverage the entire Android ecosystem of devices so they can easily reach more users. We’re always looking to improve the Assistant experience and we’re confident that App Actions is the best way to do that. We’re grateful for all you’ve done to build the Google Assistant ecosystem over the past 5 years and we’re here to help navigate the changes as we continue to make it even better. We’re excited about what lies ahead and we’re grateful to build it together.

How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun

Background

In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called “bundled services”) to Cloud standalone equivalent services. Once this has been accomplished, apps are much more portable, making them flexible enough to run on other Cloud compute platforms should that be desired or advantageous.

In today’s Module 12 video, we start our journey by implementing App Engine’s Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps rely on a database, and in many situations, they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We’ll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page “visits,” storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user’s visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how each visit is registered. After the update, the app first attempts to fetch these visits from the cache. If cached results are available and “fresh” (within the hour), they’re used immediately; but if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.

Adding App Engine Memcache usage to sample app
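
As a rough, hedged sketch of that flow (identifiers here are illustrative, not the sample’s actual code, and the new-visitor check is simplified away):

from google.appengine.api import memcache

HOUR = 3600  # cache lifetime in seconds

def handle_visit(visitor):
    visits = memcache.get('visits')           # fresh cached results?
    if visits is None:                        # cache empty or hour elapsed:
        store_visit(visitor)                  #   register the visit (as before)
        visits = fetch_visits()               #   re-query the most recent visits
        memcache.set('visits', visits, HOUR)  #   cache the latest collection
    return visits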

Wrap-up

Today’s “migration” began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab, doing it by hand while following along in the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Coral, Google’s platform for Edge AI, chooses ASUS as OEM partner for global scale

We launched Coral in 2019 with a mission to make edge AI powerful, private, and efficient, and also accessible to a wide variety of customers with affordable tools that reliably go from prototype to production. In these first few years, we’ve seen strong growth in demand for our products across industries and geographies, and with that, a growing need for worldwide availability and support.

That’s why we’re pleased to announce that we have signed an agreement with ASUS IoT, to help scale our manufacturing, distribution and support. With decades of experience in electronics manufacturing at a global scale, ASUS IoT will provide Coral with the resources to meet our growth demands while we continue to develop new products for edge computing.

ASUS IoT is a sub-brand of ASUS dedicated to the creation of solutions in the fields of AI and the internet of things (IoT). Their mission is to become a trusted provider of embedded systems and the wider AI and IoT ecosystem. ASUS IoT strives to deliver best-in-class products and services across diverse vertical markets, and to partner with customers in the development of fully-integrated and rapid-to-market applications that drive efficiency – providing convenient, efficient, and secure living and working environments for people everywhere.

ASUS IoT already has a long-standing history of collaboration with Coral, being the first partner to release a product using the Coral SoM when they launched the Tinker Edge T development board. ASUS IoT has also integrated Coral accelerators into their enterprise-class intelligent edge computers and was the first to release a multi-Edge-TPU device with the award-winning AI Accelerator PCIe Card. Because we have this history of collaboration, we know they share our strong commitment to new innovation in edge computing.

ASUS IoT also has established manufacturing and distribution processes, and a strong reputation in enterprise-level sales and support. So we’re excited to work with them to enable scale and long-term availability for Coral products.

With this agreement, the Coral brand and user experience will not change, as Google will maintain ownership of the brand and product portfolio. The Coral team will continue to work with our customers on partnership initiatives and case studies through our Coral Partnership Program. Those interested in joining our partner ecosystem can visit our website to learn more and apply.

Coral.ai will remain the home for all product information and documentation, and in the coming months ASUS IoT will become the primary channel for sales, distribution and support. With this partnership, our customers will gain access to dedicated teams for sales and technical support managed by ASUS IoT.

ASUS IoT will be working to expand the distribution network to make Coral available in more countries. Distributors interested in carrying Coral products will be able to contact ASUS IoT for consideration.

We continue to be impressed by the innovative ways in which our customers use Coral to explore new AI-driven solutions. And now with ASUS IoT bringing expanded sales, support and resources for long-term availability, our Coral team will continue to focus on building the next generation of privacy-preserving features and tools for neural computing at the edge.

We look forward to the continued growth of the Coral platform as it flourishes and we are excited to have ASUS IoT join us on our journey.

How can App Engine users take advantage of Cloud Functions?

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction

Recently, we discussed containerizing App Engine apps for Cloud Run, with or without Docker. But what about Cloud Functions… can App Engine users take advantage of that platform somehow? Back in the day, App Engine was always the right decision, because it was the only option. With Cloud Functions and Cloud Run joining in the serverless product suite, that’s no longer the case.

Back when App Engine was the only choice, it was selected to host small, single-function apps. Yes, when it was the only option. Other developers have created huge monolithic apps for App Engine as well… because it was also the only option. Fast forward to today where code follows more service-oriented or event-driven architectures. Small apps can be moved to Cloud Functions to simplify the code and deployments while large apps could be split into smaller components, each running on Cloud Functions.

Refactoring App Engine apps for Cloud Functions

Small, single-function apps can be seen as a microservice or an API endpoint “that does something,” or they serve some utility likely called as the result of some event in a larger multi-tiered application, say to update a database row or send a customer email message. App Engine apps require some kind of web framework and routing mechanism, while their Cloud Functions equivalents are freed from much of those requirements. Refactoring these types of App Engine apps for Cloud Functions will likely require less overhead, ease maintenance, and allow common components to be shared across applications.

Large, monolithic applications are often made up of multiple pieces of functionality bundled together in one big package, such as requisitioning a new piece of equipment, opening a customer order, authenticating users, processing payments, performing administrative tasks, and so on. By breaking this monolith up into multiple microservices, each implemented as an individual function, each component can then be reused in other apps, maintenance is eased because bugs surface closer to their root causes, and developers won’t step on each other’s toes.

Migration to Cloud Functions

In this latest episode of Serverless Migration Station, a Serverless Expeditions mini-series focused on modernizing serverless apps, we take a closer look at this product crossover, covering how to migrate App Engine code to Cloud Functions. There are several steps you need to take to prepare your code for Cloud Functions:

  • Divest from legacy App Engine “bundled services,” e.g., Datastore, Taskqueue, Memcache, Blobstore, etc.
  • Cloud Functions supports modern runtimes; upgrade to Python 3, Java 11, or PHP 7
  • If your app is a monolith, break it up into multiple independent functions. (You can also keep a monolith together and containerize it for Cloud Run as an alternative.)
  • Make appropriate application updates to support Cloud Functions

The first three bullets are outside the scope of this video and its codelab, so we’ll focus on the last one. The changes needed for your app include the following:

1. Remove unneeded and/or unsupported configuration
2. Remove use of the web framework and supporting routing code
3. For each of your functions, assign an appropriate name and accept the request object it will receive when it is called.

Regarding the last point, note that you can have multiple “endpoints” coming into a single function which processes the request path, calling other functions to handle those routes. If you have many functions in your app, having separate functions for every endpoint becomes unwieldy; if large enough, your app may be more suited for Cloud Run. The sample app in this video and corresponding code sample only has one function, so having a single endpoint for that function works perfectly fine here.

This migration series focuses on our earliest users, starting with Python 2. Regarding the first point, the app.yaml file is deleted. Next, almost all Flask resources are removed except for the template renderer (the app still needs to output the same HTML as the original App Engine app). All app routes are removed, and there’s no instantiation of the Flask app object. Finally for the last step, the main function is renamed more appropriately to visitme() along with a request object parameter.
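
Sketched in Python, the shape of that change looks roughly like this (hedged: fetch_visits() is a hypothetical helper, and the real sample’s details differ):

# BEFORE (App Engine): a Flask app object with routing
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/')
def root():
    visits = fetch_visits(request)  # hypothetical helper
    return render_template('index.html', visits=visits)

# AFTER (Cloud Functions): no app object, no routes; the platform calls
# the named function directly and passes in the request object.
from flask import render_template

def visitme(request):
    visits = fetch_visits(request)  # hypothetical helper
    return render_template('index.html', visits=visits)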

This “migration module” starts with the (Python 3 version of the) Module 2 sample app, applies the steps above, and arrives at the migrated Module 11 app. Implementing those required changes is illustrated by this code “diff”:

Migration of sample app to Cloud Functions

Next steps

If you’re interested in trying this migration on your own, feel free to try the corresponding codelab, which leads you step-by-step through this exercise, and use the video for additional guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 as well as content for the next-generation Cloud Functions service, so stay tuned. If you’re curious whether it’s possible to write apps that can run on App Engine, Cloud Functions, or Cloud Run with no code changes at all, the answer is yes. Hope this content is useful for your consideration when modernizing your own serverless applications!

How Jira for Google Chat uses the latest platform features for app and bot building

Posted by Kyle Zhao, Software Engineer, and Charles Maxson, Developer Advocate

Nothing breaks the flow of getting work done like having to stop what you’re doing in one application and switch over to another to look up information, log an event, create a ticket or check on the status of a project task. For Google Workspace users who also rely on Atlassian’s Jira Software for their issue tracking and project management needs, Jira for Chat helps bridge the gap between having conversations with your team in Google Chat and seamlessly staying on top of issues and tasks from Jira while keeping everyone in the loop.

Recently, there have been a number of enhancements to the Google Chat framework for developers that allows them to make connections between applications like Jira and Google Chat a whole lot better. And in this post, we’ll take a look at how the latest version of Jira for Chat takes advantage of some of those newer enhancements for building apps and bots for Chat. Whether you are thinking about building or upgrading your own integration with Chat, or are simply interested in getting more out of using Jira with Google Workspace for you and your team, we’ll cover how Jira for Chat brings those newer features to life.

Connections made easy: Improved Connection Flow

One of the most important steps for getting users to leverage any integration is to make it as easy as possible to set up. Setting up Jira to integrate with Chat requires two applications to be installed: 1) the Google Chat bot for Jira Cloud from the Atlassian Marketplace, and 2) Jira for Chat (unfortunately there are no direct links available, but you can navigate to it in the Chat catalog), located in the Google Chat application under the “+” icon to start a chat.

In the earlier version of Jira for Chat, the setup required a number of steps that were somewhat less intuitive. That’s changed with the redesigned connection flow, built around an improved connection wizard that provides detailed visual guidance for connecting Jira for Chat to your Jira instance.

The new wizard (made possible by enhancements to the Chat dialogs feature) takes the guesswork out of trudging through a number of tedious steps, shows actionable errors if something has been misconfigured or isn’t working, and makes setup easier by parsing out Jira URLs, guiding users along the way. See the connection wizard in action below. Now anyone can set it up like a pro!

Jira for Chat Connection Flow Wizard Dialog

Batched Notifications: Taking care of notification fatigue

A user favorite feature of Jira for Chat is its ability to keep you informed via Google Chat of updates to your team’s projects, tickets, and tasks. But nobody likes a ‘chatty’ app either, and notification fatigue is real—and really annoying. Notifications are only useful when they provide valuable information in a timely fashion without becoming a burden – otherwise they run the risk of being ignored or even turned off.

To avoid notification fatigue, the Jira Chat bot batches notifications, optimizing delivery based on the time elapsed since the last activity in an issue. When a lot of activity is happening in Jira, Jira for Chat sends all updates to a ticket as a single card to Google Chat, holding the card until at least 15 seconds have passed since the last update to the issue or 60 seconds have passed since the first update in the group. The latter keeps notifications fresh in case a lot of continuous activity is happening.

Updates to the same Jira issue are grouped in one notification card, until one of the following conditions is true:

  1. 15 seconds have passed without any additional updates to the issue.

    Example: Alice reassigned issue X at 6:00:00, and then added a comment at 6:00:10. Both the “assignee change” and the “new comment” will be grouped into a single notification, sent at 6:00:25.

  2. 60 seconds have passed since the first update in the group (to ensure timely delivery)

    Example: Alice reassigned issue X at 6:00:00, and kept adding comments every 10 seconds. A notification card should be posted around 6:01:00, with all the changes in the past 60 seconds.
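
Expressed as a hedged Python sketch (an illustration of the rule above, not the bot’s actual implementation):

import time

QUIET_WINDOW = 15  # seconds since the last update to the issue
MAX_DELAY = 60     # seconds since the first update in the group

class IssueBatch:
    """Accumulates updates to one Jira issue into a single card."""
    def __init__(self):
        self.updates, self.first_at, self.last_at = [], None, None

    def add(self, update):
        now = time.time()
        self.first_at = self.first_at or now
        self.last_at = now
        self.updates.append(update)

    def ready_to_send(self):
        if not self.updates:
            return False
        now = time.time()
        return (now - self.last_at >= QUIET_WINDOW or
                now - self.first_at >= MAX_DELAY)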

Example, Batching Notifications from 5 down to 1

Link Unfurling: Relevant context where you need it

One of the goals of integrating applications with Google Workspace is streamlining the flow of information with fewer clicks and fewer open tabs, making the new Link Unfurling feature a welcome addition to any Chat bot. Link Unfurling (also known as Link Previews) preemptively includes contextual information associated with a link passed to a Chat message, keeping the information inline and in context with the conversation while eliminating the need to interrupt your focus by following the link out of the conversation to its original source to gather more information.

Specifically with Jira for Chat, this means when a teammate posts a Jira link in Chat or pings you asking for more information about one of your tickets they’ve just linked in a message, you can now see that information immediately in the conversation along with the link, saving you the step of having to go back to Jira every time. Link unfurling with the Jira Chat bot happens automatically once the app has been added and configured within a Chat conversation; there’s nothing additional that users need to do, and any links that Jira can preview will automatically get previewed within Chat.

Link Unfurling example in Jira for Chat

Create Issue Dialog: Take action from within Chat

Imagine you are in a lengthy conversation thread with colleagues in Google Chat, when you come to the conclusion that the topic you are discussing warrants a new ticket being created in your Jira instance. Instead of pivoting away from the conversation in Chat to create a new ticket in Jira, you can now quickly create a new Jira issue in Chat thanks to Jira for Chat.

To create an issue from Chat, simply invoke the slash command /jira_create to bring up the Create Issue dialog (enabled by the Chat dialogs feature). Then specify the Project that you would like to assign the ticket to, select the Ticket Type, and enter a brief Summary. The rest of the fields, such as labels and description, are optional; those, as well as advanced fields, can always be filled out within your Jira instance at a later time if you would like. This way you can jump right back into the conversation, knowing you won’t forget to log the ticket, and without missing a beat in what your team is talking about.

Create a Jira Issue Dialog

Takeaway and More Resources

The new enhancements to Jira for Chat make it a super useful companion for teams that rely on Google Workspace and Jira Software to manage their work. Whether it’s the new and improved connection flow, the less-is-more batched notifications handling, or the instant gratification of creating issues directly from Chat, it’s not just a productivity booster but also a great showcase for how the types of apps you can build with Google Chat are evolving.

Get started with Jira for Chat today or learn how you can build your own apps for Google Chat with the developer docs. To keep up with all the news about the Google Workspace Platform, please subscribe to our newsletter.

Hyper-local ads targeting made easy and automated with Radium

Posted by Álvaro Lamas, Natalija Najdova

Location targeting helps your advertising focus on finding the right customers for your business. Are you, as a digital marketer, spending a lot of time optimizing your location targeting settings for your digital marketing campaigns? Are your ads running only in locations where you can deliver your services to your users, or outside as well?

Read further to find out how Radium can help automate your digital marketing campaigns location targeting and make sure you only run ads where you deliver your services.

The location targeting settings challenge

Configuring accurate location targeting settings in marketing platforms like Google Ads allows your ads to appear in the geographic locations that you choose: countries, cities, and zip codes, OR a radius around a location. As a result, precise geo targeting could help improve the KPIs of your campaigns, such as return on investment (ROI) and cost per acquisition (CPA) at high volumes.

Mapping your business area to the available targeting options (country, city and zip code targeting or radius targeting) in Marketing Platforms is a challenge that every business doing online marketing campaigns has faced. This challenge becomes critical if you offer a service that is only available in a certain geographical area. This is particularly relevant for Food or Grocery Delivery Apps or organizations that run similar business models.

Adjusting these location targeting settings is a time-consuming process. In addition, manually translating your business or your physical stores’ delivery areas into geo targeting settings is an error-prone process. Not having optimal targeting options might lead to ads shown to users you cannot really deliver your services to, so you would likely lose money and time setting location targeting manually.

How can Radium help you?

Radium is a simple web application, built on Apps Script, that can save you money and time. Its UI helps you automatically translate a business area into radius targeting settings, one of the three options for geo targeting in Google Ads. It also provides you with an overview of the geographical information about how well the radius targeting overlaps with your business delivery area.

It has a few extra features, like merging several areas into one and generating the optimal radius targeting settings for the combined area.

How does it work?

You can get your app deployed and running in less than an hour following these instructions. Once you’re done, in no time you can customize and improve your radius targeting settings to better meet your needs and optimize your marketing efforts.

For each delivery area that you provide, you will be able to visualize different circles in the UI and either select one of the default circles or opt for custom circle radius settings:

  • Large Circle: Pre-generated circle that encloses the rectangle surrounding the targeting area
  • Small Circle: Pre-generated circle contained in the rectangle surrounding the targeting area, touching its sides
  • Threshold Circle: Pre-generated circle with the minimum radius to cover at least 90% of your delivery area, to maximize targeting and minimize waste
  • Custom Circle: Circle whose center and radius can be customized manually by drag-and-drop and using the controls of the UI

Take advantage of the metrics to compare all the radius targeting options and select the best fit for your needs. In red you can see the visualization of the business targeting area and, overlaid in gray, the generated radius targeting.

Metrics:

  • Radius: radius of the circle, in km
  • % Intersection: area of the Business Targeting Area inside the circle / total Business Targeting Area size
  • % Waste: area of the circle excluding the Business Targeting Area / total Business Targeting Area size
  • Circle Size: area of the circle, in km²
  • Intersection Size: area of the Business Targeting Area inside the circle, in km²
  • Waste Size: area of the circle excluding the Business Targeting Area, in km²
  • Circle Score: % Intersection – % Waste. The highest score represents the sweet spot, maximizing the targeting area and minimizing the waste area (see the sketch below)
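
For intuition, here is a hedged sketch of how metrics like these could be computed with the shapely geometry library; Radium’s own implementation may differ, and coordinates are assumed to be in a projected, km-based system.

from shapely.geometry import Point, Polygon

def circle_metrics(area: Polygon, center: tuple, radius_km: float):
    circle = Point(center).buffer(radius_km)       # approximate circle polygon
    intersection = area.intersection(circle).area  # area inside both shapes
    pct_intersection = intersection / area.area
    pct_waste = (circle.area - intersection) / area.area
    score = pct_intersection - pct_waste           # Circle Score
    return pct_intersection, pct_waste, score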

Once you are done optimizing the radius settings, it’s time to activate them in your marketing campaigns. Radium offers you different ways of storing and activating this output, so you can use the one that better fits your needs:

  • Export your data to a spreadsheet. This will allow you to keep a mapping of readable names for each delivery area and its targeting settings, to generate the campaign settings in the CSV format expected by Google Ads, and to bulk upload them using Google Ads Editor
  • Directly download the CSV file that can be uploaded to Google Ads via Google Ads Editor to bulk upload the settings of your campaigns
  • Upload them manually using the Google Ads UI

Find all the details about how to activate your location targeting settings in this step-by-step guide.

Getting started with Radium

There are only 2 things you need to have in order to benefit from Radium:

  • A Maps JavaScript API key (very easy to generate)
  • A map of your business’s delivery areas, in either format:
    • KML file representing the polygon-shaped targeting areas (see sample file)
    • CSV file with the lat-lng, radius, and name of the area each entry belongs to, more oriented to physical stores and restaurants (see sample file)

To get you started please visit the Radium repository on Github.

Summary

In conclusion, Radium helps you automate the location targeting configuration and optimization for your Google Ads campaigns, saving you time and minimizing the errors of manual adjustments.

Prediction Framework, a time saver for Data Science prediction projects

Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil

Acquiring high-value customers using predicted lifetime value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine-learning-processed signals… all of those marketing scenarios require analyzing first-party data, performing predictions on the data, and activating the results in the different marketing platforms, like Google Ads, as frequently as possible to keep the data fresh.

Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented, and cost-effective ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it’s very easy to fall into reinventing the wheel every time, or to manually copy and paste structural code, increasing the risk of introducing errors.

Wouldn’t it be great to have a common reusable structure and just add the specific code for each of the stages?

Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.

Prediction Framework is a fully customizable pipeline that allows you to simplify the implementation of prediction projects. You only need to have the input data source, the logic to extract and process the data and a Vertex AutoML model ready to use along with the right feature list, and the framework will be in charge of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of projects will be created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction and post-processing, in addition to some other operational functionality including backfilling, throttling (for API limits), synchronization, storage and reporting.

The Prediction Framework was built to be hosted in the Google Cloud Platform and it makes use of Cloud Functions to do all the data processing (extraction, preparation, filtering and post-prediction processing), Firestore, Pub/Sub and Schedulers for the throttling system and to coordinate the different phases of the predictive process, Vertex AutoML to host your machine learning model and BigQuery as the final storage of your predictions.

Prediction Framework Architecture

To get started with the Prediction Framework, a configuration file needs to be prepared with some environment variables: the Google Cloud project to be used, the data sources, the ML model to make the predictions, and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering, and post-processing need to be added to the deployment files. Then, the deployment is done automatically using a deployment script provided by the tool.

Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:

  • Extract: this step will, on a timely basis, query the transactions from the data source corresponding to the run date (scheduler or backfill run date) and store them in a new table in the local project’s BigQuery.
  • Prepare: immediately after the extract of the transactions for one specific date is available, the data will be picked up from the local BigQuery and processed according to the specs of the model. Once the data is processed, it will be stored in a new table in the local project’s BigQuery.
  • Filter: this step will query the data stored by the prepare process, filter the required data, and store it in the local project’s BigQuery (e.g., only taking new customers’ transactions into consideration; what counts as a “new customer” is up to the instantiation of the framework for the specific use case, and is covered later).
  • Predict: once the new customers are stored, this step will read them from BigQuery and call the prediction using the Vertex API. Once the data is ready, it will be stored in BigQuery within the target project.
  • Post_process: a formula could be applied to the AutoML batch results to tune the values or to apply thresholds (see the sketch after this list). Once the data is ready, it will be stored in BigQuery within the target project.
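
As an illustration of what such a post-processing formula might look like (hedged: the table and column names below are hypothetical, not the framework’s actual schema):

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT customer_id,
           -- zero out low predictions and cap the rest
           IF(predicted_value < 5.0, 0, LEAST(predicted_value, 500)) AS tuned_value
    FROM `my-project.predictions.automl_batch_results`
"""
rows = client.query(sql).result()  # the formula runs inside BigQuery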

One of the powerful features of the prediction framework is that it allows backfilling directly from the BigQuery user interface, so in case you’d need to reprocess a whole period of time, it could be done in literally 4 clicks.

In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.

For additional information and to start experimenting, you can visit the Prediction Framework repository on Github.

How to get started in cloud computing

Posted by Google Cloud training & certifications team

Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.

You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.

Get hands-on experience with cloud computing

Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.

There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for particular job roles, like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.

Learn live from cloud experts

Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.

If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.

You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine: how to create VM instances from scratch and from existing templates, how to connect them together, and how to end up with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.

Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.

Explore Google Cloud infrastructure

Cloud infrastructure is the backbone of the internet. Understanding cloud infrastructure is a good starting point for digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing and help you figure out what you like best, whether it’s networking, security, or application development.

Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.

Show off your skills

Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.

Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.

You can also use them to start preparing for Google Cloud certifications which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend having at least 6 months or up to several years of industry experience depending on the material.

Ready to get started in the cloud? Visit the Google Cloud training page to see all your options, from in-person classes and online courses to special events and more.

Promote your Google Workspace Marketplace apps with the new badge for developers

Posted by Elena Kingbo, Program Manager

Recently, Google Workspace Marketplace launched new features for developers to improve the application search and evaluation experience in the Marketplace for our Google Workspace admins and end users.

Continuing with updates to further improve the developer experience, we are excited to announce Google Workspace Marketplace badges.

The new badges will allow developers to promote their published Google Workspace Marketplace applications on their own websites. Users will be taken directly to the Marketplace application listing, where they can review application details, privacy policy, terms of service, and more. These users will then be able to securely install applications directly from the Google Workspace Marketplace.

This promotional tool can help you reach more potential users and grow your business. By being part of the Google Workspace Marketplace ecosystem, you receive direct access to the more than 3 billion Google Workspace users, where they can browse, search, and install solutions they need. To date, we’ve seen a stunning 4.8 billion apps installed in Google Workspace, transforming the way users connect, create, and collaborate.

The new Google Workspace Marketplace badge for developers.

Use the badge to show your existing and potential customers where they can find your application, backed by Google Workspace’s industry-leading standards of quality, security, and compliance. For developers with products available globally, the badge is available in 29 languages including Arabic, Spanish, Vietnamese, and more.

The new Google Workspace Marketplace badge creation page for developers.

Refresh your Marketplace application listing and marketing assets

When you are ready to incorporate the new badge into your website to feature your Marketplace app, we recommend that you also review the creatives and information used in your app’s listing in the Google Workspace Marketplace. You should also include well-designed feature graphics in your listing and localize your feature graphics, screenshots, and description to improve conversions globally.

Stay on top of developer news

To stay on top of the latest Google Workspace Platform updates for developers, subscribe to our Google Workspace Developers Newsletter for monthly updates, follow @workspacedevs, and join in on the community conversation.

Personalize user journeys by Pushing Dynamic Shortcuts to Assistant

Posted by Jessica Dene Earley-Cha, Developer Relations Engineer

Like many other people who use their smartphone to make their lives easier, I’m way more likely to use an app that adapts to my behavior and is customized to fit me. Android apps can already support some personalization, like the ability to long-press an app icon to see a list of common user journeys. When I long-press my Audible app (an online audiobook and podcast service), it gives me a shortcut to the book I’m currently listening to; right now that is Daring Greatly by Brené Brown.

Now, imagine if these shortcuts could also be triggered by a voice command – and, when relevant to the user, show up in Google Assistant for easy use.

Wouldn’t that be lovely?

Dynamic shortcuts on a mobile device

Well, now you can do that with App Actions by pushing dynamic shortcuts to the Google Assistant. Let’s go over what Shortcuts are, what happens when you push dynamic shortcuts to Google Assistant, and how to do just that!

Android Shortcuts

As an Android developer, you’re most likely familiar with shortcuts. Shortcuts give your users the ability to jump into a specific part of your app. For cases where the destination in your app is based on individual user behavior, you can use a dynamic shortcut to jump to a specific thing the user was previously working with. For example, let’s consider a ToDo app, where users can create and maintain their ToDo lists. Since each item in the ToDo list is unique to each user, you can use Dynamic Shortcuts so that users’ shortcuts can be based on the items on their ToDo list.

Below is a snippet of an Android dynamic shortcut for the fictional ToDo app.

// Build the dynamic shortcut for this ToDo task. (Kotlin has no `new`
// keyword, and setIcon() takes an androidx IconCompat.)
val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    .setIntent(intent)
    .build()

// Publish (or update) the shortcut at runtime.
ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

Dynamic Shortcuts for App Actions

If you’re pushing dynamic shortcuts, it’s a short hop to make those same shortcuts available for use by Google Assistant. You can do that by adding the Google Shortcuts Integration library and a few lines of code.

To extend a dynamic shortcut to Google Assistant through App Actions, two jetpack modules need to be added, and the dynamic shortcut needs to include .addCapabilityBinding.

val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    // Bind the shortcut to the GET_THING Built-In Intent so Assistant can use it.
    .addCapabilityBinding("actions.intent.GET_THING", "thing.name", listOf(task.title))
    .setIntent(intent)
    .build()

ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

The addCapabilityBinding method binds the dynamic shortcut to a capability, which is a declared way a user can launch your app to the requested section. If you don’t already have App Actions implemented, you’ll need to add capabilities to your shortcuts.xml file. A capability is an expression of a relevant feature of an app and contains a Built-In Intent (BII). BIIs are a language model for voice commands that Assistant already understands, and linking a BII to a shortcut allows Assistant to use the shortcut as the fulfillment for a matching command. In other words, with capabilities, Assistant knows what to listen for and how to launch the app.

In the example above, addCapabilityBinding binds that dynamic shortcut to the actions.intent.GET_THING BII. When a user requests one of the items in their ToDo app, Assistant will process the request and trigger the capability with the GET_THING BII that is listed in their shortcuts.xml.

<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
        android:action="android.intent.action.VIEW"
        android:targetPackage="YOUR_UNIQUE_APPLICATION_ID"
        android:targetClass="YOUR_TARGET_CLASS">
      <!-- Eg. name = the ToDo item -->
      <parameter
          android:name="thing.name"
          android:key="name"/>
    </intent>
  </capability>
</shortcuts>

So in summary, the process to add dynamic shortcuts looks like this:

1. Configure App Actions by adding two Jetpack modules (the ShortcutManagerCompat library and the Google Shortcuts Integration library). Then associate the shortcut with a Built-In Intent (BII) in your shortcuts.xml file. Finally, push the dynamic shortcut from your app.

2. Two major things happen when you push your dynamic shortcuts to Assistant:

  1. Users can open dynamic shortcuts through Google Assistant, fast tracking users to your content
  2. During contextually relevant times, Assistant can proactively suggest your Android dynamic shortcuts to users, displaying them on Assistant-enabled surfaces.

Not too bad. I don’t know about you, but I like to test out new functionality in a small app first. You’re in luck! We recently launched a codelab that walks you through this whole process.

Dynamic Shortcuts Codelab

Looking for more resources to help improve your understanding of App Actions? We have a new learning pathway that walks you through the product, including the dynamic shortcuts that you just read about. Let us know what you think!

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AppActions to share what you’re working on. Can’t wait to see what you build!

Improve your development workflow with Interactive Canvas DevTools

Posted by Nick Felker, Developer Relations Engineer

Interactive Canvas helps developers build multimodal apps for games, storytelling, and educational experiences on smart displays like the Nest Hub and Nest Hub Max. Setting up an Interactive Canvas app involves building a web app, a conversational model, and a server-based fulfillment to respond to voice commands. All three of these components need to be built out to test your apps using the simulator in the Actions Console. This works for teams who are able to build all three at once… but it means that everything has to be hooked up, even if you just want to test out the web portion of your app. In many cases, the web app provides the bulk of the functionality of an Interactive Canvas app. We recently published Interactive Canvas DevTools, a new Chrome extension that helps unbundle this development process.

Interactive Canvas DevTool Extension

Using Interactive Canvas DevTools

After installing the Interactive Canvas DevTools from the Chrome Web Store, you’ll see a new Interactive Canvas tab when you open Chrome DevTools.

When you load your web app in your browser, from a publicly hosted URL, localhost, or a remote device on your network, this tab lets you directly interface with the Interactive Canvas callbacks registered on the page to quickly and iteratively test your experience. Suggestion chips are created after every execution to let you replay the same command later.

To get started even faster, you can go to the Preferences tab and click the Import /SDK button. This will open a directory picker. You can select your project’s SDK folder. The extension will identify JSON payloads and TTS marks in your project and surface them as suggestion chips automatically.

JSON historical object changes

When fields of the JSON object change, you can view the changes in a colored diff.

Methods that send data to your webhook are instead rerouted to the History tab. This tab hosts a list of every text query and canvas state change in reverse chronological order. This allows you to view how changes in your web app would affect your conversational state. Each time the canvas state changes, you can see a visual representation of which fields changed.

Different levels of notice when using an operation unsupported in Interactive Canvas.

There are a number of other features that enhance the developer experience. For example, for browser methods that are not supported in Interactive Canvas, you can optionally log a warning or throw an error when you try to use them. This will make it easier to identify compatibility issues sooner and enforce these policies while debugging.

Nest Hub devices in the Device list

You are able to set the window to match the Nest Hub screen.

You can also add a header element onto your page that will help you optimize your layout for a smart display. Combined with the Nest Hub and Nest Hub Max as new device presets in Chrome DevTools, you are able to set your development environment to be an accurate representation of how your Action will look and behave on a physical device.

Interactive Canvas tab on a remote device

You can also send data to your remote device.

This extension also works if you have a physical device. If it is on the same network, you can connect to your smart display and open up the Interactive Canvas tab on that remote device. From there, you are able to send commands using the same interface.

You can install the extension now through the Chrome Web Store. We’re also happy to announce that the DevTools are Open Source! The source code for this extension has been published on GitHub, where you can find instructions on how to set up the project locally, and how to submit pull requests.

Thanks for reading! To share your thoughts or questions, join us on Reddit at /r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!