How to add Touch interactions to Android-based Fire TV applications

If you are building an app for Amazon Fire TV, it is good practice to make your app's UI fully navigable and usable with the remote control. This can be achieved in multiple ways, but the key concept is to map movement and interaction inside your app to the D-Pad and buttons on the Fire TV remote. The Fire TV documentation on UX best practices covers these basic aspects of app navigation and input.

With Fire TV expanding to automotive, it's now important to add touch interactions to Fire TV apps on top of the D-Pad navigation, as customers will be able to use both remote controls and touch on these devices.

In this tutorial, we will see how to modify Android-based Fire TV apps designed for D-Pad navigation so that they also support touch interaction, and how to provide a good touch-based UX.

The tutorial will cover 5 key steps:

 

  • Step 1: Understanding UI Directional Navigation and focus with D-Pad
  • Step 2: Applying Directional Navigation to TV apps layouts
  • Step 3: Adding touch interactions for clicks: OnClickListeners
  • Step 4: Managing view focus between D-Pad and touch interactions
  • Step 5: Additional best practices and testing touch on Fire TV

 

Step 1: Understanding UI Directional Navigation and focus with D-Pad

Android-based applications for Fire TV should follow clear platform-level patterns for remote-based navigation. Since Fire OS is built on top of Android, it follows the same layout and design patterns as standard Android apps.

In order to map app UI navigation to the remote D-Pad automatically, and to specify the order in which the customer navigates the different Android Views, we need to use Android's Directional Navigation (see Android's documentation here). This is a best practice, and if your Android application doesn't follow this pattern, it's important to apply the following changes, as this impacts how the touch behaviour connects to the D-Pad behaviour.

Directional Navigation requires us to specify, for each "focusable" view, which view should be selected next. This allows the system to automatically determine which view receives focus when a user presses the navigation buttons on their remote D-Pad (Up, Down, Left, Right).

This is achieved by adding the following attributes to the views in our layout XML files:

android:nextFocusDown, android:nextFocusRight, android:nextFocusLeft, android:nextFocusUp

each set to the ID of the view that should receive focus next in that direction.

For example:

<TextView android:id="@+id/Item1"
          android:nextFocusRight="@+id/Item2"/>

This allows focus to move from the TextView Item1 to the view Item2 when the customer presses the "Right" button on their D-Pad. The same applies to all navigation directions.
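
If your views are created in code rather than declared in XML, the same relationships can be set programmatically through the View API. A minimal sketch, reusing the IDs from the XML example above:

TextView item1 = findViewById(R.id.Item1);

// Equivalent of android:nextFocusRight, set at runtime.
item1.setNextFocusRightId(R.id.Item2);

// The other directions have matching setters:
// setNextFocusUpId(), setNextFocusDownId(), setNextFocusLeftId().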

 

Step 2: Applying Directional Navigation to TV apps layouts

Before we apply touch interactions to our Fire TV app, we need to make sure that we create a consistent Directional Navigation experience between D-pad and touch navigation.

This is quite simple for basic app interfaces, but media and entertainment TV app interfaces can be quite complex, as more often than not they need to display dynamic content. As a result, most views might be generated at runtime.

In order to achieve this, most developers use Android Views that can easily hold dynamic content. A good example is the RecyclerView, a component that populates dynamic content from an adapter. RecyclerView is efficient and implements one of the standard Android patterns: the ViewHolder.

However, since the content of a RecyclerView is dynamic, we need to make sure that the navigation between the views which are generated inside RecyclerViews is correct.

In order to demonstrate this, we created a simple application which simulates the standard implementation of a TV interface. This application has two main UI components:

  • A first LinearLayout called “menuLayout”, containing a RecyclerView called “recyclerViewMenu” which itself contains the left-side menu with all the categories
  • A second LinearLayout called “rowsLayout” containing other RecyclerViews which instead contain all the movies and content that can be played

On the left you can see the menuLayout in black, and on the right the rowsLayout in grey

 

While this is an oversimplification for the sake of this tutorial, as your app might have more complex nesting for its views, this represents the skeleton of a dynamic Media/TV App UI.

What you want to do now is to define how the directional navigation works on this layout.

The first thing we want to make sure of is that we can actually move from our categories menu to the content rows. To do this, we set nextFocusRight on the menu LinearLayout to point to the first row RecyclerView:

<LinearLayout
   android:id="@+id/menuLayout"
   [...]
   android:nextFocusRight="@id/rowRecyclerView1">

This way, when the user presses the right button, focus automatically moves to the first RecyclerView on the right.

Another thing we need to do is define how navigation works between the items of a RecyclerView itself. Since the views of a RecyclerView are created dynamically at runtime, it is not practical to set the navigation direction manually on each individual view, and it cannot be done in the XML layout anyway. Instead, we use a specific attribute on the RecyclerView called descendantFocusability:

<androidx.recyclerview.widget.RecyclerView
   android:id="@+id/recyclerViewMenu"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:descendantFocusability="afterDescendants" />

By setting descendantFocusability to afterDescendants, we ensure that once the views are dynamically generated, the RecyclerView passes focus to the items it contains (in this case, the categories defined inside the menu RecyclerView).

Note: it is important to apply this to all of our RecyclerViews. The great news is that we don't have to define the focus relationship between each individual item manually, because Android takes care of that for us at the framework level.

We need to apply this to all the RecyclerViews in our right-side layout as well, and we need to define the Directional Navigation between each RecyclerView (for the sake of simplicity, our example defines four rows through four dedicated RecyclerViews).

At the end, our RecyclerViews should look like this:

<androidx.recyclerview.widget.RecyclerView
   android:id="@+id/rowRecyclerView1"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:descendantFocusability="afterDescendants"
   android:nextFocusDown="@id/rowRecyclerView2" />

<androidx.recyclerview.widget.RecyclerView
   android:id="@+id/rowRecyclerView2"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:descendantFocusability="afterDescendants"
   android:nextFocusDown="@id/recyclerView3" />

<androidx.recyclerview.widget.RecyclerView
   android:id="@+id/rowRecyclerView3"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:descendantFocusability="afterDescendants"
   android:nextFocusDown="@id/rowRecyclerView4" />

<androidx.recyclerview.widget.RecyclerView
   android:id="@+id/rowRecyclerView4"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:descendantFocusability="afterDescendants"/>

Notice how the last RecyclerView, rowRecyclerView4, doesn't have a nextFocusDown target, because there is no further RecyclerView to navigate to.
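
If you build these rows in code instead of inflating them from XML, the same attributes have programmatic equivalents on ViewGroup and View. A minimal sketch, assuming the rows have already been created with the IDs used above:

RecyclerView row1 = findViewById(R.id.rowRecyclerView1);
RecyclerView row2 = findViewById(R.id.rowRecyclerView2);

// Equivalent of android:descendantFocusability="afterDescendants".
row1.setDescendantFocusability(ViewGroup.FOCUS_AFTER_DESCENDANTS);
row2.setDescendantFocusability(ViewGroup.FOCUS_AFTER_DESCENDANTS);

// Equivalent of android:nextFocusDown: pressing Down on the remote
// moves focus from the first row to the second.
row1.setNextFocusDownId(R.id.rowRecyclerView2);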

Once we complete this step, we now have a fully navigable UI using the D-Pad. We can now look into how to properly touch-enable our interface by modifying the content of our RecyclerViews.

You can see how the UI is fully navigable using the remote even though the views are generated dynamically.

 

Step 3: Adding touch interactions for clicks: OnClickListeners

The next step is to add touch interactions when the user clicks on an item. Luckily for us, Android was built with touch in mind. To add click actions to our application's UI, we can use standard Android components that cover both the D-Pad interaction and the touch interaction.

The best and easiest way to implement click or touch actions on a view is to use the standard Android OnClickListener. An OnClickListener handles both touch clicks and D-Pad button clicks on a view, and it triggers a method called onClick() where you can execute any desired operation.

Note: If you have implemented the click action in your D-Pad based UI in any other way, you might need to add the OnClickListener on top of your custom implementation. This ensures that D-Pad clicks and touch clicks both execute the desired operation.
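
As a reference, here is the basic pattern on a single view (the view ID and the startPlayback() method are illustrative, not part of the sample app). The same listener fires whether the user presses the select button on the remote while the view is focused or taps the view on a touchscreen:

Button playButton = findViewById(R.id.playButton); // hypothetical view
playButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Triggered by both a D-pad "select" press and a touch tap.
        startPlayback(); // hypothetical method in your app
    }
});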

While the size of a view is not very important for D-Pad navigation, it is very relevant for touch interactions. Therefore, it's better to use larger views and touch areas in our UI to provide a good user experience.

In our simple application we are going to apply the OnClickListener to the layout itself rather than to the views inside it. We could also achieve this by expanding the internal views to fill the entire layout area and applying the OnClickListener to the individual views, such as the TextView or ImageView. However, applying it to the entire layout is a simple solution that fulfils our goal and doesn't require changing the look and feel of the UI at all.

The views are dynamic and created by the RecyclerViews, so we need to apply an individual OnClickListener to each element created by each RecyclerView. We do this by modifying the code of the RecyclerView adapters: we get a reference to the layout of each individual item and apply the OnClickListener in the onBindViewHolder() method of the adapter:

import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

import androidx.constraintlayout.widget.ConstraintLayout;
import androidx.recyclerview.widget.RecyclerView;

public class MenuItemsAdapter extends RecyclerView.Adapter<MenuItemsAdapter.ViewHolder> {

  private String[] localDataSet;

   /**
    * Provide a reference to the type of views that you are using
    * (custom ViewHolder).
    */
   public class ViewHolder extends RecyclerView.ViewHolder {
       private final TextView textView;
       private final ConstraintLayout menuConstraintLayout;

       public ViewHolder(View view) {
           super(view);
            // Get references to the views defined in the item layout
           textView = (TextView) view.findViewById(R.id.textView);
           menuConstraintLayout = view.findViewById(R.id.menuconstraintLayout);
       }

       public TextView getTextView() {
           return textView;
       }

       public ConstraintLayout getMenuConstraintLayout() {
           return menuConstraintLayout;
       }
   }

   /**
    * Initialize the dataset of the Adapter.
    *
    * @param dataSet String[] containing the data to populate views to be used
    * by RecyclerView.
    */
   public MenuItemsAdapter(String[] dataSet) {
       localDataSet = dataSet;
   }

   // Create new views (invoked by the layout manager)
   @Override
   public ViewHolder onCreateViewHolder(ViewGroup viewGroup, int viewType) {
       // Create a new view, which defines the UI of the list item
       View view = LayoutInflater.from(viewGroup.getContext())
               .inflate(R.layout.menulayout, viewGroup, false);

       return new ViewHolder(view);
   }



   // Replace the contents of a view (invoked by the layout manager)
   @Override
   public void onBindViewHolder(ViewHolder viewHolder, final int position) {

       // Get element from your dataset at this position and replace the
       // contents of the view with that element
       viewHolder.getTextView().setText(localDataSet[position]);
       viewHolder.getMenuConstraintLayout().setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
                // In this sample app we are just logging the click,
                // but here you could, for example, open a new Activity
                // or select a specific category.
                Log.e("Click ", "Clicked " + localDataSet[position]);
           }
       });
   }

   // Return the size of your dataset (invoked by the layout manager)
   @Override
   public int getItemCount() {
       return localDataSet.length;
   }
}

In order to show whether an item is focused or clicked, it's important to use backgrounds and drawables that define the different states a view can be in.

This is easily achieved using a selector drawable, which can contain multiple states such as focused and pressed (clicked).

menuselectordrawable.xml (used as the background for our menu layout)

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
   <item android:state_pressed="true"
       android:drawable="@android:color/holo_blue_bright" /> <!-- pressed -->
   <item android:state_focused="true"
       android:drawable="@android:color/holo_orange_light" /> <!-- focused -->
   <item android:state_hovered="true"
       android:drawable="@android:color/holo_green_light" /> <!-- hovered -->
   <item android:drawable="@android:color/background_dark" /> <!-- default -->
</selector>

In menulayout.xml, we then set this drawable as the background:

<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:tools="http://schemas.android.com/tools"
   android:id="@+id/menuconstraintLayout"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:background="@drawable/menuselectordrawable"
   android:focusable="true"
   android:focusableInTouchMode="true">

At this point, all of our UI views are clickable and will show a different background color when clicked (blue) and when focused (orange).

You can see the onClickListeners being correctly triggered via the remote control.

 

Step 4: Managing view focus between D-Pad and touch interactions

The next step is to build a consistent experience between D-Pad and touch interactions. This means making sure that the directional navigation works consistently whether we are using a remote or interacting through touch.

As we mentioned above, Android was built with touchscreens and touch interactions in mind. This means that the underlying layer managing our app's UI and views is mostly touch-enabled already.

In Android, views have an attribute called focusable, which by default is set to "auto", meaning it is up to the platform to determine whether a given view should be focusable. Interactive views like Button and EditText are focusable by default, as they are primary UI components, while layouts and other container views usually are not, as they are generally used just to define the UI structure.

In order to make our app fully touch-enabled, we need to make sure that the most important views in our app are focusable, and also we need to make sure that they can be focused on when the customer uses touch.

For this reason, we are going to edit two parameters in our views: focusable and focusableInTouchMode.

Going back to our sample app, we created two new separate layouts which are used to populate the individual items inside the “Categories” RecyclerView and the “rows” RecyclerView.

 

We need to make sure that:

  1. The whole layout is treated as a touch surface.
  2. Users can focus on the layout using both the D-Pad and touch.

We do this by setting both focusable and focusableInTouchMode to true.

 

menulayout.xml (defines the individual item for the categories on the left, which contains only a TextView)

<androidx.constraintlayout.widget.ConstraintLayout 
   [...]
   android:id="@+id/menuconstraintLayout"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:background="@drawable/menuselectordrawable"
   android:focusable="true"
   android:focusableInTouchMode="true">

   <TextView
       android:id="@+id/textView"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
      [...] />

cardlayout.xml (defines the individual card for the movie rows on the right; each card contains an ImageView and a TextView)

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout 
[...]
   android:id="@+id/cardconstraintLayout"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:background="@drawable/selectordrawable"
   android:focusable="true"
   android:focusableInTouchMode="true"
>

   <ImageView
       android:id="@+id/imageView"
       android:layout_width="150dp"
       android:layout_height="100dp"
       [...] />

   <TextView
       android:id="@+id/textView"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       [...] />
</androidx.constraintlayout.widget.ConstraintLayout>

By doing this, we ensure that whether the user touches the UI or navigates with the D-Pad, the right UI element will be focused. See the clip below for a demonstration.

In this clip you can see how the views are correctly clicked and focused on using touch interaction.
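
If you create these item layouts in code, the two flags can also be set programmatically, and an OnFocusChangeListener lets you react the same way whether focus arrives from the D-Pad or from touch. A minimal sketch, where itemLayout is illustrative (for example, the root layout bound in onBindViewHolder()):

// Equivalent of android:focusable and android:focusableInTouchMode.
itemLayout.setFocusable(true);
itemLayout.setFocusableInTouchMode(true);

// Optional: react when the item gains or loses focus, regardless of
// whether the focus change came from the remote or from touch.
itemLayout.setOnFocusChangeListener(new View.OnFocusChangeListener() {
    @Override
    public void onFocusChange(View v, boolean hasFocus) {
        v.setAlpha(hasFocus ? 1.0f : 0.85f); // simple visual cue
    }
});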

 

Step 5: Additional best practices and testing Touch on Fire TV

After completing the steps above, we will have successfully touch-enabled the most important components of our app UI.

There are additional simple steps to ensure that we are providing a great user experience for both touch and d-pad navigation:

 

  • Ensuring that all the views that need any kind of interaction have an OnClickListener assigned to them and can be focused on
  • Remember that touch interaction also includes gestures and scrolling. Therefore, do not rely only on the standard D-Pad navigation behaviour of views; make sure content can at least be scrolled through gestures (for example, by using ScrollViews and RecyclerViews where possible).
  • Secondary activities of your application (for example, a detail page or the playback UI) also need to be touch-enabled, so make sure that any settings pages are touch-enabled as well and use the same patterns described above.

 

How can you test Touch on Fire TV devices without a touchscreen?

The easiest solution is to connect a wireless mouse to your Fire TV. A mouse on Android simulates touch interaction. To connect one:

  1. Go to Settings
  2. Go to Remote and Bluetooth Devices
  3. Go to Other Bluetooth Devices
  4. Follow the on-screen instructions to connect your Bluetooth mouse
  5. After connecting the mouse, go back to your app. The mouse will show an on-screen cursor that you can use to simulate touch interactions, including clicks and gestures.

 

Conclusion

This tutorial gave you a practical first overview of how to touch-enable your Fire TV application.

For more details, please check out our documentation.

Introducing “Serverless Migration Station” Learning Modules

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


Helping users modernize their serverless apps

Earlier this year, the Google Cloud team introduced a series of codelabs (free, online, self-paced, hands-on tutorials) designed for technical practitioners modernizing their serverless applications. Today, we’re excited to announce companion videos, forming a set of “learning modules” made up of these videos and their corresponding codelab tutorials. Modernizing your applications allows you to access continuing product innovation and experience a more open Google Cloud. The initial content is designed with App Engine developers in mind, our earliest users, to help you take advantage of the latest features in Google Cloud. Here are some of the key migrations and why they benefit you:

  • Migrate to Cloud NDB: App Engine’s legacy ndb library used to access Datastore is tied to Python 2 (which has been sunset by its community). Cloud NDB gives developers the same NDB-style Datastore access but is Python 2-3 compatible and allows Datastore to be used outside of App Engine.
  • Migrate to Cloud Run: There has been a continuing shift towards containerization, an app modernization process making apps more portable and deployments more easily reproducible. If you appreciate App Engine’s easy deployment and autoscaling capabilities, you can get the same by containerizing your App Engine apps for Cloud Run.
  • Migrate to Cloud Tasks: While the legacy App Engine taskqueue service is still available, new features and continuing innovation are going into Cloud Tasks, its standalone equivalent that lets users create and execute App Engine and non-App Engine tasks.

The "Serverless Migration Station" videos are part of the long-running Serverless Expeditions series you may already be familiar with. In each video, Google engineer Martin Omander and I explore a variety of different modernization techniques. Viewers are given an overview of the task at hand, and a deeper-dive screencast takes a closer look at the code and configuration files and, most importantly, walks developers through the migration steps needed to transform the same sample app for each migration.

Sample app

The baseline sample app is a simple Python 2 App Engine NDB and webapp2 application. It registers every web page visit (saving visiting IP address and browser/client type) and displays the most recent queries. The entire application is shown below, featuring Visit as the data Kind, the store_visit() and fetch_visits() functions, and the main application handler, MainHandler.


import os
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.webapp import template

class Visit(ndb.Model):
    'Visit entity registers visitor IP address & timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'create new Visit entity in Datastore'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

def fetch_visits(limit):
    'get most recent visits'
    return (v.to_dict() for v in Visit.query().order(
        -Visit.timestamp).fetch(limit))

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(10)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

app = webapp2.WSGIApplication([
    ('/', MainHandler),
], debug=True)

Baseline sample application code

Upon deploying this application to App Engine, users will get output similar to the following:


VisitMe application sample output

This application is the subject of today’s launch video, and the main.py file above along with other application and configuration files can be found in the Module 0 repo folder.

Next steps

Each migration learning module covers one modernization technique. A video outlines the migration while the codelab leads developers through it. Developers will always get a starting codebase (“START”) and learn how to do a specific migration, resulting in a completed codebase (“FINISH”). Developers can hit the reset button (back to START) if something goes wrong or compare their solutions to ours (FINISH). The hands-on experience helps users build muscle-memory for when they’re ready to do their own migrations.

All of the migration learning modules, corresponding Serverless Migration Station videos (when published), codelab tutorials, START and FINISH code, etc., can all be found in the migration repo. While there’s an initial focus on Python 2 and App Engine, you’ll also find content for Python 3 users as well as non-App Engine users. We’re looking into similar content for other legacy languages as well so stay tuned. We hope you find all these resources helpful in your quest to modernize your serverless apps!

In-App Search using the Video Skills Kit (VSK) on Fire TV

 

Video Skills Kit (VSK) for Fire TV allows customers to more easily discover, find, and engage with content in your app without relying on a remote. A prerequisite to integrating your app with the VSK is to first ingest your catalog and then integrate with Universal Search and Browse on Fire TV. Catalog integration is currently only available to selected developers. VSK works together with Catalog Integration to surface your content in multiple ways across Fire TV.

VSK allows you to voice-enable the experience in your app. Customers can say "Alexa, watch Batman" (SearchAndPlay) or "Alexa, find Batman" (SearchAndDisplayResults) and Alexa will send a directive to your app, where you can take the customer into playback or show search results. Customers can skip the use of a verb and simply say "Batman", which is treated by Alexa as a search request. Alexa also supports more ambiguous requests such as "Alexa, find Tom Hanks movies" to find movies by actor, or "Alexa, find comedies" to search by genre.

 

How does VSK work on Fire TV?

With the App-only implementation for VSK, the Alexa Video Skill API sends directives directly to your app on a Fire TV device. Fire TV has a service called the “VSK Agent” that receives the Alexa directives and broadcasts them as an intent, which your app then handles through a BroadcastReceiver. The entire VSK integration is done directly within your Android code. If you already have Catalog Integration completed, and your app already has logic to handle text inputs for searches and other lookups, the integration can be completed in a short amount of time (a couple of weeks or less).

The Alexa service in the cloud does the hard work of interpreting the customer's request, determining the intent, and then packaging it into a directive so that you can process the request with your own app's logic. Directives sent to your app contain a structured and resolved representation of the request, including entities such as Video, Actor, Genre, and MediaType. See SearchAndPlay and SearchAndDisplayResults for a more comprehensive list of search and play utterances and the directives sent.

 

Best practices for building a great search experience

Here are some of the best practices for building a great search experience using VSK.

  • Declare search as a static capability if your app allows all customers to browse content regardless of the customer’s state (signed in) and other factors (subscription level). Otherwise, you can declare search as a dynamic capability to gate the feature.
  • Customers can say “Alexa, find Breaking Bad season 2 episode 3” to search (or watch) a TV series by Season and Episode. You can use Season and Episode fields to take the customer to the episode. If you’re missing either season or episode number, you should determine this based on the customer’s last watched episode.
  • Customers can say "Alexa, find Tom Hanks movies" to find movies by actor, or "Alexa, find comedies" to search by genre. You have the option to show search results using the Actor, Franchise, or Genre fields, or take the customer to special pages dedicated to that particular entity. If your app does not support lookup by these fields, you should fall back to a literal text search of the value sent through in the directive.
  • Leverage the SearchText field to help improve the relevancy of results that customers see in your app. For ambiguous requests (those that do not contain a title name), search text will give you an unstructured and more complete view of what the customer has asked – this includes additional entities and unresolved words. For example, “Alexa, watch popular comedy tv shows in HD” will give you the transcribed value “h.d. popular comedy tv shows”. Note that there is no word order or formatting guarantee. See SearchText for more information.
  • Search results that you present to the customer should include relevant artwork applicable to the titles. The artwork should make it easier for the customer to identify the content you are recommending in the search.

 

Handling SearchAndDisplayResults directives

Here is a SearchAndDisplayResults directive Alexa might send in response to a customer’s request to search for “Alexa, find Batman”.

EXTRA_DIRECTIVE_NAMESPACE: Alexa.RemoteVideoPlayer
EXTRA_DIRECTIVE_NAME: SearchAndDisplayResults
EXTRA_DIRECTIVE_PAYLOAD_VERSION: 3
EXTRA_DIRECTIVE_PAYLOAD: payload

payload contains the following:

{
    "payload": {
        "entities": [
            {
                "externalIds": {
                    "ENTITY_ID": "0"
                },
                "type": "Franchise",
                "uri": "entity://avers/franchise/Batman",
                "value": "Batman"
            }
        ],
        "searchText": [
            {
                "transcribed": "batman"
            }
        ],
        "timeWindow": {
            "end": "2016-09-07T23:59:00+00:00",
            "start": "2016-09-01T00:00:00+00:00"
        }
    }
}

With VSK integration, you create a BroadcastReceiver class in your app. The VSK Agent packages the directive in an intent, which is passed into the onReceive method. With the support of a JSON parser, you can retrieve both the customer's transcribed search request under transcribed and the entities object, which contains an array of entity objects to search, such as a Title, Genre, Actor, Franchise, Season, Episode, or MediaType. In this example, you should show the customer search results for "batman" in your app.

Once your app has handled the directive (successfully or not), your BroadcastReceiver class should send a response intent back with a status of true (for success) or false (for failure).
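
As an illustration only, a receiver handling this directive could look roughly like the sketch below. The extra key strings mirror the names listed above but are assumptions for this sketch, and the success/failure reporting is only indicated in a comment; follow the VSK documentation for the exact contract.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class AlexaDirectiveReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        // Key names are assumptions based on the fields shown above.
        String directiveName = intent.getStringExtra("EXTRA_DIRECTIVE_NAME");
        String payload = intent.getStringExtra("EXTRA_DIRECTIVE_PAYLOAD");

        if ("SearchAndDisplayResults".equals(directiveName) && payload != null) {
            try {
                JSONObject root = new JSONObject(payload).getJSONObject("payload");
                String transcribed = root.getJSONArray("searchText")
                        .getJSONObject(0).getString("transcribed");
                JSONArray entities = root.getJSONArray("entities");

                // Hand the query and entities to your app's existing search
                // logic, then report success or failure back to the VSK Agent
                // as described in the VSK documentation.
                Log.d("VSK", "Search requested for: " + transcribed
                        + " (" + entities.length() + " entities)");
            } catch (JSONException e) {
                Log.e("VSK", "Could not parse directive payload", e);
            }
        }
    }
}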

 

Get Started

Learn more about integrating VSK through the developer documentation here. You can also follow this high-level video tutorial about integrating the VSK into your Fire TV app.

Add dialogs and slash commands to your Google Workspace Chat bots

Posted by Charles Maxson, Developer Advocate

Developing your own custom Google Chat bot is a great way for users and teams to interact with your solutions and services both directly and within context as they collaborate in Chat. More specifically, Chat bots can be used in group conversations to streamline workflows, assist with activities in the context of discussions, and provide information and notifications in real time. Chat bots can also be used in direct messages, offering a new way to optimize workflows and personal productivity, such as managing project tasks or reporting time activity. Because use cases for bots are varied, you can consistently reach a growing audience of Chat users over time, directly where they work and uh-hum, chat.

Once you’ve identified your specific use case for your custom Chat bot, how you design the bot itself is super important. Bots that are intuitive and easy to use see better adoption and develop a more loyal following. Those that are not as fluid or approachable, or come across as confusing and complicated to use, will likely miss the mark of becoming an essential “sticky” tool even if your back end is compelling. To help you build an engaging, must-have Google Chat bot, we recently added a one-two feature punch to the Chat bot framework that allows you to build a much richer bot experience than ever before!

(Re)Introducing slash commands for Google Chat bots

The first new(er) feature that you can leverage to enhance the usability of your Chat bots is slash commands. Released a few months back, slash commands simplify the way users interact with your Chat bot, offering them a visual, leading way to discover and execute your bot's primary features. Unlike bots created prior to slash commands, where users had to learn what features a bot offered and then invoke the bot and type the command correctly to execute them, slash commands make Chat bot usage faster and help users get the most out of your bot.

Users can now simply type “/” in the message line to reveal a list of all the functions offered by the bots available to the room or direct message, and select the one to their liking to execute it. Slash commands can be invoked standalone (e.g. /help) or include user added text as parameters (e.g. /new_task review project doc ) that the developer can handle when invoked. To help make bot command discovery even simpler, the slash commands list filters matching commands once the user starts typing beyond the / (e.g. “/h” shows all commands beginning with H). This is super helpful as more and more bots are added to a room, and as more bots with slash commands are introduced by developers. Also included directly in the Slash Command UI is a description of what each command does (up to 50 characters), easing the guesswork out of learning.

Example of implementing slashbot in Google Chat

As a developer, slash commands are straightforward to implement, and daresay essential in offering a better bot experience. In fact, if you have an existing Google Chat bot you’ve built and deployed, it’s likely more than worthwhile to revise your bot to include slash commands in an updated release.

To add slash commands to any Chat bot, you will need to register your commands in the Hangouts Chat API configuration page. (e.g. https://console.cloud.google.com/apis/api/chat.googleapis.com/hangouts-chat?project=<?yourprojectname?>) There is a section for slash commands that allows you to provide the /name and the description the user will see, along with the important Command Id unique identifier (a number between 1-1000) that you will later need to handle these events in your code.

Example of editing slash command

When a user invokes your bot via a Slash Command, there is a slashCommand field attached to the message sent to the bot that indicates the call was initiated from a Slash Command. Remember users can still @mention your bot to call it directly by name without a / command and this helps you distinguish the difference. The message also includes the corresponding commandId for the invoked command based on what you set up in the bot configuration page, allowing you to identify the user’s requested command to execute. Finally, the message also offers additional annotations about the event and includes any argumentText supplied by the user already parsed from the command text itself.

{
  ...
  "message": {
    "slashCommand": {
      "commandId": 4
    },
    "annotations": [
      {
        "length": 6,
        "slashCommand": {
          "type": "INVOKE",
          "commandId": 4,
          "bot": {
            "type": "BOT",
            "displayName": "Slashbot"
          },
          "commandName": "/debug"
        },
        "type": "SLASH_COMMAND"
      }
    ],
    ...
    "argumentText": " show code",
    "text": "/debug show code",
    ...
  }
}

Here is a simple example used to determine if a Slash Command was invoked by the user, and if so, runs the requested command identified by its Command Id.

function onMessage(event) {

  if (event.message.slashCommand) {

    switch (event.message.slashCommand.commandId) {
      case 1: // Command Id 1
        return { 'text': 'You called commandId 1' }

      case 2: // Command Id 2
        return { 'text': 'You called commandId 2' }

      case 3: // Help
        return { 'text': 'You asked for help' }
    }
  }
}

Introducing dialogs for Google Chat bots

The second part of the one-two punch of new Google Chat bot features is dialogs. This is a brand new capability being introduced to the Chat bot framework that allows developers to build user interfaces to capture inputs and parameters in a structured, reliable way. This is a tremendous step forward for bot usability because it simplifies and streamlines the process of users interacting with bot commands. Now with dialogs, users can be led visually to supply inputs via prompts, instead of having to wrap bot commands in natural language inputs and hope they used syntax the bot could decipher.

For developers, you can design UIs that are targeted to work precisely with the inputs you need users to supply your commands, without having to parse out arguments and logically infer the intent of users. In the end, dialogs will greatly expand the type of solution patterns and use cases that Chat bots can handle, as well as making the experience truly richer and more rewarding for users and developers alike.

Slashbot project notifier

Technically, Chat bot dialogs leverage the aforementioned slash commands combined with the existing Google Workspace Add-on Card framework to support the creation and handling of dialogs. To get started, you create a Slash Command that will invoke your dialog by setting its "Slash command triggers a dialog" option to true in the Slash Command configuration process, as seen below:

Example of enabling the slash command triggers a dialog setting

Once you have configured a Slash Command to trigger a dialog, it will send an onMessage event when invoked as it did before, but now the event includes new details indicating that it represents a dialog request. To handle this event, you can use the same approach as the non-dialog Slash Command example above: using the commandId, you can switch on the requested command to determine what the user asked for.

Designing the actual elements that the dialog renders is where you draw from the Google Workspace Add-on Card-based framework. If you've built one of the new generation of Google Workspace Add-ons, this part will be familiar: you construct widgets, add headers and sections, create events, and so on. In fact, you can even reuse or share some of your Add-on UIs within your Chat bots, but do note that there is currently a lighter subset of elements available for bots. The benefit of using Cards is that you can build modern, consistently styled user interfaces for your bots without getting bogged down in low-level details like managing tags or CSS. You can learn more about working with Cards starting here. To make building your Cards-based interfaces for Add-ons and Chat bots even easier, we have also just introduced the GWAO Card Builder tool, which employs a drag-n-drop visual designer to boost your development efforts.

Once you've assembled your Card's widgets, to make it render as a dialog when invoked you must specify that it is a DIALOG type within the action_response, as seen stubbed out below:

{
  "action_response": {
    "type": "DIALOG",
    "dialog_action": {
      "dialog": {
        "body": {
          "sections": [
            {
              "widgets": [
                {
                  "textInput": {
                    "label": "Email",
                    "type": "SINGLE_LINE",
                    "name": "fieldEmail",
                    "hintText": "Add others using a comma separator",
                    ...

Now with a working dialog, all that is left to do is handle user events once it's displayed. Again, this is similar to how you would handle events when working with Cards within Add-ons. Your bot will receive an event of type CARD_CLICKED with a DialogEventType set to SUBMIT_DIALOG. The actionMethodName value will let you know what element the user clicked to process the request, e.g. 'assign' as depicted below. The response includes the formInputs details, which are the user-provided inputs returned from the dialog, which you can process as your solution needs to.

{ dialogEventType: 'SUBMIT_DIALOG',
  type: 'CARD_CLICKED',
  action: { actionMethodName: 'assign' },
  ...
  common:
   { hostApp: 'CHAT',
     formInputs:
      { 'whotochoose-dropdown': [Object],
        whotochoose: [Object],
        email: [Object] },
     invokedFunction: 'assign' },
  isDialogEvent: true }

Once your bot is finished processing its task, it can respond back to the user in one of two ways. The first is with a simple acknowledgement (aka OK) response, letting them know their action was handled correctly and closing out the dialog.

{
  "action_response": {
    "type": "DIALOG",
    "dialog_action": {
      "action_status": "OK",
      ...

The other option is to respond with another dialog, allowing you to follow up with a new or revised dialog, which is useful for complex or conditional input scenarios. This is accomplished the same way the original dialog was invoked: by returning a dialog card within an ActionResponse.

{
  "action_response": {
    "type": "DIALOG",
    "dialog_action": {
      "dialog": {
        ...

Next Steps

To get started building Google Workspace Chat bots, or to add slash commands and dialogs to your existing Chat bots, please explore the Google Chat developer documentation and related resources.

Coming Soon: Amazon Appstore Small Business Accelerator Program

Authored by Palanidaran Chidambaram, Director, Amazon Appstore

 

To further support our developers, today we are announcing the Amazon Appstore Small Business Accelerator Program. This new program enables developers to build a scalable business by reducing cloud infrastructure costs, while also offering better revenue share to help them get started on their own day one.

Starting in Q4, for developers who earned less than $1 million in revenue in the previous calendar year, we are increasing developer revenue share and adding AWS credit options. This brings total program benefits up to an equivalent of 90 percent of revenue.

When the program launches, all qualifying small developers will receive an 80/20 revenue share by default. Additionally, we will provide AWS promotional credits in an amount equivalent to 10 percent of revenue, so that developers can take advantage of the benefits of building on the cloud.

By helping small businesses get started with AWS through credits, we are making it easier for them to build and grow their app businesses. AWS gives developers easy access to a broad range of technologies so they can innovate faster and build nearly anything they can imagine.

In a recent survey of mobile developers, over 94% indicated that they use cloud services in their application development efforts.

With AWS, developers can access more than 200 fully featured services so that they can spend less time managing infrastructure and can focus more attention on customer feedback and growing their app businesses. They can deploy technology services in a matter of minutes, and get from idea to implementation several orders of magnitude faster than before.

We also will help developers promote their content in new ways by highlighting smaller developers within our Appstore experience via a new dedicated application row.

We believe these investments in our global developer community will generate more innovation within Amazon Appstore and increase the selection of apps for our customers.

The Small Business Accelerator Program is one of many projects we are undertaking to grow and support the developer community across Amazon so that developers of all sizes can continue to build for our Appstore and for our customers. We’ll have more implementation details to announce later this year.

 

Thank you for building with us.


FAQs:

 

How does eligibility for the Appstore Small Business Accelerator program work?

  • Developers who earned up to $1 million in the prior calendar year and developers new to Amazon Appstore are eligible.
  • If an eligible developer’s revenue exceeds $1 million in the current year, they will revert to the standard royalty rate and no longer receive AWS credits for the rest of that year.
  • If a developer’s revenue falls below $1 million in a future year, the developer will be eligible in the next calendar year.

 

How will the AWS credits work?

Developers with less than $1 million in Appstore revenue in a calendar year will receive 10% of their revenue as promotional credit for AWS services, ranging from infrastructure technologies like compute, storage, and databases to emerging technologies such as machine learning and artificial intelligence, data lakes and analytics, and the Internet of Things. To receive the credit, developers can provide their AWS Account ID in the Appstore developer portal when the program launches. The credit will be sent monthly, and developers can choose which eligible services to apply the credit to.

 

How long are the AWS credits valid?

The AWS promotional credits can be used for 12 months from the date they were granted.

Our latest updates on Fully Homomorphic Encryption

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office.

Privacy protection illustration

As developers, it’s our responsibility to help keep our users safe online and protect their data. This starts with building products that are secure by default, private by design, and put users in control. Everything we make at Google is underpinned by these principles, and we’re proud to be an industry leader in developing, deploying, and scaling new privacy-preserving technologies that make it possible to learn valuable insights and create helpful experiences while protecting our users’ privacy.

That’s why today, we are excited to announce that we’re open-sourcing a first-of-its-kind, general-purpose transpiler for Fully Homomorphic Encryption (FHE), which will enable developers to compute on encrypted data without being able to access any personally identifiable information.

A deeper look at the technology

With FHE, encrypted data can travel across the Internet to a server, where it can be processed without being decrypted. Google’s transpiler will enable developers to write code for any type of basic computation such as simple string processing or math, and run it on encrypted data. The transpiler will transform that code into a version that can run on encrypted data. This then allows developers to create new programming applications that don’t need unencrypted data. FHE can also be used to train machine learning models on sensitive data in a private manner.

For example, imagine you’re building an application for people with diabetes. This app might collect sensitive information from its users, and you need a way to keep this data private and protected while also sharing it with medical experts to learn valuable insights that could lead to important medical advancements. With Google’s transpiler for FHE, you can encrypt the data you collect and share it with medical experts who, in turn, can analyze the data without decrypting it – providing helpful information to the medical community, all while ensuring that no one can access the data’s underlying information.

In the next 10 years, FHE could even help researchers find associations between specific gene mutations by analyzing genetic information across thousands of encrypted samples and testing different hypotheses to identify the genes most strongly associated with the diseases they’re studying.

Making more products private by design

Our principle to make our products private by design drives us to build ground-breaking computing technologies that enable personalized experiences while protecting your private information. Privacy-preserving technologies are on the cutting-edge of Google’s innovations, and they have already shown great potential to help shape a more private internet.

In 2016, Google researchers invented Federated Learning, a technique that helps preserve privacy by keeping as much personal information on your device as possible. And in 2019, Google made its differential privacy library freely available to any organization or developer, an advanced anonymization technology that enables developers to learn from their data privately. No one has scaled the use of Differential Privacy more than we have.

We’ve been thrilled to see these technologies put to use across the globe; in France, for example, a startup called Arkhn has been able to accelerate scientific discovery using differential privacy to share data across hospitals.

We still have a ways to go before most computations happen with FHE — but much as it took some time for HTTPS to take off and be widely adopted, today’s announcement is an important step towards bringing users helpful products that preserve their privacy and keep their data safe.

At Google, we know that open-sourcing our technologies with the developer community for feedback and use helps make them better. We will continue to invest and lead the privacy-preserving technology field by publishing new work, and open-sourcing it for everyone to use at scale – and we’re excited to continue this practice by sharing this latest advancement with developers everywhere. We can’t wait to see what you’ll build, and we look forward to collaborating on the journey towards a safer Internet.

#IamaGDE: Josue Gutierrez


Posted by Alicja Heisig

#IamaGDE series presents: Google Maps

The Google Developers Experts program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Meet Josue Gutierrez — Maps, Web, Identity and Angular Google Developer Expert.

Josue currently works at the German company Boehringer Ingelheim and lives near Frankfurt. Before moving to Germany, Josue was working as a software engineer in Mexico, and before that, he spent almost a year in San Francisco as a senior front-end developer at Sutter Health.

Image of Josue Gutierrez

Josue Gutierrez

Josue studied computer science and engineering as an undergraduate and learned algorithms and programming. His first language was C++, and he learned C and Python, but was drawn to web technologies.

“When I saw a web browser for the first time, it stuck with me,” he says, “It was changing in real time as you’re developing. That feeling is really cool. That’s why I went into frontend development.”

Josue has worked on multiple ecommerce projects focused on improving customers’ trade experience. He sees his role as creating something from scratch to help people improve lives.

“These opportunities we have as developers are great — to travel, work for many verticals, and learn many businesses,” he says. “In my previous job, I developed tech-oriented trade tools for research companies, to manipulate strings or formulas. I was on the team involved in writing these kinds of tools, so it was more about the trade experience for doctors.”

Getting involved in the developer community

Josue’s first trip outside Mexico, to San Francisco, exposed him to the many developer communities in the area, and he appreciated the supportive communities of people trying to learn together. Several of the people he met suggested he start his own meetup in Mexico City, to get more involved in Google technologies, so he launched an Angular community there. As he hunted for speakers to come to his Angular meetup, Josue found himself giving talks, too.

Then, the GDG Mexico leader invited Josue to give talks on Google for startups.

“That helped me get involved in the ecosystem,” Josue says. “I met a lot of people, and now many of them are good friends. It’s really exciting because you get connected with people with the same interests as you, and you all learn together.”

“I’m really happy to be part of the Google Maps ecosystem,” Josue says. “It’s super connected, with kind people, and now I know more colleagues in my area, who work for different companies and have different challenges. Seeing how they solve them is a good part of being connected to the product. I try to share my knowledge with other people and exchange points of view.”

Josue says 2020 provided interesting opportunities.

“This year was weird, but we also discovered more tools that are evolving with us, more functionalities in Hangouts and Meetup,” Josue says. “It’s interesting how people are curious to get connected. If I speak from Germany, I get comments from countries like Bolivia and Argentina. We are disconnected but increasing the number of people we engage with.”

He notes that the one missing piece is the face-to-face, spontaneous interactions of in-person workshops, but that there are still positives to video workshops.

“I think as communities, we are always trying to get information to our members, and having videos is also cool for posterity,” he says.

He is starting a Maps developer community in Germany.

“I have colleagues interested in trying to get a community here with a solid foundation,” he says. “We hope we can engage people to get connected in the same place, if all goes well.”

Favorite Maps features and current projects

As a frontend developer, Josue regards Google Maps Platform as an indispensable tool for brands, ecommerce companies, and even trucking companies.

“Once you start learning how to plant coordinates inside a map, how to convert information and utilize it inside a map, it’s easy to implement,” he says.

In 2021, Josue is working on some experiments with Maps, trying to make more real-time actualization, using currently available tools.

“Many of the projects I’ve been working on aren’t connected with ecommerce,” he says. “Many customers want to see products inside a map, like trucking products. I’ve been working in directories, where you can see the places related to categories — like food in Mexico. You can use Google Maps functionalities and extend the diversification of maps and map whatever you want.”

“Submission ID is really cool,” he adds. “You can do it reading the documentation, a key part of the product, with examples, references, and a live demo in the browser.”

Future plans

Josue says his goal going forward is to be as successful as he can at his current role.

“Also, sharing is super important,” he says. “My company encourages developer communities. It’s important to work in a place that matches your interests.”

Image of Josue Gutierrez

Follow Josue on Twitter at @eusoj | Check out Josue's projects on GitHub.

For more information on Google Maps Platform, visit our website or learn more about our GDE program.

The next-generation Fire HD 10 and Fire HD 10 Plus arrive: Fire OS adds a new split-screen mode

(This post is a translation of the original English article.)

 

On Tuesday, April 27, Amazon announced the next-generation Fire HD 10 and Fire HD 10 Plus tablets, featuring a refined design for enjoying entertainment and video calls on a widescreen display. From movies to Zoom and Alexa calls, web browsing, gaming, and reading, this single new tablet covers every part of the day. It makes it easy to stay connected with family and friends, and it introduces a new split-screen mode that enables multitasking. Alongside this launch, a new detachable Made for Amazon keyboard cover is also available, and the selection of productivity apps has been expanded.

For developers, this new generation of tablets should also help create new value, such as building relationships with new users and driving app usage and reach.

 

Growing demand for productivity apps

Fire tablets have so far been used mainly for games and entertainment, but in 2020 apps that help with productivity and collaboration became increasingly popular. Last year, user engagement with productivity apps in the Amazon Appstore grew 226% year over year, and monthly active users of these apps grew 62% over the course of the year (the metrics in this post are based on worldwide figures). With the Essentials bundle, which combines the keyboard cover with a one-year subscription to Microsoft 365 Personal (available for the Fire HD 10 and the Fire HD 10 Plus), customers can enjoy comfortable typing together with productivity apps.

Now is a great time to build apps that support multitasking on Fire tablets. If you are developing apps for creating to-do lists, setting reminders, replying to email, or facilitating collaboration, consider distributing them through the Amazon Appstore. To encourage usage, it is also important to make your app compatible with the new features.

 

New in Fire OS: split-screen mode

The Fire HD 10 tablets include split-screen mode as a new Fire OS feature. On the 10.1-inch display, users can open two split-screen-compatible apps at the same time and even drag and drop files between the windows. This new capability makes multitasking easier and more comfortable, and it can also help improve app engagement.

For details on implementing split-screen support, see the documentation on declaring support for split-screen mode.

 

Improved performance and durability

The new Fire HD 10 is a fast, responsive, and powerful tablet. It features an octa-core processor and 50% more RAM than the previous generation, and its display, with a resolution of more than 2 megapixels, is 10% brighter, so apps look vivid on screen.
For more details about the new tablets and how to develop for them, please refer to the resources below.

 

 

Starting your Google Career in IT | Kate Grant

Posted by Max Saltonstall

A little over 10 years ago, we launched the IT Residency Program (ITRP) at Google with a twofold mission: to provide exceptional tech support for Googlers and to empower the next generation of IT pioneers.

ITRP’s founding principle is learning and development. In addition to formal on-the-job IT support training, the program takes its residents through a focused career development path, including a hands-on rotation in a specific Google function in their chosen specialty. For their part, the residents, who come from a wide spectrum of often non-traditional backgrounds, bring with them a passion for learning. ITRP converts that passion into real-world experience and equips them for a lifelong career in tech.

Today, hundreds of Googlers are ITRP alums, working in disciplines ranging from site reliability engineering to security and privacy to program management and all points in between. Looking back on the program’s 10-plus years, we wanted to share some of their stories, their experiences and their triumphs. Look for more installments in this series in the weeks to come.

Kate Grant

While studying anthropology in college, Kate Grant had a part time help desk job, helping solve people’s computer problems (and sometimes the computer’s people problems). And it was through that job that she realized she really liked the satisfaction she got from helping people so directly. At school, Kate could help students, faculty and staff — both technical folks and non-technical, and it was great to save the day for them, teach them, and constantly learn about new technologies.

picture of Kate Grant

Joining the Pride March in NYC as part of Google’s presence

As graduation loomed close, Kate needed something to do afterwards, and she stumbled upon the ITRP posting. It looked like a great way to continue doing what she loved, and in a much bigger company, with more to learn. She applied, thinking “there’s no way in hell” she would get this job.

Turns out she was wrong!

Growth in ITRP

Starting at Google in August 2012, Kate worked in Mountain View, CA helping Googlers of all types with a broad range of tech challenges. She got to spend some time with the Search team at Google working on technical documentation, helping engineers at Google as well as web designers outside Google better understand Google search features. This experience gave Kate something cool she could show her family too, a very tangible “I made this” moment, which can be hard to come by in IT operations work.

During her time in the program, Kate also had the chance to spend some time working in the New York office, which felt good because it brought her closer to New Jersey, where she grew up. It was the first step towards coming back to the Northeast, a welcome return to a comfortable place. Eventually she’d move to New York to work in Google’s NYC office full time, helping with IT operations and later managing junior IT help desk folks as the team expanded.

picture of Google sign

Neon welcome sign in the lobby of the Google NYC office

Working across teams, projects and help desks in ITRP helped Kate develop all kinds of skills, from improving her proficiency with Linux, Windows, MacOS and Chrome to developing better judgment and analysis when solving the novel problems that walked into the support desk. And as she began helping to train new employees in orientation, covering IT, Security and Technology topics, she got practice in public speaking; by the end she was addressing over 100 people at a time on Mondays in Mountain View as new employees learned the essentials of their work at Google.

After ITRP

While she had been considering a tech writing career path before, Kate realized that the day-to-day work of helping people in ITRP let her wear many hats. Writing documentation was one key component, but she also wrote code, managed programs and projects, mentored junior members of the team, and analyzed data. The breadth of the job matched her growth goals, and Kate ended up continuing her career in Techstop, transitioning into a role as a Corporate Operations Engineer in 2014.

The work remained in the front-line support team and gave Kate tremendous exposure to the wide array of people in the Google headquarters. She focused more on projects to improve the onboarding and daily operations of the support teams, helping to keep systems and services healthy. This involved scheduling, mentoring, training and helping the newer members of the team, mostly later cohorts of IT Residents.

Kate’s experience training new employees and mentoring new IT Residents made her a great fit for a new opportunity on the onboarding team, where she worked to make the new employee experience more consistent and reliable across Google’s offices globally. This began a traversal of different parts of Google’s IT org, where she learned how the company, and its many teams, operate. Kate’s work focused on helping to create better, smoother, more automated processes for the teams in IT, and on helping to scale the successes they had already achieved.

Making the jump to Operations Manager

Driving improvements to Google’s onboarding infrastructure was fun and satisfying, but Kate missed working with junior techs day to day, and mentoring them on their own career growth. Luckily in 2016 a new manager position opened up in NYC, and Kate jumped at it. She started leading a team of newer support technicians, including IT Residents. Now she could focus on making a really great support experience for Googlers, and give back to the program she had enjoyed by helping the folks on her team grow.

In 2019 Kate had the opportunity to move to Austin, TX to build out a new ITRP Hub location from scratch, and she continues to manage IT Residents and Corporate Operations Engineers there today.

Reflections

Coming full circle, Kate shared some great advice with us for those thinking about ITRP:

“Don’t screen yourself out of the opportunity. Folks come into the program from all walks of life. You don’t have to have a super fancy computer science degree. If you’re excited about technology and helping people, ITRP could be a great next step for you.”

She reinforced that you get out of the program what you put into it: “If you take the time to be curious, to ask questions, to investigate, you will learn so much. It’s really an endless amount of material you could absorb, and nobody can get through it all. But when you put in the work, it pays you back.”

And as people go through the ITR Program, they end up in all sorts of places. “There’s no cookie cutter outcome… It doesn’t mean you’re doing something wrong if your path doesn’t look like someone else’s. The right way is what’s right for you and your goals.”

Getting Started with Smart Home Notifications and Follow-up Responses


Posted by Toni Klopfenstein, Developer Advocate

Alerts for important device events, such as a delivery person arriving or the back door failing to lock, create a more beneficial and reassuring experience for your smart home device users.

As we announced at I/O, you can now add proactive notifications and follow-up responses to your Smart Home Action to alert users to events in a timely, relevant and helpful fashion and better engage with your end users.

proactive notifications flowchart

Notifications can either alert a user to an event that has occurred without them proactively issuing a request through the Assistant, or as a follow-up to verify that the user’s request has been fulfilled. Each device event that triggers one of these notifications has a unique event id, which helps the Assistant route it to the appropriate Home Graph users and Google Home Smart Speakers or Nest Smart Displays, depending on the notification type and priority. Notifications and follow-up responses can also provide users with additional information, such as error and exception codes, or timestamps for the event.

Once users opt in to receive alerts, you can enable notifications on your existing devices by updating the device definition and requesting a SYNC intent. You can then send device notifications, along with any applicable device state changes, using the Home Graph API.
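
As a rough illustration of that flow, here is a minimal Kotlin sketch (not an official sample) that posts a proactive notification to the Home Graph API’s devices:reportStateAndNotification method over HTTPS. The access token, agentUserId, device id ("front-doorbell") and the exact shape of the per-trait notification object are placeholder assumptions; the developer guides describe the authoritative request format and how to obtain credentials.

// Hedged sketch: sends a proactive ObjectDetection notification through the
// Home Graph REST API. The field names inside "notifications" are illustrative
// and should be checked against the notifications developer guide.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.util.UUID

fun sendProactiveNotification(accessToken: String) {
    // eventId ties this notification to the specific device event that caused it.
    val body = """
    {
      "requestId": "${UUID.randomUUID()}",
      "agentUserId": "user-123",
      "eventId": "front-door-event-${System.currentTimeMillis()}",
      "payload": {
        "devices": {
          "notifications": {
            "front-doorbell": {
              "ObjectDetection": {
                "priority": 0,
                "detectionTimestamp": ${System.currentTimeMillis()},
                "objects": { "unclassified": 1 }
              }
            }
          }
        }
      }
    }
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://homegraph.googleapis.com/v1/devices:reportStateAndNotification"))
        .header("Authorization", "Bearer $accessToken")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("Home Graph response: ${response.statusCode()} ${response.body()}")
}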

We are adding support for traits where asynchronous requirements are a core use case. The following device traits now support follow-up responses to user queries:

Additionally, we are launching proactive notification alerts for the following traits:

For more information, check out the developer guides and samples, or check out the Notifications video.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

Machine Learning GDEs: Q1 2021 highlights, projects and achievements

Posted by HyeJung Lee and MJ You, Google ML Ecosystem Community Managers. Reviewed by Soonson Kwon, Developer Relations Program Manager.

Google Developers Experts is a community of passionate developers who love to share their knowledge with others. Many of them specialize in Machine Learning (ML). Despite many unexpected changes over the last months and reduced opportunities for in-person activities during the ongoing pandemic, their enthusiasm has not stopped.

Here are some highlights of the ML GDEs’ hard work during Q1 2021, which contributed to the global ML ecosystem.

ML GDE YouTube channel

ML GDE YouTube page

With the initiative and lead of US-based GDE Margaret Maynard-Reid, we launched the ML GDEs YouTube channel. It is a great way for GDEs to reach global audiences, collaborate as a community, create unique content and promote each other’s work. It will contain all kinds of ML-related topics: talks on technical subjects, tutorials, and interviews with other ML GDEs, Googlers, or anyone in the ML community. Many videos have already been uploaded, including intros from ML GDEs all over the world, tips for TensorFlow & GCP certification, and how to use Google Cloud Platform. Subscribe to the channel now!

TensorFlow Everywhere

TensorFlow Everywhere logo

17 ML GDEs presented at TensorFlow Everywhere (a global community-led event series for TensorFlow and Machine Learning enthusiasts and developers around the world), hosted by local TensorFlow user groups. You can watch the recorded sessions in the TensorFlow Everywhere playlist on the ML GDE YouTube channel. Most of the sessions cover new features in TensorFlow.

International Women’s Day

Many ML GDEs participated in activities to celebrate International Women’s Day (March 8th). GDE Ruqiya Bin Safi (based in Saudi Arabia) cooperated with WTM Saudi Arabia to organize “Socialthon” – a series of social development hackathons – and gave a talk, “Successful Experiences in Social Development”, which reached 77K live viewers and 10K replays. India-based GDE Charmi Chokshi participated in GirlScript’s International Women’s Day event and gave a talk: “Women In Tech and How we can help the underrepresented in the challenging world”. If you’re looking for more inspiring materials, check out the “Women in AI” playlist on our ML GDE YouTube channel!

Mentoring

ML GDEs are also very active in mentoring community developers, students in the Google Developer Student Clubs, and startups in the Google for Startups Accelerator program. Among many, GDE Arnaldo Gualberto (Brazil) conducted mentorship sessions for startups in the Google Fast Track program, discussing how to solve challenges using Machine Learning/Deep Learning with TensorFlow.

TensorFlow

Practical Adversarial Robustness in Deep Learning: Problems and Solutions
ML using TF cookbook and ML for Dummies book

Meanwhile in Europe, GDEs Alexia Audevart (based in France) and Luca Massaron (based in Italy) released “Machine Learning using TensorFlow Cookbook”. It provides simple and effective ideas for successfully using TensorFlow 2.x in computer vision, NLP and tabular data projects. Additionally, Luca published the second edition of the Machine Learning For Dummies book, first published in 2015. The latest edition is enhanced with product updates and devotes a larger share of its pages to Deep Learning and TensorFlow / Keras usage.

YouTube video screenshot

On top of her women-in-tech related activities, Ruqiya Bin Safi is also running a “Welcome to Deep Learning Course and Orientation” monthly workshop throughout 2021. The course aims to help participants gain foundational knowledge of deep learning algorithms and get practical experience in building neural networks in TensorFlow.

TensorFlow Project showcase

Nepal-based GDE Kshitiz Rimal gave a talk “TensorFlow Project Showcase: Cash Recognition for Visually Impaired” on his project which uses TensorFlow, Google Cloud AutoML and edge computing technologies to create a solution for the visually impaired community in Nepal.

Screenshot of TF Everywhere NA talk

On the other side of the world, in Canada, GDE Tanmay Bakshi presented a talk “Machine Learning-powered Pipelines to Augment Human Specialists” during TensorFlow Everywhere NA. It covered the world of NLP through Deep Learning, how it’s historically been done, the Transformer revolution, and how to use TensorFlow & Keras to implement use cases ranging from small-scale name generation to large-scale Amazon review quality ranking.

Google Cloud Platform

Google Cloud Platform YouTube playlist screenshot

We have been equally busy on the GCP side as well. In the US, GDE Srivatsan Srinivasan created a series of videos called “Artificial Intelligence on Google Cloud Platform”, with one of the episodes, “Google Cloud Products and Professional Machine Learning Engineer Certification Deep Dive“, getting over 3,000 views.

ML Analysis Pipeline

Korean GDE Chansung Park contributed to TensorFlow User Group Korea with his “Machine Learning Pipeline (CI/CD for ML Products in GCP)” analysis, focused on machine learning pipelines in Google Cloud Platform.

Analytics dashboard

Last but not least, GDE Gad Benram, based in Israel, wrote an article on “Seven Tips for Forecasting Cloud Costs”, where he explains how to build and deploy ML models for time series forecasting with Google Cloud Run. It ties into his solution for building a cloud-spend control system that helps users more easily analyze their cloud costs.

If you want to know more about the Google Experts community and all their global open-source ML contributions, visit the GDE Directory and connect with GDEs on Twitter and LinkedIn. You can also meet them virtually on the ML GDE’s YouTube Channel!

Grow your indie game with help from Google Play


Posted by Patricia Correa, Director, Global Developer Marketing

Indie Games Accelerator graphic

At Google Play we’re committed to helping all developers thrive, whether they are large multinational companies or small startups and indie game studios. They are all critical to providing the services and experiences that people around the world look for on their Android devices. The indie game developer community, in particular, constantly pushes the boundaries with its creativity and passion, and brings unique and diverse content to players everywhere.

To continue supporting indies, today we’re opening submissions for two of our annual developer programs – the Indie Games Accelerator and the Indie Games Festival. These programs are designed to help small games studios grow on Google Play, no matter what stage they are in:

  • If you are a small games studio looking for help to launch a new title, apply for the Accelerator to get mentorship and education;
  • Or, if you have already created and launched a high quality game that is ready for the spotlight, enter the Festival for a chance to win promotions.

This year the programs come with some changes, including more eligible markets and fully digital event experiences. Learn more below and apply by July 1st.

Accelerator: Get education and mentorship to supercharge your growth

If you’re an indie developer early in your journey – either close to launching a new game or having recently launched a title – this is the program for you. We’ll provide education and mentorship that will help you build, launch and grow successfully.

This year we have nearly doubled the eligible markets, with developers from over 70 countries being eligible to apply for the 2021 program.

Selected participants will be invited to take part in a 12-week online acceleration program. During this time you’ll get exclusive access to a community of Google and industry experts, as well as a network of other passionate developers from around the world looking to supercharge their growth.

Festival: Win promotions that put your game in the spotlight

If you’re an indie game developer who has recently launched a high quality game, this is your chance to have your game discovered by industry experts and players worldwide.

This year we will again host three competitions for developers from Japan, South Korea, and selected European countries.

Prizes include featuring on the Google Play store, promotional campaigns worth 100,000 EUR, and more.


Play Logo