#BazelCon 2021 Wrap Up

Posted by Joe Hicks, Product Manager, Core Developer

The apps, platforms, and systems that the Bazel community builds with Bazel touch the lives of people around the world in ways we couldn’t have imagined. Through BazelCon, we aim to connect Bazel enthusiasts, the Bazel team, maintainers, contributors, users, and friends in an inclusive and welcoming environment. At BazelCon, the community demonstrates its global user impact—with some quirky and carefully crafted talks, a readout on the State-of-Bazel, an upfront discussion on “Implicit Bias Mitigation,” and community sharing events that remind us that we are not alone in our efforts to build a better world, one line of code at a time.

At BazelCon, the community shared over 24 technical sessions with the 1400+ registrants, which you can watch here at your own pace. Make sure you check them out.

Attendees were able to interact with the community and engage with the Bazel team through a series of “Birds of a Feather” (BoF) sessions and a live Q&A session. You can find all of the BoF presentations and notes here.

As announced, we will soon release Bazel 5.0, an update to our next-generation, multi-language, multi-platform build functionality that includes a new external dependency system, called bzlmod, for you to try out.

We’d like to thank everyone who helped make BazelCon a success: presenters, organizers, Google Developer Studios, contributors, and attendees. If you have any questions about BazelCon, you can reach out to [email protected].

We hope that you enjoyed #BazelCon and “Building Better with Bazel”.

Personalize user journeys by Pushing Dynamic Shortcuts to Assistant

Posted by Jessica Dene Earley-Cha, Developer Relations Engineer

Like many other people who use their smartphone to make their lives easier, I’m way more likely to use an app that adapts to my behavior and is customized to fit me. Android apps can already support some personalization, like listing common user journeys when you long-press an app’s icon. When I long-press my Audible app (an online audiobook and podcast service), it gives me a shortcut to the book I’m currently listening to; right now that is Daring Greatly by Brené Brown.

Now, imagine if these shortcuts could also be triggered by a voice command – and, when relevant to the user, show up in Google Assistant for easy use.

Wouldn’t that be lovely?

Dynamic shortcuts on a mobile device

Well, now you can do that with App Actions by pushing dynamic shortcuts to the Google Assistant. Let’s go over what Shortcuts are, what happens when you push dynamic shortcuts to Google Assistant, and how to do just that!

Android Shortcuts

As an Android developer, you’re most likely familiar with shortcuts. Shortcuts give your users the ability to jump into a specific part of your app. For cases where the destination in your app is based on individual user behavior, you can use a dynamic shortcut to jump to a specific thing the user was previously working with. For example, let’s consider a ToDo app, where users can create and maintain their ToDo lists. Since each item in the ToDo list is unique to each user, you can use Dynamic Shortcuts so that each user’s shortcuts are based on the items in their own ToDo list.

Below is a snippet of an Android dynamic shortcut for the fictional ToDo app.

// Build a dynamic shortcut pointing at the user's current task
val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    .setIntent(intent)
    .build()

// Publish the shortcut (also updates it if the ID already exists)
ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

Dynamic Shortcuts for App Actions

If you’re pushing dynamic shortcuts, it’s a short hop to make those same shortcuts available for use by Google Assistant. You can do that by adding the Google Shortcuts Integration library and a few lines of code.
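Concretely, in a Gradle-based Android project, adding the two libraries might look something like this (a sketch; the artifact versions are illustrative, so check the AndroidX release notes for current ones):

```kotlin
// app/build.gradle.kts — a sketch of the dependency declarations.
// androidx.core:core provides ShortcutManagerCompat, and
// core-google-shortcuts is the Google Shortcuts Integration library.
dependencies {
    implementation("androidx.core:core:1.6.0")
    implementation("androidx.core:core-google-shortcuts:1.0.0")
}
```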

To extend a dynamic shortcut to Google Assistant through App Actions, you need to add two Jetpack modules and include .addCapabilityBinding in the dynamic shortcut.

val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    // Bind the shortcut to the GET_THING Built-In Intent
    .addCapabilityBinding("actions.intent.GET_THING", "thing.name", listOf(task.title))
    .setIntent(intent)
    .build()

ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

The addCapabilityBinding method binds the dynamic shortcut to a capability, which is a declared way a user can launch your app to the requested section. If you don’t already have App Actions implemented, you’ll need to add capabilities to your shortcuts.xml file. A capability expresses a relevant feature of an app and contains a Built-In Intent (BII). BIIs are a language model for voice commands that Assistant already understands, and linking a BII to a shortcut allows Assistant to use the shortcut as the fulfillment for a matching command. In other words, capabilities tell Assistant what to listen for and how to launch the app.

In the example above, addCapabilityBinding binds the dynamic shortcut to the actions.intent.GET_THING BII. When a user requests one of the items in their ToDo app, Assistant will process the request and trigger the capability with the GET_THING BII that is declared in shortcuts.xml.

<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
      android:action="android.intent.action.VIEW"
      android:targetPackage="YOUR_UNIQUE_APPLICATION_ID"
      android:targetClass="YOUR_TARGET_CLASS">
      <!-- Eg. name = the ToDo item -->
      <parameter
        android:name="thing.name"
        android:key="name"/>
    </intent>
  </capability>
</shortcuts>
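On the app side, the target activity receives the matched BII parameter on the launch intent under the android:key you declared. Here's a minimal sketch, assuming the shortcuts.xml above; the TaskActivity class and showTask helper are hypothetical stand-ins for your own navigation code:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical target class for the ToDo app. With the capability above,
// Assistant delivers the spoken item name in the "name" intent extra.
class TaskActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // e.g. the user asked for their "groceries" ToDo item
        intent.getStringExtra("name")?.let { itemName ->
            showTask(itemName)
        }
    }

    private fun showTask(name: String) {
        // Hypothetical: navigate to the matching ToDo item
    }
}
```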

So in summary, the process to add dynamic shortcuts looks like this:

1. Configure App Actions by adding two Jetpack modules (the ShortcutManagerCompat library and the Google Shortcuts Integration library). Then associate the shortcut with a Built-In Intent (BII) in your shortcuts.xml file. Finally, push the dynamic shortcut from your app.

2. Two major things happen when you push your dynamic shortcuts to Assistant:

  1. Users can open dynamic shortcuts through Google Assistant, fast-tracking users to your content
  2. During contextually relevant times, Assistant can proactively suggest your Android dynamic shortcuts to users, displaying them on Assistant-enabled surfaces.

Not too bad. I don’t know about you, but I like to test out new functionality in a small app first. You’re in luck! We recently launched a codelab that walks you through this whole process.

Dynamic Shortcuts Codelab

Looking for more resources to help improve your understanding of App Actions? We have a new learning pathway that walks you through the product, including the dynamic shortcuts that you just read about. Let us know what you think!

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AppActions to share what you’re working on. Can’t wait to see what you build!

Using Machine Learning for COVID-19 helpline with Krupal Modi #IamaGDE

Welcome to #IamaGDE – a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.

In college, Krupal Modi programmed a robot to catch a ball based on the ball’s color, and he enjoyed it enough that he became a developer. Now, he leads machine learning initiatives at Haptik, a conversational AI platform. He is a Google Developer Expert in Machine Learning and recently built the MyGov Corona Helpdesk module for the Indian government, to help Indians around the country schedule COVID-19 vaccinations. He lives in Gujarat, India.

Meet Krupal Modi, Google Developer Expert in Machine Learning.

Image shows Krupal Modi, machine learning Google Developer Expert

GDE Krupal Modi

The early days

Krupal Modi didn’t set out to become a developer, but he got hooked after working on college projects in pattern recognition, including building and programming a robot to catch a ball based on the ball’s color.

“Then, it just happened organically that I liked those problems and became a developer,” he says.

Now, he has been a developer for ten years and is proficient in Natural Language Processing, Image Processing, and unstructured data analysis, using conventional machine learning and deep learning algorithms. He leads machine learning initiatives at Haptik, a conversational AI platform where developers can program virtual AI assistants and chat bots.

“I have been there almost seven years now,” he says. “I like that most of my time goes into solving some of the open problems in the state of natural language and design.”

Image shows Krupal on stage holding a microphone giving a presentation on NLP for Chatbots

Machine learning

Krupal has been doing machine learning for nine years, and says advances in hardware, especially in the past eight years, have made machine learning much more accessible to a wider range of developers. “We’ve come very far with so many advances in hardware,” he says. “I was fortunate enough to have a great community around me.”

Krupal is currently invested in solving the open problems of language understanding.

“Today, nobody really prefers talking with a bot or a virtual assistant,” he says. “Given a choice, you’d rather communicate with a human at a particular business.”

Krupal aims to take language understanding to a new level, where people might prefer to talk to an AI, rather than a human. To do that, his team needs to get technology to the point where it becomes a preferred and faster mode of communication.

Ultimately, Krupal’s dream is to make sure whatever technology he builds can impact some of the fundamental aspects of human life, like health care, education, and digital well being.

“These are a few places where there’s a long way to go, and where the technology I work on could create an impact,” he says. “That would be a dream come true for me.”

Image shows Krupal on stage standing behind a podium. Behind him on the wall are the words Google Developers Machine Learning Bootcamp

COVID in India/Government Corona Help Desk Module

One way Krupal has aimed to use technology to impact health care is in the creation of the MyGov Corona Helpdesk module in India, a WhatsApp bot authorized by the Indian government to combat the spread of COVID-19 misinformation. Indian citizens could text MyGov Corona Helpdesk to get instant information on symptoms, how to seek treatment, and to schedule a vaccine.

“There was a lot of incorrect information on various channels related to the symptoms of COVID and treatments for COVID,” he explains. “Starting this initiative was to have a reliable source of information to combat the spread of misinformation.”

To date, the app has responded to over 100 million queries. Over ten million people have downloaded their vaccination certificates using the app, and over one million people have used it to book vaccination appointments.

Watch this video of how it works.

Image is a graphic for MyGov Corona HelpDesk on WhatsApp. The graphic displays the phone number to contact

Becoming a GDE

As a GDE, Krupal focuses on Machine Learning and appreciates the network of self-motivated, passionate developers.

“That’s one of the things I admire the most about the program—the passionate, motivated people in the community,” Krupal says. “If you’re surrounded by such a great community, you take on and learn a lot from them.”

Advice to other developers

“If you are passionate about a specific technology; you find satisfaction in writing about it and sharing it with other developers across the globe; and you look forward to learning from them, then GDE is the right program for you.”

AI Fest in Spain: Exploring the Potential of Artificial Intelligence in Careers, Communities, and Commerce

Posted by Alessandro Palmieri, Regional Lead for Spain Developer Communities

Google Developer Groups (GDGs) around the world are in a unique position to organize events on technology topics that community members are passionate about. That’s what happened in Spain in July 2021, where two GDG chapters decided to put on an event called AI Fest after noticing a lack of conferences dedicated exclusively to artificial intelligence. “Artificial intelligence is everywhere, although many people do not know it,” says Irene Ruiz Pozo, the organizer of GDG Murcia and GDG Cartagena. While AI has the potential to transform industries from retail to real estate with products like Dialogflow and Lending DocAI, “there are still companies falling behind,” she notes.

Image of Irene standing on stage at AI Fest Spain

Irene and her GDG team members recognized that creating a space for a diverse mix of people—students, academics, professional developers, and more—would not only enable them to share valuable knowledge about AI and its applications across sectors and industries, but it could also serve as a potential path for skill development and post-pandemic economic recovery in Spain. In addition, AI Fest would showcase GDGs in Spain as communities offering developer expertise, education, networking, and support.

Using the GDG network to find sponsors, partners, and speakers

The GDGs immediately got to work calling friends and contacts with experience in AI. “We started calling friends who were great developers and worked at various companies, we told them who we are, what we wanted to do, and what we wanted to achieve,” Irene says.

The GDG team found plenty of organizations eager to help: universities, nonprofit organizations, government entities, and private companies. The final roster included the Instituto de Fomento, the economic development agency of Spain’s Murcia region; the city council of Cartagena; Biyectiva Technology, which develops AI tools used in medicine, retail, and interactive marketing; and the Polytechnic University of Cartagena, where Irene founded and led the Google Developer Student Club in 2019 and 2020. Some partners also helped with swag and merchandising and even provided speakers. “The CEOs and different executives and developers of the companies who were speakers trusted this event from the beginning,” Irene says.

A celebration of AI and its potential

The event organizers lined up a total of 55 local and international speakers over the two-day event. Due to the ongoing COVID-19 pandemic, in-person attendance was limited to 50 people in a room at El Batel Auditorium and Conference Center in Cartagena, but sessions—speakers, roundtables, and workshops—were also live-streamed on YouTube on three channels to a thousand viewers.

Some of the most popular sessions included economics professor and technology lab co-founder Andrés Pedreño on “Competing in the era of Artificial Intelligence”; a roundtable on women in technology; Intelequa software developer Elena Salcedo on “Happy plants with IoT”; and Google Developer Expert and technology firm CEO Juantomás García on “Vertex AI and AutoML: Democratizing access to AI.” The sessions were also recorded for later viewing, and within a week of the event, the recordings had more than 1500 views for room A, over 1100 for room B, and nearly 350 for the Workshops room.

The event made a huge impact on the developer community in Spain, setting an example of what tech-focused gatherings can look like in the COVID-19 era and how they can support more education, collaboration, and innovation across a wide range of organizations, ultimately accelerating the adoption of AI. Irene also notes that it has helped generate more interest in GDGs and GDSCs in Spain and their value as a place to learn, teach, and grow. “We’re really happy that new developers have joined the communities and entrepreneurs have decided to learn how to use Google technologies,” she says.

The effect on the GDG team was profound as well. “I have remembered why I started creating events–for people: to discover the magic of technology,” Irene says.

Taking AI Fest into the future—and more

Irene and her fellow GDG members are already planning a second installment of AI Fest in early 2022, where they hope to welcome more in-person attendees. The team would also like to organize events focused on topics such as Android, Cloud, AR/VR, startups, the needs of local communities, and inclusion. Irene, who serves as a Women Techmakers Ambassador, is particularly interested in using her newly expanded network to host events that encourage women to choose technology and other STEM areas as a career.

Finally, Irene hopes that AI Fest will become an inspiration for GDGs around the world to showcase the potential of AI and other technologies. It’s a lot of work, she admits, but the result is well worth it. “My advice is to choose the area of technology that interests you the most, get organized, relax, and have a good team,” she advises.

Improve your development workflow with Interactive Canvas DevTools

Posted by Nick Felker, Developer Relations Engineer

Interactive Canvas helps developers build multimodal apps for games, storytelling, and educational experiences on smart displays like the Nest Hub and Nest Hub Max. Setting up an Interactive Canvas app involves building a web app, a conversational model, and a server-based fulfillment to respond to voice commands. All three of these components need to be built out to test your apps using the simulator in the Actions Console. This works for teams who are able to build all three at once… but it means that everything has to be hooked up, even if you just want to test out the web portion of your app. In many cases, the web app provides the bulk of the functionality of an Interactive Canvas app. We recently published Interactive Canvas DevTools, a new Chrome extension that helps unbundle this development process.

Interactive Canvas DevTool Extension

Using Interactive Canvas DevTools

After installing the Interactive Canvas DevTools from the Chrome Web Store, you’ll see a new Interactive Canvas tab when you open Chrome DevTools.

When you load your web app in your browser, from a publicly hosted URL, localhost, or a remote device on your network, this tab lets you directly interface with the Interactive Canvas callbacks registered on the page to quickly and iteratively test your experience. Suggestion chips are created after every execution to let you replay the same command later.

To get started even faster, you can go to the Preferences tab and click the Import SDK button. This will open a directory picker, where you can select your project’s SDK folder. The extension will identify JSON payloads and TTS marks in your project and surface them as suggestion chips automatically.

JSON historical object changes

When the fields of the JSON object change, you can view the changes in a colored diff.

Methods that send data to your webhook are instead rerouted to the History tab. This tab hosts a list of every text query and canvas state change in reverse chronological order. This allows you to view how changes in your web app would affect your conversational state. Each time the canvas state changes, you can see a visual representation of which fields changed.

Different levels of notice when using an operation unsupported in Interactive Canvas

Different levels of notice when using an operation unsupported in Interactive Canvas.

There are a number of other features that enhance the developer experience. For example, for browser methods that are not supported in Interactive Canvas, you can optionally log a warning or throw an error when you try to use them. This will make it easier to identify compatibility issues sooner and enforce these policies while debugging.

Nest Hub devices in the Device list

You are able to set the window to match the Nest Hub screen.

You can also add a header element onto your page that will help you optimize your layout for a smart display. Combined with the Nest Hub and Nest Hub Max as new device presets in Chrome DevTools, you are able to set your development environment to be an accurate representation of how your Action will look and behave on a physical device.

Interactive Canvas tab on a remote device

You can also send data to your remote device.

This extension also works if you have a physical device. If it is on the same network, you can connect to your smart display and open up the Interactive Canvas tab on that remote device. From there, you are able to send commands using the same interface.

You can install the extension now through the Chrome Web Store. We’re also happy to announce that the DevTools are Open Source! The source code for this extension has been published on GitHub, where you can find instructions on how to set up the project locally, and how to submit pull requests.

Thanks for reading! To share your thoughts or questions, join us on Reddit at /r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

How a Student Leader Promotes Neurodiversity Awareness in Brazil and Beyond

Posted by Rodrigo Hirooka, Regional Lead for Brazil Developer Communities

Banner with image of João Victor Ipirajá, lead of the Google Developer Student Club at the Federal Institute of Science and Technology of Ceará

Perceiving that one is not like everyone else can be painful. Yet, the experience can also be illuminating. As a child in Brazil, João Victor Ipirajá, lead of the Google Developer Student Club (GDSC) at the Federal Institute of Science and Technology of Ceará (IFCE), knew he was different. He often felt overwhelmed by physical sensations and missed social cues. When he was eventually diagnosed as being on the autistic spectrum, he was actually relieved. Far from being a limitation, the realization gave him a new perspective on his intellectual strengths—such as his ability to perceive mathematical concepts in a highly visual way and his capacity for logical thinking and computer programming. “I was reborn to a full life shortly after I received this diagnosis,” he said in a video he made about his experiences as a person with ASD.

The World Health Organization estimates that 1 out of every 160 children has an autism spectrum disorder (ASD). Yet despite how relatively common ASD is, the wide diversity of the condition and misunderstandings about neurodiversity can still make it difficult to diagnose.

This newfound understanding of how his mind works helped guide him on his educational path as well as career direction. Instead of attending a traditional high school, which he felt would not play to his natural talents and strengths, João decided to study at IFCE, a technical college that also offered a high school program. There, he learned computer science and computer engineering, picking up new programming languages and honing his developer skills.

But most importantly, he felt he had “discovered his place.” His success at IFCE solving problems, using new tools, and working successfully with others soon outweighed his fears about meeting new people and not fitting in. The experience of finding a community convinced him of the need to encourage others to find theirs–and to help build them as well.

Joining GDSC and expanding awareness of neurodiversity

After high school, João decided to continue at IFCE for college to focus on computer engineering, where he learned new programming languages and tools like TensorFlow and Flutter. He also joined IFCE’s GDSC chapter, which further exposed him to new people and ideas. “It’s an honor to be part of this program, meeting people from all over the world and improving my speaking skills, especially in English,” he says. “For me, it’s something magical. I learned so much.”

At the same time, João was beginning to recognize the lack of understanding about neurodiversity in Brazil, even among technical audiences and employers in general. “Some people think we are crazy or we’re unable to do big projects,” he says. Even “good” stereotypes can be harmful–for example, many neurodiverse people have an ability to “hyperfocus” and work or study uninterrupted for hours on end. “People think it’s a superpower,” he says, but such extreme periods of concentration can also be unhealthy and lead to burnout.

Planting the seeds of change with GDSC events and projects

As the IFCE GDSC lead, João decided to concentrate his efforts on expanding awareness of neurodiversity, as well as other types of diversity—sexual, racial, religious, etc.—to help others find the sense of freedom and belonging he has experienced. “Many people don’t feel free to be whoever they want to be,” he says.

The chapter’s efforts include planning speaker sessions with diversity activists and specialists from the community, creating social media content in partnership with IFCE, creating workshops with other Brazilian GDSC chapters, and making diversity a priority when choosing core positions on the team.

He recently spoke at a DevFest event on the topic of “Understanding the autistic spectrum universe,” in which he explained the range of characteristics and abilities autistic people can display. He also wants to do more speaking events in Portuguese to break stereotypes about autism in Brazil specifically. “It’s just a student club, but we are trying to deconstruct stereotypes and prejudice that are so culturally strong in Brazil,” he says.

Cultivating understanding and acceptance in Brazil and beyond

Ultimately, João feels that providing more opportunities and platforms for diverse people will help others. As the community continues to come together, he might be able to help those who have that same sense of difference João remembers having as a child. João and others on his GDSC team especially hope that these efforts will advance a greater understanding around how to elevate and celebrate members of marginalized groups in his home country. However, his goals go beyond mere acceptance: he notes that people who feel more comfortable about who they are also feel more confident to fully participate in all aspects of society. People with diverse abilities and characteristics offer unique skills and perspectives that can also translate into advantages, especially among technical audiences and employers.

“It’s very important for people to have this opportunity to share their stories, to have these environments to make people understand,” he says. “For me, it’s very important, and I’m very honored.”

Build AI-powered customer conversations in Google Maps and Search with Google’s Business Messages

Posted by Sean Falconer, Staff Developer Relations Engineer

Google’s Business Messages lets customers message a business directly from Google Search, Google Maps, and any brand-managed property. Developers of Business Messages can leverage tools like Dialogflow to create AI-powered conversational experiences, where customers can chat with lifelike virtual agents that understand, interact, and talk in natural ways. Meanwhile, the business can bring in live human agents when needed.

In this article, I’ll give a brief overview of Business Messages, explain how to get started developing with the platform, and then walk through how to set up an AI-powered conversation using the Bot-in-a-Box feature.

Let’s get started!

What is Google’s Business Messages?

Business Messages is a mobile conversational channel that combines entry points from Google Maps, Search, and brand websites to create rich, asynchronous messaging experiences.

As shown in the example image below, I’ve searched “Bridgepoint Runners” and the results point me to a local Bridgepoint Runners store, which contains buttons to call, get directions, or go to their website. Since Bridgepoint Runners is enabled for Business Messages, I also see a Chat button, which when tapped opens a conversation with Bridgepoint Runners. In the conversation, the business can automatically answer my questions using AI-powered bots as well as live agents.

Example of Business Messages chat entry point for Bridgepoint Runners

In this simple example, Bridgepoint Runners represents a local business, but Business Messages also works for web-based businesses. Business Messages supports rich conversational features like suggested replies, suggested actions, rich cards, carousels, and images so that you can create complex and feature-rich conversational experiences to support a wide range of customer user journeys.

How do I get started?

To get started with Business Messages, you can register as a development partner on our developer website. You can also get up and running quickly by following our quickstart guide.

Once you’ve worked your way through the quickstart, you’ll have registered a Google Cloud Project and that project will have two APIs enabled, the Business Communications API and Business Messages API.

The Business Communications API is used to create and manage business experiences for the Business Messages platform, while the Business Messages API is used to send and receive messages to and from users on behalf of a business. Additionally, you’ll have access to the Business Communications Developer Console, a web-based tool for creating and managing business experiences on the Business Messages platform. It provides the same functionality as the Business Communications API, but is a faster and more convenient way to get started.

Additionally, after the quickstart, you’ll have configured a webhook and created your first Business Messages agent. An agent is a conversational representation of a brand. Agents include properties like the brand’s logo, the agent’s display name, the welcome message that greets a user, and more that define how the conversation will look and where the chat button will show up once launched.

The quickstart will have you deploy code to Google App Engine and the life of a message for your Echo Bot sample will look something like the image below.

Life of a Business Messages message

After creating an agent on behalf of a business, the chat button isn’t immediately available to Google Search and Maps users. All agents must go through a verification and launch process before the chat button will be shown for businesses in Search and Maps. You can see the full lifecycle from creation to launch of an agent here.

Even without launching an agent, you can test the message flow by using the test URLs from a mobile device that are autogenerated when you create the agent. The test URLs for an agent can be copied or sent to your email from within the Business Communications Developer Console and are also available as a property of the agent if you’re using the API.

When you navigate to the test URL, the conversation with your agent will automatically open. This mimics the experience that a user would see when tapping on a chat button for a launched Business Messages agent.

AI-powered conversation with Bot-in-a-Box

Business Messages’ Bot-in-a-Box feature makes getting started with conversational AI easy. Bot-in-a-Box takes advantage of Google AI tools like Dialogflow to easily convert an existing FAQ into an automated Business Messages solution. Within minutes, you could launch a lifelike virtual agent that provides relevant responses to the most common questions a business receives from customers.

FAQ-powered automated conversations

Additionally, you can use Dialogflow’s intents to create and support complex automated user journeys, like appointment booking, shopping, order lookup, and lead capture, while taking advantage of Business Messages’ rich features.

Let’s take a look at an example.

Creating a Business Messages Helper bot

For this example, I’m going to create a helper bot that can answer questions about Business Messages. I’m going to create a new Business Messages agent using the Business Communications Developer Console, use the native Bot-in-a-Box feature to automate the conversation powered by an FAQ and Dialogflow, and finally add a custom intent to support an about this bot input.

To get started, since I’m already registered for Business Messages, I’m going to go to the Business Communications Developer Console and create a new agent.

Once the agent is created, I can select the agent to see additional details and access the various configuration options.

Create an agent dialog

Overview of a newly created Business Messages agent

Before setting up my Bot-in-a-Box experience, I want to make sure my agent is properly configured to greet new users. I click on Agent Information and from here I can set a welcome message and up to 5 conversation starters that help the user understand how to interact with the automated agent.

Agent information editor for a Business Messages agent

If I send myself the test URL and open the conversation on my phone, I’ll see that the Helper Bot has the greeting I configured and three conversation starters. Since I haven’t configured Bot-in-a-Box or a webhook to respond to user messages, if I send a message to the bot, nothing will happen.

First time experience with the Business Messages Helper Bot

Now that I have the basics set up, I’m going to click on the Integrations menu item in the developer console and configure Bot-in-a-Box via Dialogflow.

Setting up Bot-in-a-Box

The first step to setting up Bot-in-a-Box is to enable the Dialogflow integration. Currently, Bot-in-a-Box only supports the Dialogflow Essentials (ES) version of Dialogflow. However, you can integrate with Dialogflow Customer Experience (CX) by calling the CX APIs directly from a configured Business Messages webhook and programming the conversion to and from the Business Messages APIs.
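As a rough sketch of that CX-from-webhook approach, the snippet below builds a `detectIntent` request from an incoming message. The identifiers are placeholders, and the client call assumes the `google-cloud-dialogflow-cx` library; verify both against the Dialogflow CX API reference.

```python
def build_cx_request(project, location, agent_id, session_id, user_text):
    """Assemble a Dialogflow CX detectIntent request for one user message.

    A Business Messages webhook can forward each incoming message to CX
    this way and relay the matched response back to the user. The IDs
    here are placeholders; the session path format follows the CX API.
    """
    session = (f"projects/{project}/locations/{location}"
               f"/agents/{agent_id}/sessions/{session_id}")
    return {
        "session": session,
        "query_input": {
            "text": {"text": user_text},
            "language_code": "en",
        },
    }

def detect_intent(request):
    """Send the request to Dialogflow CX.

    The import is local so the pure request builder above stays usable
    without the google-cloud-dialogflow-cx client library installed.
    """
    from google.cloud import dialogflowcx_v3 as cx
    client = cx.SessionsClient()
    return client.detect_intent(request=request)
```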

From the Integrations section of the console, I click Enable integration. I am prompted to either create a new Dialogflow project or connect to an existing one. I’ve already created a Dialogflow project, so I choose to connect to an existing project and then I follow the prompts to set up the authentication between my Business Messages agent and the Dialogflow project.

Once the authentication is complete, I see an updated integration view like the one below. Next I want to Create a knowledge base and add an FAQ document. Behind the scenes, Dialogflow will use machine learning to process the document and recognize questions similar to what exists in the FAQ.

Enabling the Dialogflow integration

The document can be a URL pointing to an existing FAQ for a business, or, if you don’t have one, you can create an FAQ using Google Sheets, download it as a CSV, and then upload the CSV to initialize Bot-in-a-Box. For the purposes of this example, I created an FAQ as shown in the document below and uploaded it to Bot-in-a-Box.
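If you build the FAQ by hand, a small script can emit the question/answer CSV. This is a sketch with made-up content, and the exact two-column layout Dialogflow expects should be confirmed in its knowledge base documentation.

```python
import csv
import io

def faq_to_csv(pairs):
    """Serialize (question, answer) pairs into a two-column CSV,
    question in the first column and answer in the second -- the
    layout assumed here for a Dialogflow knowledge base FAQ."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for question, answer in pairs:
        writer.writerow([question, answer])
    return buf.getvalue()

# Illustrative FAQ entries, not the actual sheet from the article.
faq = [
    ("What is Business Messages?",
     "A mobile conversational channel for brands on Google Search and Maps."),
    ("How do I test my agent?",
     "Open the autogenerated test URL on a mobile device."),
]
print(faq_to_csv(faq))
```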

Example FAQ sheet created for Business Messages

I downloaded this Sheet as a CSV and uploaded it as the initial data set for Bot-in-a-Box to train with.

Upload an FAQ as training data for Bot-in-a-Box

Now that I have Bot-in-a-Box configured, I go back to the conversation I started with the Business Messages Helper Bot on my phone and try asking a question. The Business Messages agent is able to respond immediately with a matching answer pulled from the FAQ document I created.

First time experience with the Business Messages Helper Bot

With Bot-in-a-Box’s FAQ support, within just a few minutes, without writing any code, I was able to create a sophisticated digital agent that can answer common questions about Business Messages.

Adding in a custom intent

As a final step, we are going to add a custom intent to the Dialogflow project we set up that can respond with rich content when someone taps on the “About this bot” suggestion or enters a similar question in the conversation.

From the Integrations section of the Business Communications Developer Console, I click on View agent, which takes me into Dialogflow ES. I click on the Intents menu item, create a new intent called “About this bot”, enter a few training phrases that represent expressions that should match this intent, and add a text response.

Example of creating a custom payload to respond with a Business Messages rich card

Back in my conversation with the helper bot, I enter a message that should match this intent: “Who made this bot?”. Even though this phrase wasn’t explicitly part of the training phrases, my agent should match the intent and produce the response I configured.

Example text-based response from a custom intent

In this example, I’m responding with a simple text message, but what if I want to take advantage of Business Messages’s rich message support and respond with something like a rich card? I can do this by using Dialogflow’s custom payload option and use a valid Business Messages rich card payload in the response to create the card.
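As an illustration, a minimal standalone rich card custom payload might look like the following. The title, description, and suggestion values are made up; the payload shape should be checked against the Business Messages rich card reference.

```python
import json

# A minimal rich card payload of the kind you can paste into
# Dialogflow's custom payload response; values are illustrative.
rich_card_payload = {
    "richCard": {
        "standaloneCard": {
            "cardContent": {
                "title": "Business Messages Helper Bot",
                "description": "Built to answer common Business Messages questions.",
                "suggestions": [
                    {"reply": {"text": "Tell me more", "postbackData": "tell_me_more"}}
                ],
            }
        }
    }
}

print(json.dumps(rich_card_payload, indent=2))
```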

Example of creating a custom payload to respond with a Business Messages rich card

After creating the JSON structure for a card, I click Save and re-enter the chat on my phone asking “Who made this bot?” again and see the updated response.

Example rich card response from Helper Bot

Final thoughts

Google’s Business Messages is about enabling all businesses to welcome their customers and open a conversation, where and when they need it, as naturally as when a customer enters a store. Dialogflow is Google’s natural language understanding tool that processes user input, maps it to known intents, and responds with appropriate replies.

With Bot-in-a-Box, you can quickly combine the power of Business Messages that turns search queries into conversations, and Dialogflow to provide a turnkey solution to automate customer interactions with a business.

In this article, I showed how to use an FAQ to get up and running with Business Messages quickly and even create custom intents that can respond with rich responses to user inquiries, all without writing a single line of code. This no-code solution can easily be extended using Dialogflow’s fulfillment feature to pull in business information from a database or API, allowing you to support even more complex user journeys.

To learn more about Business Messages, check out our developer website and join our community. You can also check out the Business Messages Helper Bot powered by this technology available in our developer support section here.

I can’t wait to see what you build!

Manage your passes from Google Pay’s Business Console

Posted by:

Ryan Novas, Product Manager, Google Pay’s Business Console

Jose Ugia, Developer Relations Engineer, Google Pay

Last year we launched Google Pay’s Business Console, a platform that helps developers discover, integrate with, and manage Google Pay features for their businesses. Since then, integrating Google Pay products has become easier and faster, with features like a common business profile and a unified dashboard.

Today, we are adding Passes as a new section to Google Pay’s Business Console, so you can manage all your Google Pay resources from one place. You can find the new Passes section in the console’s left-hand navigation bar, and from there, access your tickets, loyalty programs, offers and other passes resources.

Google Pay’s Business Console features a more familiar and intuitive user interface that helps you reuse common bits of information, like your business information, and lets you easily navigate and discover Google Pay products, such as the Online API. Visit Google Pay’s Business Console today, and start managing your current Google Pay products, or discover and integrate with new ones.

The new Passes section in Google Pay’s Business Console lets you request access to the API and manage your passes alongside other Google Pay resources.

Here is what early users are saying about managing Passes in the console:

“The cleaner, more consistent look of Google Pay’s Business Console helps us manage our Google Pay resources more intuitively,” said Or Maoz, Senior Director of R&D at EngagedMedia.

The user management additions also helped EngagedMedia better represent their team in the console:

“The new user roles and controls on Google Pay’s Business Console help us handle permissions more intuitively and accurately, and allow us to assign roles that better reflect our team structure more easily.”

We are committed to continuously evolving Google Pay’s Business Console to make it your go-to place to discover and manage Google Pay integrations. We’d love to hear about your experience. You can share feedback with us from the “Feedback” section in the console. We’re looking forward to learning how we can make Google Pay even more helpful for you in the future.

Learn more

Want to learn more about Google Pay?

Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach hundreds of millions of people looking to buy products like theirs each day.

To upload their products, merchants can make use of feeds: files that list products in a specific format. These can be shared with Merchant Center in different ways: using Google Sheets, SFTP or FTP shares, Google Cloud Storage, or manually through the user interface. These methods work great for the majority of cases. But if a merchant’s product list grows over time, they might reach the usage limits of the feeds. Depending on the case, quota extensions can be granted, but if the list continues to grow, it might reach a point where feeds no longer support that scale, and the Content API for Shopping becomes the recommended way forward.

The main issue is that if a merchant is advised to stop using feeds and start using the Content API because of scale problems, their number of products is massive, and calling the Content API directly will produce usage and quota errors, as the QPS and products-per-call limits will be exceeded.
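As a sketch of what direct Content API usage looks like, the snippet below wraps a list of products into a `products.custombatch` request body. The sample products are illustrative, and the per-call entry limit and required product fields should be checked in the Content API for Shopping reference.

```python
def build_custombatch(merchant_id, products, start_batch_id=0):
    """Wrap product resources into a Content API products.custombatch
    insert request. Each entry carries its own batchId so responses
    can be matched back to the originating product."""
    return {
        "entries": [
            {
                "batchId": start_batch_id + i,
                "merchantId": merchant_id,
                "method": "insert",
                "product": product,
            }
            for i, product in enumerate(products)
        ]
    }

# Illustrative products; a real feed row carries many more attributes.
sample_products = [
    {"offerId": f"sku-{i}", "title": f"Product {i}", "contentLanguage": "en",
     "targetCountry": "US", "channel": "online"}
    for i in range(3)
]
request_body = build_custombatch(merchant_id=123456, products=sample_products)
print(len(request_body["entries"]))
```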

For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.

Centimani is a configurable massive file processor able to split text files into chunks, process them following a strategic pattern, and store the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and takes care of exponential backoff to ensure all requests have enough retries to overcome potential temporary issues or errors. Centimani comes with two operators, Google Ads Offline Conversions Uploader and Merchant Center Products Uploader, but it can easily be extended to other uses.
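The chunking and retry behavior described here can be sketched in a few lines. This is an illustration of the pattern, not Centimani’s actual code; the real tool reads chunk sizes and retry counts from its configuration.

```python
import random
import time

def chunk(items, size):
    """Split a list into fixed-size slices, mirroring how Centimani
    splits an input file into chunks of N products per API call."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry `fn` with exponential backoff plus jitter, the strategy
    used to ride out transient API errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Wait 1x, 2x, 4x, ... the base delay, with a little jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```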

Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud Project to be used (including the element names), the credentials to access the Merchant Center accounts and how the load will be distributed (e.g., parallel executions, number of products per call). Then, the deployment is done automatically using a deployment script provided by the tool.

After the tool is deployed, a cloud function will be monitoring the input bucket in Cloud Storage, and every time a file is uploaded there, it will be processed. The tool uses the name of the file to select the operator that is going to be used (“MC” indicates Merchant Center Products Uploader), and the particular configuration to use (multiple configurations can be used to connect to Merchant Center accounts with different access credentials).

Whenever a file is uploaded, it will be sliced into parts if it contains more products than are allowed per call. The slices are stored in the output bucket in Cloud Storage, and Cloud Tasks starts launching the API calls until all files are processed. Any file with errors is stored in a folder called “slices_failed” to help troubleshoot any issues found in the process. Also, all the information about the executions is stored temporarily in Datastore and then moved to BigQuery, where it can be used to monitor the whole process from a centralized place.

Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.

For more information, you can visit the Centimani repository on GitHub.

Taking the leap to pursue a passion in Machine Learning with Leigh Johnson #IamaGDE

Welcome to #IamaGDE – a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.

Leigh Johnson turned her childhood love of Geocities and Neopets into a web development career, and then trained her focus on Machine Learning. Now, she’s a staff software engineer at Slack, a Google Developer Expert in Web and Machine Learning, and founder of Print Nanny, an automated failure detection system and monitoring system for 3D printers.

Meet Leigh Johnson, Google Developer Expert in Web and Machine Learning.

Image shows GDE Leigh Johnson, smiling at the camera and holding a circuit board of some kind

GDE Leigh Johnson

The early days

Leigh Johnson grew up in the Bronx, NY, and got an early start in web development when she became captivated by Geocities and Neopets in elementary school.

“I loved the power of being able to put something online that other people could see, using just HTML and CSS,” she says.

She started college and studied Latin, but it wasn’t the right fit for her, so she dropped out and launched her own business building WordPress sites for small businesses, like local restaurants putting their menus online for the first time or taking orders through a form.

“I was 18, running around a data center trying to rack servers and teaching myself DNS to serve my customer base, which was small business owners,” she says. “I ran my business for five years, until companies like Squarespace and Wix started to edge me out of the market a little bit.”

Leigh went on to chase her dream of working in the video game industry, where she got exposed to low-level C++ programming, graphics engines, and basic statistics, which led her to machine learning.

Image shows GDE Leigh Johnson, smiling at the camera and standing in front of a presentation screen at SFPython

Machine learning

At the video game studio where she worked, Leigh got into Bayesian inference.

“It’s old school machine learning, where you try to predict things based on the probability of previous events,” she explains. “You look at past events and try to predict the probability of future events, and I did this for marketing offers—what’s the likelihood you’d purchase a yellow hat to match your yellow pants?”
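The kind of “old school” prediction Leigh describes boils down to Bayes’ rule. With entirely made-up numbers, the hat-and-pants example looks like this:

```python
def posterior(prior, likelihood, evidence_rate):
    """Bayes' rule: P(buy | signal) = P(signal | buy) * P(buy) / P(signal)."""
    return likelihood * prior / evidence_rate

# Hypothetical numbers: 5% of players buy a hat overall, 60% of hat buyers
# had previously bought matching pants, and 10% of all players bought pants.
p_hat_given_pants = posterior(prior=0.05, likelihood=0.60, evidence_rate=0.10)
print(round(p_hat_given_pants, 2))  # -> 0.3
```

With those inputs, a player who bought the yellow pants is six times as likely as the average player to buy the matching hat, which is exactly the kind of lift that makes a targeted email offer worthwhile.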

In the first month or two of trying email offers, the company made more small dollar sales than they typically made in a year.

“I realized, this is powerful dark magic; I must learn more,” Leigh says.

She continued working for tech startups like Ansible, which was acquired by Red Hat, and Dave.com, doing heavy data lifting.

“Everything about machine learning is powered by being able to manipulate and get data from point A to point B,” she says.

Today, Leigh works on machine learning and infrastructure at Slack and is a Google Developer Expert in machine learning. She also has a side project she runs: Print Nanny.

Image shows circuit board with fan next to image of its schematics

Print Nanny: Monitoring 3D printers

When Leigh got into 3D printing as a hobby during the COVID-19 shutdown, she discovered that 3D printers can be unreliable and lack sophisticated monitoring programs.

“When I assembled my 3D printer myself, I realized that over time, the calibration is going to change,” she says. “It’s a very finicky process, and it didn’t necessarily guarantee the quality of these traditional large batch manufacturing processes.”

She installed a nanny cam to watch her 3D printer and researched solutions, knowing from her machine learning experience that because 3D printers build a print up layer by layer, there’s no one point of failure—failure happens layer by layer, over time. So she wrote that algorithm.

“I saw an opportunity to take some of the traditional machine intelligence strategies used by large manufacturers to ensure there’s a certain consistency and quality to the things they produce, and I made Print Nanny,” she says. “It uses a Raspberry Pi, a credit card-sized computer that costs $30. You can stick a computer vision model on one and do offline inference, which is basically making predictions about what the camera sees. You can make predictions about whether a print will fail, help score calculations, and attenuate the print.”

Leigh used Google Cloud Platform AutoML Vision, Google Cloud Platform IoT Core, TensorFlow Model Garden, and TensorFlow.js to build Print Nanny. Using GCP credits provided by Google, she improved and developed Print Nanny with TensorFlow and Google Cloud Platform products.

When Print Nanny detects that a print is failing, the user receives a notification and can remotely pause or stop the printer.

“Print Nanny is an automated failure detection system and monitoring system for 3D printers, which uses computer vision to detect defects and alert you to potential quality or safety hazards,” Leigh says.

Leigh has hired team members who are interested in machine learning to help her with the technical aspects of Print Nanny. Print Nanny currently has 2100 users signed up for a closed beta, with 200 people actively using the beta version. Of that group, 80% are hobbyists and 20% are small business owners. Print Nanny is 100% open source.

Image shows a collection of 3D-Printed parts

Becoming a GDE

Leigh got involved with the GDE program about four years ago, when she began putting machine learning models on Raspberry Pis and building robots. She began writing tutorials about what she was learning.

“The things I was doing were quite hard: TensorFlow Lite, the mobile version of TensorFlow—there was a missing documentation opportunity there, and my target platform, the Raspberry Pi, is a hobbyist platform, so there was a little bit of missing documentation there,” Leigh says. “For a hobbyist who wanted to pick up a Raspberry Pi and do a computer vision project for the first time, there was a missing tutorial there, so I started writing about what I was doing, and the response was tremendous.”

Leigh’s work caught the eye of Google staff research engineer Pete Warden, the technical lead of the TensorFlow Mobile team, who encouraged her, and she leveraged the GDE program to connect to Google experts on TensorFlow and machine learning. Google provides a machine learning course for developers and supports TensorFlow, in addition to its many AI products.

“I had no knowledge of graph programming or what it meant to adapt the low-level kernel operations that would run on a Raspberry Pi, or compiling software, and I learned all that through the GDE program,” Leigh says. “This program changed my life.”

Image shows 1 man and three women smiling at the camera. Leigh is taking the photo selfie-style

Leigh’s favorite part of the GDE program is going to events like TensorFlow World, which she last attended in 2019, and GDE summits. She hadn’t travelled internationally until she was in her 20s, so the GDE program has connected her to the international community.

“It’s been life-changing,” she says. “I never would have had access to that many perspectives. It’s changed the way I view the world, my life, and myself. It’s very powerful.”

Leigh smiles at the camera in front of a sign that reads TensorFlow for mobile and edge devices

Leigh’s advice to future developers

Leigh recommends that people find the best environment for themselves and adopt a growth mindset.

“The best advice that I can give is to find your motivation and find the environment where you can be successful,” she says. “Surround yourself with people who are lifelong learners. When you cultivate an environment of learning around you, it’s this positive, self-perpetuating process.”

#IamaGDE: Katerina Skroumpelou (Athens, Greece)

The Google Developers Experts program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Katerina Skroumpelou, who is based in Athens, Greece, is a Senior Software Engineer at Narwhal Technologies, a consulting firm whose product Nx lets developers build multiple full-stack applications and share code among them in the same workspace. She is a Google Developer Expert in Maps and Angular.

Image shows Katerina Skroumpelou looking straight ahead at the camera. She is seated behind a laptop with a red cover covered in stickers

Becoming a Google Maps Platform developer

Skroumpelou has always had a deep love of maps.

“My father used to have old map books, and I’ve always been obsessed with knowing where I am and having an understanding of my surroundings, so I’ve always liked maps,” she says.

After learning to code in high school, Skroumpelou decided she wanted to be a programmer and spent a year studying computer engineering at the National Technical University of Athens, but she wasn’t excited about it, so she pursued a Master’s in architectural engineering there, instead. Then she earned her master’s degree in spatial analysis at University College London.

“They have a Center for Advanced Spatial Analysis–everything to do with maps, spatial data, and analysis, which combined my love of maps, space, and programming,” she says. “My master’s combined my passions, and I got into programming and maps–we created them with code.”

Skroumpelou returned to Greece after her master’s and took postgraduate courses at the National Technical University of Athens to get a more solid foundation in programming. She landed a job as a web developer at the National Centre for Scientific Research “Demokritos,” which works on European Union-funded security and safety research projects. Her first project was to build a system to help manage a fire department fleet.

“I had to work with maps and annotate on the map where a fire could break out, and how the fleet would be distributed to put it out, so I started working with Google Maps then,” she says. “For another research project on airport security, I imported information into a Google Map of the airport.”

She also learned Angular on the job. Skroumpelou moved on to several software engineering jobs after that and continued to work with Angular and build Google Maps projects in her spare time.

Getting involved in the developer community

Skroumpelou’s first foray into the Google developer community began when she was learning Angular for her job at the research center and watching conference talks on Angular online.

“I thought, hey, I can do that too, and I started thinking about how to get involved,” she says. “I reached out to the Angular Connect London conference, and my talk was accepted. It wasn’t strictly technical; it was “From Buildings to Software,” describing my journey from architectural engineering to software engineering.”

Since then, Skroumpelou has spoken at Google Developer Groups events, local meetups, and DevFest. She became a GDE in 2018 in Angular, Web Technologies, and Google Maps Platform and finds it incentivizes her to use Google tools for new projects.

“Apart from the feeling that you’re giving back to the community, you gain things for yourself on a personal level, and it’s an incentive to do even more,” she says.

She appreciated meeting other developers who shared her passion.

“I’m a very social person, and it really feels like we have common ground,” she says.

Image shows Katerina Skroumpelou presenting onstage at a conference. Behind her is a podium and a wall covered in a planetary theme

Favorite Google Maps Platform features

Skroumpelou’s favorite Google Maps Platform features are Cloud-based Maps styling, the drawing library, and the drawing manager.

“I used the drawing library a lot when I was working at the research institute, drawing things on the map,” she says. “Being able to export the data as JSON and import it again is cool.”

She used the styling feature while building a friend’s website and styled a map to go with the brand colors.

“It looks neat to have your brand colors on the map,” she says. “You can remove things from the map and add them back, add geometries, points, and other things, and draw it as you want.”

She speaks highly of Google Maps Platform’s out-of-the-box interactive features for users, like the JS repository, which has examples developers can clone, or they can use NPM to generate a new Maps application.

“It makes building a map or using it much easier,” she says, adding, “The Google Maps Platform docs are very good and detailed.”

Image shows a map of London illustrating loneliness in people over the age of 65

Future plans

Skroumpelou plans to stay with Narwhal Technologies for the long term and continue to work with Google Maps Platform as much as possible.

“I really like the company I’m working for, so I hope I stay at this company and progress up,” she says.

Image shows Katerina Skroumpelou looking off-camera with a smile. She is standing behind a podium with a laptop with a red cover covered in stickers

Follow Katerina on Twitter at @psybercity | Check out Katerina’s projects on GitHub

For more information on Google Maps Platform, visit our website or learn more about our GDE program.

Machine Learning Communities: Q3 ‘21 highlights and achievements

Posted by HyeJung Lee, DevRel Community Manager and Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and achievements of the vast Google Machine Learning communities, by region, for the last quarter. Activities of experts (GDEs, professional individuals), communities (TFUGs, TensorFlow User Groups), students (GDSCs, student clubs), and developer groups (GDGs) are presented here.

Key highlights

Image shows a banner for 30 days of ML with Kaggle

30 days of ML with Kaggle is designed to help beginners study ML using Kaggle Learn courses, as well as through a competition created specifically for participants of this program. We collaborated with the Kaggle team so that more than 30 ML GDEs and TFUG organizers could take part in this initiative as volunteer online mentors and speakers.

In total, 16 GDE/GDSC/TFUG groups ran community-organized programs, following the shared community organizer guide. Houston TensorFlow & Applied AI/ML placed 6th out of 7,573 teams in the competition — the only Americans in the Top 10. TFUG Santiago (Chile) organizers participated as well, placing 17th on the public leaderboard.

Asia Pacific

Image shows Google Cloud and Coca-Cola logos

A project by GDE Minori MATSUDA (Japan) for Coca-Cola Bottlers Japan was featured on the Google Cloud Japan blog, covering how an ML pipeline was created and deployed into a real business within two months using Vertex AI. It was also published in English on the GCP blog.

GDE Chansung Park (Korea) and Sayak Paul (India) published several articles on the GCP blog. First, “Image search with natural language queries” explains how to build a simple image parser from natural language inputs using OpenAI’s CLIP model. Second, the two-part post “Model training as a CI/CD system” (Part I, Part II) explains why a resilient CI/CD system for your ML application is crucial for success. Last, “Dual deployments on Vertex AI” describes an end-to-end workflow using Vertex AI, TFX, and Kubeflow.

In China, GDE Junpeng Ye used TensorFlow 2.x to significantly reduce the codebase (15k → 2k lines) of WeChat Finder, a TikTok alternative inside WeChat. GDE Dan Lee wrote an article series, Understanding TensorFlow: Part 1, Part 2, Part 3-1, Part 3-2, and Part 4.

GDE Ngoc Ba from Vietnam has contributed the AI Papers Reading and Coding series, implementing ML/DL papers in TensorFlow and creating slides and videos every two weeks (videos: ViT Transformer, MLP-Mixer, and Transformer).

GDSC Sookmyung (Korea) produced beginner-friendly codelabs (Get started with audio classification, Go further with audio classification) that teach how to customize pre-trained audio classification models to your needs and deploy them to your apps using the TFLite Model Maker.

Cover image for Mat Kelcey's talk on JAX at the PyConAU event

GDE Matthew Kelcey from Australia gave a talk on JAX at the PyConAU event, covering the fundamentals of JAX and introducing some of the libraries being developed on top of it.

Image shows overview for the released PerceiverIO code

In Singapore, TFUG Singapore dived back into some of the latest papers, techniques, and fields of research that are delivering state-of-the-art results. GDE Martin Andrews included a brief walkthrough of the released PerceiverIO code, highlighting what JAX looks like, how Haiku relates to Sonnet, and the data loading, which is done via tf.data.

Machine Learning Experimentation with TensorBoard book cover

GDE Imran us Salam Mian from Pakistan published a book, “Machine Learning Experimentation with TensorBoard”.

India

GDE Aakash Nain has published the TF-JAX tutorial series from Part 4 to Part 8. Part 4 gives a brief introduction to JAX (what and why) and DeviceArray. Part 5 covers why pure functions are good and why JAX prefers them. Part 6 focuses on pseudo-random number generation (PRNG) in NumPy and JAX. Part 7 focuses on just-in-time compilation (JIT) in JAX. And Part 8 covers vmap and pmap.

Image of Bhavesh's Google Cloud certificate

GDE Bhavesh Bhatt published a video about his experience on the Google Cloud Professional Data Engineer certification exam.

Image shows phase 1 and 2 of the Climate Change project using Vertex AI

ML GDE Sayak Paul and Siddha Ganju (NVIDIA) built a climate change project using Vertex AI. They published a paper (Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning) and open-sourced the project as part of NASA Impact’s ETCI competition. The project was accepted at four NeurIPS workshops: AI for Science: Mind the Gaps; Tackling Climate Change with Machine Learning; Women in ML; and Machine Learning and the Physical Sciences. They finished as the first runners-up (see Test Phase 2).

Image shows example of handwriting recognition tutorial

A tutorial on handwriting recognition by GDE Sayak Paul and Aakash Kumar Nain was contributed to the Keras examples.

Graph regularization for image classification using synthesized graphs, by GDE Sayak Paul, was added to the official examples of Neural Structured Learning in TensorFlow.

GDE Sayak Paul and Soumik Rakshit shared a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv.

North America

Banner image shows students participating in Google Summer of Code

During GSoC (Google Summer of Code), some GDEs mentored or co-mentored students. GDE Margaret Maynard-Reid (USA) mentored students working on TF-GAN, Model Garden, TF Hub, and TFLite. You can read some of her experience and tips on the GDE blog. You can also find GDE Sayak Paul (India) and Googler Morgan Roff’s GSoC experience (co-)mentoring TensorFlow and TF Hub.

GDSC Texas A&M University (USA) hosted a beginner-friendly workshop on TensorFlow for students, led by ML GDE Henry Ruiz (USA).

Screenshot from Youtube video on how transformers work

In the YouTube video Self-Attention Explained: How do Transformers work?, GDE Tanmay Bakshi (Canada) explains how to build a Transformer-encoder-based neural network that classifies code into 8 different programming languages, using Keras on Colab with TPUs.
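At the heart of such an encoder is scaled dot-product self-attention, where every token mixes in information from every other token. A minimal NumPy illustration (a conceptual sketch, not code from the video):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention (single head, no learned projections)."""
    d_k = x.shape[-1]
    scores = x @ x.T / np.sqrt(d_k)               # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ x                            # weighted mix of embeddings

tokens = np.random.rand(5, 16)  # 5 tokens, 16-dim embeddings
out = self_attention(tokens)
print(out.shape)  # (5, 16)
```

A real Transformer encoder adds learned query/key/value projections, multiple heads, and feed-forward layers, but the mixing step is the same.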

Europe

GDG / GDSC Turkey hosted an AI Summer Camp in cooperation with Global AI Hub, where 7,100 participants learned about ML, TensorFlow, CV, and NLP.

Screenshot from slide presentation titled Why Jax?

In the TechTalk Speech Processing with Deep Learning and JAX/Trax, GDEs Sergii Khomenko (Germany) and M. Yusuf Sarıgöz (Turkey) reviewed technologies such as JAX, TensorFlow, and Trax that can help boost research in speech processing.

South/Central America

Image shows Custom object detection in the browser using TensorFlow.js

On the other side of the world, in Brazil, GDE Hugo Zanini Gomes wrote an article, “Custom object detection in the browser using TensorFlow.js”, using the TensorFlow 2 Object Detection API and Colab; it was published on the TensorFlow blog.

Screenshot from a talk about Real-time semantic segmentation in the browser - Made with TensorFlow.js

Hugo also gave a talk, Real-time semantic segmentation in the browser – Made with TensorFlow.js, which covered using SavedModels efficiently in JavaScript, directly enabling you to bring the reach and scale of the web to your research.

In her talk Data Pipelines for ML, GDE Nathaly Alarcon Torrico (Bolivia) explained all the phases involved in creating ML and Data Science products, from data collection through transformation and storage to building ML models.

Screenshot from the TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video)

TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (Video) was hosted by TFUG Santiago (Chile). In this talk, the speaker walked through the steps needed to build a model capable of reaching the top 1% of the Kaggle Leaderboard, focusing on the libraries and “tricks” used to test many ideas quickly, in both implementation and execution, and how to use them in production environments.

MENA

Screenshot from workshop about Recurrent Neural Networks

GDE Ruqiya Bin Safi (Saudi Arabia) gave a workshop on Recurrent Neural Networks: part 1 (GitHub / slides) at GDG MENA, and a talk on Recurrent Neural Networks: part 2 at GDG Cloud Saudi (Saudi Arabia).

AI Training with Kaggle was hosted by GDSC Islamic University of Gaza (Palestine): a two-month training covering Data Processing, Image Processing, and NLP with Kaggle.

Sub-Saharan Africa

TFUG Ibadan hosted two TensorFlow events: Basic Sentiment Analysis with TensorFlow and Introduction to Recommender Systems with TensorFlow.

Image of Yannick Serge Obam Akou's TensorFlow Certificate

ML GDE Yannick Serge Obam Akou (Cameroon) wrote an article (in French) covering tips on how to study for, prepare for, and pass the TensorFlow Developer exam.