Announcing our second Google for Startups Accelerator: Black Founders cohort

Posted by Jason Scott, Head of Startup Developer Ecosystem, USA

Headshots of two people, one man and one woman

Last year, 12 inspiring entrepreneurs kicked off the inaugural Google for Startups Accelerator for Black Founders. Throughout the three-month program, founders met weekly to work on growing their startups and solving tough technical challenges. “There’s so much happening every single day as a startup,” says Ashley Edwards, founder of MindRight Health, whose startup is making mental healthcare accessible to people of color and low-income families. “The program helped us navigate everything from protecting our team from distractions to building out our machine learning and data science models.”

This August, we’ll launch the second Google for Startups Accelerator for Black Founders with 11 more incredible Black-led startups from across North America. This class features startups using technology to solve challenges in medicine, education, water sustainability, real estate, and more:

GIF of Black Founders class

Acclinate (Birmingham, Alabama, USA): A digital health startup using culture and tech to source diverse participants for clinical trials.

Adapdix (Pleasanton, California, USA): An AI/ML startup that works with large industrial semiconductor, electronic and assembly companies.

AllHere Education (Boston, Massachusetts, USA): Fosters student attendance and supports families and students with mobile messaging powered by AI.

Chatdesk (New York, New York, USA): Uses machine learning to scale support teams with the click of a button.

DOSS (Houston, Texas, USA): A digital, voice-activated real estate marketplace that empowers consumers to speak, text or type questions about properties nationwide and receive accurate, easy answers instantly.

Fêtefully (Dallas, Texas, USA): Digitizes wedding planning experiences, allowing planners to generate greater revenue and improve their offerings to customers.

Mommy Monitor (Toronto, Ontario, Canada): A maternal care services platform that provides an easily accessible and culturally safe range of services, giving parents extra support customized to their particular needs and wants.

Optimal Technical Corporation (Atlanta, Georgia, USA): Intelligently eliminates electricity waste, lowers operational expenses, and helps to save the planet.

Sugar (Los Angeles, California, USA): Provides software to building owners and managers to transform the residential experience.

Varuna (Chicago, Illinois, USA): The leading water distribution system monitoring company, providing real-time visibility, awareness, and insights to water systems, enabling optimal operations and consumer safety.

Zirtue (Dallas, Texas, USA): The world’s first relationship-based lending application that simplifies loans between friends, family and trusted relationships while giving borrowers the option to pay creditors directly using their borrowed funds.

We are incredibly excited to support this group of founders over the next three months and beyond, connecting them with the best of our people, products, and programming to advance their companies and solutions.

Be sure to join us as we showcase their accomplishments on Thursday, October 21 from 12:30pm – 2:00pm EST at our Google for Startups Accelerator: Black Founders Demo Day 2021.

Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we’re introducing the first video showing long-time App Engine developers how to migrate from the App Engine ndb client library that connects to Datastore. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and give them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding “migration module” video, we transitioned that app from App Engine’s original webapp2 framework to Flask, a popular framework in the Python community. Today’s Module 2 content picks up where Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine’s Datastore matured into its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed for App Engine ndb users to preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement much as you would when opening files, but for Datastore access. However, aside from moving code inside with blocks, no other changes are required of the original App Engine ndb app code that accesses Datastore. Of course, “YMMV” (your mileage may vary) depending on the complexity of your code, but the goal of the team is to provide as seamless a transition as possible as well as to preserve “ndb”-style access.

The difference between the App Engine ndb and Cloud NDB versions of the sample app

The “diffs” between the App Engine ndb and Cloud NDB versions of the sample app
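To make that concrete, here is a minimal sketch of the Cloud NDB pattern. The Visit model and query below are illustrative placeholders, not the sample app’s exact code; the point is that Datastore calls move inside the client’s context manager while the ndb-style model and query code stays the same.

    from google.cloud import ndb  # Cloud NDB client library


    class Visit(ndb.Model):
        # Illustrative model; the sample app's real model may differ.
        visitor = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)


    client = ndb.Client()  # connects to Cloud Datastore


    def fetch_visits(limit=10):
        # Datastore access now happens inside the context manager; the query
        # itself is unchanged from App Engine ndb style.
        with client.context():
            return Visit.query().order(-Visit.timestamp).fetch(limit)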

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample “STARTs” with the Module 1 code completed in the previous codelab (and video). You can use your own solution or grab ours in the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this “FINISH” code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. Bonus content migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 and others, so stay tuned! Developers should also check out the official Cloud NDB migration guide which provides more migration details, including key differences between both client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can’t wait, try out its codelab found in the table at the repo above. Migrations aren’t always easy; we hope this content helps you modernize your apps and shows we’re focused on helping existing users as much as new ones.

#IamaGDE: Homing Tam

#IamaGDE series presents: Google Maps

Welcome to #IamaGDE – a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.

image of GDE, Homing Tam

Homing Tam is a product manager at Lalamove, an on-demand logistics company. He started at the company focusing on location-based systems, talking with developers and business users to enhance the company’s mapping solutions, before moving into product management. Now, Homing handles corporate solutions; takes care of people who want to integrate with his company’s systems; handles the API side of things to help make integration easier; and provides recommendations for developers and other technical teammates.

Becoming a Maps developer

Homing studied geomatics and computing at university, and his 2009 thesis was based on Google’s API backend. His dissertation focused on using the Google Maps API to perform mapping and overlay. His first full-time job was as a GIS analyst at Esri, the geographic information systems software company. A year and a half later, he became a solutions consultant for a different company, helping customers interested in integrating Google Maps with their software.

image of developer community meetup

Getting involved in the developer community

After Homing got involved in the Google Technology User Group (now known as Google Developer Groups), his boss at the time told him about the Google Developer Experts program. For his interview, Homing presented a product using the Google Maps APIs. When he became a GDE, he gave presentations and talks in the greater China region as a surrogate for the Google Maps Platform team. Homing is currently one of the organizers for GDG Hong Kong, organizing and giving community talks.

Favorite Maps features and current projects

Homing says the Maps Styling Wizard, the precursor to the newer Cloud-based Maps Styling features, is one of his favorite features.

“Cartography, which I studied in college, matters a lot, especially to a simple black and white schematic map, or when matching the theme of a map to a site,” he says. “I like that feature a lot.”

In 2020, Homing gave one talk on Android in the Android 11 Meetup and another talk on Maps at the first-ever virtual Hong Kong Devfest, and he’s ready to do more speaking.

“It had been a while since I gave a talk on maps, and the launch of Cloud-based Maps Styling is so exciting that I feel like it’s time to do some presentations and let the community know more about it. Beyond knowing how to use the API, you need to know how you can make the most of the API.”

Homing notes that this year, in particular, more small business owners need to know how to collect customer addresses, allow customers to place on-demand delivery orders, and update customers.

image from GDG developer community meetup in Hong Kong

In 2021, in addition to giving more talks, Homing hopes to work with the GDG organizers in Hong Kong to plan a hackathon or otherwise teach community members more about the new Maps features.

“Can we make an MVP or a really initial stage cycling app to use as a base to explore the new features and use different Google components?”

As his career continues, Homing says he has two priorities: progressing as a product manager and leveraging technology, including maps, to improve lives.

“This year was a year for everyone to become digitally literate,” he says. “With the extra time we spend on technology, we should make good use of technology to make life better.”

For more information on Google Maps Platform, visit our website.

For more information on Google Developer Experts, visit our website.

How students built a web app with the potential to help frontline workers

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

Image of Olly and Daniel from GDSC at Wash U.

Image of Olly and Daniel from Google Developer Student Clubs at Wash U.

When Olly Cohen first arrived on campus at Washington University in St. Louis (Wash U), he knew the school was home to many talented and eager developers, just like him. Computer science is one of the most popular majors at Wash U, and graduates often find jobs in the tech industry. With that in mind, Olly was eager to build a community of peers who wanted to take theories learned in the classroom and put them to the test with tangible, real-life projects. So he decided to start his own Google Developer Student Club, a university-based community group for students interested in learning about Google developer technology.

Olly applied to become Google Developer Student Club Lead so he could start his own club with a faculty advisor, host workshops on developer products and platforms, and build projects that would give back to their community.

He didn’t know it at the time, but starting the club would eventually lead him to the most impactful development project of his early career — building a web application with the potential to help front-line healthcare workers in St. Louis, Missouri, during the pandemic.

Growing a community with a mission

The Google Developer Student Club grew quickly. Within the first few months, Olly and the core team signed up 150 members, hosted events with 40 to 60 attendees on average and began working on five different projects. One of the club’s first successful projects, led by Tom Janoski, was building a tool for the visually impaired. The app provides audio translations of visual media like newspapers and sports games.

This success inspired them to focus their projects on social good missions, and in particular helping small businesses in St. Louis. With a clear goal established, the club began to take off, growing to over 250 members managed by 9 core team members. They were soon building 10 different community-focused projects, and attracting the attention of many local leaders, including university officials, professors and organizers.

Building a web app for front-line healthcare workers

As the St. Louis community began to respond to the coronavirus pandemic in early 2020, some of the leaders at Wash U wondered if there was a way to digitally track PPE needs from front-line health care staff at Wash U’s medical center. The Dean of McKelvey School of Engineering reached out to Olly Cohen and his friend Daniel Sosebee to see if the Google Developer Student Club could lend a hand.

The request was sweeping: Build a web application that could potentially work for the clinical staff of Wash U’s academic hospital, Barnes-Jewish Hospital.

So the students got right to work, consulting with Google employees, Wash U computer science professors, an industry software engineer, and an M.D./Ph.D. candidate at the university’s School of Medicine.

With the team assembled, the student developers first chose a platform on which to base their solution. Next, they built a simple prototype with a Google Form that linked to Google Sheets, so they could launch a pilot. Lastly, in conjunction with the Google Form, they developed a serverless web application with a form and data portal that could let all staff members easily request new PPE supplies.

In other words, their solution was showing the potential to help medical personnel track PPE shortages digitally in real time, making it easier and faster to identify and gather the resources doctors need right away. A web app built by students, poised to make a true difference: now that is what the Google Developer Student Club experience is all about.

Ready to make a difference?

Are you a student who also wants to use technology to make a difference in your community? Click here to learn more about joining or starting a Google Developer Student Club near you.

The Google Calendar API has changed how it manages API usage

Posted by Charles Maxson, Developer Advocate

The Google Calendar API has changed how it manages API usage. Previously, queries were monitored and limited on a daily basis. As of May 2021, queries started to be monitored and limited on a per-minute basis. This introduces better behavior when your quota is exceeded, as requests are rate-limited until quota is available rather than failing all requests for the rest of the day. This also helps developers recognize issues around quota enforcements faster and shouldn’t affect the performance of existing projects.

To view your usage and quota limits, have a look in the Google API console.

Image of Calendar API

To help you manage your quotas, we’ve put together a few helpful tips:

  • Use push notifications instead of polling.
  • If you cannot avoid polling, make sure you only poll when necessary (for example, poll less frequently at night).
  • Make sure to use randomized timing so that requests from your users spread out evenly instead of creating bursts.
  • Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
  • Increase page size to retrieve more data at once by using the maxResults parameter.
  • Update events when they change; avoid recreating all the events on every sync.
  • Use exponential backoff for error retries to make rate-limiting work properly.

For further details on managing quotas, please review the manage quotas documentation. You can also find more details on error handling in the resolve errors documentation.
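As a rough illustration of two of the tips above (larger pages via maxResults, and exponential backoff with jitter on rate-limit errors), here is a minimal sketch assuming an authorized service object built with the google-api-python-client library; the calendar ID and page size are placeholders:

    import random
    import time

    from googleapiclient.errors import HttpError


    def list_events(service, calendar_id="primary", max_retries=5):
        # Sketch only: pages through events with a larger maxResults and
        # retries rate-limited requests (403/429) with exponential backoff.
        events, page_token = [], None
        while True:
            for attempt in range(max_retries):
                try:
                    response = service.events().list(
                        calendarId=calendar_id,
                        maxResults=250,  # fewer, larger requests
                        pageToken=page_token,
                    ).execute()
                    break
                except HttpError as err:
                    if err.resp.status in (403, 429) and attempt < max_retries - 1:
                        time.sleep((2 ** attempt) + random.random())
                    else:
                        raise
            events.extend(response.get("items", []))
            page_token = response.get("nextPageToken")
            if not page_token:
                return events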

To stay on top of news and updates around Google Workspace APIs and the developer platform, please sign up for our developer newsletter.

Migrating from App Engine webapp2 to Flask

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

graphic showing movement with arrows, settings, lines, and more

Migrating web framework

The Google Cloud team recently introduced a series of codelabs (free, self-paced, hands-on tutorials) and corresponding videos designed to help users on one of our serverless compute platforms modernize their apps, with an initial focus on our earliest users running their apps on Google App Engine. We kick off this content by showing users how to migrate from App Engine’s webapp2 web framework to Flask, a popular framework in the Python community.

While users have always been able to use other frameworks with App Engine, webapp2 comes bundled with App Engine, making it the default choice for many developers. One new requirement in App Engine’s next generation platform (which launched in 2018) is that web frameworks must do their own routing, which, unfortunately, means that webapp2 is no longer supported, so here we are. The good news is that as a result, modern App Engine is more flexible, lets users develop in a more idiomatic fashion, and makes their apps more portable.

For example, while webapp2 apps can run on App Engine, Flask apps can run on App Engine, your servers, your data centers, or even on other clouds! Furthermore, Flask has more users, more published resources, and is better supported. If Flask isn’t right for you, you can select from other WSGI-compliant frameworks such as Django, Pyramid, and others.

Video and codelab content

In this “Module 1” episode of Serverless Migration Station (part of the Serverless Expeditions series), Google engineer Martin Omander and I explore this migration and walk developers through it step by step.

In the previous video, we introduced developers to the baseline Python 2 App Engine NDB webapp2 sample app that we’re taking through each of the migrations. In the video above, users see that the majority of the changes are in the main application handler, MainHandler:

The diffs between the webapp2 and Flask versions of the sample app

The “diffs” between the webapp2 and Flask versions of the sample app
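As a rough sketch of the shape of that change (the handler body here is a placeholder, not the sample app’s actual code), a webapp2 request handler class becomes a plain Flask view function, with routing handled by the framework’s decorator:

    # Before (App Engine webapp2), shown as a comment for comparison:
    #
    #   class MainHandler(webapp2.RequestHandler):
    #       def get(self):
    #           self.response.write('Hello from webapp2!')
    #
    #   app = webapp2.WSGIApplication([('/', MainHandler)])

    # After (Flask): the framework does its own routing via decorators.
    from flask import Flask

    app = Flask(__name__)


    @app.route('/')
    def root():
        # Same work the old MainHandler.get() did, as a view function.
        return 'Hello from Flask on App Engine!'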

Upon (re)deploying the app, users should see no visible changes to the output from the original version:

VisitMe application sample output

VisitMe application sample output

Next steps

Today’s video picks up from where we left off: the Python 2 baseline app in its Module 0 repo folder. We call this the “START”. By the time the migration has completed, the resulting source code, called “FINISH”, can be found in the Module 1 repo folder. If you mess up partway through, you can rewind to the START, or compare your solution with ours, FINISH. We also hope to one day provide a Python 3 version as well as cover other legacy runtimes like Java 8, PHP 5, and Go 1.11 and earlier, so stay tuned!

All of the migration learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. The next video (Module 2) will cover migrating from App Engine’s ndb library for Datastore to Cloud NDB. We hope you find all these resources helpful in your quest to modernize your serverless apps!

Assistant Recap Google I/O 2021

Written by: Jessica Dene Earley-Cha, Mike Bifulco and Toni Klopfenstein, Developer Relations Engineers for Google Assistant

Now that we’ve packed up all of the virtual stages from Google I/O 2021, let’s take a look at some of the highlights and new product announcements for App Actions, Conversational Actions, and Smart Home Actions. We also held a number of amazing live events and meetups that happened during I/O – which we’ll summarize as well.

App Actions

App Actions allows developers to extend their Android apps to Google Assistant. For our Android developers, we are happy to announce that App Actions is now part of the Android framework. With the introduction of the beta shortcuts.xml configuration resource and our latest Google Assistant plugin, App Actions is moving closer to the Android platform.

Capabilities

Capabilities is a new Android framework API that allows you to declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You can utilize capabilities by creating shortcuts.xml resources and defining your capabilities. A capability specifies two things: how it’s triggered and what to do when it’s triggered. To add a capability, use built-in intents (BIIs), which are pre-built intents that provide all the natural language understanding needed to map the user’s input to individual fields. When a BII is matched by the user’s speech, your capability will trigger an Android intent that delivers the understood BII fields to your app, so you can determine what to show in response.

This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.

Voice shortcuts for Discovery

Google Assistant suggests relevant shortcuts to users and has made it easier for users to discover and add shortcuts by saying “Hey Google, shortcuts.”

Image of Google Assistant voice shortcuts

You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google to make your shortcuts visible to users as voice shortcuts. Assistant can suggest relevant shortcuts to users to help make it more convenient for the user to interact with your Android app.

gif of In App Promo SDK

In-App Promo SDK

Not only can Assistant suggest shortcuts: with the In-App Promo SDK, currently in beta, you can proactively suggest shortcuts in your app for actions that the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and prompt the user to create the suggested shortcut.

Google Assistant plugin for Android Studio

To support testing capabilities, we launched the Google Assistant plugin for Android Studio. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play Store.

New App Actions resources

Learn more with new or updated content:

Conversational Actions

During the What’s New in Google Assistant keynote, Director of Product for the Google Assistant Developer Platform Rebecca Nathenson mentioned several coming updates and changes for Conversational Actions.

Updates to Interactive Canvas

Over the coming weeks, we’ll introduce new functionality to Interactive Canvas. Canvas developers will be able to manage intent fulfillment client-side, removing the need for intermediary webhooks in some cases. For use cases which require server-side fulfillment, like transactions and account linking, developers will be able to opt-in to server-side fulfillment as needed.

We’re also introducing a new function, outputTts(), which allows you to trigger Text to Speech client-side. This should help reduce latency for end users.

Additionally, there will be updates to the APIs available to get and set storage for both the home and individual users, allowing for client-side storage of user information. You’ll be able to persist user information within your web app, which was previously only available for access by webhook.

These new features for Interactive Canvas will be made available soon as part of a developer preview for Conversational Actions Developers. For more details on these new features, check out the preview page.

Updates to Transaction UX for Smart Displays

Also coming soon to Conversational Actions: we’re updating the workflow for completing transactions, allowing users to complete transactions from their smart screens by confirming the CVC code from their chosen payment method. Watch our demo video showing new transaction features on smart devices to get a feel for these changes.

Tips on Launching your Conversational Action

Make sure to catch our technical session Driving a successful launch for Conversational Actions to learn about some strategies for putting together a marketing team and go-to-market plan for releasing your Conversational Action.

AMA: Games on Google Assistant

If you’re interested in building Games for Google Assistant with Conversational Actions, you should check out the recording of our AMA, where Googlers answered questions from I/O attendees about designing, building, and launching games.

Smart Home Actions

The What’s new in Smart Home keynote covered several updates for Smart Home Actions. Following our continued emphasis on quality smart home integrations with the updated policy launch, we added new features to help you build engaging, reliable Actions for your users.

Test Suite and Analytics

The updated Test Suite for Smart Home now supports automatic testing, without the use of TTS. Additionally, the Analytics dashboards have been expanded with more detailed logs and in-depth error reporting to help you more quickly identify any potential issues with your Action. For a deeper dive into these enhancements, try out the Debugging the Smart Home workshop. There are also two new debugging codelabs to help you get more familiar with using these tools to improve the quality of your Action.

Notifications

We expanded support for proactive notifications to include the device traits RunCycle and SensorState. Users can now be proactively notified about multiple different device events. We also announced the release of follow-up responses, which enable your smart devices to notify users asynchronously about whether a device change succeeded or failed.

WebRTC

We added support for WebRTC to the CameraStream trait. Smart camera users can now benefit from lower latency and half-duplex talk between devices. As mentioned in the keynote, we will also be making updates to the other currently supported protocols for smart cameras.

Bluetooth Seamless Setup

To improve the onboarding experience, developers can now enable BLE (Bluetooth Low Energy) for device onboarding with Bluetooth Seamless Setup. Google Home and Nest devices can act as local hubs to provision and register nearby devices for any Action configured with local fulfillment.

Matter

Project CHIP has officially rebranded as Matter. Once the IP-based connectivity protocol officially launches, we will be supporting devices running the protocol. Watch the Getting started with Project CHIP tech session to learn more.

Ecosystem and Community

The women building voice AI and their role in the voice revolution

Voice AI is fundamentally changing how we interact with technology and its future will be a product of the people that build it. Watch this session to hear about the talented women shaping the Voice AI field, including an interview with Lilian Rincon, Sr. Director of Product Management at Google. Leslie also discusses strategies for achieving equal gender representation in Voice AI, an ambitious but essential goal.

AMA: How the Assistant Investment Program can help fund your startup

This “Ask Me Anything” session was hosted by the all-star team who runs the Google for Startups Accelerator: Voice AI. The team fielded questions from startups and investors around the world who are interested in building businesses based on voice technology. Check out the recording of this event here. The day after the AMA session, the 2021 cohort for the Voice AI accelerator had their demo day – you can catch the recording of their presentations here.

Image from the AMA titled: How the Assistant Investment Program can help fund your startup

Women in Voice Meetup

We connected with amazing women in Voice AI and discussed ways allies can help women in Voice to be more successful while building a more inclusive ecosystem. It was hosted by Leslie Garcia-Amaya, Jessica Dene Earley-Cha, Karina Alarcon, Mike Bifulco, Cathy Pearl, Toni Klopfenstein, Shikha Kapoor, and Walquiria Saad.

Smart home developer Meetups

One of the perks of I/O being virtual this year was the ability to connect with students, hobbyists, and developers around the globe to discuss the current state of Smart Home, as well as some of the upcoming features. We hosted 3 meetups for the APAC, Americas, and EMEA regions and gathered some great feedback from the community.

Assistant Google Developers Experts Meetup

Every year we host an Assistant Google Developer Expert meetup to connect and share knowledge. This year we were able to invite everyone who is interested in building for Google Assistant to network and connect with one another. At the end, several attendees came together at the Assistant Sandbox for a virtual photo!

Image of Google I/O Assistant meetup

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Bringing artworks to life with AR

Posted by Richard Adem, UX Engineer at Google Arts & Culture

What is Art Filter?

One of the best ways to learn about global culture is by trying on famous art pieces using Google’s Augmented Reality technology on your mobile device. What does it feel like to wear a three thousand year old necklace, put on a sixteenth century Japanese helmet or don pearl earrings and pose in a Vermeer?

Google Arts & Culture has created a new feature called Art Filter, allowing everyone to learn about culturally significant art pieces from around the world and put themselves inside famous paintings normally safely displayed in a museum.

We teamed up with the MediaPipe team, which offers cross-platform, customizable ML solutions, to combine ML with rendering and generate stunning visuals.

Working closely with the MediaPipe team to utilize their face mesh and 3D face transform allowed us to create custom effects for each of the artifacts we had chosen, and easily display them as part of the Google Arts & Culture iOS and Android apps.

gif of the art filter feature

Figure 1. The Art Filter feature.

The Challenges

We selected five iconic cultural treasures from around the world:

Given their diverse formats or textures each artwork or object required special approaches to bring it to life in AR.

gif of user wearing jewelry on art filter feature

Figure 2. User wearing the jewelry from Johannes Vermeer’s “Girl with a Pearl Earring” – Mauritshuis museum, The Hague.

Creating 3D objects that can be viewed from all sides, using 2D references.

Some of the artworks we selected are 2D paintings, and we wanted everyone to be able to immerse themselves in them. Our team of 3D artists and designers took high-resolution gigapixel images from Google Arts & Culture and projected them onto 3D meshes to texture them. We also extended the 2D textures all the way around the 3D meshes while maintaining the style of the original artist. This means that when you turn your head, the previously hidden parts of the piece are viewable from every angle, mimicking how the object would look in real life.

Gif of the Van Gogh Self-Portrait filter

Figure 3. The Van Gogh Self-Portrait filter – Musée d’Orsay, Paris.

Our cultural partners were immensely helpful during the creation of Art Filter. They sourced a huge number of reference images, allowing us to reproduce the pieces accurately from photographs taken at different angles and to help them appear to fit into the “real world” in AR (using size comparisons).

Layering elements of the effect along with the image of the user.

Art Filter takes an image of the user from their device’s camera and uses that to generate a 3D mesh of the user’s face. All processing of user images or video feeds is run entirely on device. We do not use this feature to identify or collect any personal biometric data; the feature cannot be used to identify an individual.

The image is then reused to texture the face mesh, generated in real-time on-device with MediaPipe Face Mesh, representing it in the virtual 3D world within the device. We then add virtual 2D and 3D layers around the face to complete the effect. The Tengu Helmet, for example, sits on top of the face mesh in 3D and is “attached” to the face mesh so it moves around when the user moves their head around. The Vermeer earrings with a headscarf and Frida Kahlo’s necklace are attached to the user’s image in a similar way. The Van Gogh effect works slightly differently since we still use a mesh of the user’s face but this time we apply a texture from the painting.
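For readers curious what that step looks like outside the app, here is a minimal sketch using MediaPipe’s Python Face Mesh solution on a single image; the Art Filter pipeline itself runs on-device inside the Google Arts & Culture app, and the file name below is a placeholder:

    import cv2
    import mediapipe as mp

    # "face.jpg" is a placeholder input image.
    image = cv2.imread("face.jpg")

    with mp.solutions.face_mesh.FaceMesh(
            static_image_mode=True,   # single image rather than a video stream
            max_num_faces=1,
            refine_landmarks=True) as face_mesh:
        # MediaPipe expects RGB input; OpenCV loads images as BGR.
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        print(f"Face mesh with {len(landmarks)} 3D landmarks detected")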

We use 2D elements to complete the scene as well, such as the backgrounds in the Kahlo and Van Gogh paintings. These were created by carefully separating the painting subjects from the background then placing them behind the user in 3D. You may notice that Van Gogh’s body is also 2D, shown as a “billboard” so that it always faces the camera.

Figure 4. Creating the 3D mesh showing layers and masks.

Using shaders for different materials such as the metal helmet.

To create realistic-looking materials we used physically based rendering shaders. You can see this on the Tengu helmet: it has a bumpy surface that is affected by the real-life light captured by the device. This requires creating extra textures, called texture maps, which use colors to represent how bumpy or shiny the 3D object should appear. Texture maps look like bright pink and blue images but tell the renderer about tiny details on the surface of the object without creating any extra polygons, which could slow down the frame rate of the feature.

Figure 5. User wearing Helmet with Tengu Mask and Crows – The Metropolitan Museum of Art.

Conclusion

We hope you enjoy the collection we have created in Art Filter. Please visit and try for yourself! You can also explore more amazing ML features with Google Arts & Culture such as Art Selfie and Art Transfer.

We hope to bring many more filters to the feature and are looking forward to new features from MediaPipe.

Google updates Passes API to store COVID vaccination and testing information on Android devices

Posted by Irfan Faizullabhoy

Google has updated its Passes API to enable a simple and secure way to store and access COVID vaccination and test cards on Android devices. Starting today, developers from healthcare organizations, government agencies and organizations authorized by public health authorities to distribute COVID vaccines and/or tests will have access to these APIs to create a digital version of COVID vaccination or test information. This will roll out initially in the United States followed by other countries.

Image of three smart phones side by side showing Covid vaccination cards

Example COVID Cards from Healthvana, a company serving Los Angeles County

Once a user stores the digital version of the COVID Card to their device, they will be able to access it via a shortcut on their device home screen, even when they are offline or in areas that have weak internet service. To use this feature, the device needs to run Android 5 or later and be Play Protect certified. Installing the Google Pay app is not a requirement to access COVID Cards.

The COVID Card has been designed with privacy and security at its core.

  • Storing information: The user’s COVID vaccination and test information is stored on their Android device. If a user wants to access this information on multiple devices, the user will need to manually store it on each device. Google does not retain a copy of the user’s COVID vaccination or test information.
  • Sharing information: Users can choose to show their COVID Card to others. The information in the user’s COVID Card is not shared by Google with its various services or third parties and it is not used for targeting ads.
  • Securing information: A lock screen is required in order to store a COVID Card on a device. This is for added security and to protect the user’s personal information. When a user wants to access their COVID Card, they will be asked for the password, PIN, or biometric method set up for their Android device.

If you are a qualified provider, please sign up to share your interest here. And, for more information about COVID cards and their privacy and security features, please see the help center.

What do you think?

Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDevs and follow us @GooglePayDevs.

Upcoming security changes to Google’s OAuth 2.0 authorization endpoint in embedded webviews

Posted by Badi Azad, Group Product Manager (@badiazad)

The Google Identity team is continually working to improve Google Account security and create a safer and more secure experience for our users. As part of that work, we recently introduced a new secure browser policy prohibiting Google OAuth requests in embedded browser libraries commonly referred to as embedded webviews. All embedded webviews will be blocked starting on September 30, 2021.

Embedded webview libraries are problematic because they allow a nefarious developer to intercept and alter communications between Google and its users by acting as a “man in the middle.” An application embedding a webview can modify or intercept network requests, insert custom scripts that can potentially record every keystroke entered in a login form, access session cookies, or alter the content of the webpage. These libraries also allow the removal of key elements of a browser that hold user trust, such as the guarantee that the response originates from Google’s servers, display of the website domain, and the ability to inspect the security of a connection. Additionally the OAuth 2.0 for Native Apps guidelines from IETF require that native apps must not use embedded user-agents such as webviews to perform authorization requests.

Embedded webviews not only affect account security, they could affect usability of your application. The sandboxed storage environment of an embedded webview disconnects a user from the single sign-on features they expect from Google. A full-featured web browser supports multiple tools to help a logged-out user quickly sign-in to their account including password managers and Web Authentication libraries. Google’s users also expect multiple-step login processes, including two-step verification and child account authorizations, to function seamlessly when a login flow involves multiple devices, when switching to another app on the device, or when communicating with peripherals such as a security key.

Instructions for impacted developers

You must register an appropriate OAuth client for each platform (Desktop, Android, iOS, etc.) on which your app will run, in compliance with Google’s OAuth 2.0 Policies. You can verify that the OAuth client ID used by your installed application is the most appropriate choice for your platform by visiting the Google API Console’s Credentials page. A “Web application” client type in use by an Android application is an example of mismatched use. Reference our OAuth 2.0 for Mobile & Desktop Apps guide to properly integrate the appropriate client for your app’s platform.

Applications opening all links and URLs inside an embedded webview should follow the instructions below for Android, iOS, macOS, and captive portals:

Android

Embedded webviews implementing or extending Android WebView do not comply with Google’s secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user’s preferred routing to their chosen default web browser or another developer’s preferred routing to its installed app through Android App Links. Apps may alternatively open general links to third-party sites in Android Custom Tabs.

iOS & macOS

Embedded webviews implementing or extending WKWebView, or the deprecated UIWebView, do not comply with Google’s secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user’s preferred routing to their chosen default web browser or another developer’s preferred routing to its installed app through Universal Links. Apps may alternatively open general links to third-party sites in SFSafariViewController.

Captive portals

If your computer network intercepts network requests, redirecting to a web portal supporting authorization with a Google Account, your web content could be displayed in an embedded webview controlled by a captive network assistant. You should provide potential viewers instructions on how to access your network using their default web browser. For more information reference the Google Account Help article Sign in to a Wi-Fi network with your Google Account.

New IETF standards adopted by Android and iOS may help users access your captive pages in a full-featured web browser. Captive networks should integrate the Captive-Portal Identification in DHCP and Router Advertisements (RAs) proposed IETF standard to inform clients that they are behind a captive portal enforcement device when joining the network, rather than relying on traffic interception. Networks should also integrate the Captive Portal API proposed IETF standard to quickly direct clients to a required portal URL to access the Internet. For more information, reference Captive portal API support for Android and Apple’s How to modernize your captive network developer articles.

Test for compatibility

If you’re a developer that currently uses an embedded webview for Google OAuth 2.0 authorization flows, be aware that embedded webviews will be blocked as of September 30, 2021. To verify whether the authorization flow launched by your application is affected by these changes, test your application for compatibility and compliance with the policies outlined in this post.

You can add a query parameter to your authorization request URI to test for potential impact to your application before September 30, 2021. The following steps describe how to adjust your current requests to Google’s OAuth 2.0 Authorization Endpoint to include an additional query parameter for testing purposes.

  1. Go to where you send requests to Google’s OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
  2. Add the disallow_webview parameter with a value of true to the query component of the URI. Example: disallow_webview=true

An implementation affected by the planned changes will see a disallowed_useragent error when loading Google’s OAuth 2.0 Authorization Endpoint, with the disallow_webview=true query string, in an embedded webview instead of the authorization flows currently displayed. If you do not see an error message while testing the effect of the new embedded webview policies, your app’s implementation might not be impacted by this announcement.
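Here is a minimal sketch of building such a test URI; the client ID, redirect URI, and scope below are placeholders for your own OAuth client’s values:

    from urllib.parse import urlencode

    # Placeholder client values; substitute your own OAuth client configuration.
    params = {
        "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
        "redirect_uri": "https://example.com/oauth2callback",
        "response_type": "code",
        "scope": "openid email",
        "disallow_webview": "true",  # test-only parameter described above
    }
    auth_uri = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
    print(auth_uri)  # load this in the embedded webview you want to test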

Note: A website’s ability to request authorization from a Google Account may be impacted due to another developer’s decision to use an embedded webview in their app. For example, if a messaging or news application opens links to your site in an embedded webview, the features available on your site, including Google OAuth 2.0 authorization flows, may be impacted. If your site or app is impacted by the implementation choice of another developer please contact that developer directly.

User-facing warning message

A warning message may be displayed in non-compliant authorization requests after August 30, 2021. The warning message will include the user support email defined in your project’s OAuth consent screen in Google API Console and direct the user to visit our Sign in with a supported browser support article.

A screenshot showing an example Google OAuth authorization dialog including a warning message: To help protect your account, Google will soon block apps that don't comply with Google's embedded webview policy. You can let the app developer (moo@gmail.com) know that this app should stop using embedded webviews

Developers may acknowledge the upcoming enforcement and suppress the warning message by passing a specific query parameter to the authorization request URI. The following steps explain how to adjust your authorization requests to include the acknowledgement parameter:

  1. Go to where you send requests to Google’s OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
  2. Add an ack_webview_shutdown parameter with a value of the enforcement date: 2021-09-30. Example: ack_webview_shutdown=2021-09-30

A successful request to Google’s OAuth 2.0 Authorization Endpoint including the acknowledgement query parameter and enforcement date will suppress the warning message in non-compliant authorization requests. All non-compliant authorization requests will display a disallowed_useragent error when loading Google’s OAuth 2.0 Authorization Endpoint after the enforcement date.

Related content

With 1,600 students by his side, Jack Lee grew the largest Google Developer Student Club in the world

Posted by Noa Havazelet, Program Manager, Google Developer Student Clubs, UK & Ireland

With 1,600 students by his side, Jack Lee grew the largest Google Developer Student Club in the world in just 6 months at the London School of Economics (LSE). A lifelong athlete who loves leading teams, Jack saw that reigniting his university’s GDSC would be a great opportunity to have a large impact on the local tech scene. With a heavy focus on partnerships, Jack connected members of his club with leaders at top companies and other student groups across Scotland, France, Norway, Canada, and Nigeria. These collaborations enabled students to practice networking while gaining access to key internships.

Learn more about Jack and his club below.

Image of Jack Lee

Image of Jack Lee speaking at a GDSC event

Student-to-student mentorship with impact

Leaders like Jack Lee make Google Developer Student Clubs around the world special by providing a trusted and fun space for student-to-student mentorship. When students step up to help their peers, a strong camaraderie and support system forms beyond the classroom.

One of the secrets to Jack’s success was to appeal to both computer science students as well as those with a non-technical background, like business majors. To inspire more students with different backgrounds to join the club, Jack put together a team of additional student leaders. Under his leadership, this team had the freedom to independently build tech-focused events that would interest students across the university.

Image of GDSC LSE team

After the first semester, Jack’s approach was working. They hosted over 80 events, covering a wide range of topics including front end web development and career talks with financial firms.

The intersection of students with different backgrounds inspired club members to work together on community projects, utilizing their different skills. In fact, a few club members formed teams to solve for one of the United Nations’ 17 Sustainable Development Goals. As part of the Google Developer Student Clubs 2021 Solution Challenge, students from the London School of Economics developed prototype solutions for NGOs on 1) wildfire analysis using TensorFlow, 2) raising donations and grant access, and 3) increasing voter registrations.

As more students continued to join their GDSC, Jack decided to up the tempo to keep the momentum going.

Connecting students to companies

Since the London School of Economics is not primarily a tech-focused university, Jack requested support from a team at Google for Startups. Together they reached out to some of the world’s largest firms and startups to collaborate on events and specialized programs for the student club. Jack’s GDSC established relationships with 6 partners and 3 local sponsors from startups, NGOs, and financial firms. All these partners contributed to nearly 30 events throughout the academic year, which included:

  • Introductory Python courses
  • Mentorship sessions
  • Networking events
  • Talks with CEOs
  • Panel talks across industries

These events started catching the attention of students across Europe and Asia, with some students who could not afford to attend university reaching out for technical learning resources and opportunities.

Connecting 150 students to mentors from different startups is one of the achievements that makes Jack and the club leaders most proud.

This is yet another example of how Jack’s determination to grow a stronger community led him to build a global Google Developer Student Club that left a profound impact on his fellow students.

If you’re also a student and want to join a Google Developer Student Club community like this, find one near you here.

Pride Week with Google Developer Group Floripa

Posted by Rodrigo Akira Hirooka, Program Manager, Google Developer Groups Latin America

Lorena Locks is on a mission to grow the LGBTQIA+ tech community in Brazil. Her inspiration came from hosting Google Developer Group (GDG) Floripa meetups with her friend Catarina, where they were able to identify a need in their community.

“We felt there wasn’t a forum to meet people in the tech industry that reflected ourselves. So we decided to think bigger.”

Image from GDG Floripa event

Pride Week at GDG Floripa, Brazil

As a Women Techmakers Ambassador and Google Developer Group lead in Floripa, Brazil, Lorena worked with the local community to create a week of special events, including over 12 talks and sessions centered on empowering the LGBTQIA+ experience in tech.

The events took place every night at 7pm from June 21st – 25th and focused on creating inclusive representation and building trust among developer communities.

Lorena’s commitment to this underrepresented group gained the attention of many local leaders in tech who identify as LGBTQIA+ and volunteered as speakers during Pride Week.

By creating spaces to talk about important LGBTQIA+ topics in tech, Pride Week with Google Developer Groups Floripa included sessions on:

  • Spotting binary designs in products
  • How to build inclusive tech teams
  • Being an LGBTQIA+ manager
  • Developing ‘Nohs Somos’, an app for the LGBTQIA+ community
  • The best practices for D&I
  • General Personal Data Protection Law and inclusive gender questions on forms

Image from event

Speakers in photo: Lorena Locks and Catarina Schein

With one hundred percent of the speakers at these events coming from the LGBTQIA+ community, Pride Week at GDG Floripa was a high-impact program that has gone on to inspire GDGs around the world.

If you want to learn more about how to get involved in Google Developer Group communities like this one, visit the site here.