Open Source Pass Converter for Mobile Wallets

Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each of the mobile wallet apps implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain separate codebases for each platform.

As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we’re excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

Moving image of Pass Converter successfully converting an external pkpass file to a Google Wallet pass

The Pass Converter launches with support for the Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for the other. The following pass types are supported on their respective platforms:

  • Event tickets
  • Generic passes
  • Loyalty/Store cards
  • Offers/Coupons
  • Flight/Boarding passes
  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features let you tailor conversion to your needs.

  • A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
  • For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
  • Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.
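As a sketch of what such a mapping could look like, the snippet below writes a hypothetical hints.json. The field names here are illustrative assumptions only, not the project's actual schema, which is documented in the repository.

```python
import json

# Hypothetical mapping from Google Wallet pass properties to another
# platform's fields. Every key below is an assumption for illustration;
# consult the Pass Converter repository for the real hints.json schema.
hints = {
    "textModulesData": {
        "member_since": "auxiliaryFields",
        "tier": "secondaryFields",
    },
}

with open("hints.json", "w") as f:
    json.dump(hints, f, indent=2)
```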

If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

The following command provides a demo web page on http://localhost:3000 to test converting passes.

node app.js demo

The next command converts passes locally. If the output path is omitted, the Pass Converter will output JSON to the terminal (for PKPass files, this will be the contents of pass.json).

node app.js <pass input path> <pass output path>

Lastly, the following command runs the Pass Converter as a web service. This service accepts POST requests to the root URL (e.g. http://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file.

node app.js
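If you want to call the web service programmatically, a minimal client might look like the following Python sketch, which uses only the standard library. The form field name ("pass") is an assumption; check the project's documentation for the exact request format it expects.

```python
import io
import urllib.request
import uuid

def build_multipart(field_name: str, filename: str, payload: bytes):
    """Build a multipart/form-data body containing a single file part."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n".encode()
    )
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), boundary

def convert_pass(url: str, filename: str, payload: bytes) -> bytes:
    """POST a pass file to a running Pass Converter web service."""
    body, boundary = build_multipart("pass", filename, payload)
    request = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# Example (requires the service started with `node app.js`):
# converted = convert_pass("http://localhost:3000/", "ticket.pkpass",
#                          open("ticket.pkpass", "rb").read())
```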

Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

Build smarter and ship faster with the latest updates across our ecosystem

Posted by Jeanine Banks, VP/GM, Developer X and DevRel

At last week’s Made by Google launch event, we announced several new hardware products including the Pixel 7 and Pixel 7 Pro, Google Pixel Watch, and Google Pixel Tablet—a suite of innovative products that we’re excited about. While these devices are sure to delight users, the announcements got me thinking—what will these changes mean for developers?

It’s hard to build experiences that let users enjoy the best that their devices have to offer. Undoubtedly this brings a level of complexity for developers who will need to build and test against multiple OS updates and new features. That’s the thing about development—the environment is constantly evolving. We want to cut through the complexity and make it simpler to choose the technology you use, whether for an app on one device or across large and small screens.

Earlier this year at Google I/O, we shared our focus on making developer tools work better together, and providing more guidance and best practices to optimize your end-to-end workflow. For example, we announced the new App Quality Insights window in Android Studio that shows crash data from Firebase Crashlytics directly inside the IDE to make it easier to discover, investigate, and fix offending lines of code.

But our work doesn’t stop once I/O ends. We work all year round to offer increasingly flexible, open and integrated solutions so you can work smarter, ship faster, and confidently set up your business for the future.

That’s why we’re excited to connect with you again—both in person and virtually—to share more recent product updates. Over the next three months, we have over 200 events in more than 50 countries reaching thousands of developers through product summits, community events, industry conferences, and more. Here are a few:

DevFest | Now – December
Local Google Developer Groups (GDG) organize these technology conferences according to the needs and interests of the region’s developer community, and in the local language. Tune in virtually or join in person.
Chrome | Multiple dates
This year the Chrome team will meet you at your favorite regional developer conferences and events, in addition to online forums across time zones. Join us on the journey to build a better web. Check out the calendar.
Google Cloud Next | October 11-13
Learn how to transform with Google Cloud to build apps faster and make smarter business decisions.
Firebase Summit | October 18
Join this hybrid event online or in person in New York City to hear how Firebase can help you accelerate app development, run your app with confidence, and scale your business.
Android Dev Summit | Beginning October 24
Learn from the source about building excellent apps across devices, coming to you online and around the world. We’ll be sharing the sessions live on YouTube in three tracks spread across three weeks, including Modern Android Development on Oct 24, form factors on Nov 9, and platform on Nov 14.
BazelCon | November 16-17
Hosted by Bazel and Google Open Source, BazelCon connects you with the team, maintainers, contributors, users, and friends to learn how Bazel automates software builds and tests on Android and iOS.
Women in ML Symposium | Coming in December
Join open source communities, seek out leadership opportunities, share knowledge, and speak freely about your career development with other women and gendered minorities in a safe space. Catch up on last year’s event.
Flutter Event | Coming in December/January
Hear exciting product updates on Google’s open source framework for building beautiful, natively compiled, multi-platform applications from a single codebase. In the meantime, re-live last year’s event.

We look forward to the chance to meet with you to share technical deep dives, give you hands-on learning opportunities, and hear your feedback directly. After you have heard what we’re up to, make sure to access our comprehensive documentation, training materials, and best practices to help speed up your development and quickly guide you towards success.

Mark your calendars and register now to catch the latest updates.

Register now for Firebase Summit 2022!

Posted by Grace Lopez, Product Marketing Manager

One of the best things about Firebase is our community, so after three long years, we’re thrilled to announce that our seventh annual Firebase Summit is returning as a hybrid event with both in-person and virtual experiences! Our one-day, in-person event will be held at Pier 57 in New York City on October 18, 2022. It will be a fun reunion for us to come together to learn, network, and share ideas. But if you’re unable to travel, don’t worry: you’ll still be able to take part in the activities online from your office, desk, or couch, wherever you are in the world.

Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business. Registration is now open for both the physical and virtual events! Read on for more details on what to expect.

Keynote full of product updates

In-person and livestreamed

We’ll kick off the day with a keynote from our leaders, highlighting all the latest Firebase news and announcements. With these updates, our goal is to give you a seamless and secure development experience that lets you focus on making your app the best it can be.

#AskFirebase Live

In-person and livestreamed

Have a burning question you want to ask us? We’ll take questions from our in-person and virtual attendees and answer them live on stage during a special edition of everyone’s favorite, #AskFirebase.

NEW! Ignite Talks

In-person and livestreamed

This year at Firebase Summit, we’re introducing Ignite Talks: 7–15 minute, bite-size talks focused on hot topics, tips, and tricks to help you get the most out of our products.

NEW! Expert-led Classes

In-person and will be released later

You’ve been asking us for more technical deep dives, so this year we’ll also be running expert-led classes at Firebase Summit. These platform-specific classes will be designed to give you comprehensive knowledge and hands-on practice with Firebase products. Initially, these classes will be exclusive to in-person attendees, but we’ll repackage the content for self-paced learning and release them later for our virtual attendees.

We can’t wait to see you

In addition, Firebase Summit will be full of all the other things you love – interactive demos, lots of networking opportunities, exciting conversations with the community…and a few surprises too! The agenda is now live, so don’t forget to check it out! In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter and LinkedIn to join the conversation using #FirebaseSummit

Introducing Discovery Ad Performance Analysis

Posted by Manisha Arora, Nithya Mahadevan, and Aritra Biswas, gPS Data Science team

Overview of Discovery Ads and need for Ad Performance Analysis

Discovery ads, launched in May 2019, let advertisers easily extend their social advertising reach across YouTube, Google Feed, and Gmail worldwide. They give brands a new opportunity to reach 3 billion people as they explore their interests and search for inspiration across their favorite Google feeds (YouTube, Gmail, and Discover) — all with a single campaign. Learn more about Discovery ads here.

Because of these unique characteristics, customers need a data-driven method to identify the textual and imagery elements in Discovery ad copies that drive the Interaction Rate of their Discovery Ad campaigns, where an interaction is defined as the main user action associated with an ad format: clicks and swipes for text and Shopping ads, views for video ads, calls for call extensions, and so on.

Interaction Rate = interactions / impressions

“Customers need a data driven method to identify textual & imagery elements in Discovery Ad copies that drive Interaction Rate of their campaigns.”

– Manisha Arora, Data Scientist

Our analysis approach:

The Data Science team at Google is investing in a machine learning approach to uncover insights from complex unstructured data and provide machine learning based recommendations to our customers. Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.

We follow a six-step based approach for Discovery Ad Performance Analysis:

  • Understand Business Goals
  • Build Creative Hypothesis
  • Data Extraction
  • Feature Engineering
  • Machine Learning Modeling
  • Analysis & Insight Generation

To begin, we work closely with advertisers to understand their business goals, current ad strategy, and future goals. We map these closely to industry insights to draw a larger picture and provide a customized analysis for each advertiser. Next, we build hypotheses that best describe the problem we are trying to solve. An example of a hypothesis might be: “Do superlatives (words like ‘top’ and ‘best’) in the ad copy drive performance?”

“Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.”

– Manisha Arora, Data Scientist

Once we have a hypothesis we are working towards, the next step is to deep-dive into the technical analysis.

Data Extraction & Pre-processing

Our initial dataset includes raw ad text, imagery, performance KPIs & target audience details from historic ad campaigns in the industry. Each Discovery ad contains two text assets (Headline and Description) and one image asset. We then apply ML to extract text and image features from these assets.

Text Feature Extraction

We apply NLP to extract text features from the ad text. We pass the raw text of the ad headline and description through Google Cloud’s Language API, which parses it into our feature set: commonly used keywords, sentiment, and so on.
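As a rough illustration of the kind of feature set this step produces, the stand-in below computes word count, character count, frequent words, and a crude lexicon-based sentiment score. In the real pipeline these values come from the Language API; the tiny sentiment lexicon here is an invented toy.

```python
import re
from collections import Counter

# Toy sentiment lexicon -- an illustrative assumption, not the
# Language API's actual sentiment model.
POSITIVE = {"best", "great", "free", "save"}
NEGATIVE = {"worst", "fee", "penalty"}

def extract_text_features(headline: str, description: str) -> dict:
    """Derive simple text features from an ad's headline and description."""
    text = f"{headline} {description}".lower()
    tokens = re.findall(r"[a-z']+", text)
    counts = Counter(tokens)
    # Crude lexicon-based sentiment score in [-1, 1].
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    sentiment = (pos - neg) / max(pos + neg, 1)
    return {
        "word_count": len(tokens),
        "char_count": len(text),
        "top_words": [w for w, _ in counts.most_common(3)],
        "sentiment": sentiment,
    }
```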


Image Feature Extraction

We apply image processing to extract image features from the ad imagery. We pass the raw images through Google Cloud’s Vision API and extract image components including objects, people, background, and lighting.

Together, these form a holistic set of features extracted from the ad content.

Feature Design

Text Feature Design

There are two types of text features included in DisCat:

1. Generic text feature

a. These are features returned by Google Cloud’s Language API including sentiment, word / character count, tone (imperative vs indicative), symbols, most frequent words and so on.

2. Industry-specific value propositions

a. These are features that only apply to a specific industry (e.g. finance) that are manually curated by the data science developer in collaboration with specialists and other industry experts.

  • For example, for the finance industry, one value proposition can be “Price Offer”. A list of keywords and phrases related to price offers (e.g. “discount”, “low rate”, “X% off”) is curated based on domain knowledge to identify this value proposition in ad copies. NLP techniques (e.g. WordNet synsets) and manual examination are used to make sure this list is inclusive and accurate.
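A minimal sketch of this value-proposition tagging might look like the following; the keyword lists are invented stand-ins for the expert-curated lists described above.

```python
# Hypothetical curated keyword lists for finance value propositions.
# Real lists would be built with domain experts and WordNet expansion.
VALUE_PROPS = {
    "price_offer": ["discount", "low rate", "% off", "no fee"],
    "convenience": ["in minutes", "one tap", "24/7"],
}

def tag_value_props(ad_copy: str) -> set:
    """Return the value propositions whose keywords appear in the copy."""
    text = ad_copy.lower()
    return {
        prop
        for prop, keywords in VALUE_PROPS.items()
        if any(kw in text for kw in keywords)
    }
```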

Image Feature Design

Like the text features, image features can largely be grouped into two categories:

1. Generic image features

a. These features apply to all images and include the color profile, whether any logos were detected, how many human faces are included, etc.

b. The face-related features also include some advanced aspects: we look for prominent smiling faces looking directly at the camera, we differentiate between individuals vs. small groups vs. crowds, etc.

2. Object-based features

a. These features are based on the list of objects and labels detected in all the images in the dataset, which can often be a massive list including generic objects like “Person” and specific ones like particular dog breeds.

b. The biggest challenge here is dimensionality: we have to cluster together related objects into logical themes like natural vs. urban imagery.

c. We currently have a hybrid approach to this problem: we use unsupervised clustering approaches to create an initial clustering, but we manually revise it as we inspect sample images. The process is:

  • Extract object and label names (e.g. Person, Chair, Beach, Table) from the Vision API output and filter out the most uncommon objects
  • Convert these names to 50-dimensional semantic vectors using a Word2Vec model trained on the Google News corpus
  • Using PCA, extract the top 5 principal components from the semantic vectors. This step takes advantage of the fact that each Word2Vec neuron encodes a set of commonly adjacent words, and different sets represent different axes of similarity and should be weighted differently
  • Use an unsupervised clustering algorithm, namely either k-means or DBSCAN, to find semantically similar clusters of words
  • We are also exploring augmenting this approach with a combined distance metric:

d(w1, w2) = a * (semantic distance) + b * (co-appearance distance)

where the latter is a Jaccard distance metric
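The combined metric can be sketched as follows, with cosine distance standing in for the semantic term and the weights a and b chosen arbitrarily for illustration. The co-appearance term treats each word as the set of ads it appears in and takes the Jaccard distance between those sets.

```python
import math

def cosine_distance(u, v):
    """Semantic distance between two embedding vectors (1 - cosine sim)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return 1.0 - dot / norm

def jaccard_distance(ads_with_w1, ads_with_w2):
    """Co-appearance distance: 1 - |intersection| / |union| of ad sets."""
    union = ads_with_w1 | ads_with_w2
    if not union:
        return 0.0
    return 1.0 - len(ads_with_w1 & ads_with_w2) / len(union)

def combined_distance(u, v, ads_u, ads_v, a=0.7, b=0.3):
    """d(w1, w2) = a * semantic distance + b * co-appearance distance."""
    return a * cosine_distance(u, v) + b * jaccard_distance(ads_u, ads_v)
```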

Each of these components represents a choice the advertiser made when creating the messaging for an ad. Now that we have a variety of ads broken down into components, we can ask: which components are associated with ads that perform well or not so well?

We use a fixed effects[1] model to control for unobserved differences in the contexts in which different ads were served. This is because the features we measure are observed multiple times in different contexts, i.e. ad copy, audience group, time of year, and the device on which the ad is served.

The trained model estimates the impact of individual keywords, phrases, and image components in the Discovery ad copies. The model estimates Interaction Rate (denoted ‘IR’ in the following formulas) as a function of individual ad copy features plus controls:
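The equation image from the original post is not recoverable here; a standard fixed-effects form consistent with the description, offered as a sketch rather than the authors' exact specification, would be:

```latex
\mathrm{IR}_{i} = \alpha_{c(i)} + \sum_{k} \beta_{k}\, x_{ik} + \varepsilon_{i}
```

where \(\alpha_{c(i)}\) is the fixed effect for the context (audience group, device, time of year) in which ad \(i\) was served, \(x_{ik}\) are its ad copy features, and \(\beta_{k}\) are the estimated feature effects.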

We use ElasticNet to spread the effect of features in the presence of multicollinearity and improve the explanatory power of the model:
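The accompanying equation is likewise missing from this copy; the standard elastic net objective, shown here as a stand-in for the original formula, combines L1 and L2 penalties:

```latex
\hat{\beta} = \arg\min_{\beta}\; \frac{1}{2n}\,\lVert \mathrm{IR} - X\beta \rVert_2^2
  + \lambda \left( \alpha\,\lVert \beta \rVert_1
  + \frac{1-\alpha}{2}\,\lVert \beta \rVert_2^2 \right)
```

where \(\lambda\) sets the overall regularization strength and \(\alpha\) mixes the L1 and L2 penalties.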

“Machine Learning model estimates the impact of individual keywords, phrases, and image components in discovery ad copies.”

– Manisha Arora, Data Scientist


Outputs & Insights

Outputs from the machine learning model help us determine the significant features. The coefficient of each feature represents its percentage-point effect on CTR.

In other words, if the mean CTR without the feature is X% and feature ‘xx’ has a coefficient of Y, then the mean CTR with feature ‘xx’ included will be (X + Y)%. This can help us determine the expected CTR if the most important features are included in the ad copies.
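A trivial sketch of this percentage-point arithmetic (the baseline and coefficients below are made-up numbers):

```python
def expected_ctr(base_ctr: float, feature_coeffs: list) -> float:
    """Each included feature's coefficient adds percentage points
    to the baseline CTR."""
    return base_ctr + sum(feature_coeffs)

# e.g. a 2.0% baseline plus features with coefficients 0.5 and 0.3
# implies an expected CTR of 2.8%.
```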

Key-takeaways (sample insights):

We analyze keywords and imagery tied to the unique value propositions of the product being advertised. There are six key value propositions we study in the model. The following are sample insights from these analyses:


Although insights from DisCat are quite accurate and highly actionable, the model does have a few limitations:

1. The current model considers individual keywords rather than groups of keywords that might be driving ad performance (for example, the phrase “Buy Now” instead of the individual keywords “Buy” and “Now”).

2. Inference and predictions are based on historical data and aren’t necessarily an indication of future success.

3. Insights are based on industry insights and may need to be tailored for a given advertiser.

DisCat breaks down exactly which features are working well for the ad and which ones have scope for improvement. These insights can help identify high-impact keywords in the ads, which can then be used to improve ad quality and thus business outcomes. As a next step, we recommend testing the new ad copies with experiments for a more robust analysis. The Google Ads A/B testing feature also allows you to create and run experiments to test these insights in your own campaigns.


Discovery ads are a great way for advertisers to extend their social outreach to millions of people across the globe. DisCat helps break down Discovery ads by analyzing text and images separately, using advanced ML/AI techniques to identify the key aspects of an ad that drive greater performance. These insights help advertisers identify room for growth, identify high-impact keywords, and design better creatives that drive business outcomes.


Thank you to Shoresh Shafei and Jade Zhang for their contributions. Special mention to Nikhil Madan for facilitating the publishing of this blog.


  1. Greene, W.H. (2011). Econometric Analysis, 7th ed. Prentice Hall.

     Cameron, A.C., & Trivedi, P.K. (2005). Microeconometrics: Methods and Applications.

Come to the Tag1 & Google Performance Workshop at DrupalCon Europe 2022, Prague

Posted by Andrey Lipattsev, EMEA CMS Partnerships Lead

TL;DR: If you’re attending @DrupalConEur, submit your URL to get your UX and performance right on #Drupal at the Tag1 & Google interactive workshop.

Getting your User Experience right, which includes performance, is critical for success. It’s a key driver of many success metrics and a factor taken into account by platforms, including search engines, that surface links to your site.

Quantifying User Experience is not always easy, so one way to measure, track, and improve it is by using Core Web Vitals (CWV). Building a site with great CWV is, on average, easier on Drupal than on many platforms, and yet there are certain tips and pitfalls you should be aware of.

In this workshop the team from Tag1 and Google (Michael Meyers, Andrey Lipattsev and others) will use real-life examples of Drupal-based websites to illustrate some common pain points and the corresponding solutions. If you would like us to take a look at your website and provide actionable advice, please submit the URL via this link. The workshop is interactive, so bring your laptop – we’ll get you up and running and teach you hands-on how to code the relevant improvements.

We cannot guarantee that all the submissions will be analysed as this depends on the number of submissions and the time that we have. However, we will make sure that all the major themes cutting across the submitted sites will be covered with relevant solutions.

See you in Prague!

Date & Time: Wednesday 21.09.2022, 16:15-18:00

Updates to Emoji: New Characters, New Animation, New Color Customization, and More!

Posted by Jennifer Daniel, Emoji and Expression Creative Director

It’s official: new emoji are here, there, and everywhere.

But what exactly is “new” and where is “here”? Great question.

Emoji have long eclipsed their humble beginnings in SMS text messages in the 1990s. Today, they appear in places you’d never expect, like self-checkout kiosks, television screens and yes, even refrigerators 😂. As emoji increase in popularity and advance in how they are used, the Noto Emoji project has stepped up our emoji game to help everyone get “🫠” without having to buy a new device (or a new refrigerator).

Over the past couple of years we’ve been introducing a suite of updates to make it easier than ever for apps to embrace emoji. Today, we’re taking it a step further by introducing new emoji characters (in color and in monochrome), metadata like shortcodes, a new font standard called COLRv1, open source animated emotes, and customization features in Emoji Kitchen. Now it’s easier than ever to operate at the speed of language online.

New Emoji!

First and foremost, earlier today the Unicode Consortium published all data files associated with the Unicode 15.0 release, including 31 new emoji characters.🎉

The collection includes a wing (🪽), a leftwards and a rightwards hand, and a shaking face (🫨). Now you too can make pigs fly (🐖🪽), high-five (🫸🏼🫷🏿), and shake in your boots, all in emoji form (🫨🫨🫨🫨🫨).

These new characters bring our emoji total to 3,664, all of which are coming to Android soon and will become available across Google products early next year.

Can’t wait until then? You can download the font and use it today (wherever color vector fonts are supported). Our entire emoji library, including the source files and associated metadata like shortcodes, is open source on GitHub for you to build with and build on. (Note: keep an eye open for those source files on GitHub later this week.)

And before you ask, yes the variable monochrome version of Noto Emoji that launched earlier this year is fully up to date to the new Unicode Standard. 🪿🫎🪮

Dancing Emotes

While emoji are almost unrecognizable today from what they were in the late 1990s, there are some things I miss about the original emoji sets from Japan. Notably, the animation. Behold the original dancer emoji from phone operator KDDI:

This animation is so good. Go get it, KDDI dancer.

Just as language doesn’t stand still, neither do emoji. Say hello to our first set of animations!!!!!

Scan the collection, download in your preferred file format, and watch them dance. You may have already seen a few in the Messages by Google app which supports these today. The artwork is available under the CC BY 4.0 license.  

New Color Font Support

Emoji innovation isn’t limited to mobile anymore, and there is a lot to be explored in web environments. Thanks to a new font format called COLRv1, color fonts — such as Noto Color Emoji — can render with the crispness we’ve come to expect from digital imagery. You can also do some sweet things to customize the appearance of color fonts. If you’re viewing this on the latest version of Chrome, go ahead and give it a whirl.

(Having trouble using this demo? Please update to the latest version of Chrome.)

Make a vaporwave duck

Or a duck from the 1920’s

Softie duckie

… a sunburnt duck?

Before you ask: no, you can’t send the 1920s duck as a traditional emoji using the COLRv1 tech; this demo is more about showing the possibilities of the new font standard. Because your ducks render in the browser*, interoperability isn’t an issue! Take our vibrant and colorful drawings and stretch your imagination of what it even means to be an emoji. It’s an exciting time to be emoji-adjacent.

If you’d like to send goth emoji today in a messaging app, you’ll have to use Emoji Kitchen stickers in Gboard to customize their color. *COLRv1 is available on Google Chrome and in Edge. Expect it in other browsers such as Firefox soon.

Customized Emotes

That’s right, you can change the color of emoji using Emoji Kitchen. No shade: I love that “pink heart” was anointed “most anticipated emoji” on social media earlier this summer. But what if changing the color of an emote happened with the simple click of a button, and didn’t require the Unicode Consortium, responsible for digitizing the world’s languages, to run a cross-linguistic study of color terms just to add three new colored hearts?

Customizing and personalizing emotes is becoming more technically feasible, thanks to Noto Emoji. Look no further than Emoji Kitchen available on Gboard: type a sequence of emoji including a colored heart to change its color.

No lime emoji? No problem.🍋💚

Red rose too romantic for the moment? Try a yellow rose🌹💛

Feeling goth? 💋🖤

Go Cardinals! ❤️🐦

While technically these are stickers, it’s a lovely example of how emoji are rapidly evolving. Whether you’re a developer, designer, or just a citizen of the Internet, Noto Emoji has something for everyone and we love seeing what you make with it.

#WeArePlay | Meet Sam from Chicago. More stories from Peru, Croatia and Estonia.

Posted by Leticia Lago, Developer Marketing

A medical game for doctors, a language game for kids, a scary game for horror lovers and an escape room game for thrill seekers! In this latest batch of #WeArePlay stories, we’re celebrating the founders behind a wonderful variety of games from all over the world. Have a read and get gaming! 

To start, let’s meet Sam from Chicago. Coming from a family of doctors, he was challenged by his Dad to make a game to help those in the medical field. Sam agreed, made a game, and months later discovered that over 100,000 doctors were using it to practice medical procedures. This early success inspired him to found Level Ex – a company of 135 people making world-class medical games for doctors across the globe. Despite his achievements, his Dad still hopes Sam may one day get into medicine himself and clinch a Nobel Prize.

Next, a few more stories from around the world:

  • Aldo and Sandro from Peru – founders of Dark Dome. They combine storytelling and art to make thrilling and chilling games, filled with plot twists and jump scares.

  • Vladimir, Tomislav and Boris from Croatia – founders of Pine Studio. They won the Indie Games Festival 2021 with their game Cats In Time. 

  • Kelly, Mikk, Reimo and Madde from Estonia – founders of ALPA kids. Their language games for children have a huge impact on early education and language preservation.

Check out all the stories now at and stay tuned for even more coming soon.

How useful did you find this blog post?

Introducing the Google for Startups Accelerator: Black Founders Class of 2022

Posted by Matt Ridenour, Head of Startup Developer Ecosystem – USA

Image contains logos and headshots of the most recent class of Google for Startups Accelerator: Black Founders.

Today, only 1% of venture capital in the US goes to Black founders, with Black women founders receiving even less. At Google, we are committed to building racial equity in the North American startup ecosystem. In May, we announced an open call for applications for our third class of Google for Startups Accelerator: Black Founders, bringing the best of Google’s programs, products, people and technology to Black founders across North America. From hundreds of applicants, we’re proud to announce the 12 exceptional startups selected to join the accelerator:

  • DNA (Toronto, Ontario): A growth coordination AI platform helping businesses maximize growth using ads, email, and social.
  • EdLight (Melrose, Massachusetts): Uses AI to better read, interpret, and digitize handwritten student work, reducing misconceptions and increasing equity among students, teachers, and families.
  • HumanSquad (Toronto, Ontario): Simplifies the immigration and study abroad system by empowering immigrants everywhere with the resources, products and personalized support to immigrate conveniently and affordably.
  • Innovare (Chicago, Illinois): An app that aggregates and displays data from a variety of systems to empower education leaders to make data-driven decisions that positively impact students and communities.
  • Mozaic (Chicago, Illinois): An API-first global payment platform built for co-creators on any project, providing smart contracts that automate split income among creative teams.
  • Node (Toronto, Ontario): A gig marketplace that allows small businesses to hire local influencers in their neighborhood.
  • Onramp (Oakland, California): A workforce development platform helping companies build more diverse candidate pipelines by providing them with a mechanism to invest in skills development for current and future candidates.
  • Paerpay (Boston, Massachusetts): A contactless payment and loyalty experience for restaurants and their guests that doesn’t require a new point of sale (POS) system.
  • Smart Alto (Birmingham, Alabama): A conversational sales platform for local service providers, enabling them to set meetings with clients without cold calling.
  • TurnSignl (Minneapolis, Minnesota): A mobile platform that provides real-time, on-demand legal guidance from an attorney to drivers, all while their camera records the interaction.
  • WearWorks (Brooklyn, New York): Uses the skin as a communications channel to deliver information. Their product, Wayband, is a Haptic navigation app and wristband to guide users using vibration without visual or audio cues.
  • XpressRun (Louisville, Kentucky): Provides same-day and next-day delivery at competitive rates for direct-to-consumer brands.

This fall, these startups will embark on a 10-week virtual program consisting of mentorship, technical support and curriculum covering product design, machine learning, customer acquisition, and leadership development for founders.

Please visit the companies’ websites and reach out to them for more information.

Introducing the Google for Startups Accelerator: Women Founders Class of 2022

Posted by Ashley Francisco, Head of Startup Ecosystem, North America

The challenges faced by women founders are evident. Despite an increase in total venture funding raised by women-led startups in recent years, women founders still secured only 2% of the total amount invested in VC-backed startups throughout the year. In addition, a recent report on women-founded companies in Canada noted that women technology entrepreneurs travel longer routes from startup to scale-up, with women in the study doing more funding pitches than men and taking longer to raise their Series A financing.

In 2020, we launched Google for Startups Accelerator: Women Founders to help bridge the gender gap in the North American startup ecosystem, and provide high-quality mentorship opportunities, technical guidance, support and community for women founders in North America.

To date, 24 women-led startups have graduated from the program, but support for women founders must continue. Earlier this year, we announced an open call for applications for the third class of Accelerator: Women Founders, starting in the fall.

We received hundreds of strong applications and, after careful deliberation, are excited to introduce the 12 impressive startups selected to participate in the 2022 cohort:

  • Advocatia (Lake Bluff, Illinois): Powers healthcare organizations with the ability to engage and enroll their customers into programs that reduce cost and improve outcomes.
  • Arintra (Austin, Texas): Helps hospitals and clinics save time and maximize reimbursement by automating medical coding.
  • Blossom Social (Vancouver, British Columbia): Canada’s first social brokerage, combining mobile-first stock trading with a social community for investors.
  • CIRT Can I recycle this? (Athens, Georgia): Builds software and uses AI to digitize the circularity of products and packaging for the modern world, helping customers go zero waste.
  • CyDeploy (Baltimore, Maryland): Provides an intelligent, automated configuration and patch testing solution that positions our customers to make security changes quickly and with confidence.
  • Emaww (Montreal, Quebec): Provides the most advanced and least intrusive emotion analytics for websites, improving user experience and digital well-being with emotional intelligence.
  • Farm Generations (Germantown, New York): Builds fair technology for the future of small farms.
  • Hound (Denver, Colorado): A platform for veterinary recruiting, veterinary employee engagement technology, and at-home veterinary care.
  • Generable (New York, New York): Develops best-in-class Bayesian machine-learning models to improve efficiency of oncology drug-development.
  • MedEssist (Toronto, Ontario): Transforms local pharmacies into modern healthcare hubs.
  • Noticeninja (Fort Myers, Florida): Converts paper notices and manual processes into automated digital workflows that provide resolution pathways for users to follow.
  • Zero5 (San Mateo, California): Transforms parking spaces into tech-enabled mobility service hubs for all vehicles from level 0 to 5 autonomy.

These startups will join the 10-week intensive virtual program, connecting them to the best of Google’s programs, products, people and technology to help them reach their goals and unlock their next phase of growth.

Google Dev Library Letters — 12th Issue

Posted by Garima Mehra, Program Manager

‘Google Dev Library Letters’ is curated to bring you some of the latest projects developed with Google tech and submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!


Shape your Image: Circle, Rounded Square, or Cuts at the corner in Android by Sriyank Siddhartha

Using the MDC library, shape images in just a few lines of code by using ShapeableImageView.

Foso/Ktorfit by Jens Klingenberg

An HTTP client / Kotlin Symbol Processor for Kotlin Multiplatform (JS, JVM, Android, Native, iOS) using KSP and Ktor clients, inspired by Retrofit.

Meet the 2022 Code Jam World Finalists!

Posted by Julia DeLorenzo, Program Manager, Coding Competitions

The Code Jam World Finals returns!

Over the past several months, participants have worked their way through multiple rounds of algorithmic coding challenges and solved some of the most challenging competitive programming problems. The field has been narrowed down from tens of thousands of participants to the top competitors who will face off at the World Finals on August 5, 2022.

Join us at 16:30 UTC for a livestream to see which one of these finalists will be crowned the Code Jam 2022 World Champion, winning the grand prize of $15,000 USD!

Here are this year’s finalists sharing their favorite music genres, tips, fun facts, and more.

This year’s Code Jam World Finalists are:

Antonio Molina Lovett

Handle: y0105w49

What’s your favorite music to listen to while coding?
“Always looping the Vicious Delicious album by Infected Mushroom.”

Yuhao Du

Handle: xll114514

Code Jam claim to fame:
This is Yuhao’s second time at the Code Jam World Finals, previously competing in the 2021 World Finals.

Benjamin Qi

Handle: Benq

What’s your favorite 2022 Code Jam Problem?
“Qualification Round – Twisty Little Passages. First time I used importance sampling in a contest!”

Sangsoo Park

Handle: molamola

What does your handle mean?
“1. I personally like sunfish 🙂
2. I like the way it sounds.
3. Mola is pronounced “몰라” in Korean, which means “I don’t know”.”

Daniel Rutschmann

Handle: dacin21

What’s the best coding competition advice you’ve ever received?
“Have fun and always try to challenge yourself by solving problems that seem too difficult at first.”

Mingyang Deng

Handle: CauchySheep

What’s an interesting and fun fact about yourself?
“I love random walking.”

Gennady Korotkevich

Handle: Gennady.Korotkevich

What’s your favorite 2022 Code Jam Problem?
“Saving the Jelly from Round 2 took the most creativity to solve!”

Alexander Golovanov

Handle: Golovanov399

What’s an interesting and fun fact about yourself?
“I have 11 musical instruments, most of which I can only play at a level of ‘may accompany in a song I know.’”

Andrew He

Handle: ecnerwala

Code Jam claim to fame:
This will be Andrew’s fourth time competing in the Code Jam World Finals, having competed in 2019, 2020, and 2021 previously.

Aleksei Esin

Handle: ImBarD

What’s an interesting and fun fact about yourself?
“I love bungee jumping.”

Lingyu Jiang

Handle: jiangly

What’s an interesting and fun fact about yourself?
This is Lingyu’s first time competing in the Code Jam World Finals.

Kevin Sun

Handle: ksun48

Code Jam claim to fame:
This will be Kevin’s third time competing in the Code Jam World Finals, having competed in 2019 and 2020 previously.

Lukas Michel

Handle: lumibons

What does your handle mean?
“It’s a combination of letters from my name and the name of the village where I grew up.”

Matvii Aslandukov

Handle: BigBag

What’s an interesting and fun fact about yourself?
“I enjoy playing sports such as tennis, table tennis, volleyball, football, as well as playing piano and guitar.”

Borys Minaiev

Handle: qwerty787788

What’s an interesting and fun fact about yourself?
“A year ago I started doing buildering and we created a chat with just 3 people in it. Now there are almost 100 participants. Who could imagine it would grow so fast?”

Yahor Dubovik

Handle: mhg

What’s your favorite music to listen to while coding?
“Red Hot Chili Peppers.”

Mateusz Radecki

Handle: Radewoosh

What’s the best coding competition advice you’ve ever received?
“Becoming good isn’t about creating a chance to solve a problem. It’s about removing a chance to not solve a problem.”

Nikolay Kalinin

Handle: KalininN

What’s an interesting and fun fact about yourself?
“I’m an experimentalist in laser physics, also I love traveling and photography.”

Simon Lindholm

Handle: simonlindholm

What’s an interesting and fun fact about yourself?
“I’ve been really into the Super Mario 64 A Button Challenge recently, and N64 game decompilation. Also, mushroom hunting.”

Kento Nikaido

Handle: Snuke

What’s an interesting and fun fact about yourself?
“I’m a cat. My recent hobby is Sed Puzzle.”

Tiancheng Lou

Handle: ACRushTC

Code Jam claim to fame:
This will be Tiancheng’s eighth Code Jam World Finals, having previously competed in the World Finals in 2006, 2008, 2009, 2010, 2011, 2019, and 2021.

Aleksei Daniliuk

Handle: Um_nik

What’s your favorite 2022 Code Jam Problem?
“I, O Bot from Round 2, because it was actually a competitive programming problem.”

Yuta Takaya

Handle: yutaka1999

What’s your favorite 2022 Code Jam Problem?
“Saving the Jelly. It is mainly because I solved it in the last five minutes of the contest.”

Konstantin Semenov

Handle: zemen

Code Jam claim to fame:
This will be Konstantin’s third Code Jam World Finals, having previously competed in the World Finals in 2017 and 2018.

Watch the Code Jam World Finals Livestream 

Join us on August 5 at 16:30 UTC for a livestream of the Code Jam 2022 World Finals. 

Watch all the action unfold as the Code Jam team broadcasts live from Google New York. You’ll have an opportunity to hear from our team, see Code Jam engineers explain the problems from the round, and watch live as we reveal the scoreboard and announce this year’s winners!

At the end, one of these finalists will be crowned the Code Jam 2022 World Champion, winning the grand prize of $15,000 USD. Good luck to all the finalists and as always, happy coding!

How to use App Engine Blobstore (Module 15)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

In our ongoing Serverless Migration Station mini-series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another goal is to demonstrate how to move away from App Engine legacy APIs (now referred to as “bundled services”) to Cloud standalone replacement services. Once this has been accomplished, apps become much more portable and flexible.

Developers building web apps that support user uploads or serve large files like videos or audio clips can benefit from convenient “blob” storage backing that functionality, and App Engine’s Blobstore serves this specific purpose. As mentioned above, moving away from proprietary App Engine services like Blobstore makes user apps more portable. The original underlying Blobstore infrastructure eventually merged with the Cloud Storage service anyway, so it’s logical to move completely to Cloud Storage when convenient, and this content walks you through that process.

Showing App Engine users how to use its Blobstore service

In today’s Module 15 video, we begin this journey by showing users how to add Blobstore usage to a sample app, setting us up for our next move to Cloud Storage in Module 16. Similar videos in this series adding use of an App Engine bundled service start with a Python 2 sample app that has already migrated web frameworks from webapp2 to Flask, but not this time.

Blobstore for Python 2 has a dependency on webapp, the original App Engine micro framework replaced by webapp2 when the Python 2.5 runtime was deprecated in favor of 2.7. Because the Blobstore handlers were left “stuck” in webapp, it’s better to start with a more generic webapp2 app prior to a Flask migration. This isn’t an issue because we modernize this app completely in Module 16 by:

  • Migrating from webapp2 (and webapp) to Flask
  • Migrating from App Engine NDB to Cloud NDB
  • Migrating from App Engine Blobstore to Cloud Storage
  • Migrating from Python 2 to Python (2 and) 3

We’ll go into more detail in Module 16, but suffice it to say that once those migrations are complete, the resulting app becomes much more portable and flexible.

Adding use of Blobstore

The original sample app registers individual web page “visits,” storing visitor information such as the IP address and user agent, then displaying the most recent visits to the end-user. In today’s video, we add one additional feature: allowing visitors to optionally augment their visits with a file artifact, like an image. Instead of registering a visit immediately, the visitor is first prompted to provide the artifact, as illustrated below.
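To make the visit-tracking flow concrete, here is a minimal, framework-agnostic sketch of the registration and retrieval logic, using an in-memory list as a stand-in for the datastore. The actual sample app persists visits with App Engine NDB; the names `store_visit` and `fetch_visits` mirror the sample’s helpers, but this implementation is purely illustrative.

```python
from collections import namedtuple
import itertools

# Illustrative stand-ins for the sample app's datastore-backed helpers.
Visit = namedtuple('Visit', 'visitor timestamp')

_visits = []                 # in-memory stand-in for the datastore
_clock = itertools.count()   # monotonically increasing pseudo-timestamp

def store_visit(remote_addr, user_agent):
    """Register one page visit, recording IP address and user agent."""
    _visits.append(Visit('{}: {}'.format(remote_addr, user_agent),
                         next(_clock)))

def fetch_visits(limit=10):
    """Return the most recent visits, newest first."""
    return sorted(_visits, key=lambda v: v.timestamp, reverse=True)[:limit]
```

The real app renders the result of `fetch_visits()` in a template as the “most recent visits” page.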

The updated sample app’s new artifact prompt page

The end-user can choose to do so or click a “Skip” button to opt out. Once this process is complete, the same most recent visits page is then rendered, with one difference: an additional link to view the visit artifact if one is available.

The sample app’s updated most recent visits page

Below is pseudocode representing the core part of the app that was altered to add Blobstore usage, namely new upload and download handlers as well as the changes required in the main handler. Upon the initial GET request, the artifact form is presented. When the user submits an artifact or skips, the upload handler POSTs back to home (“/”) via an HTTP 307 to preserve the verb, and the most recent visits page is then rendered as expected. There, if the end-user wishes to view a visit artifact, they can click a “view” link that invokes the download handler, which fetches and returns the corresponding artifact from the Blobstore service, or returns an HTTP 404 if the artifact isn’t found. The bolded lines represent the new or altered code.

Adding App Engine Blobstore usage to sample app
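As a rough pseudocode sketch (not the exact sample code — see the codelab for that), the new handlers build on the Blobstore handler classes in the App Engine Python 2 SDK; `store_visit` and the route names here are illustrative:

```python
import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # After Blobstore stores the artifact, save its key with the visit,
        # then POST back to home via 307 to preserve the verb.
        uploads = self.get_uploads()
        blob_id = uploads[0].key() if uploads else None
        store_visit(self.request.remote_addr, self.request.user_agent, blob_id)
        self.redirect('/', code=307)

class ViewBlobHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        # Serve the artifact if it exists, otherwise return HTTP 404.
        if not blobstore.get(blob_key):
            self.error(404)
        else:
            self.send_blob(blob_key)

class MainHandler(webapp2.RequestHandler):
    def get(self):
        # Initial GET: render the artifact prompt with a Blobstore upload URL.
        upload_url = blobstore.create_upload_url('/upload')
        ...
```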


In this “migration,” we added Blobstore usage to support visit artifacts to the Module 0 baseline sample app and arrived at the finish line with the Module 15 sample app. To get hands-on experience doing it yourself, do the codelab by hand and follow along with the video. Then you’ll be ready to upgrade to Cloud Storage should you choose to do so. 

In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd-generation runtimes (those that have a 1st-generation counterpart), meaning you are no longer required to migrate to Cloud Storage when porting your app to Python 3. You can continue using Blobstore in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
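As a rough configuration sketch of that retrofit (check the bundled-services documentation for the exact steps for your runtime and framework), a Python 3 Flask app opts in via `app.yaml` and wraps its WSGI app:

```python
# requirements.txt:  appengine-python-standard>=1.0.0
# app.yaml:
#   runtime: python39
#   app_engine_apis: true

from flask import Flask
from google.appengine.api import wrap_wsgi_app

app = Flask(__name__)
# Wrapping the WSGI app makes the google.appengine.* bundled-service
# APIs (including Blobstore) available to request handlers.
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
```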

If you do want to move to Cloud Storage, Module 16 is next. You can also try its codelab to get a head start. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.