4 updates from the Google for Games Developer Summit

Posted by Alex Chen, Google for Games

This week, we announced new games solutions and updates to our tools at the Google for Games Developer Summit, a free digital event for developers, publishers and advertisers. From highlighting viewership growth trends on YouTube gaming to reaching more players on different devices with Google Play Games on PC, here’s a quick recap with some of our top announcements and key updates.

1. Build high-quality games on Android

The Android team talked about how they’ve made it easier to develop fun and engaging games with updates to Android vitals and the Android Game Development Kit. They also shared how you can get these games to more users on more devices, with Android support for form factors like foldables, Chromebooks and PCs. Learn more about these announcements, including new ways to connect with a global audience, on the Android Developers blog.

2. Strengthen your ad monetization and growth strategies

Google Ads showed advertisers how to get more value from both in-app ads and in-app purchases with a new feature called target return on ad spend for hybrid monetization. And AdMob showed publishers how to save time and costs with a more efficient way to manage ad mediation, with a revamped buyer management interface and streamlined ad unit mapping workflow. See more in the Google Ads blog post.

3. Create connections with your community

As a home of popular gaming creators, videos, and livestreams worldwide, YouTube continues to see incredible growth. The YouTube team announced that over 2 trillion hours of gaming content were consumed in 2022. Through different formats, availability on multiple devices and culture-shaping Creators, they’re committed to being the place where game publishers and Creators reach players and build communities around their favorite games.

4. Keep players engaged with live service games

Google Cloud shared their strategy for live service game development. They’re combining technology that brings together players from all over the world, databases that store critical data for an optimal player experience, and analytics that allow game companies to foster a relationship with their players. Learn more on Google Cloud’s blog.

Whether it’s creating the newest hit game, connecting with an enthusiastic community or growing your business to reach more players everywhere, Google is glad to be your partner along the way. To learn more, you can access all content on demand. And if you’re planning to attend Game Developers Conference next week in San Francisco, come say hi at one of our in-person developer sessions.

How to be more productive as a developer: 5 app integrations for Google Chat that can help

Posted by Mario Tapia, Product Marketing Manager, Google Workspace

In today’s fast-paced and ever-changing world, it is more important than ever for developers to be able to work quickly and efficiently. With so many different tools and applications available, it can be difficult to know which ones will help you be the most productive. In this blog post, we will discuss five different DevOps application integrations for Google Chat that can help you improve your workflows and be more productive as a developer.

PagerDuty for Google Chat

PagerDuty helps automate, orchestrate, and accelerate responses to unplanned work across an organization. PagerDuty for Google Chat empowers developers, DevOps, IT operations, and business leaders to prevent and resolve business-impacting incidents for an exceptional customer experience—all from Google Chat. With PagerDuty for Google Chat, get notifications, see and share details with link previews, and act by creating or updating incidents.

How to: Use PagerDuty for Google Chat

Asana for Google Chat

Asana helps you manage projects, focus on what’s important, and organize work in one place for seamless collaboration. With Asana for Google Chat, you can easily create tasks, get notifications, update tasks, assign them to the right people, and track your progress.

How to: Use Asana for Google Chat


Jira for Google Chat

Jira makes it easy to manage your issues and bugs. With Jira for Google Chat, you can receive notifications, easily create issues, assign them to the right people, and track your progress while keeping everyone in the loop.

How to: Use Jira for Google Chat


Jenkins for Google Chat

Jenkins allows you to automate your builds and deployments. With Jenkins for Google Chat, development and operations teams can connect to their Jenkins pipeline and stay up to date by receiving software build notifications or triggering a build directly in Google Chat.

How to: Use Jenkins for Google Chat
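
Most of these integrations surface events as messages in a Chat space. As a rough illustration of the underlying pattern, the sketch below posts a build notification to Google Chat through an incoming webhook. The webhook URL is a placeholder you would generate from your space’s settings, and the Jenkins job details are invented for the example; the dedicated Chat apps above offer much richer interactions than a plain webhook.

```python
import json
from urllib import request

# Placeholder URL -- generate a real one from your Chat space's
# "Apps & integrations" settings. This example value will not work.
WEBHOOK_URL = "https://chat.googleapis.com/v1/spaces/EXAMPLE/messages?key=KEY&token=TOKEN"

def build_notification(job: str, build_number: int, status: str) -> dict:
    """Build the simple JSON payload that Chat incoming webhooks accept."""
    return {"text": f"Jenkins: {job} #{build_number} finished with status {status}"}

def send_notification(payload: dict) -> None:
    """POST the payload to the webhook URL as JSON."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=UTF-8"},
    )
    request.urlopen(req)

payload = build_notification("backend-ci", 42, "SUCCESS")
print(payload["text"])
```

Incoming webhooks accept a minimal `{"text": "..."}` body, which is enough for one-way notifications; the Chat apps described in this post add slash commands, cards, and two-way actions on top of that.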


GitHub for Google Chat

GitHub lets you manage your code and collaborate with your team. Integrations like GitHub for Google Chat make the entire development process fit easily into a developer’s workflow. With GitHub, teams can quickly push new commits, make pull requests, do code reviews, and provide real-time feedback that improves the quality of their code—all from Google Chat.

How to: Use GitHub for Google Chat

Next steps

These are just a few of the many application integrations that can help you be more productive as a developer. Check out the Google Workspace Marketplace for more integrations you or your team might already be using. By using the right tools and applications, you can easily stay connected with your team, manage your tasks and projects, and automate your builds and deployments.

To keep track of all the latest announcements and developer updates for Google Workspace please subscribe to our monthly newsletter or follow us @workspacedevs.

Meet our newest Accelerator: Climate Change cohort

Posted by Matt Ridenour, Head of Startup Developer Ecosystem – USA

Scaling high potential startups aimed at tackling climate change can have an immensely positive impact for our planet.

In line with Google’s broader commitment to address climate change, we are proud to announce the third cohort for our Google for Startups Accelerator: Climate Change program. This 10-week digital accelerator brings the best of Google’s people, products and programming to help take early-stage North American climate tech startups to the next level.

Meet the 12 exceptional startups using cloud technology, artificial intelligence, machine learning and more for a healthier planet.

Agrology, Alexandria, VA

Agrology’s predictive agriculture platform helps farmers grow with confidence and beat climate change through data, insights and soil monitoring at scale.

BattGenie, Seattle, WA

BattGenie provides Li-ion battery management software and solutions, enabling safe, fast charging while improving battery life cycle.

Bodhi, Austin, TX

Bodhi empowers solar companies to deliver amazing customer experiences, automating communications so installers can focus on increasing renewable energy access.

Cambio, San Francisco, CA

Cambio is software that helps commercial real estate companies and their corporate tenants decarbonize their buildings.

Cleartrace, Austin, TX

Cleartrace is disrupting legacy reporting with a new standard for how energy and decarbonization information is collected, stored, accessed and transacted.

ElectricFish, Fremont, CA

ElectricFish builds and deploys resilient, flexible EV infrastructure to accelerate decarbonization and support community climate adaptation.

Enersion, Toronto, ON

Enersion offers zero-emission solar trigeneration energy that converts solar radiation into refrigerant-free cooling, heating and electricity.

Eugenie AI, Cupertino, CA

Eugenie is an AI intelligence platform for asset-heavy manufacturers to track, trace, and reduce emissions while improving operations.

Finch, Denver, CO

Finch is a platform that decodes products’ environmental footprints to help consumers and shares insights with businesses.

Refiberd, Cupertino, CA

Refiberd is tackling the 186 billion pound global textile waste problem with the first AI-empowered circular textile sorting and reclamation system.

Sesame Solar, Jackson, MI

Sesame Solar is decarbonizing disaster response with rapidly deployable mobile Nanogrids with essential services, providing continuous power from 100% renewable energy.

Voltpost, New York City, NY

Voltpost decarbonizes mobility and democratizes charging access by retrofitting lamp posts into modular electric vehicle charging stations.

These companies will join the other 22 startups from across North America that have participated in the accelerator (see program alumni).

In addition to mentorship and technical project support, the 10-week program will focus on product design, customer acquisition, and leadership development, granting startups access to an expansive network of mentors, senior executives, and industry leaders. All Google for Startups Accelerators are equity-free: selected companies give up no ownership to participate.

We are honored to partner with this cohort of companies through this accelerator and beyond, to advance their technologies and protect our planet.

The program kicks off on Tuesday, March 7 and concludes with a virtual Demo Day on May 11. Stay tuned and join us in celebrating these exceptional startups.

3 things to expect at the Google for Games Developer Summit

Posted by Greg Hartrell, Product Director, Games on Play/Android

Save the date for this year’s virtual Google for Games Developer Summit, happening on March 14 at 9 a.m. PT. You’ll hear about product updates and discover new ways to build great games, connect with players around the globe and grow your business.

Here are three things you can expect during and after the event:

1. Hear about Google’s newest games products for developers

The summit kicks off at 9 a.m. PT, with keynotes from teams across Android, Google Play, Ads and Cloud. They’ll discuss the latest trends in the gaming industry and share new products we’re working on to help developers build great experiences for gamers everywhere.

2. Learn how to grow your games business in on-demand sessions

Following the keynotes, more than 15 on-demand sessions will be available starting at 10 a.m. PT, where you can learn more about upcoming products, watch technical deep dives and hear inspiring stories from other game developers. Whether you’re looking to expand your reach, reduce cheating or better understand in-game ad formats, there will be plenty of content to help you take your game to the next level.

3. Join us at the Game Developers Conference

If you’re looking for even more gaming content after the summit, join us in person for the Game Developers Conference in San Francisco. We’ll host developer sessions on March 20 and 21 to share demos, technical best practices and more.

Visit g.co/gamedevsummit to learn more and get updates about both events, including the full agendas. See you there!

Solution Challenge 2023: Use Google Technologies to Address the United Nations’ Sustainable Development Goals

Posted by Rachel Francois, Google Developer Student Clubs, Global Program Manager

Each year, the Google Developer Student Clubs Solution Challenge invites university students to develop solutions for real-world problems using one or more Google products or platforms. How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?

Join us to build solutions for one or more of the United Nations 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

One 2022 Solution Challenge participant said, “I love how it provides the opportunity to make a real impact while pursuing undergraduate studies. It helped me practice my expertise in a real-world setting, and I built a project I can proudly showcase on my portfolio.”

Solution Challenge prizes

Participants will receive specialized prizes at different stages:

  • The top 100 teams receive customized mentorship from Google and experts to take solutions to the next level, branded swag, and a certificate.
  • The top 10 finalists receive additional mentorship, a swag box, and the opportunity to showcase their solutions to Google teams and developers all around the world during the virtual 2023 Solution Challenge Demo Day, live on YouTube.
  • Contest finalists – In addition to the swag box, each individual from the seven teams not in the top three will receive a cash prize of $1,000. Winnings for each qualifying team will not exceed $4,000.
  • Top 3 winners – In addition to the swag box, each individual from the top three winning teams will receive a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.

Joining the Solution Challenge

There are four steps to join the Solution Challenge and get started on your project:

  1. Register at goo.gle/solutionchallenge and join a Google Developer Student Club at your college or university. If there is no club at your university, you can join the closest one through our event platform.
  2. Select one or more of the United Nations 17 Sustainable Development Goals to address.
  3. Build a solution using Google technology.
  4. Create a demo and submit your project by March 31, 2023. 

Google Resources for Solution Challenge participants

Google will support Solution Challenge participants with resources to help students build strong projects, including:

  • Live online sessions with Q&As
  • Mentorship from Google, Google Developer Experts, and the Google Developer Student Club community
  • Curated Codelabs designed by Google Developers
  • Access to Design Sprint guidelines developed by Google Ventures
  • and more!

“During the preparation and competition, we learned a great deal,” said a 2022 Solution Challenge team member. “That was part of the reason we chose to participate in this competition: the learning opportunities are endless.”

Winner announcement dates

Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria.

After that, winners will be announced in three rounds.

Round 1 (April): The top 100 teams will be announced.

Round 2 (June): After the top 100 teams submit their new and improved solutions, 10 finalists will be announced.

Round 3 (August): The top 3 grand prize winners will be announced live on YouTube during the 2023 Solution Challenge Demo Day.

We can’t wait to see the solutions you create with your passion for building a better world, coding skills, and a little help from Google technologies.

Learn more and sign up for the 2023 Solution Challenge here.

“I got the time to push my creativity to the next level. It helped me attain more information from more knowledgeable people by expanding my network. Working together and building something was a great challenge and one of the best experiences, too. I liked the idea of working on the challenge to present a solution.”

– 2022 Solution Challenge participant

Google Home is officially ready for your Matter devices and apps

Posted by Kevin Po, Group Product Manager

Earlier this fall, the Connectivity Standards Alliance released the Matter 1.0 standard and certification program, officially launching the industry into a new era of the smart home.

We are excited to share that Google Nest and Android users are now ready for your Matter-enabled devices and apps. Many Android devices from Google and our OEM partners now support the new Matter APIs in Google Play services so you can update and build apps to support Matter. Google Nest speakers, displays, and Wi-Fi routers have been updated to work as hubs, and we have also updated Nest Wifi Pro, Nest Hub Max and the Nest Hub (2nd gen) to work as Thread border routers, so users can securely connect your Thread devices.

Our top priority is to ensure both customers and developers have high-quality, reliable Matter devices. We are starting with Android devices and Google Nest speakers and displays, which are now Matter-enabled. These devices are ready to help users set up, automate, and use your devices wherever they interact with Google. Next up, we are working on bringing Google Home app iOS support for Matter devices in early 2023, and support for other Nest devices such as Nest Wifi and Nest Thermostat.

Building With Google Home

As companies all over are shifting their focus to prioritize Matter, we have also expanded the resources available in the Google Home Developer Center to better support you in building your Matter devices — from beginning to end. At this one-stop shop for anyone interested in developing smart home devices and apps with Google, developers can now create and launch seamless Matter integrations with Google Home, apply for Works with Google Home certification, customize their product’s out-of-box experience in the Google Home app and on Android, and more. Let’s dive into what’s new.

Even More Tools In Our SDKs

We have been dedicated to building the most helpful tools to assist you in building Matter-enabled products and apps. We announced two software development kits — one for device developers and one for mobile developers — that make it easier to build with the open-source Matter SDK and integrate your devices and apps with Google. We’ve made them available to help with the development of your newest smart devices and apps.

  • Google Home Device SDK
    • Documentation and tutorials
    • Sample apps
  • Google Home Mobile SDK
    • Device commissioning APIs
    • Multi-admin (sharing) APIs
    • Thread credential APIs
    • Documentation and tutorials
    • Google Home Sample app for Matter

Works With Google Home Certification

Matter devices integrated and tested through the Google Home Developer Center can carry the Works With Google Home badge, which earlier this year replaced the Works With Hey Google badge. This badge gives users the utmost confidence that your devices work seamlessly with Google Home and Android.

Early Access Program Partner Testimonials

We understand that you want to build innovative and high-quality product integrations as quickly as possible, and we built our SDKs and tools to help you do just that. Since announcing them earlier this year, we have worked closely with dozens of Early Access Program (EAP) partners to ensure the tools we have created in the Google Home Developer Console achieve what we set out to do, before making them widely available to you all today.

We’ve asked some of our EAP partners to share more about their experience building Matter devices with Google, to give you more insight into how building with Google’s end-to-end tools for Matter devices and apps can make a difference in your innovation and development process. After working closely with our partners, we are confident our tools allow you to accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while maintaining interoperability with other Matter platforms.

  • From Eve Systems: “The outstanding expertise and commitment of the teams in Google’s Matter Early Access Program enabled us to leverage the potential of our products. We’re thrilled to be partnering with Google on Matter, an extraordinary project that has Thread at the heart.”
  • From Nanoleaf: “Nanoleaf has been working closely with Google as part of the Matter Early Access Program to bring Matter 1.0 to life. It’s been a pleasure collaborating with Google the past few years; the team’s vision of the helpful home deeply resonates with our goal of creating a smart home that is both intelligent and personalized to each person living in it. We’re very excited to see that vision borne out in Google’s initial Matter offering, and can’t wait to continue building on the potential of Matter together.”
  • From Philips Hue: “For us especially, the Matter Early Access Platform releases with documentation and instructions have been very useful. It meant we could already start Matter integration testing between Philips Hue and Google on early builds, to ensure seamless interoperability in the final release.”
  • From Tuya: “As a long-term ecosystem partner and an authorized solution provider of Google, Tuya has contributed to a wider application and implementation of Matter, as well as promotion of Matter globally together. In the future, we will continue to strengthen cooperation between Google and Tuya by integrating both parties’ ecosystems, technologies, and channels to support the implementation of Matter to enable global customers to achieve commercial success in the smart home and other industries.”

Ready To Build?

We are excited to see Matter come to life and the devices you build to further shape the smart home. Get started building your Matter devices today and stay up to date on our recent updates in the Google Home Developer Center.

Help Shape The Future Of Google Products

User feedback is critical to ensure we continue building more inclusive and helpful products. Join our developer research program and share feedback on all kinds of Google products & tools. Sign up here!

Open Source Pass Converter for Mobile Wallets

Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each mobile wallet app implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain code bases for each platform.

As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we’re excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

[Animation: the Pass Converter converting an external pkpass file to a Google Wallet pass]

The Pass Converter launches with support for the Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for another wallet. The following pass types are supported on their respective platforms:

  • Event tickets
  • Generic passes
  • Loyalty/Store cards
  • Offers/Coupons
  • Flight/Boarding passes
  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features provide additional customization for your needs:

  • A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
  • For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
  • Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.

If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

The following command serves a demo web page on http://localhost:3000 for testing pass conversion:

    node app.js demo

The next command converts passes locally. If the output path is omitted, the Pass Converter outputs JSON to the terminal (for PKPass files, this is the contents of pass.json):

    node app.js <pass input path> <pass output path>

Lastly, the following command runs the Pass Converter as a web service. The service accepts POST requests to the root URL (e.g. http://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file:

    node app.js
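
If you script against the web-service mode, the request is an ordinary multipart/form-data upload. As a minimal sketch, the snippet below builds such a body with only the standard library; the form field name "file" and the filename are assumptions for illustration, so check the repository’s documentation for the exact contract the service expects.

```python
import io
import uuid

def build_multipart_body(filename: str, content: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body carrying a single pass file.

    The field name "file" is an assumption -- confirm it against the
    Pass Converter repository before relying on it.
    """
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n".encode()
    )
    body.write(content)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    content_type = f"multipart/form-data; boundary={boundary}"
    return body.getvalue(), content_type

body, content_type = build_multipart_body("event.pkpass", b"...pkpass bytes...")
print(content_type.split(";")[0])
```

You could then POST `body` to http://localhost:3000/ with `urllib.request.Request`, setting the returned `content_type` as the Content-Type header.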

Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

Build smarter and ship faster with the latest updates across our ecosystem

Posted by Jeanine Banks, VP/GM, Developer X and DevRel

At last week’s Made by Google launch event, we announced several new hardware products including the Pixel 7 and Pixel 7 Pro, Google Pixel Watch, and Google Pixel Tablet—a suite of innovative products that we’re excited about. While these devices are sure to delight users, they got me thinking: what will these changes mean for developers?

It’s hard to build experiences that let users enjoy the best that their devices have to offer. Undoubtedly this brings a level of complexity for developers, who will need to build and test against multiple OS updates and new features. That’s the thing about development—the environment is constantly evolving. We want to cut through the complexity and make it simpler to choose the technology you use, whether for an app on one device or across large and small screens.

Earlier this year at Google I/O, we shared our focus on making developer tools work better together, and providing more guidance and best practices to optimize your end-to-end workflow. For example, we announced the new App Quality Insights window in Android Studio, which shows crash data from Firebase Crashlytics directly inside the IDE to make it easier to discover, investigate, and fix offending lines of code.

But our work doesn’t stop once I/O ends. We work all year round to offer increasingly flexible, open and integrated solutions so you can work smarter, ship faster, and confidently set up your business for the future.

That’s why we’re excited to connect with you again—both in person and virtually—to share more recent product updates. Over the next three months, we have over 200 events in more than 50 countries reaching thousands of developers through product summits, community events, industry conferences, and more. Here are a few:

DevFest | Now – December
Local Google Developer Groups (GDG) organize these technology conferences according to the needs and interests of the region’s developer community, and in the local language. Tune in virtually or join in person.

Chrome | Multiple dates
This year the Chrome team will meet you at your favorite regional developer conferences and events, in addition to online forums across time zones. Join us on the journey to build a better web. Check out the calendar.

Google Cloud Next | October 11-13
Learn how to transform with Google Cloud to build apps faster and make smarter business decisions.

Firebase Summit | October 18
Join this hybrid event online or in person in New York City to hear how Firebase can help you accelerate app development, run your app with confidence, and scale your business.

Android Dev Summit | Beginning October 24
Learn from the source about building excellent apps across devices, coming to you online and around the world. We’ll be sharing the sessions live on YouTube in three tracks spread across three weeks, including Modern Android Development on Oct 24, form factors on Nov 9, and platform on Nov 14.

BazelCon | November 16-17
Hosted by Bazel and Google Open Source, BazelCon connects you with the team, maintainers, contributors, users, and friends to learn how Bazel automates software builds and tests on Android and iOS.

Women in ML Symposium | Coming in December
Join open source communities, seek out leadership opportunities, share knowledge, and speak freely about your career development with other women and gender minorities in a safe space. Catch up on last year’s event.

Flutter Event | Coming in December/January
Hear exciting product updates on Google’s open source framework for building beautiful, natively compiled, multi-platform applications from a single codebase. In the meantime, re-live last year’s event.

We look forward to the chance to meet with you to share technical deep dives, give you hands-on learning opportunities, and hear your feedback directly. After you have heard what we’re up to, make sure to access our comprehensive documentation, training materials, and best practices to help speed up your development and quickly guide you towards success.

Mark your calendars and register now to catch the latest updates.

Register now for Firebase Summit 2022!

Posted by Grace Lopez, Product Marketing Manager

One of the best things about Firebase is our community, so after three long years, we’re thrilled to announce that our seventh annual Firebase Summit is returning as a hybrid event with both in-person and virtual experiences! Our one-day, in-person event will be held at Pier 57 in New York City on October 18, 2022. It will be a fun reunion for us to come together to learn, network, and share ideas. But if you’re unable to travel, don’t worry: you’ll still be able to take part in the activities online from your office/desk/couch, wherever you are in the world.

Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business. Registration is now open for both the physical and virtual events! Read on for more details on what to expect.

Keynote full of product updates

In-person and livestreamed

We’ll kick off the day with a keynote from our leaders, highlighting all the latest Firebase news and announcements. With these updates, our goal is to give you a seamless and secure development experience that lets you focus on making your app the best it can be.

#AskFirebase Live

In-person and livestreamed

Have a burning question you want to ask us? We’ll take questions from our in-person and virtual attendees and answer them live on stage during a special edition of everyone’s favorite, #AskFirebase.

NEW! Ignite Talks

In-person and livestreamed

This year at Firebase Summit, we’re introducing Ignite Talks: 7–15 minute bite-sized talks focused on hot topics, tips, and tricks to help you get the most out of our products.

NEW! Expert-led Classes

In-person, with recordings released later

You’ve been asking us for more technical deep dives, so this year we’ll also be running expert-led classes at Firebase Summit. These platform-specific classes are designed to give you comprehensive knowledge and hands-on practice with Firebase products. Initially, these classes will be exclusive to in-person attendees, but we’ll repackage the content for self-paced learning and release it later for our virtual attendees.

We can’t wait to see you

In addition, Firebase Summit will be full of all the other things you love – interactive demos, lots of networking opportunities, exciting conversations with the community…and a few surprises too! The agenda is now live, so don’t forget to check it out! In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter and LinkedIn to join the conversation using #FirebaseSummit.

    Introducing Discovery Ad Performance Analysis

    Posted by Manisha Arora, Nithya Mahadevan, and Aritra Biswas, gPS Data Science team

    Overview of Discovery Ads and need for Ad Performance Analysis

    Discovery ads, launched in May 2019, allow advertisers to easily extend their reach of social ads users across YouTube, Google Feed and Gmail worldwide. They provide brands a new opportunity to reach 3 billion people as they explore their interests and search for inspiration across their favorite Google feeds (YouTube, Gmail, and Discover) — all with a single campaign. Learn more about Discovery ads here.

    Given these unique characteristics, customers need a data-driven method to identify the text and image elements in Discovery ad copies that drive the Interaction Rate of their Discovery ad campaigns, where an interaction is defined as the main user action associated with an ad format—clicks and swipes for text and Shopping ads, views for video ads, calls for call extensions, and so on.

    Interaction Rate = interactions / impressions

    “Customers need a data driven method to identify textual & imagery elements in Discovery Ad copies that drive Interaction Rate of their campaigns.”

    – Manisha Arora, Data Scientist

    Our analysis approach:

    The Data Science team at Google is investing in a machine learning approach to uncover insights from complex unstructured data and provide machine learning based recommendations to our customers. Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.

    We follow a six-step based approach for Discovery Ad Performance Analysis:

    • Understand Business Goals
    • Build Creative Hypothesis
    • Data Extraction
    • Feature Engineering
    • Machine Learning Modeling
    • Analysis & Insight Generation

    To begin with, we work closely with advertisers to understand their business goals, current ad strategy, and future goals. We map these to industry insights to draw a larger picture and provide a customized analysis for each advertiser. As a next step, we build hypotheses that best describe the problem we are trying to solve. An example of a hypothesis is: “Do superlatives (words like ‘top’ or ‘best’) in the ad copy drive performance?”

    “Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.”

    – Manisha Arora, Data Scientist

    Once we have a hypothesis we are working towards, the next step is to deep-dive into the technical analysis.

    Data Extraction & Pre-processing

    Our initial dataset includes raw ad text, imagery, performance KPIs & target audience details from historic ad campaigns in the industry. Each Discovery ad contains two text assets (Headline and Description) and one image asset. We then apply ML to extract text and image features from these assets.

    Text Feature Extraction

    We apply NLP to extract text features from the ad copy. The raw text of the ad headline and description is passed through Google Cloud’s Language API, which parses it into our feature set: commonly used keywords, sentiment, and so on.
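To make the generic text features concrete, here is a minimal local sketch of the kind of counts and frequency features described above. This is a hypothetical stand-in, not the Cloud Language API: sentiment and tone come from the API itself and are not reproduced here.

```python
import re
from collections import Counter

def generic_text_features(text: str) -> dict:
    """Hypothetical stand-in for generic text features: word/character
    counts and the most frequent words in the ad copy."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "char_count": len(text),
        "word_count": len(words),
        "most_frequent": Counter(words).most_common(3),
    }

feats = generic_text_features("Best rates. Best service. Apply today!")
print(feats["word_count"])  # 6
```

In the real pipeline these features are joined with the API’s sentiment and syntactic outputs before modeling.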


    Image Feature Extraction

    We apply image processing to extract image features from the ad copy imagery. We pass the raw images through Google Cloud’s Vision API and extract image components including objects, people, background, lighting, and so on.

    Together, these make up the holistic set of features extracted from the ad content.

    Feature Design

    Text Feature Design

    There are two types of text features included in DisCat:

    1. Generic text feature

    a. These are features returned by Google Cloud’s Language API including sentiment, word / character count, tone (imperative vs indicative), symbols, most frequent words and so on.

    2. Industry-specific value propositions

    a. These are features that only apply to a specific industry (e.g. finance) that are manually curated by the data science developer in collaboration with specialists and other industry experts.

    • For example, for the finance industry, one value proposition can be “Price Offer”. A list of keywords / phrases that are related to price offers (e.g. “discount”, “low rate”, “X% off”) will be curated based on domain knowledge to identify this value proposition in the ad copies. NLP techniques (e.g. wordnet synset) and manual examination will be used to make sure this list is inclusive and accurate.
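The value-proposition matching described above can be sketched as a curated-keyword lookup. The keyword lists and ad copy below are hypothetical examples; a real pipeline would expand the lists with synonym sets (e.g. WordNet synsets) and expert review.

```python
import re

# Hypothetical curated keyword/phrase patterns per value proposition
# (finance industry example from the text).
VALUE_PROPOSITIONS = {
    "price_offer": ["discount", "low rate", r"\d+% off"],
    "convenience": ["in minutes", "easy", "one tap"],
}

def detect_value_propositions(ad_text: str) -> set:
    """Return the value propositions whose curated patterns appear in the ad copy."""
    text = ad_text.lower()
    return {
        prop
        for prop, patterns in VALUE_PROPOSITIONS.items()
        if any(re.search(p, text) for p in patterns)
    }

found = detect_value_propositions("Open an account in minutes and get 20% off your first year")
print(sorted(found))  # ['convenience', 'price_offer']
```

Each detected value proposition becomes a binary feature for the downstream model.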

    Image Feature Design

    Like the text features, image features can largely be grouped into two categories:

    1. Generic image features

    a. These features apply to all images and include the color profile, whether any logos were detected, how many human faces are included, etc.

    b. The face-related features also include some advanced aspects: we look for prominent smiling faces looking directly at the camera, we differentiate between individuals vs. small groups vs. crowds, etc.

    2. Object-based features

    a. These features are based on the list of objects and labels detected in all the images in the dataset, which can often be a massive list including generic objects like “Person” and specific ones like particular dog breeds.

    b. The biggest challenge here is dimensionality: we have to cluster together related objects into logical themes like natural vs. urban imagery.

    c. We currently have a hybrid approach to this problem: we use unsupervised clustering approaches to create an initial clustering, but we manually revise it as we inspect sample images. The process is:

    • Extract object and label names (e.g. Person, Chair, Beach, Table) from the Vision API output and filter out the most uncommon objects
    • Convert these names to 50-dimensional semantic vectors using a Word2Vec model trained on the Google News corpus
    • Using PCA, extract the top 5 principal components from the semantic vectors. This step takes advantage of the fact that each Word2Vec dimension encodes a set of commonly adjacent words, and different sets represent different axes of similarity and should be weighted differently
    • Use an unsupervised clustering algorithm, namely either k-means or DBSCAN, to find semantically similar clusters of words
    • We are also exploring augmenting this approach with a combined distance metric:

    d(w1, w2) = a * (semantic distance) + b * (co-appearance distance)

    where the latter is a Jaccard distance metric
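A minimal sketch of the combined distance above, using toy 3-dimensional embeddings and hypothetical co-appearance sets (the real pipeline uses 50-dimensional Word2Vec vectors and ad-level co-occurrence; cosine distance is assumed here as the semantic distance):

```python
import numpy as np

def semantic_distance(v1, v2):
    """Cosine distance between two embedding vectors."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return 1.0 - (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def jaccard_distance(ads1, ads2):
    """1 - |intersection| / |union| over the sets of ads each object appears in."""
    return 1.0 - len(ads1 & ads2) / len(ads1 | ads2)

def combined_distance(v1, v2, ads1, ads2, a=0.5, b=0.5):
    """d(w1, w2) = a * (semantic distance) + b * (co-appearance distance)."""
    return a * semantic_distance(v1, v2) + b * jaccard_distance(ads1, ads2)

# Toy example: "beach" and "ocean" are semantically close and co-appear often.
beach, ocean = [1.0, 0.2, 0.0], [0.9, 0.3, 0.1]
d = combined_distance(beach, ocean, {1, 2, 3}, {2, 3, 4})
print(round(d, 3))
```

The weights a and b would be tuned so that the resulting clusters match manual inspection of sample images.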

    Each of these components represents a choice the advertiser made when creating the messaging for an ad. Now that we have a variety of ads broken down into components, we can ask: which components are associated with ads that perform well or not so well?

    We use a fixed effects [1] model to control for unobserved differences in the contexts in which different ads were served. This is because the features we are measuring are observed multiple times in different contexts, i.e., ad copy, audience group, time of year, and device on which the ad is served.

    The trained model estimates the impact of individual keywords, phrases, and image components in the Discovery ad copies. It models Interaction Rate (denoted ‘IR’ in the following formulas) as a function of individual ad copy features plus controls.
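The model form appears as an image in the original post; a plausible reconstruction, assuming a standard linear fixed-effects specification, is:

```latex
IR_i = \beta_0 + \sum_{k} \beta_k \, x_{ik} + \alpha_{g(i)} + \varepsilon_i
```

where \(x_{ik}\) are the text and image features of ad \(i\), \(\alpha_{g(i)}\) is the fixed effect for the context group in which the ad was served (audience group, time of year, device), and \(\varepsilon_i\) is the error term.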

    We use ElasticNet to spread the effect of features in the presence of multicollinearity and improve the explanatory power of the model.
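The ElasticNet formula also appears as an image in the original post; in standard notation it minimizes a squared-error loss with combined L1/L2 penalties, where λ controls the overall regularization strength and ρ the L1/L2 mix:

```latex
\hat{\beta} = \arg\min_{\beta} \;
  \frac{1}{2n} \sum_{i=1}^{n} \Bigl( IR_i - \beta_0 - \sum_{k} \beta_k \, x_{ik} \Bigr)^2
  + \lambda \Bigl( \rho \, \lVert \beta \rVert_1 + \frac{1 - \rho}{2} \, \lVert \beta \rVert_2^2 \Bigr)
```

The L1 term drives uninformative features to zero, while the L2 term distributes weight across correlated features rather than arbitrarily picking one.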

    “Machine Learning model estimates the impact of individual keywords, phrases, and image components in discovery ad copies.”

    – Manisha Arora, Data Scientist


    Outputs & Insights

    Outputs from the machine learning model help us determine the significant features. The coefficient of each feature represents its percentage-point effect on CTR.

    In other words, if the mean CTR without the feature is X% and feature ‘xx’ has a coefficient of Y, then the mean CTR with feature ‘xx’ included will be (X + Y)%. This helps us estimate the expected CTR if the most important features are included in the ad copies.
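The coefficient arithmetic above can be illustrated with hypothetical numbers:

```python
# Hypothetical values: baseline CTR without the feature (X) and the
# model coefficient for feature 'xx' (Y), both in percentage points.
baseline_ctr = 2.0   # X%: mean CTR without the feature
coeff_xx = 0.5       # Y: percentage-point lift attributed to feature 'xx'

expected_ctr = baseline_ctr + coeff_xx  # (X + Y)%
print(f"Expected CTR with feature 'xx': {expected_ctr}%")  # Expected CTR with feature 'xx': 2.5%
```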

    Key-takeaways (sample insights):

    We analyze keywords and imagery tied to the unique value propositions of the product being advertised. There are six key value propositions we study in the model.


    Although insights from DisCat are quite accurate and highly actionable, the model does have a few limitations:

    1. The current model considers individual keywords rather than groups of keywords that might be driving ad performance (for example, the phrase “Buy Now” rather than the individual keywords “Buy” and “Now”).

    2. Inference and predictions are based on historical data and aren’t necessarily an indication of future success.

    3. Insights are based on industry-level data and may need to be tailored for a given advertiser.

    DisCat breaks down exactly which features are working well for the ad and which ones have scope for improvement. These insights can help us identify high-impact keywords in the ads, which can then be used to improve ad quality and thus business outcomes. As a next step, we recommend testing the new ad copies with experiments to provide a more robust analysis. The Google Ads A/B testing feature also allows you to create and run experiments to test these insights in your own campaigns.


    Discovery Ads are a great way for advertisers to extend their social outreach to millions of people across the globe. DisCat helps break down discovery ads by analyzing text and images separately and using advanced ML/AI techniques to identify the key aspects of the ad that drive greater performance. These insights help advertisers identify room for growth, identify high-impact keywords, and design better creatives that drive business outcomes.


    Thank you to Shoresh Shafei and Jade Zhang for their contributions. Special mention to Nikhil Madan for facilitating the publishing of this blog.


    1. Greene, W. H. (2011). Econometric Analysis, 7th ed. Prentice Hall; Cameron, A. C., & Trivedi, P. K. (2005). Microeconometrics: Methods and Applications. Cambridge University Press.

    Come to the Tag1 & Google Performance Workshop at DrupalCon Europe 2022, Prague

    Posted by Andrey Lipattsev, EMEA CMS Partnerships Lead

    TL;DR: If you’re attending @DrupalConEur, submit your URL at https://bit.ly/CWV-DrupalCon-22 to get your UX and performance right on #Drupal at the Tag1 & Google interactive workshop.

    Getting your User Experience right, which includes performance, is critical for success. It’s a key driver of many success metrics (https://web.dev/tags/web-vitals) and a factor taken into account by platforms, including search engines, that surface links to your site (https://developers.google.com/search/docs/advanced/experience/page-experience).

    Quantifying User Experience is not always easy, so one way to measure, track and improve it is by using Core Web Vitals (CWV, https://web.dev/vitals/). Building a site with great CWV on Drupal is easier than on many platforms on average (https://bit.ly/CWV-tech-report) and yet there are certain tips and pitfalls you should be aware of.

    In this workshop, the team from Tag1 and Google (Michael Meyers, Andrey Lipattsev and others) will use real-life examples of Drupal-based websites to illustrate some common pain points and the corresponding solutions. If you would like us to take a look at your website and provide actionable advice, please submit the URL via this link (https://bit.ly/CWV-DrupalCon-22). The workshop is interactive, so bring your laptop – we’ll get you up and running and teach you hands-on how to code for the relevant improvements.

    We cannot guarantee that all the submissions will be analysed as this depends on the number of submissions and the time that we have. However, we will make sure that all the major themes cutting across the submitted sites will be covered with relevant solutions.

    See you in Prague!

    Date & Time: Wednesday 21.09.2022, 16:15-18:00