GDE community highlight: Lars Knudsen

Posted by Monika Janota, Community Manager

Lars Knudsen is a Google Developer Expert; we talked to him about how a $10 device can make computers more accessible for people with disabilities.

 

Monika: What inspired you to become a developer? What’s your current professional focus?

Lars: I got my MSc in engineering, but in fact my interest in tech started much earlier. When I was a kid in the 80s, my father owned a computing company working with graphic design. Sometimes, especially during the summer holidays, he would take me to work with him. At times, some of his employees would keep an eye on me. There was this really smart guy who once said to me, “Lars, I need to get some work done, but here’s a C manual, and there’s a computer over there. Here’s how you start a C compiler. If you have any questions, come and ask me.” I started to write short texts that were translated into something the computer could understand. It seemed magical to me. I was 11 years old when I started and around seventh grade, I was able to create small applications for my classmates or to be used at school. That’s how it started.

Over the years, I’ve worked for many companies, including Nokia, Maersk, and Openwave. At the beginning, as in many other professions, knowing a little makes you feel like you can do everything, but with time you learn that each company has its own way of doing things.

After a few years of working for a medical company, I started my own business in 1999. I worked as a freelance contractor and, thanks to that, had the chance to get to know multiple organizations quickly. After completing the first five contracts, I found out that every company thinks they’ve found the perfect setup, but all of them are completely different. At that time, I was also exposed to a lot of different technologies, operating systems, and so on. Around my early twenties, my mindset changed. At the beginning, I was strictly focused on one technology and wanted to learn all about it. With time, I started to think about combining technologies as a way of improving our lives. I have a particular interest in narrowing the gap between what we call the A team and the B team in the world. I try to transfer as much knowledge as possible to regions where people don’t have the luxury of owning a computer or studying at university free of charge.

I continue to work as a contractor for external partners but, whenever possible, I try to choose projects that have some kind of positive impact on the environment or society. I’m currently working on embedded software for a hearing-aid company called Oticon. Software-wise, I’ve been working on everything from the tiniest microcontrollers to the cloud; a lot of what I do revolves around the web. I’m trying to combine technologies whenever it makes sense.

Monika: Were you involved in developer communities before joining the Google Developer Experts program?

Lars: Yes, I was engaged in meetups and conferences. I first connected with the community while working for Nokia. Around 2010, I met Kenneth Rohde Christiansen, who became a GDE before me. He inspired me to see how web technologies can be useful for aspiring tech professionals in developing countries. Developing and deploying solutions using C++, C# or Java requires some years of experience, but everyone who has access to a computer, browser, and notepad can start developing web-based applications and learn really fast. It’s possible to build a fully functional application with limited resources, and ramp up from nothing. That’s why I call the web a very democratizing technology stack.

But back to the community—after a while I got interested in web standardization and what problems bleeding edge web technologies could solve. I experimented with new capabilities in a browser before release. I was working for Nokia at the time, developing for a Linux-based flagship device, the N9. The browser we built was WebKit based and I got some great experience developing features for a large open source project. In the years after leaving Nokia, I got involved in web conferences and meetups, so it made sense to join the GDE community in 2017.

I really enjoy the community work and everything we’re doing together, especially the pre-pandemic Chrome Developer Summits, where I got to help with booth duty alongside a bunch of awesome Google Engineers and other GDEs.

Monika: What advice would you give to a young developer who’s just starting their professional career and is not sure which path to take?

Lars: I’d say from my own experience—if you can afford it—consider freelancing for a couple of different companies. This way, you’ll be exposed to code in many different forms and stages of development. You’ll get to know a multitude of operating systems and languages, and learn how to resolve problems in many ways. This helped me a lot; I gained experience as a senior developer in my twenties. This approach will help you achieve your professional goals faster.

Besides that, have fun, explore, play with the hardware and software. Consider building something that solves a real problem—maybe for your friends, family, or a local business. Don’t be afraid to jump into something you’ve never done before.

Monika: What does the future hold for web technologies?

Lars: I think that for a couple of years now the web has been fully capable of providing a platform for large applications, both for consumers and for business. On the server side, web technologies offer a seamless experience, especially for frontend developers who want to build a backend component; it’s easier for them to get started now. I know people who were using both Firebase and Heroku to get the job done. And this trend will grow—web technologies will be enough to build complex solutions of any kind. I believe that Web Capabilities (Project Fugu 🐡) really unlocks that potential.

Looking at it from a slightly different point of view, I also think that if we provide full documentation and in-depth articles not only in English but also in other languages (for example, Spanish and Portuguese), we would unlock a lot of potential in Latin America—and other regions, of course. Developers there often don’t know English well enough to fully understand all the relevant articles. We should also give them the opportunity to learn as early as possible, even before they start university, while still in their hometowns. They may use those skills to help local communities and businesses before they leave home and maybe never come back.

Thomas: You came a long way from doing C development on a random computer to hacking on hardware. How did you do that?

Lars: I started taking apart a lot of the hardware I had at home. My dad was not always happy when I couldn’t put it back together. With time, I learned how to build some small devices, but it really took off much later, around the time I joined Nokia, where I got my embedded experience. I had the chance to build small screensavers and components for the Series 30 phones. I was really passionate about it and could really think outside the box. They assigned me the task of building a Snake game for those devices. It was a very interesting experience. The main difference between building embedded systems and most other things (including the web) is that you have to keep a small footprint—you don’t have much space or memory to use. While building Snake, the RAM I had available was less than one-third of the frame buffer (around 120 x 120 pixels). I had to come up with ways to algorithmically rejoin components on screen so they’d look static, as if they were tiles. I learned a lot—that was the move from larger systems to small, embedded solutions.

Thomas: The skill set of a typical frontend developer is very different from the skill set of someone who builds embedded hardware. How would you encourage a frontend developer to look into hardware and to start thinking in binary?

Lars: I think that the first step is to look at some of the Fugu APIs that work in Chrome and Edge, and are built into all the major systems today. That’s all you need at the start.

Another thing is that the toolchains for building embedded solutions have a steep learning curve. If you want to build your own custom hardware, start with Arduino or ESP32—something that is easy to buy and fairly cheap. With the right development environment, you can get your project up and running in no time.

You could also buy a heart rate monitor or a multisensor unit that already uses Bluetooth GATT services, so you don’t have to build your own hardware or firmware—you can use what’s already there and start experimenting with the Web Bluetooth API to communicate with it.

There are also devices that use a serial protocol—for these, you can use the Web Serial API (also Fugu). Recently I’ve been looking into using the WebHID API, which enables you to talk to all the human interface devices that everyone has access to. I found some old ones in my basement that had not been supported by any operating system for years, but thanks to reverse engineering it took me a few hours to re-enable them.

There are different approaches depending on what you want to build, but to a web developer I would say, get a solid sensor unit, maybe a Thingy 52 from Nordic Semiconductor; it has a lot of sensors, and you can hook up to your web application with very little effort.

Thomas: Connecting to the device is the first step, but then speaking to it effectively—that’s a whole other thing. How come you did not give up after facing obstacles? What kept you motivated to continue working?

Lars: For me personally the social aspect of solving a problem was the most important. When I started working on my own embedded projects, I had a vision and a desire to build a science lab in a box for developing regions. My wife is from Mexico and I saw some of the schools there; some that are located outside of the big cities are pretty shabby, without access to the materials and equipment that we have in our part of the world.

The passion for building something that can potentially be used to help others—that’s what kept me going. I also really enjoyed the community support. I reached out to some people at Google and all were extremely helpful and patiently answered all of my questions.

Thomas: A lot of people have some sort of hardware at home, but don’t know what to do with it. How do you find inspiration for all your amazing projects, in particular the one under the working name SimpleMouse?

Lars: Well, recently I have in fact been reviving a lot of old hardware, but for this particular project—the name has not been set yet, but let’s call it SimpleMouse—I drew on my own experience. I had worked with some accessibility solutions earlier and saw how some of them just don’t work anymore; you’d need an old Windows XP machine with certain software installed to run them. You can’t really update those setups, and you can only use them at home because you can’t move them.

Because of that, I wondered how to combine my skills from the embedded world with Project Fugu and what is now possible on the web to create cheap, affordable hardware combined with easy-to-understand software on both sides, so people can build on that.

For that particular project, I took a small USB dongle built around the nRF52840 chip. It communicates over Bluetooth on one side and USB on the other. You can basically program it to be anything on both sides. And then I thought about the devices that control a computer—a mouse and a keyboard. Some people with disabilities may find it difficult to operate those devices, and I wanted to help them.

The first thing I did was to make sure that any operating system would see the USB dongle as a mouse. You can control it from a native application or a web application, directly over Bluetooth. After that, I built a web application—a simple template that people can extend the way they want using web components. Thanks to that, anyone can control their computer with a web app that I made in just a couple of hours on an Android phone.

Having that set up will enable anyone in the world with some web experience to build, in a matter of days, a very customized solution for anyone with a disability who wants to control their computer. The cool thing is that you can take it with you anywhere you go and use it with other devices as well. It will be the exact same experience. To me, the portability and affordability of the device are very important because people are no longer confined to using their own devices, and are no longer limited to one location.

Thomas: Did you have a chance to test the device in real life?

Lars: Actually during my last trip to Mexico I discussed it with a web professional living there; he’s now looking into the possibilities of using the device locally. Over there the equipment is really expensive, but a USB dongle normally costs around ten US dollars. He’s now checking if we could build local setups there to try it out. But I haven’t done official trials yet here in Denmark.

Thomas: Many devices designed to assist people with disabilities are really expensive. Are you planning on cooperating with any particular company and putting it into production for a fraction of the price of that expensive equipment?

Lars: Yes, definitely! I’ve already been talking to a local hardware manufacturer about that. Of course, the device won’t replace all those highly specialized solutions, but it can be the first step to building something bigger—for example, using voice recognition, already available for web technologies. It’ll be an easy way of controlling devices using your Android phone; it can work with a device of any kind.

Just being able to build whatever you want on the web and to use that to control any host computer opens up a lot of possibilities.

Thomas: Are you releasing your Zephyr project as open source? What kind of license do you use? Are there plans to monetize the project?

Lars: Yes, the solution is open source. I did not put a specific license on it, but I think Apache 2.0 would be the way to go. Many major companies use this license, including Google. When I worked on SimpleMouse, I did not think about monetizing the project—that was not my goal. But I also think it would make sense to try to put it into production in some way, and with this comes cost. The ultimate goal is to make it available. I’d love to see it being implemented at a low cost and on a large scale.

How to use App Engine pull tasks (Module 18)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The Serverless Migration Station mini-series helps App Engine developers modernize their apps to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17, or to sister serverless platforms Cloud Functions and Cloud Run. Another goal of this series is to demonstrate how to move away from App Engine’s original APIs (now referred to as legacy bundled services) to Cloud standalone replacement services. Once no longer dependent on these proprietary services, apps become much more portable and flexible.

App Engine’s Task Queue service provides infrastructure for executing tasks outside of the standard request-response workflow. Tasks may consist of workloads exceeding request timeouts or periodic tangential work. The Task Queue service provides two different queue types, push and pull, for developers to perform auxiliary work.

Push queues are covered in Migration Modules 7-9, demonstrating how to add use of push tasks to an existing baseline app followed by steps to migrate that functionality to Cloud Tasks, the standalone successor to the Task Queues push service. We turn to pull queues in today’s video where Module 18 demonstrates how to add use of pull tasks to the same baseline sample app. Module 19 follows, showing how to migrate that usage to Cloud Pub/Sub.

Adding use of pull queues

In addition to registering page visits, the sample app needs to be modified to track visitors. Visits consist of a timestamp and visitor information such as the IP address and user agent. We’ll modify the app to use the IP address and track how many visits come from each address seen. The home page is modified to show the top visitors in addition to the most recent visits:

The sample app’s updated home page tracking visits and visitors

When visits are registered, pull tasks are created to track the visitors. The pull tasks sit patiently in the queue until they are processed in aggregate periodically. Until that happens, the top visitors table stays static. These tasks can be processed in a number of ways: periodically by a cron or Cloud Scheduler job, by a separate App Engine backend service, explicitly by a user (via a browser or command-line HTTP request), by an event-triggered Cloud Function, etc. In the tutorial, we issue a curl request to the app’s endpoint to process the enqueued tasks. When all tasks have completed, the table then reflects any changes to the current top visitors and their visit counts:

Processed pull tasks update the top visitors table

Below is some pseudocode representing the core part of the app that was altered to add Task Queue pull task usage: a new data model class, VisitorCount, to track visitor counts; enqueuing a (pull) task to update visitor counts when registering individual visits in store_visit(); and, most importantly, a new function fetch_counts(), accessible via /log, to process enqueued tasks and update overall visitor counts. The bolded lines represent the new or altered code; a hedged sketch of those pieces follows below.
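
To make those pieces concrete, here is a minimal sketch, not the actual Module 18 code, of how the legacy Task Queue pull API fits together in Python. The queue name, model fields, and helper structure are assumptions for illustration:

from google.appengine.api import taskqueue
from google.appengine.ext import ndb

pull_q = taskqueue.Queue('pullq')  # pull queue, assumed declared in queue.yaml

class VisitorCount(ndb.Model):
    # per-visitor counter entity (fields assumed)
    visitor = ndb.StringProperty()
    counter = ndb.IntegerProperty(default=0)

def store_visit(remote_addr, user_agent):
    # ... existing code that stores the Visit entity goes here ...
    # enqueue a pull task so the visitor can be tallied later
    pull_q.add(taskqueue.Task(payload=remote_addr, method='PULL'))

def fetch_counts():
    # lease pending pull tasks (up to 100, for one hour) and tally visitors
    tasks = pull_q.lease_tasks(3600, 100)
    for task in tasks:
        entity = VisitorCount.query(VisitorCount.visitor == task.payload).get()
        if not entity:
            entity = VisitorCount(visitor=task.payload)
        entity.counter += 1
        entity.put()
    if tasks:
        pull_q.delete_tasks(tasks)  # acknowledge so tasks aren't re-leased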

Adding App Engine Task Queue pull task usage to the sample app: ‘Before’ (Module 1) on the left and ‘After’ (Module 18) with altered code on the right

Wrap-up

This “migration” consists of adding Task Queue pull task usage to support tracking visitor counts to the Module 1 baseline app and arrives at the finish line with the Module 18 app. To get hands-on experience doing it yourself, do the codelab by hand and follow along with the video. Then you’ll be ready to upgrade to Cloud Pub/Sub should you choose to do so.

In Fall 2021, the App Engine team extended support for many of the bundled services to 2nd-generation runtimes (for languages that have a 1st-generation runtime), meaning you are no longer required to migrate pull tasks to Pub/Sub when porting your app to Python 3. You can continue using Task Queue in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Pub/Sub, see Module 19, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code) is available at its open source repo. While we’re initially focusing on Python users, the Cloud team will cover other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

When to step up your Google Pay transactions as a PSP

Posted by Dominik Mengelt, Developer Relations Engineer, Google Pay and Nick Alteen, Technical Writer, Engineering, Wallet

What is step-up authentication?

When processing payments, step-up authentication (or simply “step-up”) is the practice of requiring additional authentication measures based on user activity and certain risk signals. For example, redirecting the user to 3D Secure to authenticate a transaction. This can help to reduce potential fraud and chargebacks. The following graphic shows the high-level flow of a transaction to determine what’s to be done if step-up is needed.

Figure 1: Trigger your Risk Engine before sending the transaction to authorization if step-up is needed

Does a Google Pay transaction always require step-up? It depends! When making a transaction, the Google Pay API response will return one of the following:

  • An authenticated payload that can be processed without any further step-up or challenge. For example, when a user adds a payment card to Google Wallet. In this case, the user has already completed identity verification with their issuing bank.
  • A primary account number (PAN) that requires additional authentication measures, such as 3D Secure. For example, a user making a purchase with a payment card previously stored through Chrome Autofill.

You can use the allowedAuthMethods parameter to indicate which authentication methods you want to support for Google Pay transactions:

"allowedAuthMethods": [
  "CRYPTOGRAM_3DS",
  "PAN_ONLY"
]

In this case, you’re asking Google Pay to display the payment sheet for both types. If the user selects a PAN_ONLY card (a card that is not tokenized and not enabled for contactless payments) from the payment sheet during checkout, step-up is needed. Let’s have a look at two concrete scenarios:

In the first scenario, the Google Pay sheet shows a card previously added to Google Wallet. The card art and name of the user’s issuing bank are displayed. If the user selects this card during the checkout process, no step-up is required because it would fall under the CRYPTOGRAM_3DS authentication method.

On the other hand, the sheet in the second scenario shows a generic card network icon. This indicates a PAN_ONLY authentication method and therefore needs step-up.

PAN_ONLY vs. CRYPTOGRAM_3DS

Whether to accept both forms of payment is your decision. For CRYPTOGRAM_3DS, the Google Pay API additionally returns a cryptogram and, depending on the network, an eciIndicator. Make sure to use those properties when continuing with authorization.

PAN_ONLY

This authentication method is associated with payment cards from a user’s Google Account. Returned payment data includes the PAN with the expiration month and year.

CRYPTOGRAM_3DS

This authentication method is associated with cards stored as Android device tokens provided by the issuers. Returned payment data includes a cryptogram generated on the device.

When should you step up Google Pay transactions?

When calling the loadPaymentData method, the Google Pay API will return an encrypted payment token (paymentData.paymentMethodData.tokenizationData.token). After decryption, the paymentMethodDetails object contains a property, assuranceDetails, which has the following format:

"assuranceDetails": {
  "cardHolderAuthenticated": true,
  "accountVerified": true
}

Depending on the values of cardHolderAuthenticated and accountVerified, step-up authentication may be required. The following table indicates the possible scenarios and when Google recommends step-up authentication for a transaction:

cardHolderAuthenticated | accountVerified | Step-up needed
true                    | true            | No
false                   | true            | Yes

Step-up can be skipped only when both cardHolderAuthenticated and accountVerified return true.
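
Server-side, that decision reduces to a few lines. Below is a hedged sketch in Python, assuming the decrypted token has already been parsed into a dict named payment_method_details that mirrors the fields shown above:

def needs_step_up(payment_method_details):
    # Route the transaction to step-up (e.g. 3D Secure) unless both
    # assurance signals are present and true.
    assurance = payment_method_details.get('assuranceDetails', {})
    authenticated = assurance.get('cardHolderAuthenticated', False)
    verified = assurance.get('accountVerified', False)
    return not (authenticated and verified)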

Next steps

If you are not using assuranceDetails yet, consider doing so now and make sure to step up transactions if needed. Also, make sure to check out our guide on Strong Customer Authentication (SCA) if you are processing payments within the European Economic Area (EEA). Follow @GooglePayDevs on Twitter for future updates. If you have questions, mention @GooglePayDevs and include #AskGooglePayDevs in your tweets.

Experts share insights on Firebase, Flutter and the developer community

Posted by Komal Sandhu – Global Program Manager, Google Developer Groups

Rich Hyndman, Manager, Firebase DevRel (left) and Eric Windmill, Developer Relations Engineer, Firebase and Flutter (right)

“Firebase and Flutter offer many tools that ‘just work’, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.”

Among the many inspiring experts in the developer communities for Firebase and Flutter are Rich Hyndman and Eric Windmill. Each Googler serves their respective product team from the engineering and community sides and has a keen eye towards the future. Read on to see their outlook on their favorite Firebase and Flutter tools and the developers that inspire them.

===

What is your title, and how long have you been at Google?

Rich: I run Firebase Developer Relations, and I’ve been at Google for around 11 years.

Eric: I’m an engineer on the Flutter team and I’ve been at Google for a year.


Tell us about yourself:

Rich: I’ve always loved tech, from techy toys as a kid to anything that flies. I still get tech-joy when I see new gadgets and devices. I built and raced drones for a while, but mobile/cell phones are the ultimate gadget for me and enabled my career.

Eric: I’m a software engineer, and these days I’m specifically a Developer Relations Engineer. I’m not surprised I’ve ended up here, as I like to joke “I like computers but I like people more.” Outside of work, most of my time is spent thinking about music. I’m pretty poor at playing music, but I’ve always consumed as much as I could. If I had to choose a different job and start over, I’d be a music journalist.

How did you get started in this space?

Rich: I’ve always loved mobile apps: being able to carry my work in my pocket, play with it, test it, demo it, and be proud of it. From the beginning of my career right up till today, it’s still the best. I worked on a few mobile projects pre-Android and was part of an exciting mobile tech startup for a few years, but it was Android that really kick-started my career.

I quickly fell in love with the little green droid and the entire platform, and through a combination of meetups, competition entries and conferences I ended up in contact with Android DevRel at Google.

Firebase is a natural counterpart to Android and I love being able to support developers from a different angle. Firebase also supports Flutter, Web, and iOS, which has also given me the opportunity to learn more about other platforms and meet more developers.

Eric: I got into this space by accident. At my first software job, the company was already using Dart for their web application, and started rebuilding their mobile apps in Flutter soon after I joined. I think that was around 2016 or 2017. Flutter was still in its Alpha stage. I was introduced to Firebase at the same job, and I’ve used various tools from the Firebase SDK ever since.

What are some challenges that you have seen developers facing?

Rich: Developers often want to get up and running with new projects quickly, but then iterate and improve their apps. No-code solutions can be great to start with but aren’t flexible enough down the road. A lower-code solution like Firebase can be quick to get started, and it can also provide control. Bringing Flutter and Firebase together creates a powerful and flexible combination.

Eric: Regardless of the technology, I think the biggest challenge developers face is actually with documentation. It doesn’t matter how good a product is if the docs are hard to find or hard to understand. We’ve seen this ourselves recently as Flutter became an “official” supported platform on Firebase in May 2022. When that happened, we moved the documentation from the Flutter site to the Firebase site, and folks didn’t know how to find the docs. It was an oversight on our part, but it’s a good example of the importance of docs. They deserve way more attention than they get in many, many cases.


What do you think is the most interesting or useful resource to learn more about Firebase & Flutter? Is there a particular library or codelab that everyone should learn?

Rich: The official docs have to be first, located at firebase.google.com. We have a great repository of Learning Pathways, including Add Firebase to your Flutter App. We’re also just launching our new Solutions Portal with over 60 solutions guides indexed already.

Eric: If I have to name only one resource, it’d be this codelab: Get to know Firebase for Flutter. But Firebase offers so many tools; this codelab is just an introduction to what’s possible.

What are some inspiring ways that developers are building with Firebase and Flutter together?

Rich: We’ve had an interesting couple of years at Firebase. Firebase has always been known for powering real-time, data-driven apps. If you used a Covid stats app during the pandemic, there’s a fair chance it was running on Firebase; there was a big surge of new apps.

Eric: Lately I’ve seen an interest in using Flutter to make 2D games, and using some Firebase tools for the back end of the game. I love this. Games are just more fun than apps, of course, but it’s also great to see folks using these technologies in ways that aren’t the explicit purposes. It shows creativity and excellent problem solving.

What’s a specific use case of Firebase & Flutter technology that excites you?

Rich: Firebase Extensions are very exciting. They are pre-packaged bundles of code that make it easy to add new features to your app from Google and partners like Stripe and Vonage. We just launched the Extensions Marketplace and opened up the ability for developers to build extensions for their own apps through our Provider Alpha program.

Eric: Flutter web and Firebase Hosting are just a no-brainer. You can deploy a Flutter app to the web in no time.

How can developers be successful building on Firebase & Flutter?

Rich: There’s a very powerful combination with Crashlytics, Performance Monitoring, A/B Testing and Remote Config. Developers can quickly improve the stability of their apps whilst also iterating on features to deliver the best experience for their users. We’ve had a lot of success with improving monetization, too. Check out some of our case studies for more details.

Eric: Flutter developers can be successful by leveraging all that Firebase offers. Firebase might seem intimidating because it offers so much, but it excels at being easy to use, and I encourage all web and mobile developers to poke around. They’re likely to find something that makes their lives easier.


What’s next for the Firebase & Flutter Communities? What might the future look like?

Rich: Over the next year, we’ll be focusing on modern app development and some more opinionated guides: better support for Flutter, Kotlin, Jetpack Compose, Swift/SwiftUI, and modern web frameworks.

Eric: There is a genuine effort amongst both teams to support each other. Flutter and Firebase are just such a great pair, that it makes sense for us to encourage our communities to check out one another. In the future, I think this will continue. I think you’ll see a lot of Flutter at Firebase events, and vice versa.

How does Firebase & Flutter help expand the impact of developers?

Rich: Firebase has always focused on helping developers get their apps up and running by providing tools to streamline time-consuming tasks, enabling them to focus on delivering the best app experiences and the most value to their users.

Eric: Flutter is an app-building SDK that is a joy to use. It seriously increases velocity because it’s cross-platform. Firebase and Flutter offer many tools that “just work”, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.


Want to learn more about Google Technologies like Firebase & Flutter? Hoping to attend a DevFest or Google Developer Groups (GDG)? Find a GDG hosting a DevFest near you here.

#WeArePlay | Discover what inspired 4 game creators around the world

Posted by Leticia Lago, Developer Marketing

From exploring the great outdoors to getting your first computer – a seemingly random moment in your life might one day be the very thing which inspires you to go out there and follow your dreams. That’s what happened to four game studio founders featured in our latest release of #WeArePlay stories. Find out what inspired them to create games which are entertaining millions around the globe.

Born and raised in Salvador, Brazil, Filipe was so inspired by the city’s cultural heritage that he studied History before becoming a teacher. One day, he realised games could be a powerful medium to share Brazilian history and culture with the world. So he founded Aoca Game Lab, and their first title, ÁRIDA: Backland’s Awakening, is a survival game based in the historic town of Canudos. Aoca Game Lab took part in the Indie Games Accelerator and has also been selected to receive the Indie Games Fund. With help from these Google Play programs, they will take the game and studio to the next level.
Next, Marko from Serbia. As a chemistry student, he was never really interested in tech – then he received his first computer and everything changed. He quit his degree to focus on his new passion and now owns the successful studio Peaksel, with over 480 million downloads. One of their most popular titles is 100 Doors Games: School Escape, with over 100 levels to challenge the minds of even the most experienced players.
And now onto Liene from Latvia. She often braves the great outdoors and discovers what nature has to offer – so much so that she organizes team-building, orienteering-based games for the team at work. Seeing their joy as they explore the world around them inspired her to create Roadgames. It guides players through adventurous scavenger hunts, discovering new terrain.
And lastly, Xin from Australia. After years working in corporate tech, he gave it all up to pursue his dream of making mobile games inspired by the 90s video games he played as a child. Now he runs his studio, Savy Soda, and despite all his success with Pixel Starships and its millions of downloads, his five-year-old child gives him plenty of feedback.

Check out all the stories now at g.co/play/weareplay and stay tuned for even more coming soon.



Open Source Pass Converter for Mobile Wallets

Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each mobile wallet app implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain a code base for each platform.

As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we’re excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

The Pass Converter converting an external pkpass file to a Google Wallet pass

The Pass Converter launches with support for the Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for another wallet. The following pass types are supported on their respective platforms:

  • Event tickets
  • Generic passes
  • Loyalty/Store cards
  • Offers/Coupons
  • Flight/Boarding passes
  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features provide additional customization for your needs.

  • A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
  • For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
  • Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.

If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

The following command provides a demo web page on http://localhost:3000 to test converting passes.

node app.js demo

The next command converts passes locally. If the output path is omitted, the Pass Converter will output JSON to the terminal (for PKPass files, this will be the contents of pass.json).

node app.js <pass input path> <pass output path>

Lastly, the following command runs the Pass Converter as a web service. This service accepts POST requests to the root URL (e.g. http://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file.

node app.js
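
If you would rather script the web service than use curl, a request along these lines should work; note that the multipart field name ('file') is an assumption here, so check the repository’s documentation for the exact contract:

import requests  # third-party HTTP client: pip install requests

with open('example.pkpass', 'rb') as pass_file:
    response = requests.post(
        'http://localhost:3000/',                      # converter web service
        files={'file': ('example.pkpass', pass_file)}  # multipart/form-data body
    )
print(response.text)  # the converted pass returned by the service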

Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

Machine Learning Communities: Q3 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!

TensorFlow/Keras


Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares the lessons and findings they learned from conducting load tests for an image classification model across numerous deployment configurations.

TFUG Taipei hosted events (Python + Hugging Face Translation + tf.keras.losses, Python + Object Detection, and Python + Hugging Face Token Classification + tf.keras.initializers) in September, helping community members learn how to use TensorFlow and Hugging Face to implement machine learning models to solve problems.

Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.

Serving a TensorFlow image classification model as RESTful and gRPC based services with TFServing, Docker, and Kubernetes

Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Actions.

Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.

Screenshot of the TensorFlow Forum in the Chinese Language run by the tf.wiki team

Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering the questions from Chinese developers on the forum.

Photo of a phone with the Hindi letter ‘Ohm’ drawn on the top half of the screen; Hindi character recognition shows the letter Ohm as the predicted result below

Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. At a TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.

Using TFlite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet and how to import and use the YAMNet-based custom models in Android projects.

SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea); at the time, the SegFormer model was not available in TensorFlow.

Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.

The largest vision model checkpoint (public) in TF (10 Billion params) through 🤗 transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.

A simple TensorFlow implementation of a DCGAN to generate CryptoPunks

CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.

TFX

TFX machine learning pipeline, from data ingestion in TFRecord to pushing out to Vertex AI

MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.

First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source). Google and other developers maintain it through bi-weekly meetings. Google’s Open Source Peer Award has recognized the work.

TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode covers Pusher: publishing your models with TFX.

Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for semantic segmentation task with TFX and various GCP products such as Vertex Pipeline, Training, and Endpoints.

JAX/Flax

Screenshot of Tutorial 2 (JAX): Introduction to JAX+Flax, with GitHub repo and codelab, via the University of Amsterdam

JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.

TFUG Malaysia hosted Introduction to JAX for Machine Learning (video) and Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use when executing deep learning workloads. After that, they started training their first JAX-powered deep learning model.

TFUG Taipei hosted Python+ JAX + Image classification and helped people learn JAX and how to use it in Colab. They shared knowledge about the difference between JAX and Numpy, the advantages of JAX, and how to use it in Colab.

Introduction to JAX by ML GDE João Araújo (Brazil) shared the basics of JAX in Deep Learning Indaba 2022.

A comparison of the performance and overview of issues resulting from changing from NumPy to JAX

Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares the performance and overview of the issues that may result from changing from NumPy to JAX.

Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained the pure function and PRNG, which make JAX explicit and reproducible, and XLA and mapping functions which make JAX fast and easily parallelized.

Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial demonstrating how to pre-train Data2Vec using the JAX/Flax version of HuggingFace Transformers.

Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.

Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. He has also written several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, etc.

Kaggle

DDPMs – Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.

Forward process in DDPMs from Timestep 0 to 100

Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.

In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.

Cloud AI

How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).

ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google

In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.

A tour of Vertex AI by TFUG Chennai for ML, cloud, and DevOps engineers who are working in MLOps. This session was about the introduction of Vertex AI, handling datasets and models in Vertex AI, deployment & prediction, and MLOps.

TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.

Flow chart showing how to deploy a ViT B/16 model on Vertex AI

Deploying 🤗 ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.

Photo collage of AI generated images

TFUG Singapore hosted The World of Diffusion – DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks named “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up-to-date with what has been going on in the world of image generation.

ML GDE Martin Andrews (Singapore) has done three projects: GCP VM with Nvidia set-up and Convenience Scripts; Containers within a GCP host server, with Nvidia pass-through; and Installing MineRL using Containers, with linked code.

Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.

Google Cloud's Two Towers Recommender and TensorFlow

Train and Deploy Google Cloud’s Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.

Research & Ecosystem

Flyer for #MLPaperReadingClubs (a Machine Learning paper reading club) by Women in Data Science La Paz, led by Nathaly Alarcón (@WIDS_LaPaz): Read, Learn and Share the knowledge

The first session of #MLPaperReadingClubs (video) by ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz. Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”

In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session “Zero-shot learning through cross-modal transfer.”

Screenshot of a screenshare of Zero-shot learning through cross-modal transfer to 7 participants in a virtual call

ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.

Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.

EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video) by TFUG Mysuru & TFUG Chandigarh organizer Aashi Dutt (presenter) walked through exploratory data analysis on SpaceX Falcon 9 launches dataset from Kaggle.

Screenshot of ML GDE Qinghua Duan (China) showing how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Plant disease classification using a deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) was a talk on an end-to-end Android app (open source project) that diagnoses plant diseases.

TensorFlow/Keras implementation of Nystromformer

Nystromformer Github repository by Rishit Dagli provides TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity which allows for better scalability.

From a personal notebook to 100k YouTube subscribers: How Carlos Azaustre turned his notes into a YouTube channel

Posted by Kevin Hernandez, Developer Relations Community Manager

Carlos Azaustre with his Silver Button Creator Award from YouTube
When Carlos Azaustre, Web Technologies GDE, finished university, he started a blog to share his personal notes and learnings to teach others about Angular and JavaScript. These personal notes later evolved into tutorials that then turned into a blossoming YouTube channel with 105k subscribers at the time of this writing. With his 10 years of experience as a telecommunications engineer focused on front-end development, he has a breadth of knowledge that helps his content stand out in a sea of competing videos on YouTube. Carlos has successfully created a channel focused on technical topics related to JavaScript and has some valuable advice for those looking to educate on the platform.

How he got started with his channel

Carlos started his blog with the primary mission of using it as a personal notebook that he could reference in the future. As he wrote increasingly, he started to notice that people were coming across his notebooks and sharing with others. This inspired him to record tutorials based on the topics of his blogs, but when he was beginning to record these tutorials, a secondary mission came to fruition: he wanted to make technical content accessible to the Spanish-speaking community. He reflects, “In the Spanish community, English is difficult for some people, so I started to create content in Spanish to eliminate barriers for people who are interested in learning new technologies. Learning new things is hard, but it’s easier when it’s in your natural language.”

In the beginning of his YouTube journey, he used the platform for side projects and would post irregularly. Then, two years ago, he started putting more effort into creating new content, posting one video a week while promoting it on social media. This change sparked more comments, and his views and total subscribers increased in tandem.

Tips and tricks he’s applied to his channel

Carlos leverages analytics data to adjust his strategy. He explains, “YouTube provides a lot of analytics tools to see if people are engaging and when they leave the video. So you can adjust your content and the timing (video length) because the timing is important.” The data taught Carlos that longer videos generally don’t do as well. He learned the ideal video length for lecture videos where he’s primarily speaking is about 6-8 minutes. But when it comes to tutorials, videos that are about 40 – 60 minutes in length tend to get more views.

Carlos has also taken advantage of YouTube Shorts, a short-form video-sharing platform. “I started to see that Shorts are great to increase your reach because the algorithm pushes your content to people who aren’t subscribed to your channel,” he pointed out. He recommends using YouTube Shorts as an effective way of getting started. When asked about other resources, Carlos mentioned that he primarily draws from his own experience but also turns to books and blogs to help with his channel and to stay up to date with technology.

Choosing video topics

Creating fresh weekly content can be a challenge. To address this, Carlos keeps a notebook of ideas and inspiration for his next videos. For example, he may come across a problem that lacks a clear solution at work and will jot this down. He also keeps track of articles or other tutorials that he feels can either be explained in a more straightforward way or can be translated into Spanish.

Carlos also draws inspiration from the comment section of his videos. He engages with his audience to show there is a real person behind the videos who can guide them. He adds, “This is one of the parts I like the most. They propose new ideas for content that I might’ve missed.”

Advice for starting a channel on technical topics

Carlos’ advice for people looking to start a channel based on technical content is simple: just get started. “If you’re creating great content, people will eventually reach you,” he comments. When he first started his channel, Carlos wasn’t preoccupied with the number of views, comments, or subscriptions. He started his content with himself in mind and would ask himself what kind of content he would want to see. He says, “As long as you’re engaged with the community, you’ll have a great channel. If you try to optimize the content for the algorithm, you’re going to go crazy.” He recommends new content creators start with YouTube Shorts, and once they gain an audience they can create more detailed videos.

It’s also necessary to spark conversation in the comments, and one way you can achieve this is through the title and description of your video. A great title that catches the attention of the viewer, sparks conversation, and implements keywords is essential. A simple way to do this is by asking a question in the title. For example, one of his videos is titled, “How do Promises and Async / Await function in JavaScript?” and also asks a question in the description. This video alone has 250+ comments with viewers answering the question posed by the title and the description. He’s also mindful of what keywords he’s including in his title and finds these keywords by looking at the most popular content with similar topics.

When asked about gear and equipment recommendations, he states that the most important piece of equipment is your microphone, since your voice can be more important than the image, especially if you’re filming a tutorial video. He goes on, “With time, you can update your setup. Maybe your camera is next and then the lighting. Start with your phone or your regular laptop – just start!”

So remember to just get started, and maybe in time, you’ll become the next big content creator for Machine Learning, Google Cloud, Android, or Web Technologies.

You can check out Carlos’ YouTube Channel, find him live on Twitch, or follow him on Twitter or Instagram.

The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Extending support for App Engine bundled services (Module 17)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Background

App Engine initially launched in 2008, providing a suite of bundled services making it convenient for applications to access a database (Datastore), caching service (Memcache), independent task execution (Task Queue), Google Sign-In authentication (Users), large “blob” storage (Blobstore), and other companion services. However, apps leveraging those services can only run on App Engine.

To increase app portability and help Google move towards its goal of having the most open cloud on the market, App Engine launched its 2nd-generation service in 2018, initially without those legacy services. The newer platform allows developers to upgrade apps to the latest language runtimes, such as moving from Python 2 to 3 or Java 8 to 11 (and today, Java 17). One of the major drawbacks of the 1st-generation runtimes is that they’re customized, proprietary, and restrictive in what you can and can’t use.

Instead, the 2nd-generation platform uses open source runtimes, meaning you can follow standard development practices, use common, well-known idioms, and face fewer restrictions on 3rd-party libraries, obviating the need to copy or “vendor” them with your code. Unfortunately, using these newer runtimes required migrating away from App Engine services: while you could upgrade language releases, there was no access to the bundled services, breaking apps or requiring complete rewrites, which made it a showstopper for many users.

Due to their popularity and the desire to ease the upgrade process for customers, the App Engine team restored access to most (but not all) of those services in Fall 2021. Today’s Serverless Migration Station video demonstrates how to continue using the bundled services available to Python 3 developers.

Showing App Engine users how to use bundled services on Python 3

Performing the upgrade

Modernizing the typical Python 2 App Engine app looks something like this:
  1. Migrate away from the webapp2 framework (not available in Python 3)
  2. Port from Python 2 to 3, preserving use of bundled services
  3. Optionally, migrate to Cloud standalone or similar 3rd-party services

The first step is to move to a standard Python web framework like Flask, Django, Pyramid, etc. Below is some pseudocode from Migration Module 1 demonstrating how to migrate from webapp2 to Flask:

Step 1: Port the Python 2 sample app from webapp2 to Flask
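Since the original code images don’t reproduce here, below is a rough before-and-after sketch of that change, assuming the Module 1 sample’s structure (the inline HTML response stands in for the sample’s real template rendering, and handler names are illustrative):

    # Shared App Engine NDB data layer: note it is identical before and after.
    from google.appengine.ext import ndb

    class Visit(ndb.Model):
        visitor = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    def store_visit(remote_addr, user_agent):
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

    def fetch_visits(limit):
        return Visit.query().order(-Visit.timestamp).fetch(limit)

    # Before (webapp2): routes map to RequestHandler classes.
    import webapp2

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            store_visit(self.request.remote_addr, self.request.user_agent)
            visits = fetch_visits(10)
            self.response.write('<br>'.join(str(v) for v in visits))

    app = webapp2.WSGIApplication([('/', MainHandler)])

    # After (Flask): routes are decorated functions, and request data comes
    # from the flask.request proxy instead of the handler instance.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/')
    def root():
        store_visit(request.remote_addr, request.user_agent)
        visits = fetch_visits(10)
        return '<br>'.join(str(v) for v in visits)

The point of the exercise is that only the routing and request/response plumbing changes; the data layer is untouched.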

The key change is swapping webapp2’s class-based handlers for Flask’s decorated route functions. Notice how the App Engine NDB code [the Visit class definition plus the store_visit() and fetch_visits() functions] is unaffected by this web framework migration. The full webapp2 code sample can be found in the Module 0 repo folder, while the completed migration to Flask sample is located in the Module 1 repo folder.

Once your app has been ported to its new framework, you’re free to upgrade to Python 3 while preserving access to the bundled services, if your app uses any. Below is pseudocode demonstrating how to upgrade the same sample app to Python 3, along with the code changes needed to continue using App Engine NDB:

Step 2: Port the sample app to Python 3, preserving use of the NDB bundled service
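Again as a sketch rather than the exact sample code, the Python 3 version keeps the Flask and NDB code as-is; the notable additions are the new SDK import and the WSGI wrap, with the configuration changes flagged in the comments:

    # Python 3 version of the Flask app; additions are marked NEW.
    # Configuration must also change: app.yaml gains "app_engine_apis: true"
    # and requirements.txt gains the "appengine-python-standard" package.
    from flask import Flask, request
    from google.appengine.api import wrap_wsgi_app   # NEW: bundled services SDK
    from google.appengine.ext import ndb

    app = Flask(__name__)
    app.wsgi_app = wrap_wsgi_app(app.wsgi_app)       # NEW: wrap the WSGI object

    class Visit(ndb.Model):
        visitor = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    def store_visit(remote_addr, user_agent):
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

    def fetch_visits(limit):
        return Visit.query().order(-Visit.timestamp).fetch(limit)

    @app.route('/')
    def root():
        store_visit(request.remote_addr, request.user_agent)
        visits = fetch_visits(10)
        return '<br>'.join(str(v) for v in visits)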
The original app was designed to work under both Python 2 and 3 interpreters, so no language changes were required in this case. We added an import from the new App Engine SDK, followed by the key update: wrapping the WSGI object so the app can access the bundled services. Some updates to configuration are also required, and those are outlined in the documentation and the (Module 17) codelab.

The NDB code is also left untouched in this migration. Not all of the bundled services feature such a hands-free migration, and we hope to cover some of the more complex ones in Module 22. Java, PHP, and Go users have it even better, requiring few or no code changes at all. The Python 2 Flask sample is located in the Module 1 repo folder, and the resulting Python 3 app can be found in the Module 1b repo folder.

The immediate benefit of step two is the ability to upgrade to a more current version of the language runtime. This leaves the third step, migrating off the bundled services, as optional, especially if you plan on staying on App Engine for the long term.

Additional options

If you decide to migrate off the bundled services, you can do so on your own timeline. It becomes a consideration if you ever want to move to modern serverless platforms such as Cloud Functions or Cloud Run, or to lower-level platforms where you want more control, like GKE, our managed Kubernetes service, or Compute Engine VMs.

Step three is also where the rest of the Serverless Migration Station content may be useful (code samples and codelabs are available now; videos are forthcoming).

As far as moving to modern serverless platforms goes: if you want to break apart a large App Engine app into multiple microservices, consider Cloud Functions; if your organization has added containerization to its software development workflow, consider Cloud Run. Cloud Run suits apps where you’re familiar with containers and Docker, but even if you or your team don’t have that experience, Cloud Buildpacks can do the heavy lifting for you. Here are the relevant migration modules to explore:

    Wrap-up

    Early App Engine users appreciate the convenience of the platform’s bundled services, and after listening to user feedback, the team restored them to the 2nd-generation runtimes as another way to help developers modernize their apps. Whether upgrading to newer language runtimes to stay on App Engine and continue to use its bundled services, migrating to Cloud standalone products, or shifting to other serverless platforms, the Google Cloud team aims to provide the tools to help streamline your modernization efforts.

    All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. Today’s video features a special guest to provide a teaser of what to expect for Java. For additional video content, check out the broader Serverless Expeditions series.

    Introducing Developer Journey: November 2022

    Posted by Lyanne Alfaro, DevRel Program Manager, Google Developer Studio

    Developer Journey is a new monthly series to spotlight diverse and global developers sharing relatable challenges, opportunities, and wins in their journey. Every month, we will spotlight developers around the world, the Google tools they leverage, and the kind of products they are building.

    We are kicking off #DevJourney in November to give members of our community the chance to share their stories through our social platforms. This month, it’s our pleasure to feature four members spanning products including Google Developer Expert, Android, and Cloud. Enjoy reading through their entries below and be on the lookout on social media platforms, where we will also showcase their work.

    Sierra OBryan

    Google Developer Expert, Android
    Cincinnati, OH
    Twitter and Instagram: @_sierraOBryan

    What Google tools have you used?

    As an Android developer, I use many Google tools every day like Jetpack Compose and other Android libraries, Android Studio, and Material Design. I also like to explore some of the other Google tools in personal projects. I’ve built a Flutter app, poked around in Firebase, and trained my own ML model using the model maker.

    Which tool has been your favorite to use? Why?

    It’s hard to choose just one, but I’m really excited about Jetpack Compose! It’s exciting to work with a new and evolving framework with so much energy and input coming from the developer community. Compose makes it easier to quickly build things that previously could be quite complex, like animations and custom layouts, and it has some very cool tooling in Android Studio, like Live Edit and recomposition counts, all of which improve developer efficiency and app quality. One of my favorite things about Compose is that I think it will make Android development more accessible to more people, because it is more intuitive and easier to get started with, so we’ll see the Android community continue to grow as new perspectives and backgrounds bring in new ideas.

    Google also provides a lot of really helpful tools for building more accessible mobile apps, and I’m really glad these important tools exist! The Accessibility Scanner is available on Google Play and can identify some common accessibility pitfalls in your app, with tips about how to fix them and why it’s important. The “Accessibility in Jetpack Compose” codelab is a great starting place for learning more about these concepts.

    Please share with us about something you’ve built in the past using Google tools.

    A favorite personal project is a (very) simple flower-identifying app built using ML Kit’s Image Labeling API and Android. After the 2020 ML-focused Android Developer Challenge, I was very curious about ML Kit but also still quite intimidated by the idea of machine learning. It was surprisingly easy to follow the documentation to build and tinker with a custom model and then add it to an Android app. I just recently migrated the app to Jetpack Compose.

    What advice would you give someone starting in their developer journey?

    Find a community! Like most things, developing is more fun with friends.

    Harun Wangereka

    Google Developer Expert, Android

    What Google tools have you used?

    I’m an Android Engineer by profession. The tools I use on a day-to-day basis are Android as the framework, Android Studio as the IDE, and some of the Jetpack Libraries from the Android Team at Google.

    Which tool has been your favorite to use? Why?

    Jetpack libraries. I love these libraries because they solve most of the common pain points we, as Android developers, faced before they came along. They solve them concisely and provide best practices for Android developers to follow.

    Please share with us about something you’ve built in the past using Google tools.

    At my workplace, Apollo Agriculture, I collaborate with cross-functional teams to define, design, and ship new features for the agents’ and agro-dealers’ Android apps, which are entirely written in Kotlin. We have Apollo for Agents, an app for agents to perform farmer-related tasks, and Apollo Checkout, which helps farmers check out various Apollo products. With these two apps, I’m helping Apollo Agriculture make financing for small-scale farmers accessible to everyone.

    What advice would you give someone starting in their developer journey?

    Be nice to yourself as you learn. The journey can be quite hard at times but remember to give yourself time. You can never know all the things at once, so try to learn one thing at a time. Do it consistently and it will pay off in the very end. Remember also to join existing developer communities in your area. They help a lot!

    Richard Knowles

    Android Developer
    Los Angeles, CA

    What Google tools have you used?

    I’ve been building Android apps since 2011, when I was in graduate school studying for my Master’s degree in Computer Engineering. I built my first Android app using Eclipse, which seemed to be a great tool at the time, at least until Google’s Android Studio was first released in 2014. Android Studio is such a powerful and phenomenal IDE! I’ve been using it to build apps for Android phones, tablets, smartwatches, and TV. It is amazing how the Android Accessibility Test Framework integrates with Android Studio to help us catch accessibility issues in our layouts early on.

    Which tool has been your favorite to use? Why?

    My favorite tool by far is the Accessibility Scanner. As a developer with a hearing disability, accessibility is very important to me. I was born with a sensorineural hearing loss, and wore hearing aids up until I was 18 when I decided to get a cochlear implant. I am a heavy closed-captioning user and I rely on accessibility every single day. When I was younger, before the smartphone era, even through the beginning of the smartphone era, it was challenging for me to fully enjoy TV or videos that didn’t have captions. I’m so glad that the world is starting to adapt to those with disabilities and the awareness of accessibility has increased. In fact, I chose the software engineering field because I wanted to create software or apps that would improve other people’s lives, the same way that technology has made my life easier. Making sure the apps I build are accessible has always been my top priority. This is why the Accessibility Scanner is one of my favorite tools: It allows me to efficiently test how accessible my user-facing changes are, especially for those with visual disabilities.

    Please share with us about something you’ve built in the past using Google tools.

    As an Android engineer on Twitter’s Accessibility Experience Team, one of our initiatives is to improve the experience of image descriptions and the use of alt text. Did you know that when you put images in your Tweets on Twitter, you can add descriptions to make them accessible to people who can’t see images? If yes, that is great! But do you always remember to do it? Don’t worry if not – you’re not alone. Many people, including myself, forget to add image descriptions. So we implemented alt text reminders, which allow users to opt in to be notified when they tweet images without descriptions.

    We also have been working to expose alt text for all images and GIFs. What that means is that we now display an “ALT” badge on images that have associated alternative text or image descriptions. Alt text is primarily used by TalkBack users, but we wanted to let users who aren’t using a screen reader know which images have alternative text, and of course allow them to view the image description by selecting the “ALT” badge. This feature helped achieve two things: 1) users with low vision or other disabilities that benefit from available alternative text can now access that text; 2) users can know which images have alternative text before retweeting those images. I personally love this feature because it increases awareness of alt text.

    What advice would you give someone starting in their developer journey?

    What an exciting time to start! I have three tips I’d love to share:

    1) Don’t start coding without reviewing the specifications and designs carefully. Draw and map out the architecture and technical design of your work before you jump into the code. In other words, work smarter, not harder.

    2) Take the time to read through the developer documentation and the source code. You will become an expert more quickly if you understand what is happening behind the scenes. When you call a function from a library or SDK, get in the habit of looking at the source code and implementation of that function so that you can not only learn as you code, but also find opportunities to improve performance.

    3) Learn about accessibility as early as possible, preferably at the same time as learning everything else, so that it becomes a habit and not something you have to force later on.

    Lynn Langit

    GDE/Cloud
    Minnesota
    Twitter: @lynnlangit

    What Google tools have you used?

    So many! My favorite Google Cloud services are Cloud Run, BigQuery, and Dataproc. My favorite tools are the Cloud Shell Editor, SSH-in-browser for Compute Engine, and BigQuery Execution Details.

    Which tool has been your favorite to use? Why?

    I love to use the open source Variant Transforms tool for VCF [or genomic] data files. This tool gets bioinformaticians working with BigQuery quickly. Researchers use Variant Transforms to validate and load VCF files into BigQuery, and it supports genome-scale data analysis workloads. These workloads can contain hundreds of thousands of files, millions of genomic samples, and billions of input records.

    Please share with us about something you’ve built in the past using Google tools.

    I have been working with teams around the world to build, scale, and deploy multiple genomic-scale data pipelines for human health. Recent use cases are data analysis in support of Covid or cancer drug development.

    What advice would you give someone starting in their developer journey?

    Expect to spend 20-25% of your professional time learning for the duration of your career. All public cloud services, including Google Cloud, evolve constantly. Building effectively requires knowing both cloud patterns and services at a deep level.

    Interview with Doug Duhaime, contributor to Google’s Dev Library

    Posted by the Google Dev Library Team

    Introducing the Dev Library Contributor Spotlights – a blog series highlighting developers that are supporting the thriving development ecosystem by contributing their resources and tools to Google Dev Library.

    We met with Doug Duhaime, Full Stack Developer in Yale University’s Digital Humanities Lab, to discuss his passion for machine learning, his processes, and what inspired him to release his PixPlot project as open source.

    What led you to explore the field of machine learning?

    I was an English major in undergrad and in graduate school. I have a PhD in English literature. My dissertation was exploring copyright history and the ways that changes in copyright law affected the book market. How does the institution of fixed duration copyright influence the book market? To answer this question, I had to mine an enormous collection of data – half a million books, published before 1800 – to look at different patterns. That was one of the key projects that got me inspired to further explore the world of Machine Learning.

    In fact, one of my projects – the PixPlot library – uses computer vision to analyze image collections, which was also partially used in my research. Part of my research looked at plagiarism detection and how readily people are inclined to copy images once it becomes legal to copy them from other texts. Computer vision helps us to answer these questions and identify key patterns.

    I’ve seen machine learning and programming as a way to ask new questions in historical contexts. And there’s a whole field of us – we’re called digital humanists. Yale University, where I’ve been for the last five years, has a fantastic digital humanities program where researchers are asking questions like this and using fun machine learning platforms like TensorFlow to answer those questions.

    Screenshot from the PixPlot library showing Image Fields in the Meserve-Kunhardt Collection with the following identified hotspots: Boxers, Buildings, Buttons, Chairs, Gowns

    Can you tell us more about the evolution of your PixPlot library project?

    We started in Yale’s digital humanities lab with a project called Neural Neighbors. The idea here was to find patterns in the Meserve-Kunhardt Collection of images.

    Meserve-Kunhardt is a collection of photographs largely from the 19th century that Yale recently acquired. After being acquired by the university, some curators were preparing to identify all this really rich metadata to describe these images. However, they had a backlog, and they needed help to try to make sense of what’s in this collection. And so, Neural Neighbors was our initial attempt to answer this question.

    As this project went on, we started running up against limitations and asking bigger questions. For example, instead of just looking at the pictures, what would it be like to look at the entire collection all at once? In order to answer this question, we needed a more performant rendering layer.

    So we decided to utilize TensorFlow, which allowed us to extract a vector representation of each image. We then compressed the dimensionality of those vectors down to 2D, but for PixPlot, we decided to use a different dimensionality reduction technique called UMAP. And that brought us to the first release of PixPlot.

    The idea here was to take the whole collection, shoot it down into 2D, and then let you move through it and look at the images in the collection wherein we expect images with similar content to be placed close by one another.
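    The pipeline Doug describes maps to a few lines of Python. This is an illustrative sketch, not PixPlot’s actual implementation; the specific model (InceptionV3 here), the file names, and the parameters are assumptions:

    import numpy as np
    import tensorflow as tf
    import umap  # from the umap-learn package

    # Pretrained CNN with its classification head removed; the pooled
    # output serves as a feature vector ("embedding") for each image.
    model = tf.keras.applications.InceptionV3(include_top=False, pooling='avg')

    def embed(paths):
        imgs = [tf.keras.utils.load_img(p, target_size=(299, 299)) for p in paths]
        batch = np.stack([tf.keras.utils.img_to_array(im) for im in imgs])
        return model.predict(tf.keras.applications.inception_v3.preprocess_input(batch))

    paths = ['photo1.jpg', 'photo2.jpg']  # hypothetical; in practice, the whole collection
    vectors = embed(paths)
    # Project the high-dimensional vectors to 2D; images with similar
    # content end up near one another in the layout.
    xy = umap.UMAP(n_components=2).fit_transform(vectors)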

    And so it’s just evolved from that early genesis and Neural Neighbors through to where it is today.

    What inspired you to release PixPlot as an open source project?

    In the case of PixPlot, I was working for Yale University, and we had a goal to make as much of our contributions to the software world as possible open and publicly accessible without any commercial terms.

    It was a huge privilege to spend time with the lab and build software that others found useful. I would say even more generally, in my personal life, I really like building things that people find useful and, when possible, contributing back to the open source world because, I think, so many of us learn from open source.

    “We look at other people’s examples and get excited by tools and projects others are building. And many of those are non-commercial. They’re just open and free to the world. And it’s great to give back when we can.” – Doug Duhaime, Dev Library Contributor

    Find out more content contributed and authored by Doug Duhaime and discover more unique tools and resources on the Google Dev Library website!

    Get to know Google’s Coding Competitions

    Posted by Julia DeLorenzo, Program Manager, Coding Competitions

    Google’s Coding Competitions provide interactive rounds throughout the year to help you grow your skills, challenge yourself, and connect with developers from around the globe.

    Google has three flagship Coding Competitions: Code Jam, Hash Code, and Kick Start. Each competition is unique and offers different types of challenges from algorithmic puzzles to team-based optimization problems. Our Coding Competitions are designed and tested by a team of Google engineers and program managers who craft new and engaging problems for users to tackle.

    Google’s Coding Competitions have been around for quite a while (two decades!), and a passionate group of contributors and fans around the world makes each new season even more exciting than the last.

    Hear from two program managers on the Coding Competitions team:

    Emily Miller, Google’s Coding Competitions Lead Program Manager


    “My first year working on Coding Competitions was 2013 with Code Jam. The Finals were hosted in London that year — video proof — and I’ve been hooked ever since! It’s been incredibly rewarding and a whole lot of fun to interact with coders from around the world over the years.

    I find it so cool that even after 20 years of Code Jam, the space of online competitions continues to evolve and grow. To me, it’s a testament to the strength of the global online community and the value that products like Code Jam, Hash Code, and Kick Start provide developers to connect and learn from one another. Plus, the problem statements are so creative and fun!

    My advice to future participants is, jump in and try it out! We’re all here for something unique to us, so find out what that is for you and pursue that. Hitting roadblocks along the way is likely, so don’t get discouraged. Remember there’s a global community of coders out there waiting to help you!”


    Julia DeLorenzo, Google’s Coding Competitions Program Manager


    “My first introduction to Google’s Coding Competitions was in 2016, when I had the chance to volunteer at the Code Jam World Finals in New York City. The excitement and energy of that Finals stuck with me – four years later, in 2020, an opportunity to work on Coding Competitions full time came up and I jumped at the chance!

    I love that Google’s Coding Competitions offer different ways to participate. No matter where you are in your competitive programming journey, there’s a competition for you. People who are new to competitive programming can get familiar with the space by participating in Kick Start; those who want to participate with friends or teammates can try Hash Code; and folks looking for a challenge should try Code Jam. Some people participate in all three! The problems you’ll see are always different and creative, so you’re sure to have fun along the way.

    As cliché as it sounds, my advice to future participants is that failure is an opportunity for growth. Don’t let imposter syndrome or fear of failure stand in the way of trying something new. If you come across a problem you can’t solve – that’s great! It’s an opportunity to challenge yourself and try a different approach.”


    Stay Tuned!

    Over the next few weeks, keep an eye on the blog – we’ll be spotlighting each of Google’s Coding Competitions in a series of blog posts to help you understand the ins and outs of each competition.