#IamaGDE: Josue Gutierrez


Posted by Alicja Heisig

#IamaGDE series presents: Google Maps

The Google Developers Experts program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Meet Josue Gutierrez — Maps, Web, Identity and Angular Google Developer Expert.

Josue currently works at the German company Boehringer Ingelheim and lives near Frankfurt. Before moving to Germany, Josue was working as a software engineer in Mexico, and before that, he spent almost a year in San Francisco as a senior front-end developer at Sutter Health.

Josue Gutierrez

Josue studied computer science and engineering as an undergraduate and learned algorithms and programming. His first language was C++; he also learned C and Python, but was drawn to web technologies.

“When I saw a web browser for the first time, it stuck with me,” he says. “It was changing in real time as you’re developing. That feeling is really cool. That’s why I went into frontend development.”

Josue has worked on multiple ecommerce projects focused on improving customers’ trade experience. He sees his role as creating something from scratch that helps people improve their lives.

“These opportunities we have as developers are great — to travel, work for many verticals, and learn many businesses,” he says. “In my previous job, I developed tech-oriented trade tools for research companies, to manipulate strings or formulas. I was on the team involved in writing these kinds of tools, so it was more about the trade experience for doctors.”

Getting involved in the developer community

Josue’s first trip outside Mexico, to San Francisco, exposed him to the many developer communities in the area, and he appreciated the supportive communities of people trying to learn together. Several of the people he met suggested he start his own meetup in Mexico City, to get more involved in Google technologies, so he launched an Angular community there. As he hunted for speakers to come to his Angular meetup, Josue found himself giving talks, too.

Then, the GDG Mexico leader invited Josue to give talks on Google for Startups.

“That helped me get involved in the ecosystem,” Josue says. “I met a lot of people, and now many of them are good friends. It’s really exciting because you get connected with people with the same interests as you, and you all learn together.”

“I’m really happy to be part of the Google Maps ecosystem,” Josue says. “It’s super connected, with kind people, and now I know more colleagues in my area, who work for different companies and have different challenges. Seeing how they solve them is a good part of being connected to the product. I try to share my knowledge with other people and exchange points of view.”

Josue says 2020 provided interesting opportunities.

“This year was weird, but we also discovered more tools that are evolving with us, more functionalities in Hangouts and Meetup,” Josue says. “It’s interesting how people are curious to get connected. If I speak from Germany, I get comments from countries like Bolivia and Argentina. We are disconnected but increasing the number of people we engage with.”

He notes that the one missing piece is the face-to-face, spontaneous interactions of in-person workshops, but that there are still positives to video workshops.

“I think as communities, we are always trying to get information to our members, and having videos is also cool for posterity,” he says.

He is starting a Maps developer community in Germany.

“I have colleagues interested in trying to get a community here with a solid foundation,” he says. “We hope we can engage people to get connected in the same place, if all goes well.”

Favorite Maps features and current projects

As a frontend developer, Josue regards Google Maps Platform as an indispensable tool for brands, ecommerce companies, and even trucking companies.

“Once you start learning how to plant coordinates inside a map, how to convert information and utilize it inside a map, it’s easy to implement,” he says.

In 2021, Josue is working on experiments with Maps, trying to achieve more real-time updating using currently available tools.

“Many of the projects I’ve been working on aren’t connected with ecommerce,” he says. “Many customers want to see products inside a map, like trucking products. I’ve been working in directories, where you can see the places related to categories — like food in Mexico. You can use Google Maps functionalities and extend the diversification of maps and map whatever you want.”

“Submission ID is really cool,” he adds. “You can do it reading the documentation, a key part of the product, with examples, references, and a live demo in the browser.”

Future plans

Josue says his goal going forward is to be as successful as he can in his current role.

“Also, sharing is super important,” he says. “My company encourages developer communities. It’s important to work in a place that matches your interests.”

Image of Josue Gutierrez

Follow Josue on Twitter at @eusoj | Check out Josue’s projects on GitHub.

For more information on Google Maps Platform, visit our website or learn more about our GDE program.

MediaPipe on the Web

Posted by Michael Hays and Tyler Mullen from the MediaPipe team

MediaPipe is a framework for building cross-platform multimodal applied ML pipelines. We have previously demonstrated building and running ML pipelines as MediaPipe graphs on mobile (Android, iOS) and on edge devices like Google Coral. In this article, we are excited to present MediaPipe graphs running live in the web browser, enabled by WebAssembly and accelerated by the XNNPack ML Inference Library. By integrating this preview functionality into our web-based Visualizer tool, we provide a playground for quickly iterating over a graph design. Since everything runs directly in the browser, video never leaves the user’s computer and each iteration can be immediately tested on a live webcam stream (and soon, arbitrary video).

Figure 1: The MediaPipe face detection example running in the Visualizer

MediaPipe Visualizer

MediaPipe Visualizer (see Figure 2) is hosted at viz.mediapipe.dev. MediaPipe graphs can be inspected by pasting graph code into the Editor tab or by uploading a graph file into the Visualizer. A user can pan and zoom into the graphical representation of the graph using the mouse and scroll wheel. The graph will also react to changes made within the editor in real time.

Figure 2: MediaPipe Visualizer hosted at https://viz.mediapipe.dev

Demos on MediaPipe Visualizer

We have created several sample Visualizer demos from existing MediaPipe graph examples. These can be seen within the Visualizer by visiting the following addresses in your Chrome browser:

  • Edge Detection
  • Face Detection
  • Hair Segmentation
  • Hand Tracking

Each of these demos can be executed within the browser by clicking on the little running man icon at the top of the editor (it will be greyed out if a non-demo workspace is loaded).

This will open a new tab which will run the current graph (this requires a webcam).

Implementation Details

In order to maximize portability, we use Emscripten to directly compile all of the necessary C++ code into WebAssembly, which is a special form of low-level assembly code designed specifically for web browsers. At runtime, the web browser creates a virtual machine in which it can execute these instructions very quickly, much faster than traditional JavaScript code.

We also created a simple API for all necessary communications back and forth between JavaScript and C++, to allow us to change and interact with the MediaPipe graph directly from JavaScript. For readers familiar with Android development, you can think of this as a similar process to authoring a C++/Java bridge using the Android NDK.
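To make this concrete, here is a rough TypeScript sketch of what an Emscripten-style bridge looks like. Module.cwrap is standard Emscripten API, but the exported function names below are purely illustrative assumptions, not MediaPipe's actual interface:

// Illustrative Emscripten-style JS/C++ bridge; the exported names are hypothetical.
declare const Module: {
  cwrap: (name: string, returnType: string | null, argTypes: string[]) => Function;
};
declare const graphConfigText: string; // a MediaPipe graph definition, provided elsewhere

// Bind exported C functions so they can be called like ordinary JS functions.
const setGraph = Module.cwrap("setGraph", "number", ["string"]);
const processFrame = Module.cwrap("processFrame", "number", ["number"]);

// Drive the graph from JavaScript: load it once, then call into C++ per frame.
setGraph(graphConfigText);
requestAnimationFrame(function loop(timestamp: number) {
  processFrame(timestamp); // timestamps keep the graph's packets in order
  requestAnimationFrame(loop);
});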

Finally, we packaged up all the requisite demo assets (ML models and auxiliary text/data files) as individual binary data packages, to be loaded at runtime. And for graphics and rendering, we allow MediaPipe to automatically tap directly into WebGL so that most OpenGL-based calculators can “just work” on the web.

Performance

While executing WebAssembly is generally much faster than pure JavaScript, it is also usually much slower than native C++, so we made several optimizations in order to provide a better user experience. We utilize the GPU for image operations when possible, and opt for the lightest-weight versions of all our ML models (giving up some quality for speed). However, since compute shaders are not widely available on the web, we cannot easily make use of TensorFlow Lite GPU machine learning inference, and the resulting CPU inference often ends up being a significant performance bottleneck. To help alleviate this, we automatically augment our “TfLiteInferenceCalculator” by having it use the XNNPack ML Inference Library, which gives us a 2-3x speedup in most of our applications.

Currently, support for web-based MediaPipe has some important limitations:

  • Only calculators in the demo graphs above may be used
  • The user must edit one of the template graphs; they cannot provide their own from scratch
  • The user cannot add or alter assets
  • The executor for the graph must be single-threaded (i.e. ApplicationThreadExecutor)
  • TensorFlow Lite inference on GPU is not supported

We plan to continue to build upon this new platform to provide developers with much more control, removing many if not all of these limitations (e.g. by allowing for dynamic management of assets). Please follow the MediaPipe tag on the Google Developers blog and the Google Developers Twitter account (@googledevs).

Acknowledgements

We would like to thank Marat Dukhan, Chuo-Ling Chang, Jianing Wei, Ming Guang Yong, and Matthias Grundmann for contributing to this blog post.

Six ways we’re making Azure reservations even more powerful

New Azure reservations features can help you save more on your Azure costs, easily manage reservations, and create internal reports. Based on your feedback, we’ve added the following features to reservations:

 

Azure Databricks pre-purchase plans

You can now save up to 37 percent on your Azure Databricks costs when you pre-purchase Azure Databricks commit units (DBCU) for one or three years. Any Azure Databricks use deducts from the pre-purchased DBCUs automatically. You can use the pre-purchased DBCUs at any time during the purchase term.

Databricks SKU selection

See our documentation “Optimize Azure Databricks costs with a pre-purchase” to learn more, or purchase an Azure Databricks plan in the Azure portal.

 

App Service Isolated stamp fee reservations

Save up to 40 percent on your App Service Isolated stamp fee costs with App Service reserved capacity. After you purchase a reservation, the isolated stamp fee usage that matches the reservation is no longer charged at the on-demand rates. App Service workers are charged separately and don’t get the reservation discount.

App Service Reserved Capacity

Visit our documentation “Prepay for Azure App Service Isolated Stamp Fee with reserved capacity” to learn more, or purchase a reservation in the Azure portal.

 

Automatically renew your reservations

Now you can set up your reservations to renew automatically, ensuring that you keep getting the reservation discounts without any gaps. You can opt in to automatic renewal at any time during the term of the reservation, and opt out at any time. You can also update the renewal quantity to better align with any changes in your usage pattern. To set up automatic renewal, just go to any reservation that you’ve already purchased and click on the Renewal tab.

Renewal setup

 

Scope reservation to resource group

You can now scope reservations to a resource group. This feature is helpful in scenarios where the same subscription has deployments from multiple cost centers, each represented by its own resource group, and the reservation is purchased for a particular cost center. Narrowing the reservation’s application down to a resource group makes internal charge-back easier. You can scope a reservation to a resource group at the time of purchase or update the scope after purchase. If you delete or migrate a resource group, the reservation will have to be rescoped manually.
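For illustration, re-scoping an existing reservation programmatically might look like the following TypeScript sketch against the Azure Resource Manager reservations API; the endpoint shape, api-version, and payload fields are assumptions to verify against the current REST reference:

// Hedged sketch: scope an existing reservation to a single resource group.
async function scopeToResourceGroup(
  token: string, orderId: string, reservationId: string,
  subscriptionId: string, resourceGroup: string
) {
  const url =
    "https://management.azure.com/providers/Microsoft.Capacity/reservationOrders/" +
    `${orderId}/reservations/${reservationId}?api-version=2019-04-01`; // assumed version
  const res = await fetch(url, {
    method: "PATCH",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      properties: {
        appliedScopeType: "Single", // a resource-group scope is still a "Single" scope
        appliedScopes: [`/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}`],
      },
    }),
  });
  return res.json();
}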

Resource group scope

Learn more in our documentation “Scope reservations.”

 

Enhanced usage data to help with charge back, savings, and utilization

Organizations rely on their enterprise agreement (EA) usage data to reconcile invoices, track usage, and charge back internally. We recently added more details to the EA usage data to make your reservation reporting easier. With these changes you can easily perform the following tasks:

  • Get reservation purchase and refund charges
  • Know which resource consumed how many hours of a reservation, and get charge-back data for that usage
  • Know how many hours of a reservation went unused
  • Amortize reservation costs
  • Calculate reservation savings

The new data files are available only through the Azure portal, not through the EA portal. Besides the raw data, you can now also see reservations in cost analysis.

You can visit our documentation “Get Enterprise Agreement reservation costs and usage” to learn more.

 

Purchase using API

You can now purchase reservations using REST APIs. These APIs let you list the available SKUs, calculate the cost, and then make the purchase.
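As a rough illustration of the flow, the TypeScript sketch below calculates a price and then places the purchase through the Microsoft.Capacity reservation endpoints. Treat the api-version and the exact payload fields as assumptions to verify against the current REST reference:

// Hedged sketch of the reservation purchase flow against Azure Resource Manager.
// SKUs can first be listed via the Microsoft.Capacity catalog endpoint; "body"
// carries the purchase details (SKU, term, quantity, scope, and so on).
const ARM = "https://management.azure.com";
const API = "api-version=2019-04-01"; // assumed version

async function purchaseReservation(token: string, orderId: string, body: unknown) {
  const headers = { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };

  // 1. Calculate the cost of the intended purchase.
  const price = await fetch(`${ARM}/providers/Microsoft.Capacity/calculatePrice?${API}`, {
    method: "POST", headers, body: JSON.stringify(body),
  }).then((r) => r.json());
  console.log("quoted price:", price);

  // 2. Make the purchase as a reservation order (PUT is idempotent on the order id).
  return fetch(`${ARM}/providers/Microsoft.Capacity/reservationOrders/${orderId}?${API}`, {
    method: "PUT", headers, body: JSON.stringify(body),
  }).then((r) => r.json());
}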

How HSBC built its PayMe for Business app on Microsoft Azure

Bank-grade security, super-fast transactions, and analytics 

If you live in Asia or have ever traveled there, you’ve probably witnessed the dramatic impact that mobile technology has had on all aspects of day-to-day life. In Hong Kong in particular, most consumers now use a smartphone daily, presenting new opportunities for organizations to deliver content and services directly to their mobile devices.

As one of the world’s largest international banks, HSBC is building new services on the cloud to enable them to organize their data more efficiently, analyze it to understand their customers better, and make more core customer journeys and features available on mobile first.

HSBC’s retail and business banking teams in Hong Kong have combined the convenience of smartphones with cloud services to enable “cashless” transactions, where people can use their smartphones to make payments digitally. Today, over one and a half million people use HSBC’s PayMe app to exchange money with people in their personal network for free. And businesses are using HSBC’s new PayMe for Business app, built natively on Azure, to collect payments instantly, with 98 percent of all transactions completed in 500 milliseconds or less. Additionally, businesses can leverage the app’s powerful built-in intelligence to improve their sales and operations.

On today’s Microsoft Mechanics episode of “How We Built it,” Alessio Basso, Chief Architect of PayMe from HSBC, explains the approach they took and why.

Microsoft Mechanics episode - HSBC's PayMe for Business app

Bank-grade security, faster time to delivery, dynamic scale and resiliency

The first decision Alessio and team made was to use fully managed services to allow them to go from ideation to a fully operational service in just a few months. Critical to their approach was adopting a microservices-based architecture with Azure Kubernetes Service and Azure Database for MySQL.

They designed each microservice to be independent, with its own instances of Azure managed services, including Azure Database for MySQL, Azure Event Hubs, Azure Storage, Azure Key Vault for credentials and secrets management, and more. They architected for this level of isolation to strengthen security and overall application uptime, as shared dependencies are eliminated.

microservice

Each microservice can rapidly scale compute and database resources elastically and independently, based on demand. What’s more, Azure Database for MySQL allows for the creation of read replicas to offload read-only and analytical queries without impacting payment transaction response times.

replicas
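From application code, the read/write split described above is straightforward: transactional writes go to the primary server, while analytical reads go to a replica. Below is a minimal TypeScript sketch using the mysql2 driver, with placeholder hostnames, credentials, and schema rather than HSBC's actual setup:

// Minimal sketch of read/write splitting with Azure Database for MySQL read replicas.
// Hostnames, credentials, and the payments schema are placeholders for illustration.
import mysql from "mysql2/promise";

async function main() {
  // Writes (payment transactions) go to the primary server...
  const primary = await mysql.createConnection({
    host: "payme-primary.mysql.database.azure.com",
    user: "app@payme-primary", password: "<secret>", database: "payments",
  });
  await primary.execute(
    "INSERT INTO payments (payer, payee, amount) VALUES (?, ?, ?)",
    ["alice", "coffee-shop", 42]
  );

  // ...while heavy analytical reads are offloaded to a read replica, so they
  // never compete with live transactions for the primary's resources.
  const replica = await mysql.createConnection({
    host: "payme-replica.mysql.database.azure.com",
    user: "report@payme-replica", password: "<secret>", database: "payments",
  });
  const [rows] = await replica.query(
    "SELECT payee, SUM(amount) AS total FROM payments GROUP BY payee"
  );
  console.log(rows);
}

main();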

Also, from a security perspective, because each microservice runs within its own subnet inside an Azure Virtual Network, the team is able to isolate network communications between Azure resources using service principals and Virtual Network service endpoints.

Fast and responsive analytics platform

At its core, HSBC’s PayMe is a social app that allows consumers to establish their personal networks while facilitating interactions and transactions with the people and business entities in their circle. To create more value for both businesses and consumers, Azure Cosmos DB is used to store graph data modeling customer-merchant-transaction relationships.
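As a sketch of what querying such a graph can look like from code, the TypeScript snippet below uses the open-source gremlin driver against a Cosmos DB Gremlin endpoint. The account, database, graph names, and the vertex/edge labels are invented for illustration:

// Hedged sketch: querying a customer-merchant graph in Azure Cosmos DB (Gremlin API).
// Account, database/graph names, and labels are hypothetical, not HSBC's schema.
import gremlin from "gremlin";

async function main() {
  const authenticator = new gremlin.driver.auth.PlainTextSaslAuthenticator(
    "/dbs/payme/colls/relationships", // assumed database and graph names
    "<primary-key>"
  );
  const client = new gremlin.driver.Client("wss://<account>.gremlin.cosmos.azure.com:443/", {
    authenticator,
    traversalsource: "g",
    mimeType: "application/vnd.gremlin-v2.0+json", // Cosmos commonly expects GraphSON v2
  });

  // "Merchants my friends have paid": customer -> knows -> customer -> paid -> merchant.
  const results = await client.submit(
    "g.V('customer-42').out('knows').out('paid').hasLabel('merchant').dedup().values('name')"
  );
  console.log(results.toArray());
  await client.close();
}

main();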

Massive amounts of structured and unstructured data from Azure Database for MySQL, Event Hubs, and Storage are streamed and transformed. The team built an internal data ingestion process that feeds an analytical model called S.L.I.M. (simple, lightly integrated model), optimized for analytical query performance, and that makes data virtually available to the analytics platform using Azure Databricks Delta’s unmanaged table capability.

virtualized-data

Machine learning within their analytics platform, built on Azure Databricks, then allows for the quick determination of patterns and relationships, as well as the detection of anomalous activity.

With Azure, organizations can immediately take advantage of new opportunities to deliver content and services directly to mobile devices, including a next-level digital payment platform.

  • To learn more about how HSBC architected their cashless digital transaction platform, please watch the full episode.
  • Learn more about achieving microservice independence with your own instance of an Azure managed service like Azure Database for MySQL.

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of those relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic.

We heard this often, and it quickly became our number one most requested Azure Search feature. That is why we were excited to announce the general availability of complex types support in Azure Search. In this post, I want to take some time to explain what complex types add to Azure Search and the kinds of things you can build using this capability.

Azure Search is a platform as a service that helps developers create their own cloud search solutions.

What is complex data?

Complex data consists of data that includes hierarchical or nested substructures that do not break down neatly into a tabular rowset. For example, a book with multiple authors, where each author can have multiple attributes, can’t be represented as a single row of data unless there is a way to model the authors as a collection of objects. Complex types provide this capability, and they can be used when the data cannot be modeled with simple field types such as strings or integers.
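To make that concrete, here is a hedged TypeScript sketch that defines a books index whose authors field is a collection of objects, using the Azure Search REST API; the service name, admin key, and api-version are placeholders to adapt:

// Hedged sketch: defining an index with a complex collection field through the
// Azure Search REST API. Service name, admin key, and api-version are placeholders.
const booksIndex = {
  name: "books",
  fields: [
    { name: "id", type: "Edm.String", key: true },
    { name: "title", type: "Edm.String", searchable: true },
    // Each book can have many authors, and each author has its own sub-fields.
    {
      name: "authors",
      type: "Collection(Edm.ComplexType)",
      fields: [
        { name: "firstName", type: "Edm.String", searchable: true },
        { name: "lastName", type: "Edm.String", searchable: true, filterable: true },
      ],
    },
  ],
};

async function createIndex(): Promise<void> {
  const res = await fetch(
    "https://<service>.search.windows.net/indexes/books?api-version=2019-05-06",
    {
      method: "PUT",
      headers: { "api-key": "<admin-key>", "Content-Type": "application/json" },
      body: JSON.stringify(booksIndex),
    }
  );
  console.log(res.status); // expect 201 Created on first creation
}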

Complex types applicability

At Microsoft Build 2019, we demonstrated how complex types can be leveraged to build an effective search application. In the session, we looked at the Travel Stack Exchange site, one of the many online communities supported by StackExchange.

The StackExchange data was modeled in a JSON structure to allow easy ingestion into Azure Search. If we look at the first post made to this site and focus on the first few fields, we see that all of them can be modeled using simple datatypes, including Tags, which can be modeled as a collection (array) of strings.

{
    "id": "1",
    "CreationDate": "2011-06-21T20:19:34.73",
    "Score": 8,
    "ViewCount": 462,
    "BodyHTML": "My fiancée and I are looking for a good Caribbean cruise in October and were wondering which…",
    "Body": "my fiancée and i are looking for a good caribbean cruise in october and were wondering which islands…",
    "OwnerUserId": 9,
    "LastEditorUserId": 101,
    "LastEditDate": "2011-12-28T21:36:43.91",
    "LastActivityDate": "2012-05-24T14:52:14.76",
    "Title": "What are some Caribbean cruises for October?",
    "Tags": [ "caribbean", "cruising", "vacations" ],
    "AnswerCount": 4,
    "CommentCount": 4,
    "CloseDate": "0001-01-01T00:00:00",

However, as we look further down this dataset, we see that the data quickly gets more complex and cannot be mapped into a flat structure. For example, there can be numerous comments and answers associated with a single document. Even Votes is defined here as a complex type (technically it could have been flattened, but that would add work to transform the data).

"CloseDate": "0001-01-01T00:00:00",
    "Comments": [
        {
            "Score": 0,
            "Text": "To help with the cruise line question: Where are you located? My wife and I live in New Orlea
            "CreationDate": "2011-06-21T20:25:14.257",
           "UserId": 12
        },
        {
            "Score": 0,
            "Text": "Toronto, Ontario. We can fly out of anywhere though.",
            "CreationDate": "2011-06-21T20:27:35.3",
            "UserId": 9
        },
        {
            "Score": 3,
            "Text": ""Best" for what?  Please read [this page](http://travel.stackexchange.com/questions/how-to
            "UserId": 20
        },
        {
            "Score": 2,
            "Text": "What do you want out of a cruise? To relax on a boat? To visit islands? Culture? Adventure?
            "CreationDate": "2011-06-24T05:07:16.643",
            "UserId": 65
        }
    ],
    "Votes": {
        "UpVotes": 10,
        "DownVotes": 2
    },
    "Answers": [
        {
            "IsAcceptedAnswer": "True",
            "Body": "This is less than an answer, but more than a comment…nnA large percentage of your travel b
            "Score": 7,
            "CreationDate": "2011-06-24T05:12:01.133",
            "OwnerUserId": 74

All of this data is important to the search experience. For example, you might want to search over the comments and answers as well as the original question, or rank documents by their vote counts.

In fact, we could even improve on the existing StackExchange search interface by leveraging Cognitive Search to extract key phrases from the answers to supply potential phrases for autocomplete as the user types in the search box.

All of this is now possible because not only can you map this data to a complex structure, but the search queries can support this enhanced structure to help build out a better search experience.
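For example, once Votes is modeled as a complex field, OData field paths can reach inside it at query time. Here is a hedged sketch, reusing the placeholder service name and api-version from the index example above:

// Hedged sketch: querying into a complex field with OData field paths.
// Finds posts mentioning "caribbean cruise" with more than 5 up-votes.
async function searchPosts(): Promise<void> {
  const res = await fetch(
    "https://<service>.search.windows.net/indexes/posts/docs/search?api-version=2019-05-06",
    {
      method: "POST",
      headers: { "api-key": "<query-key>", "Content-Type": "application/json" },
      body: JSON.stringify({
        search: "caribbean cruise",
        filter: "Votes/UpVotes gt 5",   // reach inside the Votes complex type
        select: "Title, Votes/UpVotes", // project sub-fields back out
      }),
    }
  );
  console.log(await res.json());
}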

Next Steps

If you would like to learn more about Azure Search complex types, please visit the documentation, or check out the video and associated code I made which digs into this Travel StackExchange data in more detail.

Taking advantage of the new Azure Application Gateway V2

We recently released Azure Application Gateway V2 and Web Application Firewall (WAF) V2. These SKUs are named Standard_v2 and WAF_v2 respectively, and are fully supported with a 99.95% SLA. The new SKUs offer significant improvements and additional capabilities to customers:

  • Autoscaling allows elasticity for your application by scaling the application gateway as needed based on your application’s traffic pattern. You no longer need to run the application gateway at peak provisioned capacity, which significantly reduces cost.
  • Zone redundancy enables your application gateway to survive zonal failures, offering better resilience for your application.
  • The static VIP feature ensures that your endpoint address will not change over its lifecycle.
  • Header rewrite allows you to add, remove, or update HTTP request and response headers on your application gateway, enabling scenarios such as HSTS support, securing cookies, and changing cache controls without touching your application code (see the sketch after this list).
  • Faster provisioning and configuration update time.
  • Improved performance for your application gateway helps reduce overall cost.
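As an illustration of the header rewrite capability called out above, the fragment below (expressed as a TypeScript object) sketches a rewrite rule set that adds an HSTS response header. The property names follow the ARM rewriteRuleSets schema, but verify them against the current template reference before use:

// Hedged sketch of an Application Gateway v2 rewrite rule set adding an HSTS
// response header; the shape follows the ARM rewriteRuleSets schema.
const rewriteRuleSet = {
  name: "security-headers",
  properties: {
    rewriteRules: [
      {
        name: "add-hsts",
        ruleSequence: 100,
        actionSet: {
          responseHeaderConfigurations: [
            {
              headerName: "Strict-Transport-Security",
              headerValue: "max-age=31536000; includeSubDomains",
            },
          ],
        },
      },
    ],
  },
};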

Diagram showing improved capabilities in V2

We highly recommend that customers use the V2 SKUs instead of the V1 SKU for new applications/workloads.

Customers who have existing applications behind the V1 SKUs of Application Gateway/WAF should also consider migrating to the V2 SKUs sooner rather than later. These are some of the reasons:

  • Features and improvements: You can take advantage of the improvements and capabilities mentioned above and continue to take advantage of new features in our roadmap as they are released. Going forward, most of the new features in our roadmap will only be released on the V2 SKU.
  • Cost: The V2 SKU may work out to be cheaper overall for you relative to the V1 SKU. See our pricing page for more information on V2 SKU costs.
  • Platform support in future: We will be disabling the creation of new gateways on the V1 SKU at some point in the future; advance notification will be provided so customers have sufficient time to migrate. Migrating your gateways to the V2 SKU sooner rather than later will allow us to allocate more of our engineering and support resources to the V2 SKU sooner. Help us help you!

Guided migration – Configuration replication to V2 SKU gateway

While customers can certainly do the migration on their own by manually configuring new V2 gateways with the same configuration as their V1 gateways, in reality this could be quite complicated and error-prone for many customers due to the number of configuration touchpoints involved. To help with this, we have recently published a PowerShell script, along with documentation, that helps replicate the configuration on a V1 gateway to a new V2 gateway.

The PowerShell script requires a few inputs and will seamlessly copy over the configuration from a specified V1 gateway to a new V2 gateway (the V2 gateway will be created for you automatically). There are a few limitations, so please review those before using the script, and visit our mini FAQ for additional guidance.

Switching over traffic to new V2 endpoints

This is completely up to the customer, as the specifics of how traffic flows through the application gateway vary from application to application and customer to customer. However, we have provided guidance for some common traffic-flow scenarios. We will consider future tooling to help customers with this phase, especially for customers using Azure DNS or Azure Traffic Manager to direct traffic to application gateways.

Feedback

As always, we are interested in hearing your valuable feedback. For specific feedback on migration to the V2 SKU, you are welcome to email us at [email protected]. For general feedback on Application Gateway, please use our Azure Feedback page.

Simplifying AI with automated ML no-code web interface

Leverage the power of automated machine learning

 

Artificial Intelligence (AI) has become the hottest topic in tech. Executives and business managers, analysts and engineers, developers, and data scientists all want to leverage the power of AI to gain better insights into their work and better predictions for accomplishing their goals.

 

While businesses are beginning to fully realize the potential of machine learning (ML), it requires advanced data science skills that are hard to come by. There are many business domain experts who have a general understanding of machine learning and predictive analytics; however, they prefer not to delve into the depths of statistics or coding that working with traditional ML tools requires.

With the announcement of automated ML in Azure Machine Learning service last December, we started the journey to both accelerate and simplify AI. This helps data scientists who want to automate part of their ML workflow so they can spend more time focusing on other business objectives. It also makes AI available to a wider audience of business users who don’t have advanced data science and coding knowledge. One recent example is the integration with Power BI, which makes ML accessible to data analysts.

We are excited to announce a new automated machine learning web user interface (UI) in the Azure portal, available now in preview.

AutoML UI

Furthering our mission to scale machine learning to the masses, we now introduce an automated machine learning user interface (UI), which enables business domain experts to train ML models without requiring expertise in coding. Users can import their own data and, within a few clicks, start training on it. Automated machine learning will try a plethora of different combinations of algorithms and their hyperparameters to come up with the best possible ML model, customized to the user’s data. Users can then deploy the model to Azure Machine Learning service as a web service to generate future predictions on new data.

Whether you’d like to predict customer churn, detect fraudulent transactions, or forecast demand, the most important knowledge you’ll need is to understand your data. Automated machine learning will find the best model for you and help you understand how well it will perform when making predictions on new data.

To start exploring the automated machine learning UI, simply go to the Azure portal and navigate to an Azure Machine Learning workspace, where you will see “Automated machine learning” under the “Authoring” section. If you don’t have an Azure Machine Learning workspace yet, you can learn how to create one.

Authoring (preview)

Building models made easier

Let’s take a look at how easy it is to build and train models with the new user interface.

Quickly set up a new experiment

Starting an experiment is fast and easy. First, select a name for the experiment. Then choose the compute type to use for data exploration and training. If you don’t have a compute resource yet, you will find it easy to create one from this page.

Creating a new automated machine learning experiment

     

Review and explore data

  • Select your data file (you can upload one from your machine) to get a preview of the data and explore it.
  • You can see both a sample of the raw data and stats on each column, such as type, values histogram, min and max values, and more.
  • You can also choose to exclude columns from the training job.
  • Then, identify whether this is a classification, regression, or forecasting training type.
  • From here you can select the column you’d like to get predictions on.
  • Start training to let automated machine learning find the best model.

Selecting training job type and target column

       

Control and fine-tune settings

If you are well versed in machine learning internals, you can open the “Advanced settings” section. Here you can define your desired settings for the training job, such as early exit criteria, the cross-validation method to use, algorithms to exclude, and more.

Defining desired settings for the training job

         

Review key metrics

In the automated machine learning dashboard, you can see all your experiments and filter them by name, date, and state, as well as drill down into any of the runs. Once started, you can view the experiment’s progress in real time as more algorithms are evaluated and a model is produced. You can evaluate each of the models using the various charts available, and review detailed metrics on each run iteration to determine whether it is the most suitable model.

Reviewing detailed metrics on a run iteration in the Run Detail view

           

Share your work

Want to consult with your colleagues, or show off your work? The user interface enables and supports collaborative experiences. To share your workspace with other people in your organization, simply do so through access control.

           

Resources

Get started today with your new Azure free trial, and learn more about the automated machine learning user interface.

Advance your career with the Google Africa Certifications Scholarships

Posted by William Florance, Global Head, Developer Training Programs

Building upon our pledge to provide mobile developer training to 100,000 Africans so they can develop world-class apps, today we are pleased to announce the next round of Google Africa Certification Scholarships, aimed at helping developers become certified on Google’s Android, Web, and Cloud technologies.

This year, we are offering 30,000 additional scholarship opportunities and 1,000 grants for the Google Associate Android Developer, Mobile Web Specialist, and Associate Cloud Engineer certifications. The scholarship program will be delivered by our partners, Pluralsight and Andela, through an intensive learning curriculum designed to prepare motivated learners for entry-level and intermediate roles as software developers. Interested students in Africa can learn more about the Google Africa Certifications Scholarships and apply here.

According to the World Bank, Africa is on track to have the largest working-age population (1.1 billion) by 2034. Today’s announcement marks a transition from inspiring new developers to preparing them for the jobs of tomorrow. Google’s developer certifications are performance-based: they are developed around a job-task analysis that tests learners for the skills employers expect developers to have.

As announced during Google CEO Sundar Pichai’s visit to Nigeria in 2017, our continued initiatives focused on digital skills training, education and economic opportunity, and support for African developers and startups demonstrate our commitment to helping advance a healthy and vibrant ecosystem. By providing support for training and certifications, we will help bridge the unemployment gap on the continent by increasing the number of employable software developers.

Although Google’s developer certifications are relatively new, we have already seen evidence that becoming certified can make a meaningful difference to developers and employers. Adaobi Frank – a graduate of the Associate Android Developer certification – got a better job that paid ten times more than her previous salary after completing her certification. Her interview was expedited, as her employer was convinced that she was right for the role after she mentioned that she was certified. Now she has a job that helps provide for her family – see her video here. Through our efforts this year, we want to help many more developers like Ada and support the growth of startups and technology companies throughout Africa.

Follow this link to learn more about the scholarships and apply.