Doubling down on the edge with Coral’s new accelerator

Posted by The Coral Team


Moving into the fall, the Coral platform continues to grow with the release of the M.2 Accelerator with Dual Edge TPU. Its first application is in Google’s Series One room kits, where it helps remove interruptions and makes the audio clearer for better video meetings. To help even more folks build products with Coral intelligence, we’re dropping the prices on several of our products. And for those looking to level up their at-home video production, we’re sharing a demo of a pose-based AI director that makes multi-camera video easier to produce.

Coral M.2 Accelerator with Dual Edge TPU

The newest addition to our product family brings two Edge TPU co-processors to systems in an M.2 E-key form factor. While the design requires a dual bus PCIe M.2 slot, it brings enhanced ML performance (8 TOPS) to tasks such as running two models in parallel or pipelining one large model across both Edge TPUs.

The ability to scale across multiple edge accelerators isn’t limited to only two Edge TPUs. As edge computing expands to local data centers, cell towers, and gateways, multi-Edge TPU configurations will be required to help process increasingly sophisticated ML models. Coral allows the use of a single toolchain to create models for one or more Edge TPUs that can address many different future configurations.

A great example of how the Coral M.2 Accelerator with Dual Edge TPU is being used is in the Series One meeting room kits for Google Meet.

The new Series One room kits for Google Meet run smarter with Coral intelligence


Google’s new Series One room kits use our Coral M.2 Accelerator with Dual Edge TPU to bring enhanced audio clarity to video meetings. TrueVoice®, a multi-channel noise cancellation technology, minimizes distractions to ensure every voice is heard with up to 44 channels of echo and noise cancellation, making distracting sounds like snacking or typing on a keyboard a concern of the past.

Enabling the clearest possible communication in challenging environments was the target for the Google Meet hardware team. The consideration of what makes a challenging environment was not limited to unusually noisy environments, such as lunchrooms doubling as conference rooms. Any conference room can present challenging acoustics that make it difficult for all participants to be heard.

The secret to clarity without expensive and cumbersome equipment is to use virtual audio channels and AI driven sound isolation. Read more about how Coral was used to enhance and future-proof the innovative design.

Expanding the AI edge

Earlier this year, we reduced the prices of our prototyping devices and sensors. We are excited to share further price drops on more of our products. Our System-on-Module is now available for $99.99, and our Mini PCIe Accelerator, M.2 Accelerator A+E Key, and M.2 Accelerator B+M Key are now available at $24.99. We hope this lower price will make our edge AI more accessible to more creative minds around the world. Later this month, our SoM offering will also expand to include 2GB and 4GB RAM options.

Multi-cam with AI


As we expand our platform and product family, we continue to keep new edge AI use cases in mind. We are continually inspired by our developer community’s experimentation and implementations. When recently faced with the challenges of multi-cam video production from home, Markku Lepistö, Solutions Architect at Google Cloud, created this real-time pose-based multi-cam tool he aptly dubbed AI Director.

We love seeing such unique implementations of on-device ML and invite you to share your own projects and feedback at [email protected].

For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform.

Summer updates from Coral

Posted by the Coral Team

Summer has arrived along with a number of Coral updates. We’re happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we’ve released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.

Open-source Edge TPU runtime now available on GitHub

First up, our Edge TPU runtime is now open-source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open-source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.

Windows drivers now available for the Mini PCIe and M.2 accelerators

Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.

New fresh bits on the Coral ML software stack

We’ve also made a number of new updates to our ML tools:

  • The Edge TPU compiler is now version 14.1. It can be updated by running sudo apt-get update && sudo apt-get install edgetpu, or by following the instructions here
  • Our new Model Pipelining API allows you to divide your model across multiple Edge TPUs. The C++ version is currently in beta and the source is on GitHub
  • New embedding extractor models for EfficientNet, for use with on-device backpropagation. Embedding extractor models are compiled with the last fully-connected layer removed, allowing you to retrain for classification. Previously, only Inception and MobileNet were available and now retraining can also be done on EfficientNet
  • New Colab notebooks to retrain a classification model with TensorFlow 2.0 and build C++ examples

Balena partners with Coral to enable AI at the edge

We are excited to share that the balena fleet management platform now supports Coral products!

Companies running a fleet of ML-enabled devices on the edge need to keep their systems up-to-date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Together, Coral and balena bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.

Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.

New version of Mendel Linux

Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.

New models

Last but not least, we’ve recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020 and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outline, and aggregate over time to see heat maps of population flow.

Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.

We’re excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos—including tackling today’s biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding box of person detection, they can compute a region for safe distancing and let a user know if social distance isn’t being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, this can be accomplished in real time in a privacy-preserving manner.

We can’t wait to see more projects that the community can make with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body part manipulation. Let us know what you come up with at our community channels, including GitHub and StackOverflow.

________________________

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at [email protected].

Building a more resilient world together

Posted by Billy Rutledge, Director of the Coral team


Recently, we’ve seen communities respond to the challenges of the coronavirus pandemic by using technology in new ways to effect positive change. It’s increasingly important that our systems are able to adapt to new contexts, handle disruptions, and remain efficient.

At Coral, we believe intelligence at the edge is a key ingredient towards building a more resilient future. By making the latest machine learning tools easy-to-use and accessible, innovators can collaborate to create solutions that are most needed in their communities. Developers are already using Coral to build solutions that can understand and react in real-time, while maintaining privacy for everyone present.

Helping our communities stay safe, together

As mandatory isolation measures begin to relax, compliance with safe social distancing protocol has become a topic of primary concern for experts across the globe. Businesses and individuals have been stepping up to find ways to use technology to help reduce the risk and spread. Many efforts are employing the benefits of edge AI—here are a few early stage examples that have inspired us.


In Belgium, engineers at Edgise recently used Coral to develop an occupancy monitor to aid businesses in managing capacity. With the privacy preserving properties of edge AI, businesses can anonymously count how many customers enter and exit a space, signaling when the area is too full.

A research group at the Sathyabama Institute of Science and Technology in India are using Coral to develop a wearable device to serve as a COVID-19 cough counter and health monitor, allowing medical professionals to better care for low risk patients in an outpatient capacity. Coral’s Edge TPU enables biometric data to be processed efficiently, without draining the limited power resources available in wearable devices.

All across the US, hospitals are seeking solutions to ensure adherence to hygiene policy amongst hospital staff. In one example, a device incorporates the compact, affordable and offline benefits of the Coral modules to aid in handwashing practices at numerous stations throughout a facility.

And around the world, members of the PyImageSearch community are exploring how to use TensorFlow to train a COVID-19 face mask detector model that can identify whether people are wearing a mask. Open source frameworks can empower anyone to develop solutions, and with Coral components we can help bring those benefits to everyone.

Eliciting a global response

In an effort to rally greater community involvement, Coral has joined The United Nations Development Programme and Hackster.io, as a sponsor of the COVID-19 Detect and Protect Challenge. The initiative calls on developers to build affordable and reproducible solutions that support response efforts in developing countries. All ideas are welcome—whether they use ML or not—and we encourage you to participate.

To make edge ML capabilities even easier to integrate, we’re also announcing a price reduction for the Coral products widely used for experimentation and prototyping. Our Dev Board will now be offered at $129.99, the USB Accelerator at $59.99, the Camera Module at $19.99, and the Enviro Board at $14.99. Additionally, we are introducing the USB Accelerator into 10 new markets: Ghana, Thailand, Singapore, Oman, Philippines, Indonesia, Kenya, Malaysia, Israel, and Vietnam. For more details, visit Coral.ai/products.

We’re excited to see the solutions developers will bring forward with Coral. And as always, please keep sending us feedback at [email protected].

Want to use AutoML Tables from a Jupyter Notebook? Here’s how

While there’s no doubt that machine learning (ML) can be a great tool for businesses of all shapes and sizes, actually building ML models can seem daunting at first. Cloud AutoML—Google Cloud’s suite of products—provides tools and functionality to help you build ML models that are tailored to your specific needs, without needing deep ML expertise.

AutoML solutions provide a user interface that walks you through each step of model building, including importing data, training your model on the data, evaluating model performance, and predicting values with the model. But, what if you want to use AutoML products outside of the user interface? If you’re working with structured data, one way to do it is by using the AutoML Tables SDK, which lets you trigger—or even automate—each step of the process through code. 

There is a wide variety of ways that the SDK can help embed AutoML capabilities into applications. In this post, we’ll use an example to show how you can use the SDK from end-to-end within your Jupyter Notebook. Jupyter Notebooks are one of the most popular development tools for data scientists. They enable you to create interactive, shareable notebooks with code snippets and markdown for explanations. Without leaving Google Cloud’s hosted notebook environment, AI Platform Notebooks, you can leverage the power of AutoML technology.

There are several benefits of using AutoML technology from a notebook. Each step and setting can be codified so that it runs the same every time by everyone. Also, it’s common, even with AutoML, to need to manipulate the source data before training the model with it. By using a notebook, you can use common tools like pandas and numpy to preprocess the data in the same workflow. Finally, you have the option of creating a model with another framework and ensembling it with the AutoML model for potentially better results. Let’s get started!

Understanding the data

The business problem we’ll investigate in this blog is how to identify fraudulent credit card transactions. The technical challenge we’ll face is how to deal with imbalanced datasets: only 0.17% of the transactions in the dataset we’re using are marked as fraud. More details on this problem are available in the research paper Calibrating Probability with Undersampling for Unbalanced Classification.

To get started, you’ll need a Google Cloud Platform project with billing enabled. To create a project, follow the instructions here. For a smooth experience, check that the necessary storage and ML APIs are enabled. Then, follow this link to access BigQuery public datasets in the Google Cloud console.

In the Resources tree in the bottom-left corner, navigate through the list of datasets until you find ml-datasets, and then select the ulb-fraud-detection table within it.


Click the Preview tab to preview sample records from the dataset. Each record has the following columns:

  • Time is the number of seconds between the first transaction in the dataset and the time of the selected transaction.
  • V1-V28 are columns that have been transformed via a dimensionality reduction technique called PCA that has anonymized the data.
  • Amount is the transaction amount.
  • Class is 1 if the transaction is fraudulent and 0 otherwise; this is the value we’ll predict.

Set up your Notebook Environment

Now that we’ve looked at the data, let’s set up our development environment. The notebook we’ll use can be found in AI Hub. Select the “Open in GCP” button, then choose to either deploy the notebook in a new or existing notebook server.


Configure the AutoML Tables SDK

Next, let’s highlight key sections of the notebook. Some details, such as setting the project ID, are omitted for brevity, but we highly recommend running the notebook end-to-end when you have an opportunity.

We’ve recently released a new and improved AutoML Tables client library. You will first need to install the library and initialize the Tables client.
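As a rough sketch of those two steps (the package name and client constructor shown here follow the google-cloud-automl Tables client; the project ID and region are placeholders, and AutoML Tables runs in us-central1):

```python
# Install the client library first, e.g. in a notebook cell:
#   !pip install google-cloud-automl

from google.cloud import automl_v1beta1 as automl

PROJECT_ID = "your-gcp-project-id"  # placeholder
REGION = "us-central1"              # AutoML Tables region

# The Tables client is used for every step that follows.
client = automl.TablesClient(project=PROJECT_ID, region=REGION)
```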

By the way, we recently announced that AutoML Tables can now be used in Kaggle kernels. You can learn more in this tutorial notebook, but the setup is similar to what you see here.

Import the Data 

The first step is to create an AutoML Tables dataset, which is essentially a container for the data. Next, import the data from the BigQuery fraud detection table. You can also import from a CSV file in Google Cloud Storage or directly from a pandas dataframe.
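A minimal sketch of these two steps might look like the following (the dataset display name is arbitrary, and the bq:// URI pointing at the public ulb_fraud_detection table is illustrative):

```python
# Create an AutoML Tables dataset to hold the fraud-detection data.
dataset = client.create_dataset(dataset_display_name="fraud_detection")

# Import the records from the public BigQuery table. Import is a
# long-running operation, so wait for it to complete before continuing.
import_op = client.import_data(
    dataset=dataset,
    bigquery_input_uri="bq://bigquery-public-data.ml_datasets.ulb_fraud_detection",
)
import_op.result()
```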

Train the Model

First, we have to specify which column we would like to predict, or our target column, with set_target_column(). The target column for our example will be “Class”—either 1 or 0, if the transaction is fraudulent or not.

Then, we’ll specify which columns to exclude from the model. We’ll only exclude the target column, but you could also exclude IDs or other information you don’t want to include in the model.
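Sketched out, those two settings look roughly like this (column names follow the dataset above; in the client library we’re assuming, the exclusion list is passed later when the model is created, as shown in the training sketch below):

```python
# The target column is what the model learns to predict.
client.set_target_column(dataset=dataset, column_spec_display_name="Class")

# Columns to leave out of training; here we only exclude the target itself.
# (This list is handed to create_model in the training step below.)
columns_to_exclude = ["Class"]
```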

There are a few other things you might want to do that aren’t necessarily needed in this example:

  • Set weights on individual columns

  • Create your own custom test/train/validation split and specify the column to use for the split

  • Specify which timestamp column to use for time-series problems

  • Override the data types and nullable status that AutoML Tables inferred during data import

The one slightly unusual thing that we did in this example is override the default optimization objective. Since this is a very imbalanced dataset, it’s recommended that you optimize for AU-PRC, or the area under the Precision/Recall curve, rather than the default AU-ROC.
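Putting it together, the training call might look roughly like this (the display name, the one-node-hour budget, and the MAXIMIZE_AU_PRC objective string are assumptions to verify against the SDK reference):

```python
# Kick off training, optimizing for area under the Precision/Recall curve
# because the dataset is heavily imbalanced.
create_op = client.create_model(
    model_display_name="fraud_detection_model",
    dataset=dataset,
    train_budget_milli_node_hours=1000,            # about one node-hour
    exclude_column_spec_names=columns_to_exclude,  # defined above
    optimization_objective="MAXIMIZE_AU_PRC",
)
model = create_op.result()  # training is long-running; block until done
```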

Evaluate the Model

After training has been completed, you can review various performance statistics on the model, such as the accuracy, precision, recall, and so on. The metrics are returned in a nested data structure, and here we are pulling out the AU-PRC and AU-ROC from that data structure.
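As an illustrative sketch (the attribute names below reflect how we’ve seen the nested evaluation structure laid out; inspect the returned objects in your own notebook to confirm):

```python
# Walk the model's evaluations and print the aggregate metrics we care about.
for evaluation in client.list_model_evaluations(model=model):
    metrics = evaluation.classification_evaluation_metrics
    if metrics.au_prc:  # per-label entries may not carry aggregate values
        print("AU-PRC:", metrics.au_prc)
        print("AU-ROC:", metrics.au_roc)
```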

Deploy and Predict with the Model

To enable online predictions, the model must first be deployed. (You can perform batch predictions without deploying the model).

We’ll create a hypothetical transaction record with similar characteristics and predict on it. After invoking the predict() API with this record, we receive a data structure with each class and its score. The code below finds the class with the maximum score.
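A rough sketch of that flow (the feature values in the hypothetical record are made up, and the payload field names are our assumption about the Tables response format):

```python
# Deploy the model so it can serve online predictions (long-running).
client.deploy_model(model=model).result()

# A hypothetical transaction: Time, V1..V28, and Amount are made-up values.
record = {"Time": 80422, "Amount": 17.99}
record.update({f"V{i}": 0.0 for i in range(1, 29)})

response = client.predict(model=model, inputs=record)

# Pick the class with the highest score from the returned payload.
best = max(response.payload, key=lambda p: p.tables.score)
print("Predicted class:", best.tables.value, "score:", best.tables.score)
```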

Conclusion 

Now that we’ve seen how you can use AutoML Tables straight from your notebook to produce an accurate model of a complex problem, all with a minimal amount of code, what’s next?

To find out more, the AutoML Tables documentation is a great place to start. When you’re ready to use AutoML in a notebook, the SDK guide has detailed descriptions of each operation and parameter. You might also find our samples on Github helpful.

After you feel comfortable with AutoML Tables, you might want to look at other AutoML products. You can apply what you’ve learned to solve problems in the Natural Language, Translation, Vision, and Video Intelligence domains.

Find me on Twitter at @kweinmeister, and good luck with your next AutoML experiment!

Discover insights from text with AutoML Natural Language, now generally available

Organizations are managing and processing greater volumes of text-heavy, unstructured data than ever before. To manage this information more efficiently, organizations are looking to machine learning to help with the complex sorting, processing, and analysis this content needs. In particular, natural language processing is a valuable tool used to reveal the structure and meaning of text, and today we’re excited to announce that AutoML Natural Language is generally available. 

AutoML Natural Language has many features that make it a great match for these data processing challenges. It includes common machine learning tasks like classification, sentiment analysis, and entity extraction, which have a wide variety of applications, such as: 

  • Categorizing digital content, including news, blogs, and tweets, in real time to allow content creators to see patterns and insights—a great example is Meredith, which is categorizing text content across its entire portfolio of media properties in months instead of years

  • Identifying sentiment in customer feedback

  • Turning dark, unstructured scanned data into classified and searchable content 

We’re also introducing support for PDFs, including native PDFs and PDFs of scanned images. To further unlock the most complex and challenging use cases—such as understanding legal documents or document classification for organizations with large and complex content taxonomies—AutoML Natural Language now supports 5,000 classification labels, training up to 1 million documents, and document size up to 10 MB. 

One customer using this new functionality is Chicory, which develops custom digital shopping and marketing solutions for the grocery industry. 

“AutoML Natural Language allows us to solve complex classification problems at scale. We are using AutoML to classify and translate recipe ingredient data across a network of 1,300 recipe websites into actual grocery products that consumers can purchase seamlessly through our partnerships with dozens of leading grocery retailers like Kroger, Amazon, and Instacart,” Asaf Klibansky, Director of Engineering at Chicory explains. “With the expansion of the max classification label size to the thousands, we can expand our label/ingredient taxonomy to be more detailed than ever, providing our shoppers with better matches during their grocery shopping experience—a business challenge we have been trying to perfect since Chicory began. 

“Also, we see better model performance than we were able to achieve using open source libraries, and we have increased visibility into the individual label performance that we did not have before,” Klibansky continues. “This has allowed us to identify insufficient or poor quality training data per label quickly and reduce the time and cost between model iterations.”

We’re continuously improving the quality of our models in partnership with Google AI research through better fine-tuning techniques and larger model search spaces. We’re also introducing more advanced features to help AutoML Natural Language understand documents better.

For example, AutoML Text & Document Entity Extraction will now look at more than just text to incorporate the spatial structure and layout information of a document for model training and prediction. This spatial awareness leads to better understanding of the entire document, and is especially valuable in cases where both the text and its location on the “page” are important, such as invoices, receipts, resumes, and contracts.

Identifying applicant skills by location on the document.

We also launched preferences for enterprise data residency for AutoML Natural Language customers in Europe and across the globe to better serve organizations in regulated industries. Many customers are already taking advantage of this functionality, which allows you to create a dataset, train a model, and make predictions while keeping your data and related machine learning processing within the EU or any other applicable region. Finally, AutoML Natural Language is FedRAMP-authorized at the Moderate level, making it easier for federal agencies to benefit from Google AI technology.

To learn more about AutoML Natural Language and the Natural Language API, check out our website. We can’t wait to hear what you discover with your data.

Better bandit building: Advanced personalization the easy way with AutoML Tables

As demand grows for features like personalization systems, efficient information retrieval, and anomaly detection, the need for a solution to optimize these features has grown as well. Contextual bandit is a machine learning framework designed to tackle these—and other—complex situations.

With contextual bandit, a learning algorithm can test out different actions and automatically learn which one has the most rewarding outcome for a given situation. It’s a powerful, generalizable approach for solving key business needs in industries from healthcare to finance, and almost everything in between.

While many businesses may want to use bandits, applying it to your data can be challenging, especially without a dedicated ML team. It requires model building, feature engineering, and creating a pipeline to conduct this approach.

Using Google Cloud AutoML Tables, however, we were able to create a contextual bandit model pipeline that performs as well as or better than other models, without needing a specialist for tuning or feature engineering.

A better bandit building solution: AutoML Tables

Before we get too deep into what contextual bandits are and how they work, let’s briefly look at why AutoML Tables is such a powerful tool for training them. Our contextual bandits model pipeline takes in structured data in the form of a simple database table, uses the contextual bandit and meta-learning theories to perform automated machine learning, and creates a model that can be used to suggest optimal future actions related to the problem. 

In our research paper, “AutoML for Contextual Bandits”—which we presented at the ACM RecSys Conference REVEAL workshop—we illustrated how to set this up using the standard, commercially available Google Cloud product.

As we describe in the paper, AutoML Tables enables users with little machine learning expertise to easily train a model using a contextual bandit approach. It does this with:

  • Automated Feature Engineering, which is applied to the raw input data

  • Architecture Search to compute the best architecture(s) for our bandits formulation task—e.g. to find the best predictor model for the expected reward of each episode

  • Hyper-parameter Tuning through search

  • Model Selection where models that have achieved promising results are passed onto the next stage

  • Model Tuning and Ensembling

This solution could be a game-changer for businesses that want to perform bandit machine learning but don’t have the resources to implement it from scratch. 

Bandits, explained

Now that we’ve seen how AutoML Tables handles bandits, we can learn more about what, exactly, they are. As with many topics, bandits are best illustrated with the help of an example. Let’s say you are an online retailer that wants to show personalized product suggestions on your homepage.

You can only show a limited number of products to a specific customer, and you don’t know which ones will have the best reward. In this case, let’s make the reward $0 if the customer doesn’t buy the product, and the item price if they do.

To try to maximize your reward, you could utilize a multi-armed bandit (MAB) algorithm, where each product is a bandit—a choice available for the algorithm to try. During each play, the multi-armed bandit agent must choose to show the user item 1 or item 2. Each play is independent of the other—sometimes the user will buy item 2 for $22, sometimes the user will buy item 2 twice, earning a reward of $44.


The multi-armed bandit approach balances exploration and exploitation of bandits.


To continue our example, you probably want to show a camera enthusiast products related to cameras (exploitation), but you also want to see what other products they may be interested in, like gaming gadgets or wearables (exploration). A good practice is to explore more at the beginning, when the agent’s information about the environment is less accurate, and gradually shift toward exploitation as more knowledge is gained.

Now let’s say we have a customer that’s a professional interior designer and an avid knitting hobbyist. They may be ordering wallpaper and mirrors during working hours and browsing different yarns when they’re home. Depending on what time of day they access our website, we may want to show them different products.

The contextual bandit algorithm is an extension of the multi-armed bandit approach where we factor in the customer’s environment, or context, when choosing a bandit. The context affects how a reward is associated with each bandit, so as contexts change, the model should learn to adapt its bandit choice.
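To make the idea concrete, here is a toy, self-contained sketch (not the AutoML Tables pipeline from our paper) of an epsilon-greedy contextual bandit that explores heavily at first and gradually exploits the per-context reward estimates it learns:

```python
import random
from collections import defaultdict

class EpsilonGreedyContextualBandit:
    """Toy contextual bandit keeping a running mean reward per (context, action)."""

    def __init__(self, actions, epsilon=1.0, decay=0.995, min_epsilon=0.05):
        self.actions = actions
        self.epsilon = epsilon            # start fully exploratory
        self.decay = decay                # drift toward exploitation over time
        self.min_epsilon = min_epsilon
        self.counts = defaultdict(int)    # (context, action) -> number of plays
        self.values = defaultdict(float)  # (context, action) -> mean reward

    def choose(self, context):
        if random.random() < self.epsilon:
            action = random.choice(self.actions)  # explore
        else:
            # exploit: best known action for this context
            action = max(self.actions, key=lambda a: self.values[(context, a)])
        self.epsilon = max(self.min_epsilon, self.epsilon * self.decay)
        return action

    def update(self, context, action, reward):
        key = (context, action)
        self.counts[key] += 1
        # Incremental update of the observed mean reward.
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Example: suggest wallpaper during work hours, yarn in the evening.
bandit = EpsilonGreedyContextualBandit(actions=["wallpaper", "yarn"])
for _ in range(1000):
    context = random.choice(["work_hours", "evening"])
    action = bandit.choose(context)
    # Simulated shopper: buys yarn in the evening, wallpaper during work hours.
    reward = 22 if (context == "evening") == (action == "yarn") else 0
    bandit.update(context, action, reward)
```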


Not only do you want your contextual bandit approach to find the maximum reward, you also want to reduce the reward loss when you’re exploring different bandits. When judging the performance of a model, the metric that measures reward loss is regret—the difference between the cumulative reward from the optimal policy and the model’s cumulative sum of rewards over time. The lower the regret, the better the model.
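As a small illustration of the metric (purely a sketch, not the evaluation code from the paper), cumulative regret can be tracked like this:

```python
def cumulative_regret(optimal_rewards, model_rewards):
    """Regret after each step: optimal cumulative reward minus the model's."""
    regret, total_optimal, total_model = [], 0.0, 0.0
    for optimal, achieved in zip(optimal_rewards, model_rewards):
        total_optimal += optimal
        total_model += achieved
        regret.append(total_optimal - total_model)
    return regret

# Example: the optimal policy would have earned $22 every round.
print(cumulative_regret([22, 22, 22], [0, 22, 22]))  # -> [22.0, 22.0, 22.0]
```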

How contextual bandits on AutoML Tables measure up

In “AutoML for Contextual Bandits” we used different data sets to compare our bandit model powered by AutoML Tables to previous work. Namely, we compared our model to the online cover algorithm implementation for Contextual Bandit in the Vowpal Wabbit library, which is considered one of the most sophisticated options available for contextual bandit learning.

Using synthetic data we generated, we found that our AutoML Tables model reduced the regret metric as the number of data blocks increased, and outperformed the Vowpal Wabbit offering.


We also compared our model’s performance with other models on some other well-known datasets that the contextual bandit approach has been tried on. These datasets have been used in other popular work in the field, and aim to test contextual bandit models on applications as diverse as chess and telescope data. 

We consistently found that our AutoML model performed well against other approaches, and was exceptionally better than the Vowpal Wabbit solution on some datasets.

Regret curves for the Gamma Telescope, Chess, Covertype, and Dou Shou Qi datasets.

Contextual bandits is an exciting method for solving the complex problems businesses face today, and AutoML Tables makes it accessible for a wide range of organizations—and performs extremely well, to boot. To learn more about our solution, check out “AutoML for Contextual Bandits.” Then, if you have more direct questions or just want more information, reach out to us at [email protected].


The Google Cloud Bandits Solutions Team contributed to this report: Joe Cheuk, Cloud Application Engineer; Praneet Dutta, Cloud Machine Learning Engineer; Jonathan S Kim, Customer Engineer; Massimo Mascaro, Technical Director, Office of the CTO, Applied AI

Updates from Coral: Mendel Linux 4.0 and much more!

Posted by Carlos Mendonça (Product Manager), Coral Team

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we’re announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board’s GPU to convert YUV to RGB pixel data at up to 130 frames per second at 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial


A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator’s ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model’s performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at [email protected].

Bringing Google AutoML to 3.5 Million Data Scientists on Kaggle

Recently, Kaggle hit a significant milestone by surpassing 3.5 million users who use our platform to learn and apply machine learning. AI is one of the world’s most powerful emerging technologies, but even with its growing numbers, its adoption has been hampered by the limited number of data scientists who have access to the tools and expertise to leverage it effectively. Kaggle’s mission is to empower our community of data scientists by providing them with the skills and tools they need to lead in their field, and now we’re advancing that mission by integrating AutoML into our platform.

Why we’re excited about AutoML

AutoML stepped into our spotlight earlier this year, where it led for most of our all-day machine learning competition at Kaggle Days at Cloud Next ’19, before being narrowly edged out by a team of data scientists in the closing moments of the event. The strong performance even made headlines and generated excitement for its future. 

What especially drew our interest was that the team using AutoML was able to get strong performing results quickly, with low effort and no domain expertise or supervision. What’s more, they spent very little time on data prep, and virtually no time on feature engineering, model selection, and hyperparameter tuning. The time efficiency of AutoML became even more clear during the IEEE competition, where it took thousands of teams several weeks to beat the AutoML benchmark by a significant margin on our private leaderboard.

This figure shows submission scores (individual points) during the first four weeks of the competition, compared to the AutoML Tables benchmark score posted at the start of the competition (green line). The dashed blue line represents the 90th percentile of daily submission scores. The AutoML Tables benchmark beats >90% of daily submissions for approximately the first two weeks of the competition.

The simplicity and efficacy of the tool brings promising potential for people with data science problems—but not necessarily deep data science backgrounds—to create powerful models.

How it works

Automated machine learning tools (AMLTs) have been on the market for several years and come in varying flavors, but they all generally look to automate the end-to-end process of training a machine learning model based on minimally preprocessed input data. Google Brain published their seminal paper on automated machine learning in 2016, and the exciting results from that research, combined with the potential to make machine learning more accessible, led Google Cloud to invest in making AutoML accessible through its AI Platform.

Cloud AutoML is now a suite of products that helps users build custom machine learning models for a diverse set of tasks on data ranging from vision to language to structured data. The exact use varies by each individual product, but all follow the general pattern of ingesting your data from their SDK or web UI, giving you a few knobs to adjust, and then outputting a trained model that can be deployed to GCP with one click. Today’s release focuses on enabling our community to use the SDK directly within Kaggle Notebooks.

Getting started with AutoML on Kaggle

Kaggle’s integration with AutoML follows in the footsteps of our prior work bringing BigQuery to Kaggle Notebooks.

To get started, simply link your GCP account and authorize access to the cloud services you’d like to use. Enabling Cloud Storage at the same time will make it easy for AutoML to access your data.


Once you link your Google account, you’ll want to double-check that your cloud account is ready for you to start using AutoML. To do this, make sure you’ve enabled the ML APIs and billing for your GCP project. AutoML is a paid service, and free tier limits and charges vary by the individual product you are using. In order to make it more accessible to more Kagglers, we plan to offer GCP credits throughout the year to subsidize the costs of using the service, and all new Google accounts that sign up for GCP get a $300 credit.

From there, you’re ready to get started!

You can now easily run AutoML using the built-in client SDKs within your Kaggle Notebook, or by using the web interface within the cloud console. To get started with AutoML in your Notebook, check out the documentation or one of our tutorials. To learn more about the topic of automated machine learning and how it can improve your data science workflow, take a look at our explainer video.

Keep up with the latest with Kaggle

We’re looking forward to seeing what you think about these new tools, and we’ll continue to invest in new ways to make our platform and machine learning more accessible. Follow Kaggle’s YouTube channel to keep up with the latest on what Kaggle’s doing, including an upcoming workshop on model selection, weekly live coding, and more.

Coral moves out of beta

Posted by Vikram Tank (Product Manager), Coral Team


Last March, we launched Coral beta from Google Research. Coral helps engineers and researchers bring new models out of the data center and onto devices, running TensorFlow models efficiently at the edge. Coral is also at the core of new applications of local AI in industries ranging from agriculture to healthcare to manufacturing. We’ve received a lot of feedback over the past six months and used it to improve our platform. Today we’re thrilled to graduate Coral out of beta, into a wider, global release.

Coral is already delivering impact across industries, and several of our partners are including Coral in products that require fast ML inferencing at the edge.

In healthcare, Care.ai is using Coral to build a device that enables hospitals and care centers to respond quickly to falls, prevent bed sores, improve patient care, and reduce costs. Virgo SVS is also using Coral as the basis of a polyp detection system that helps doctors improve the accuracy of endoscopies.

In a very different use case, Olea Edge employs Coral to help municipal water utilities accurately measure the amount of water used by their commercial customers. Their Meter Health Analytics solution uses local AI to reduce waste and predict equipment failure in industrial water meters.

Nexcom is using Coral to build gateways with local AI and provide a platform for next-gen, AI-enabled IoT applications. By moving AI processing to the gateway, existing sensor networks can stay in service without the need to add AI processing to each node.

From prototype to production


Coral’s Dev Board is designed as an integrated prototyping solution for new product development. Under the heatsink is the detachable Coral SoM, which combines Google’s Edge TPU with the NXP IMX8M SoC, Wi-Fi and Bluetooth connectivity, memory, and storage. We’re happy to announce that you can now purchase the Coral SoM standalone. We’ve also created a baseboard developer guide to help integrate it into your own production design.

Our Coral USB Accelerator allows users with existing system designs to add local AI inferencing via USB 2/3. For production workloads, we now offer three new Accelerators that feature the Edge TPU and connect via PCIe interfaces: Mini PCIe, M.2 A+E key, and M.2 B+M key. You can easily integrate these Accelerators into new products or upgrade existing devices that have an available PCIe slot.

The new Coral products are available globally and for sale at Mouser; for large volume sales, contact our sales team. By the end of 2019, we’ll continue to expand our distribution of the Coral Dev Board and SoM into new markets including: Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana and the Philippines.

Better resources

We’ve also revamped the Coral site with better organization for our docs and tools, a set of success stories, and industry-focused pages. All of it can be found at a new, easier-to-remember URL: Coral.ai.

To help you get the most out of the hardware, we’re also publishing a new set of examples. The included models and code can provide solutions to the most common on-device ML problems, such as image classification, object detection, pose estimation, and keyword spotting.

For those looking for a more in-depth application—and a way to solve the eternal problem of squirrels plundering your bird feeder—the Smart Bird Feeder project shows you how to perform classification with a custom dataset on the Coral Dev Board.

Finally, we’ll soon release a new version of the Mendel OS that updates the system to Debian Buster, and we’re hard at work on more improvements to the Edge TPU compiler and runtime that will improve the model development workflow.

The official launch of Coral is, of course, just the beginning, and we’ll continue to evolve the platform. Please keep sending us feedback at [email protected].

How Moorfields is using AutoML to enable clinicians to develop machine learning solutions

The democratization of AI and machine learning holds the promise for outcomes with enormous human benefit, and nowhere is this more apparent than in health and life sciences. One such example is Moorfields Eye Hospital NHS Foundation Trust, the leading provider of eye health services in the UK and a world-class centre of excellence for ophthalmic research and education.

In 2016, Moorfields announced a five-year partnership with DeepMind Health to explore whether artificial intelligence (AI) technology could help clinicians improve patient care. Last year, as a result of this partnership, Moorfields announced a major milestone for the treatment of eye disease. Its AI system could quickly interpret eye scans from routine clinical practice for over 50 sight-threatening eye diseases—as accurately as world-leading expert doctors.

Today, Moorfields has announced another new advancement, which has been published in The Lancet Digital Health. Using Google Cloud AutoML Vision, clinicians without prior experience in coding or deep learning were able to develop models to accurately detect common diseases from medical images.

As Pearse Keane, Consultant Ophthalmologist at Moorfields Eye Hospital, who led this project, said:

“At present, the development of AI systems requires highly specialised technical expertise. If this technology can be used more widely—in particular by healthcare professionals without computer programming experience—it will really speed up the development of these systems with the potential for significant patient benefits.”

Although the ability to create classification models without deep understanding of AI is attractive, comparative performance against expertly-designed models is still limited to more simple classification tasks. Pearse adds: “The process needs refining and regulation, but our results show promise for the future expansion of AI in medical diagnosis.”

Google Cloud AutoML is a set of products that allows users without ML expertise to develop and train high-quality machine learning models. By applying Google’s cutting-edge research in transfer learning and neural architecture search technology, users can leverage the results of existing state-of-the-art ML models to build new ones with brand new data. Because the most complex part of the model, feature extraction, is pre-trained, classification on a new dataset is fast and accurate. The team at Moorfields was able to quickly train and evaluate five different models using Cloud AutoML.

The Moorfields team started by identifying five public open-source datasets that their researchers could use to test and train models. These included de-identified medical images from the fields of ophthalmology, radiology, and dermatology, such as eye scans, chest X-rays, and photos of skin lesions. After learning how to use Cloud AutoML Vision by reviewing ten hours of online documentation, two researchers assembled and reviewed datasets simultaneously. They then worked together to build the models. After the images were uploaded to Google Cloud, AutoML Vision was used to train each model for up to 24 hours.

The resulting models were then compared to published results from deep learning studies. All of the models the researchers created except one performed as well as state-of-the-art deep learning algorithms. The research demonstrates the potential for clinicians without AI expertise to explore and develop technologies to transform patient care. Beyond allowing clinicians to build and test diagnostic models, AutoML can be used to train physicians in the basics of deep learning. While the focus of this research was not centered around interpretability, it is understood to be of critical importance for medical applications.

AI continues to pave the way for advancements that improve lives on a global scale—from business to healthcare to education. Cloud AutoML has already been used by researchers to assess and track environmental change, by scientists to help monitor endangered species, and by The New York Times to digitize and preserve 100 years of history in its photo archive. We’re excited to see how businesses and organizations across the world apply AI to solve the problems that matter most.

Coral summer updates: Post-training quant support, TF Lite delegate, and new models!

Posted by Vikram Tank (Product Manager), Coral Team


Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.

The compiler has been updated to version 2.0, adding support for models built using post-training quantization—only when using full integer quantization (previously, we required quantization-aware training)—and fixing a few bugs. As the TensorFlow team mentions in their Medium post, “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models that are quantized with this method can now be accelerated by the Edge TPU found in Coral products.
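For reference, here is a minimal sketch of post-training full integer quantization using a current TensorFlow 2.x converter API (the converter interface available when this post was written differed slightly, the saved-model path is a placeholder, and the random representative dataset stands in for real calibration images):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few batches of calibration data so the converter can choose
    # int8 ranges for every tensor; random data is only a placeholder.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require full integer quantization so every op runs as 8-bit integers,
# which is what the Edge TPU compiler expects.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```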

We’ve also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device backpropagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. It allows for near-real-time transfer learning and doesn’t require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.

Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we’ve released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
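In practice, that looks roughly like the sketch below (the libedgetpu.so.1 delegate name is the one used on Linux, and the model path and input are placeholders):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load an Edge TPU-compiled model and attach the Edge TPU delegate.
interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Run one inference with placeholder input data.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
```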

Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that’s optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.

And, as summer comes to an end, we also want to share that Arrow offers a student and teacher discount for those looking to experiment with the boards in class or the lab this year.

We’re excited to keep evolving the Coral platform, please keep sending us feedback at [email protected].