Developer updates from Coral

Posted by The Coral Team

We’re always excited to share updates to our Coral platform for building edge ML applications. In this post, we have some interesting demos, interfaces, and tutorials to share, and we’ll start by pointing you to an important software update for the Coral Dev Board.

Important update for the Dev Board / SoM

If you have a Coral Dev Board or Coral SoM, please install our latest Mendel update as soon as possible to receive a critical fix to part of the SoC power configuration. To get it, just log onto your board and install the update as follows:

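The commands below mirror the standard Mendel package-update sequence from the Coral documentation; run them in a shell on the board:

sudo apt-get update
sudo apt-get dist-upgrade
sudo reboot now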

This will install a patch from NXP for the Dev Board / SoM’s SoC, without which the SoC may become overstressed and the lifetime of the device could be reduced. If you recently flashed your board with the latest system image, you might already have this fix (we also updated the flashable image today), but it never hurts to fetch all updates, as shown above.

Note: This update does not apply to the Dev Board Mini.

Manufacturing demo

We recently published the Coral Manufacturing Demo, which demonstrates how to use a single Coral Edge TPU to simultaneously accomplish two common manufacturing use-cases: worker safety and visual inspection.

The demo is built around two specific videos and tasks (worker keepout detection and apple quality grading), but it is designed to be easily customized with different inputs and tasks. The demo, written in C++, requires OpenGL and is primarily targeted at x86 systems, which are prevalent in manufacturing gateways, although ARM Cortex-A systems, like the Coral Dev Board, are also supported.

Web Coral

We’ve been working hard to make ML acceleration with the Coral Edge TPU available for most popular systems. So we’re proud to announce support for WebUSB, allowing you to use the Coral USB Accelerator directly from Chrome. To get started, check out our WebCoral demo, which builds a webpage where you can select a model and run an inference accelerated by the Edge TPU.

New models repository

We recently released a new models repository that makes it easier to explore the various trained models available for the Coral platform, including image classification, object detection, semantic segmentation, pose estimation, and speech recognition. Each family page lists the various models, including details about training dataset, input size, latency, accuracy, model size, and other parameters, making it easier to select the best fit for the application at hand. Lastly, each family page includes links to training scripts and example code to help you get started. Or for an overview of all our models, you can see them all on one page.

Transfer learning tutorials

Even with our collection of pre-trained models, it can sometimes be tricky to create a task-specific model that’s compatible with our Edge TPU accelerator. To make this easier, we’ve released some new Google Colab tutorials that allow you to perform transfer learning for object detection, using MobileDet and EfficientDet-Lite models. You can find these and other Colabs in our GitHub Tutorials repo.

We are excited to share all that Coral has to offer as we continue to evolve our platform. Keep an eye out for more software and platform related news coming this summer. To discover more about our edge ML platform, please visit Coral.ai and share your feedback at [email protected].

Doubling down on the edge with Coral’s new accelerator

Posted by The Coral Team

Moving into the fall, the Coral platform continues to grow with the release of the M.2 Accelerator with Dual Edge TPU. Its first application is in Google’s Series One room kits, where it helps remove interruptions and makes the audio clearer for better video meetings. To help even more folks build products with Coral intelligence, we’re dropping the prices on several of our products. And for those looking to level up their at-home video production, we’re sharing a demo of a pose-based AI director that makes multi-camera video easier to produce.

Coral M.2 Accelerator with Dual Edge TPU

The newest addition to our product family brings two Edge TPU co-processors to systems in an M.2 E-key form factor. While the design requires a dual bus PCIe M.2 slot, it brings enhanced ML performance (8 TOPS) to tasks such as running two models in parallel or pipelining one large model across both Edge TPUs.

The ability to scale across multiple edge accelerators isn’t limited to only two Edge TPUs. As edge computing expands to local data centers, cell towers, and gateways, multi-Edge TPU configurations will be required to help process increasingly sophisticated ML models. Coral allows the use of a single toolchain to create models for one or more Edge TPUs that can address many different future configurations.
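
As a rough sketch of the parallel case, the snippet below binds one TensorFlow Lite interpreter to each Edge TPU via the libedgetpu delegate’s device option. The model filenames are placeholders, and each model must already be compiled for the Edge TPU:

from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder model files; the 'device' option selects which Edge TPU a delegate binds to.
interpreter_a = Interpreter(
    model_path="model_a_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1", {"device": ":0"})])
interpreter_b = Interpreter(
    model_path="model_b_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1", {"device": ":1"})])
interpreter_a.allocate_tensors()
interpreter_b.allocate_tensors()
# Drive each interpreter from its own thread to run the two models in parallel,
# one per Edge TPU.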

A great example of how the Coral M.2 Accelerator with Dual Edge TPU is being used is in the Series One meeting room kits for Google Meet.

The new Series One room kits for Google Meet run smarter with Coral intelligence

Google’s new Series One room kits use our Coral M.2 Accelerator with Dual Edge TPU to bring enhanced audio clarity to video meetings. TrueVoice®, a multi-channel noise cancellation technology, minimizes distractions to ensure every voice is heard with up to 44 channels of echo and noise cancellation, making distracting sounds like snacking or typing on a keyboard a concern of the past.

Enabling the clearest possible communication in challenging environments was the target for the Google Meet hardware team. The consideration of what makes a challenging environment was not limited to unusually noisy environments, such as lunchrooms doubling as conference rooms. Any conference room can present challenging acoustics that make it difficult for all participants to be heard.

The secret to clarity without expensive and cumbersome equipment is to use virtual audio channels and AI driven sound isolation. Read more about how Coral was used to enhance and future-proof the innovative design.

Expanding the AI edge

Earlier this year, we reduced the prices of our prototyping devices and sensors. We are excited to share further price drops on more of our products. Our System-on-Module is now available for $99.99, and our Mini PCIe Accelerator, M.2 Accelerator A+E Key, and M.2 Accelerator B+M Key are now available at $24.99. We hope this lower price will make our edge AI more accessible to more creative minds around the world. Later this month, our SoM offering will also expand to include 2 and 4GB RAM options.

Multi-cam with AI

As we expand our platform and product family, we continue to keep new edge AI use cases in mind. We are continually inspired by our developer community’s experimentation and implementations. When recently faced with the challenges of multicam video production from home, Markku Lepistö, Solutions Architect at Google Cloud, created this real-time pose-based multicam tool he so aptly dubbed AI Director.

We love seeing such unique implementations of on-device ML and invite you to share your own projects and feedback at [email protected].

For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform.

Summer updates from Coral

Posted by the Coral Team

Summer has arrived along with a number of Coral updates. We’re happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we’ve released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.

Open-source Edge TPU runtime now available on GitHub

First up, our Edge TPU runtime is now open-source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open-source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.

Windows drivers now available for the Mini PCIe and M.2 accelerators

Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.

New fresh bits on the Coral ML software stack

We’ve also made a number of new updates to our ML tools:

  • The Edge TPU compiler is now version 14.1. It can be updated by running sudo apt-get update && sudo apt-get install edgetpu-compiler, or by following the instructions here
  • Our new Model Pipelining API allows you to divide your model across multiple Edge TPUs. The C++ version is currently in beta and the source is on GitHub
  • New embedding extractor models for EfficientNet, for use with on-device backpropagation. Embedding extractor models are compiled with the last fully-connected layer removed, allowing you to retrain for classification. Previously, only Inception and MobileNet were available and now retraining can also be done on EfficientNet
  • New Colab notebooks to retrain a classification model with TensorFlow 2.0 and build C++ examples

Balena partners with Coral to enable AI at the edge

We are excited to share that the Balena fleet management platform now supports Coral products!

Companies running a fleet of ML-enabled devices on the edge need to keep their systems up-to-date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Coral and balena together bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.

Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.

New version of Mendel Linux

Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.

New models

Last but not least, we’ve recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020 and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outlines, and aggregate over time to see heat maps of population flow.

Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.

We’re excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos—including tackling today’s biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding boxes from person detection, they can compute a region for safe distancing and let a user know if social distance isn’t being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, it can all be accomplished in real time in a privacy-preserving manner.
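
The core idea is simple geometry on top of the detector output. The sketch below is a generic illustration rather than Neuralet’s actual code: it takes person bounding boxes in pixels plus a rough pixels-per-meter calibration and flags any pair of people closer than a minimum distance.

import itertools
import math

def too_close_pairs(boxes, pixels_per_meter, min_distance_m=2.0):
    # boxes: list of (x1, y1, x2, y2) person detections in pixel coordinates.
    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for (x1, y1, x2, y2) in boxes]
    flagged = []
    for (i, a), (j, b) in itertools.combinations(enumerate(centers), 2):
        distance_m = math.hypot(a[0] - b[0], a[1] - b[1]) / pixels_per_meter
        if distance_m < min_distance_m:
            flagged.append((i, j, distance_m))
    return flagged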

We can’t wait to see more projects that the community will make with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body-part manipulation. Let us know what you come up with at our community channels, including GitHub and StackOverflow.

________________________

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at [email protected].

Building a more resilient world together

Posted by Billy Rutledge, Director of the Coral team

UNDP Hackster.io COVID19 Detect Protect Poster

Recently, we’ve seen communities respond to the challenges of the coronavirus pandemic by using technology in new ways to effect positive change. It’s increasingly important that our systems are able to adapt to new contexts, handle disruptions, and remain efficient.

At Coral, we believe intelligence at the edge is a key ingredient towards building a more resilient future. By making the latest machine learning tools easy-to-use and accessible, innovators can collaborate to create solutions that are most needed in their communities. Developers are already using Coral to build solutions that can understand and react in real-time, while maintaining privacy for everyone present.

Helping our communities stay safe, together

As mandatory isolation measures begin to relax, compliance with safe social distancing protocol has become a topic of primary concern for experts across the globe. Businesses and individuals have been stepping up to find ways to use technology to help reduce the risk and spread. Many efforts are employing the benefits of edge AI—here are a few early stage examples that have inspired us.

In Belgium, engineers at Edgise recently used Coral to develop an occupancy monitor to aid businesses in managing capacity. With the privacy preserving properties of edge AI, businesses can anonymously count how many customers enter and exit a space, signaling when the area is too full.

A research group at the Sathyabama Institute of Science and Technology in India are using Coral to develop a wearable device to serve as a COVID-19 cough counter and health monitor, allowing medical professionals to better care for low risk patients in an outpatient capacity. Coral’s Edge TPU enables biometric data to be processed efficiently, without draining the limited power resources available in wearable devices.

All across the US, hospitals are seeking solutions to ensure adherence to hygiene policy amongst hospital staff. In one example, a device incorporates the compact, affordable and offline benefits of the Coral modules to aid in handwashing practices at numerous stations throughout a facility.

And around the world, members of the PyImageSearch community are exploring how to train a COVID-19: Face Mask Detector model using TensorFlow that can be used to identify whether people are wearing a mask. Open source frameworks can empower anyone to develop solutions, and with Coral components we can help bring those benefits to everyone.

Eliciting a global response

In an effort to rally greater community involvement, Coral has joined The United Nations Development Programme and Hackster.io, as a sponsor of the COVID-19 Detect and Protect Challenge. The initiative calls on developers to build affordable and reproducible solutions that support response efforts in developing countries. All ideas are welcome—whether they use ML or not—and we encourage you to participate.

To make edge ML capabilities even easier to integrate, we’re also announcing a price reduction for the Coral products widely used for experimentation and prototyping. Our Dev Board will now be offered at $129.99, the USB Accelerator at $59.99, the Camera Module at $19.99, and the Enviro Board at $14.99. Additionally, we are introducing the USB Accelerator into 10 new markets: Ghana, Thailand, Singapore, Oman, Philippines, Indonesia, Kenya, Malaysia, Israel, and Vietnam. For more details, visit Coral.ai/products.

We’re excited to see the solutions developers will bring forward with Coral. And as always, please keep sending us feedback at [email protected]

New Coral products for 2020

Posted by Billy Rutledge, Director Google Research, Coral Team

More and more industries are beginning to recognize the value of local AI, where the speed of local inference allows considerable savings on bandwidth and cloud compute costs, and keeping data local preserves user privacy.

Last year, we launched Coral, our platform of hardware components and software tools that make it easy to prototype and scale local AI products. Our product portfolio includes the Coral Dev Board, USB Accelerator, and PCIe Accelerators, all now available in 36 countries.

Since our release, we’ve been excited by the diverse range of applications already built on Coral across a broad set of industries that range from healthcare to agriculture to smart cities. And for 2020, we’re excited to announce new additions to the Coral platform that will expand the possibilities even further.

First up is the Coral Accelerator Module, an easy to integrate multi-chip package that encapsulates the Edge TPU ASIC. The module exposes both PCIe and USB interfaces and can easily integrate into custom PCB designs. We’ve been working closely with Murata to produce the module and you can see a demo at CES 2020 by visiting their booth at the Las Vegas Convention Center, Tech East, Central Plaza, CP-18. The Coral Accelerator Module will be available in the first half of 2020.

Coral Accelerator Module, a new multi-chip module with Google Edge TPU

Next, we’re announcing the Coral Dev Board Mini, which provides a smaller form-factor, lower-power, and lower-cost alternative to the Coral Dev Board. The Mini combines the new Coral Accelerator Module with the MediaTek 8167s SoC to create a board that excels at 720P video encoding/decoding and computer vision use cases. The board will be on display during CES 2020 at the MediaTek showcase located in the Venetian, Tech West, Level 3. The Coral Dev Board Mini will be available in the first half of 2020.

We’re also offering new variations to the Coral System-on-Module, now available with 2GB and 4GB LPDDR4 RAM in addition to the original 1GB LPDDR4 configuration. We’ll be showcasing how the SoM can be used in smart city, manufacturing, and healthcare applications, as well as a few new SoC and MCU explorations we’ve been working on with the NXP team at CES 2020 in their pavilion located at the Las Vegas Convention Center, Tech East, Central Plaza, CP-18.

Finally, Asus has chosen the Coral SoM as the base of their Tinker Edge T product, a maker-friendly single-board computer that features a rich set of I/O interfaces, multiple camera connectors, programmable LEDs, and a color-coded GPIO header. The Tinker Edge T board will be available soon — more details can be found here from Asus.

Come visit Coral at CES Jan 7-10 in Las Vegas:

  • NXP exhibit (LVCC, Tech East, Central Plaza, CP-18)
  • Mediatek exhibit (Venetian, Tech West, Level 3)
  • Murata exhibit (LVCC, South Hall 2, MP26061)

And, as always, we are looking for ways to improve the platform, so keep reaching out to us at [email protected]

Updates from Coral: Mendel Linux 4.0 and much more!

Posted by Carlos Mendonça (Product Manager), Coral Team

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we’re announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board’s GPU to convert YUV to RGB pixel data at up to 130 frames per second on 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial

A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator’s ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model’s performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at [email protected].

Coral moves out of beta

Posted by Vikram Tank (Product Manager), Coral Team

Last March, we launched Coral beta from Google Research. Coral helps engineers and researchers bring new models out of the data center and onto devices, running TensorFlow models efficiently at the edge. Coral is also at the core of new applications of local AI in industries ranging from agriculture to healthcare to manufacturing. We’ve received a lot of feedback over the past six months and used it to improve our platform. Today we’re thrilled to graduate Coral out of beta, into a wider, global release.

Coral is already delivering impact across industries, and several of our partners are including Coral in products that require fast ML inferencing at the edge.

In healthcare, Care.ai is using Coral to build a device that enables hospitals and care centers to respond quickly to falls, prevent bed sores, improve patient care, and reduce costs. Virgo SVS is also using Coral as the basis of a polyp detection system that helps doctors improve the accuracy of endoscopies.

In a very different use case, Olea Edge employs Coral to help municipal water utilities accurately measure the amount of water used by their commercial customers. Their Meter Health Analytics solution uses local AI to reduce waste and predict equipment failure in industrial water meters.

Nexcom is using Coral to build gateways with local AI and provide a platform for next-gen, AI-enabled IoT applications. By moving AI processing to the gateway, existing sensor networks can stay in service without the need to add AI processing to each node.

From prototype to production

Coral’s Dev Board is designed as an integrated prototyping solution for new product development. Under the heatsink is the detachable Coral SoM, which combines Google’s Edge TPU with the NXP IMX8M SoC, Wi-Fi and Bluetooth connectivity, memory, and storage. We’re happy to announce that you can now purchase the Coral SoM standalone. We’ve also created a baseboard developer guide to help integrate it into your own production design.

Our Coral USB Accelerator allows users with existing system designs to add local AI inferencing via USB 2/3. For production workloads, we now offer three new Accelerators that feature the Edge TPU and connect via PCIe interfaces: Mini PCIe, M.2 A+E key, and M.2 B+M key. You can easily integrate these Accelerators into new products or upgrade existing devices that have an available PCIe slot.

The new Coral products are available globally and for sale at Mouser; for large volume sales, contact our sales team. By the end of 2019, we’ll continue to expand our distribution of the Coral Dev Board and SoM into new markets, including Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana, and the Philippines.

Better resources

We’ve also revamped the Coral site with better organization for our docs and tools, a set of success stories, and industry-focused pages. All of it can be found at a new, easier-to-remember URL: Coral.ai.

To help you get the most out of the hardware, we’re also publishing a new set of examples. The included models and code can provide solutions to the most common on-device ML problems, such as image classification, object detection, pose estimation, and keyword spotting.

For those looking for a more in-depth application—and a way to solve the eternal problem of squirrels plundering your bird feeder—the Smart Bird Feeder project shows you how to perform classification with a custom dataset on the Coral Dev board.

Finally, we’ll soon release a new version of the Mendel OS that updates the system to Debian Buster, and we’re hard at work on more improvements to the Edge TPU compiler and runtime that will improve the model development workflow.

The official launch of Coral is, of course, just the beginning, and we’ll continue to evolve the platform. Please keep sending us feedback at [email protected].

Coral summer updates: Post-training quant support, TF Lite delegate, and new models!

Posted by Vikram Tank (Product Manager), Coral Team

Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.

The compiler has been updated to version 2.0, adding support for models built using post-training quantization—only when using full integer quantization (previously, we required quantization-aware training)—and fixing a few bugs. As the Tensorflow team mentions in their Medium post “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models that are quantized with this method can now be accelerated by the Edge TPU found in Coral products.
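
For reference, a full-integer post-training quantization pass looks roughly like this with a recent TensorFlow 2.x converter. The saved-model path and input shape are placeholders, and the exact flags have shifted slightly across TensorFlow releases:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # A few hundred samples that match the model's input shape and value range.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to integer ops so the whole graph can map to the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("my_model_quant.tflite", "wb") as f:
    f.write(converter.convert())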

We’ve also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device backpropagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. It allows for near-real-time transfer learning and doesn’t require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.

Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we’ve released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
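
In practice that means the usual tflite_runtime flow with one extra argument. Here is a minimal sketch of the delegate flow; the model filename is a placeholder, and the delegate library name varies by OS (libedgetpu.so.1 on Linux, edgetpu.dll on Windows):

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy uint8 tensor of the right shape just to exercise the pipeline.
dummy = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])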

Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that’s optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.

And, as summer comes to an end we also want to share that Arrow offers a student teacher discount for those looking to experiment with the boards in class or the lab this year.

We’re excited to keep evolving the Coral platform, please keep sending us feedback at [email protected].

Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team

We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features, and content that helps ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we’re making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.
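
Once installed, invoking the compiler is a one-liner: it takes a quantized TensorFlow Lite model and writes an Edge TPU-compatible copy alongside it (the model name below is illustrative).

edgetpu_compiler my_model_quant.tflite
# Produces my_model_quant_edgetpu.tflite plus a compilation log in the same directory.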

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we’re updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.

We’re also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.

In addition to these updates, we’re adding new capabilities to Coral with the release of the Environmental Sensor Board. It’s an accessory board for the Coral Dev Platform (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The secure element on board also allows for easy communication with Google Cloud IoT Core.

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone – including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We’re excited to keep evolving the Coral platform, please keep sending us feedback at [email protected].

You can see the full release notes on the Coral site.

AWS Greengrass Pro Tips

10 tips to help developers get the best out of AWS Greengrass

AWS Greengrass is a fantastic technology: it extends the AWS cloud to the edge and lets us build serverless solutions that span cloud, edge, and on-prem compute.

But like any new tech, it’s still a bit rough around some edges and far from easy to use. While our AWS friends are working hard to make it more civilized, here are 10 tips to help you get the best out of AWS Greengrass right now.

These are PRO tips, and they assume a basic grasp of AWS Greengrass, including some hands-on experience. These tips are best suited for those running a real IoT project on AWS. If you are new to these services, please bookmark this and come back after reviewing the Greengrass tutorial and AWS Greengrass Troubleshooting.

Tip #1 — Nail your development setup

There are many ways to set up a dev environment for different tastes — but some approaches are more efficient than others. Greengrassing via AWS web console is fine for a first tutorial, but after you fat-finger your subscriptions a few times you’ll be looking for a better way.

I prefer old-fashioned local development, and I’m fanatical about GitOps and infrastructure-as-code. This influenced my setup:

My editor of choice, git, and the AWS CLI with named profiles to jump between testing and production accounts all run on my Mac. The Greengrass Core software is installed on a Vagrant VM; all the prerequisites and installation steps are codified in a Vagrantfile.

I use greengo.io to operate the Greengrass Group as code. Greengo lets me define the entire group (core, subscriptions, policies, resources, etc.) as simple YAML. It then creates everything in AWS as defined and takes care of creating Lambda functions in AWS from the local code. It also downloads the certificates and configuration file, which helper scripts put straight onto the Greengrass VM. As a bonus, greengo knows how to clean it all up!

With all that in place, I edit Lambda functions and Greengrass definitions alike with the convenience of my favorite editor. I make changes, update, deploy, rinse, repeat. I jump onto the Greengrass VM with vagrant ssh to check on Greengrass well-being via the logs, start and stop the daemon, explore it, and pull the various tricks described below.

All the work is captured as code, tracked by git, goes to GitHub right off my laptop, and can be easily reproduced. A complete code example — a codified AWS ML inference tutorial — is on GitHub for you to check out and reproduce: dzimine/aws-greengrass-ml-inference.

This setup serves me well, but it is of course not the only approach. You can take the three common ways to create your Lambda functions with AWS and creatively extend them to your Greengrass development. You might like Cloud9 + Greengrass on an Amazon VM, as Jan Metzner demonstrated in the IOT402 workshop at re:Invent 2018.

If you’re a master of CloudFormation templates, check out the elegant GrassFormation and the development flow that comes with it. Whichever you choose, do yourself a favor — nail your development setup first.

Tip #2 — Greengrass in Docker: proceed with caution

I managed to run Greengrass in Docker before the official support arrived in v1.7, using a couple of Docker tricks: the devicemapper storage driver instead of the default overlay2, and the --privileged flag. It worked, and had no limitations on Greengrass functionality.

But with newer versions of Docker, the Linux kernel, and Greengrass, it gradually stopped working. I abandoned the case, but you can always try your luck on GitHub: dzimine/aws-greengrass-docker.

Now we can officially run AWS IoT Greengrass in a Docker container. It is a great convenience, but a close look shows that it is more of a workaround. Making Greengrass containers run inside a Docker container is admittedly difficult and unlikely to ever suit production. So AWS introduced a configuration option to run Lambdas outside of a container, as OS processes, on a per-Lambda or per-group basis.

Choosing this option brings limitations: Connectors, local resource access, and local machine learning models are not supported. Hey, not a big loss! Connectors are in their infancy, and for that matter I’d much rather have them for Lambdas and Step Functions.

When Lambdas run as OS processes they can access any local resource, so there is no need to configure local resource access. And a local ML stack can easily be custom-built into a Greengrass image to your liking.

Some concerns to consider
A bigger concern with this approach, and with optional containerization for Lambdas overall, is that it smells of hidden dependencies, making things fragile.

The IoT workflow is about defining deployments in the cloud and pushing them to a fleet of devices. Some devices don’t support containerized Lambdas, be it Greengrass running inside Docker or on a constrained OS with no containerization.

From v1.7, Greengrass says “It’s OK, it will run as long as the group is opted out of containerization”. But devices don’t advertise their capabilities, nor do they have a guard to reject an unsupported group deployment.

The knowledge of which group is safe to run on which devices must reside outside the system and inside the designer’s head, which is admittedly not the most reliable store.

While enabling containerization for a Lambda is a seemingly legit change in a group definition, it can break the deployment or cause function failures. Not to mention that running code with and without containerization may differ in many ways: I did hit weird code bugs that manifested only when containerized.

AWS Docs warn to use this option with caution, and prescribe use cases where running without containerization may be worth the tradeoff. If you want to adopt Docker with Greengrass, I’d recommend going all the way: drop containerization, run in Docker in dev and production, and use the same containers.

5 Steps for Greengrass in Docker
If the limitations outlined above don’t deter you from this approach, here’s how to run Greengrass in Docker in 5 simple steps:

1. Get the zipped repo with the Dockerfile and other artefacts from CloudFront. Why not GitHub? May I clone it to my GitHub, or is it against the license? Ask AWS. For now, CloudFront:

curl https://d1onfpft10uf5o.cloudfront.net/greengrass-core/downloads/1.7.1/aws-greengrass-docker-1.7.1.tar.gz | tar -x

2. Build the docker image with docker-compose

cd aws-greengrass-docker-1.7.1
PLATFORM=x86-64 GREENGRASS_VERSION=1.7.1 docker-compose build

3. Use greengo to create a group: it will place the certificates and configuration for you into /certs/ and /config/. Or go die-hard with the AWS console and download and sort out the certificates and config yourself. Attention: the deployment must have all Lambdas opted out of containers (or pinned=True in the API, CLI, or greengo.yaml definition).

4. Run Docker with docker-compose:

PLATFORM=x86-64 GREENGRASS_VERSION=1.7.1 docker-compose up

5. Profit. For any complications, like needing an ARM image or the misfortune of running on Windows, open README.md and enjoy the detailed instructions.

You might ask, “why not use Docker in the development setup instead of a heavy Vagrant VM?” You may: I do it myself sometimes, as a testbed for greengo development.

But for a production-ready IoT deployment I prefer an environment that is most representative of the target deployment, which is not always the case given the Greengrass Docker limitations.

Tip #3 — Basic troubleshooting hints

When something goes wrong — which it likely will — here are the things to check:

  1. Make sure greengrassd is started, and look for any errors in the output:

/greengrass/ggc/core/greengrassd start

If it doesn’t start, it is most likely missing some prerequisites, which you can quickly check with the greengrass-dependency-checker script. You can get the checker for the most recent version from the aws-greengrass-samples repo.

There is also AWS Device Tester, a heavier option typically used by device manufacturers to certify their devices as Greengrass-capable.

2. Check the system logs for errors.
Look under /greengrass/ggc/var/log/system/, starting with runtime.log. There are a few other logs covering various aspects of Greengrass functionality, described in the AWS docs. I mostly look at runtime.log:

tail /greengrass/ggc/var/log/system/runtime.log

It helps to set the logs to DEBUG level (the default is INFO). This is configured at the Greengrass Group level via the AWS console, CLI, or API, or greengo will do it for you. A group must be successfully deployed to apply the new log configuration.

But what if I am troubleshooting a failure in the initial deployment? Tough luck! There is a trick: fake a deployment locally with just a logging config in it, but it is quite cumbersome; it would be much better to be able to set this in the config.json file. Join my ask to AWS to make that possible.

3. Check the deployment
Once you trigger a deployment and it reaches the “In Progress” state, a blast of messages pops up in runtime.log as Greengrass handshakes, downloads the artefacts, and restarts the subsystems. The deployment artifacts are stored locally at /greengrass/ggc/deployment:

  • group/group.json - the deployment definition. Explore it on your own.
  • /greengrass/ggc/deployment/lambdas - Lambda code is deployed here. The content is an extracted Lambda zip package, just as you had it on your dev machine before shipping it up to AWS.

Key hint: these artifacts are what the local greengrass takes as an input. Toying with them is a great way to creatively mess with Greengrass.

I enthusiastically encourage you to write thorough logging in your Lambda code and to set log levels to DEBUG in both the system and user logging configurations.

Tip #4 — Checking connection

The Greengrass daemon starts, but nothing happens. The deployment is stuck forever with status “In Progress”. What is wrong? Most likely, it is a connection problem.

Greengrass relies on MQTT (port 8883) for messaging and HTTPS (port 443) for downloading deployment artifacts. Make sure that your network connection allows the traffic.

Checking HTTPS is obvious; here is how to test your MQTT connection (source):

openssl s_client -connect YOUR_ENDPOINT.iot.eu-west-1.amazonaws.com:8883 \
  -CAfile root.ca.pem \
  -cert CERTIFICATE.pem \
  -key CERTIFICATE.key

In a typical corporate network port 8883 might be blocked, or your device may be behind a proxy. Starting from version 1.7, Greengrass can deal with a proxy. AWS docs also describe how to reconfigure Greengrass to pass MQTT traffic through port 443, but there is a confusing note that HTTPS traffic still goes over 8443.

Source: AWS IoT Greengrass docs

Doesn’t this defeat the purpose? Greengrass needs HTTPS to get the deployment artifacts. If there is no way to switch it off 8443, how is it supposed to work? I didn’t experiment with this; if you suffer from over-secured networks please try and report your findings.

Tip #5 — Troubleshooting Lambda functions

When the deployment succeeds but nothing seems to happen as expected, it’s time to look at the Lambdas.

Check if the lambda functions are running:

ps aux | grep lambda_runtime

Look at the Lambda logs: dive under /greengrass/ggc/var/log/user/... until you find them.

A function may crash before it creates a log file: for example, when the core fails to even start the Lambda for reasons like missing dependencies or a misconfigured runtime.

Try running the Lambda manually: now that you know where it is stored locally on the Greengrass device (yes, /greengrass/ggc/deployment/lambdas/...), go there, run it in place, and observe the results.

Tip #6 — Getting inside Lambda container

Greengrass runs Lambdas in containers (unless you explicitly opt out). As such, it sometimes happens that a function that runs fine when you launch it on the device manually still fails when running under Greengrass.

Find out the PID of your lambda and get inside the container with nsenter:

sudo nsenter -t $PID -m /bin/bash

Now you are in the containerized environment of your Lambda function and can see and check mounted file systems, access to resources, dependencies, environment variables — everything.

This trick is easy for pinned (long-lived) functions, but common event-triggered Lambdas don’t live long enough to jump into their container: you might need to temporarily pin them to be able to get in.

If modifying the lambda definitions in a Greengrass group, redeploying the group, and remembering to switch it back later feels like too much trouble, hack it on the device: set pinned=True in the function section of group.json under /greengrass/ggc/deployment/group and restart the greengrassd. The next deployment will reset your local hacks.

Tip #7 — Name your Lambda for happy troubleshooting

What convention should you use for naming Lambda functions and handlers? I used to call all my Lambda entry points handler and place them in function.py files, like this:

# MyHelloWorldLambda/function.py
def handler(event, context):
    return {'green': 'grass'}

But a check for running Lambda functions produced output like this:

ps aux | grep lambda_runtime
20170 ? Ss 0:01 python2.7 /runtime/python2.7/lambda_runtime.py --handler=function.handler
20371 ? Ss 0:01 python2.7 /runtime/python2.7/lambda_runtime.py --handler=function.handler

Not too informative, eh? Instead, consider naming either the handler or the function entry point after the Lambda function itself for more insightful output and easier troubleshooting. Then finding the Lambda process PID for the tricks in Tip 5 will be a breeze.
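
For example, a hypothetical layout along those lines:

# AnomalyCatcher/anomaly_catcher.py
def anomaly_catcher(event, context):
    # ... actual handler body goes here ...
    return {'green': 'grass'}

Now the ps output reads --handler=anomaly_catcher.anomaly_catcher rather than a wall of identical function.handler entries, so the process you need for the earlier tips is obvious at a glance.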

Tip #8 — Hack a quick code change in Lambda on Greengrass

What does a typical development workflow for Greengrass look like?

  1. Make a change to your Lambda function
  2. Test it locally
  3. Deploy the function to the AWS Lambda
  4. Update/increment the alias and make sure the greengrass lambda definition points to this very alias
  5. Deploy the group to your greengrass device

After these steps, the lambda is ready to be called. This is great when deploying final production code to a fleet of field devices.

But during development, especially when debugging, we often need to try a quick code change, as tiny as printing out a value. The full workflow feels way too long and cumbersome. While our AWS friends are hopefully working on integrating a local debugger (why not, if Azure IoT Edge does it?), here’s a quick hack I use:

  • I ssh to my Vagrant VM with greengrass core
  • Go to the deployment folder
  • Find the lambda in question and edit it in place
  • Restart greengrassd, and test the change

If I don’t like the update, the next deploy will revert the change. If I do like the update, I copy the code back over the function code in my local development environment and deploy.

Here greengo.io comes in handy again, taking care of cumbersome routines like repackaging and uploading the Lambda, incrementing versions, updating aliases, and repointing the Greengrass Lambda definitions to the right alias, all behind a simple command: greengo update-lambda MY_FUNCTION

Update and deploy Lambda with greengo — quick and easy.

Tip #9 — Lambda on-demand? Lambda long-lived? Both!

AWS Greengrass supports two lifecycle models for Lambdas: “on-demand” and “long-lived”.

Lambda On-Demand
The default behavior is “on-demand”, which is the same as Lambda in the cloud: the function is invoked by an event.

To trigger a Greengrass “on-demand” Lambda on an MQTT event, you configure a subscription on a topic with the Lambda function as the target. Greengrass will spawn a function execution for each topic message, passing the message as a parameter to the function handler, which then runs for no longer than the configured timeout and shuts down. Variables and execution context are not retained between invocations.
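
A bare-bones on-demand handler, then, is just a function that receives the decoded message and optionally publishes something back through the Greengrass Core SDK. The topic name and payload field below are made up for illustration:

import json
import greengrasssdk

client = greengrasssdk.client("iot-data")

def handler(event, context):
    # 'event' carries the payload from the subscription topic.
    temperature = event.get("temperature")
    if temperature is not None and temperature > 75:
        client.publish(topic="alerts/overheat",
                       payload=json.dumps({"temperature": temperature}))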

Lambda Long-Lived
A “long-lived” (pinned) function starts when the Greengrass daemon starts and keeps running. This is a blessing for many use cases, like video stream analysis or a protocol proxy where the function listens to signals from BTLE or ZigBee devices and forwards them over MQTT.

The best-kept secret? You can combine both. Yes, a long-lived function can be triggered on demand. I use this for good causes, like keeping non-critical state in memory between invocations.

For example: my function runs anomaly detection on a stream of device data with the PEWMA (probabilistic exponentially weighted moving average) algorithm. It must keep several previous data points to compute the averages, and these data points must persist between function executions triggered by data messages.

To achieve this, combine “long-lived” with “on-demand”:

  1. Configure a subscription to fire the function on receiving the device data: Greengrass dutifully calls the handler with the data payload as messages arrive.
  2. Make the function long-lived: now I can just keep the state in memory between handler invocations. If the Lambda restarts and loses the state, that’s OK: the algorithm recovers quickly, so true persistence isn’t worth the trouble. (A minimal sketch of this pattern follows.)
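
Here is a minimal sketch of that pattern. The real PEWMA math is more involved; this keeps just an exponentially weighted mean at module level to show where the state lives, and the topic and payload field names are made up:

import json
import greengrasssdk

client = greengrasssdk.client("iot-data")

# Module-level state: because the function is long-lived, the process stays up
# and these variables persist across handler invocations.
ewma = None
ALPHA = 0.1        # smoothing factor, illustrative
THRESHOLD = 3.0    # flag readings this many times above the running mean

def handler(event, context):
    global ewma
    value = float(event["value"])
    if ewma is None:
        ewma = value
    elif value > THRESHOLD * ewma:
        client.publish(topic="anomalies/detected",
                       payload=json.dumps({"value": value, "ewma": ewma}))
    ewma = ALPHA * value + (1 - ALPHA) * ewma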

For a complete example, take a look at the AnomalyCatcher function code in my Greengrass IIoT prototype.

Remember that once the Lambda is configured “long-lived”, the function won’t run in parallel: invocations are queued and the handler is invoked one at a time. Check that this is appropriate for your message volume, and keep your handler code free of anything that might block the queue.

Tip #10 — Use systemd to manage greengrass lifecycle

When your Greengrass device reboots, you would want Greengrass to start automatically, wouldn’t you? Well, it won’t until you make it so. Use systemd to manage the lifecycle of the greengrassd process.

You’ll need to set the useSystemd flag to yes in config.json, and set up a systemd unit, like the one in this simple example:

[Unit]
Description=Greengrass Daemon
[Service]
Type=forking
PIDFile=/var/run/greengrassd.pid
Restart=on-failure
ExecStart=/greengrass/ggc/core/greengrassd start
ExecReload=/greengrass/ggc/core/greengrassd restart
ExecStop=/greengrass/ggc/core/greengrassd stop
[Install]
WantedBy=multi-user.target

Here are detailed step-by-step instructions for the Raspberry Pi. If your Greengrass device supports another init system — like upstart, SystemV, or runit — check its manual and configure greengrassd as a daemon accordingly.

I hope you enjoyed my 10 Greengrass tips and put them to good use in your IoT projects. It took some pain to figure them out: your claps 👏 would be the best reward. Be sure to visit the AWS Greengrass Troubleshooting page, and check the AWS IoT Greengrass Forum for more.

Please add your own tips in the comments here for the rest of us. For more stories and discussions on IoT, DevOps, and Serverless, follow me on Twitter @dzimine.


AWS Greengrass Pro Tips was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google’s been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we’re expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you’re ready to scale to production we offer the SOM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we’ll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we’re sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.