Category Archives: IoT

  • 22 Nov 2019
  • By editor
  • Categories: AI, Artificial Intelligence, AutoML, Coral, Edge TPU, GCP, Google, IoT, machine learning, TensorFlow
Updates from Coral: Mendel Linux 4.0 and much more!
Posted by Carlos Mendonça (Product Manager), Coral Team

Illustration of the Coral Dev Board placed next to Fall foliage

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we’re announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board’s GPU to convert YUV to RGB pixel data at up to 130 frames per second on 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial


A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator’s ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model’s performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at [email protected].

  • 22 Oct 2019
  • By editor
  • Categories: AI, Artificial Intelligence, AutoML, Coral, Edge TPU, GCP, Google, IoT, machine learning, TensorFlow
Coral moves out of beta
Posted by Vikram Tank (Product Manager), Coral Team

microchips on coral colored background

Last March, we launched Coral beta from Google Research. Coral helps engineers and researchers bring new models out of the data center and onto devices, running TensorFlow models efficiently at the edge. Coral is also at the core of new applications of local AI in industries ranging from agriculture to healthcare to manufacturing. We’ve received a lot of feedback over the past six months and used it to improve our platform. Today we’re thrilled to graduate Coral out of beta, into a wider, global release.

Coral is already delivering impact across industries, and several of our partners are including Coral in products that require fast ML inferencing at the edge.

In healthcare, Care.ai is using Coral to build a device that enables hospitals and care centers to respond quickly to falls, prevent bed sores, improve patient care, and reduce costs. Virgo SVS is also using Coral as the basis of a polyp detection system that helps doctors improve the accuracy of endoscopies.

In a very different use case, Olea Edge employs Coral to help municipal water utilities accurately measure the amount of water used by their commercial customers. Their Meter Health Analytics solution uses local AI to reduce waste and predict equipment failure in industrial water meters.

Nexcom is using Coral to build gateways with local AI and provide a platform for next-gen, AI-enabled IoT applications. By moving AI processing to the gateway, existing sensor networks can stay in service without the need to add AI processing to each node.

From prototype to production

Microchips on white background

Coral’s Dev Board is designed as an integrated prototyping solution for new product development. Under the heatsink is the detachable Coral SoM, which combines Google’s Edge TPU with the NXP IMX8M SoC, Wi-Fi and Bluetooth connectivity, memory, and storage. We’re happy to announce that you can now purchase the Coral SoM standalone. We’ve also created a baseboard developer guide to help integrate it into your own production design.

Our Coral USB Accelerator allows users with existing system designs to add local AI inferencing via USB 2/3. For production workloads, we now offer three new Accelerators that feature the Edge TPU and connect via PCIe interfaces: Mini PCIe, M.2 A+E key, and M.2 B+M key. You can easily integrate these Accelerators into new products or upgrade existing devices that have an available PCIe slot.

The new Coral products are available globally and for sale at Mouser; for large volume sales, contact our sales team. By the end of 2019, we’ll continue to expand our distribution of the Coral Dev Board and SoM into new markets including: Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana and the Philippines.

Better resources

We’ve also revamped the Coral site with better organization for our docs and tools, a set of success stories, and industry-focused pages. All of it can be found at a new, easier-to-remember URL: Coral.ai.

To help you get the most out of the hardware, we’re also publishing a new set of examples. The included models and code can provide solutions to the most common on-device ML problems, such as image classification, object detection, pose estimation, and keyword spotting.

For those looking for a more in-depth application—and a way to solve the eternal problem of squirrels plundering your bird feeder—the Smart Bird Feeder project shows you how to perform classification with a custom dataset on the Coral Dev board.

Finally, we’ll soon release a new version of the Mendel OS that updates the system to Debian Buster, and we’re hard at work on more improvements to the Edge TPU compiler and runtime that will improve the model development workflow.

The official launch of Coral is, of course, just the beginning, and we’ll continue to evolve the platform. Please keep sending us feedback at [email protected].

  • 6 Aug 2019
  • By editor
  • Categories: AI, Artificial Intelligence, AutoML, Coral, Edge TPU, GCP, Google, IoT, machine learning, TensorFlow
Coral summer updates: Post-training quant support, TF Lite delegate, and new models!

Posted by Vikram Tank (Product Manager), Coral Team

Summer updates cartoon

Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.

The compiler has been updated to version 2.0, adding support for models built using post-training quantization—but only when using full integer quantization (previously, we required quantization-aware training)—and fixing a few bugs. As the TensorFlow team mentions in their Medium post, “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models that are quantized with this method can now be accelerated by the Edge TPU found in Coral products.
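If you want to try that conversion yourself, here is a rough sketch using the TensorFlow Lite converter; the saved-model path, input shape, and representative dataset below are placeholders, and the exact converter flags vary slightly between TensorFlow versions:

import numpy as np
import tensorflow as tf

# Placeholder representative dataset: yield ~100 real samples in practice.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('my_saved_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict to int8 ops so the whole graph can run on the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())

The resulting model_quant.tflite can then be passed through the Edge TPU compiler.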

We’ve also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device back-propagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. This allows for near-real-time transfer learning and doesn’t require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.

Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we’ve released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
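For illustration (this is not the official Coral sample), here is roughly what inference through the delegate looks like with the tflite_runtime package; the model filename is a placeholder, and the delegate library name differs by platform (libedgetpu.so.1 on Linux, libedgetpu.1.dylib on macOS):

import numpy as np
import tflite_runtime.interpreter as tflite

# Load an Edge TPU-compiled model and delegate execution to the Edge TPU.
interpreter = tflite.Interpreter(
    model_path='mobilenet_v2_quant_edgetpu.tflite',  # placeholder filename
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy uint8 input with the model's expected shape, just to exercise the path.
dummy = np.zeros(input_details[0]['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))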

Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that’s optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.

And, as summer comes to an end we also want to share that Arrow offers a student teacher discount for those looking to experiment with the boards in class or the lab this year.

We’re excited to keep evolving the Coral platform, please keep sending us feedback at [email protected].

  • 31 May 2019
  • By editor
  • Categories: AIY Projects, Artificial Intelligence, Coral, Edge TPU, Edge TPU Accelerator, GCP, Google, Google Coral, IoT, machine intelligence, machine learning, maker, prototype, TensorFlow, TensorFlow Lite, TF Lite, TPU
Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team

coral hardware

We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features, and content that helps ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we’re making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

  • 11 Apr 2019
  • By editor
  • Categories: AI, AIY, AIY Projects, Artificial Intelligence, Coral, Edge TPU, Edge TPU Dev Board, GCP, Google, Google Coral, IoT, machine learning, machine learning accelerator, maker, prototype, TensorFlow, TensorFlow Lite
Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we’re updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.

We’re also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.

In addition to these updates, we’re adding new capabilities to Coral with the release of the Environmental Sensor Board. It’s an accessory board for the Coral Dev Platform (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone – including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We’re excited to keep evolving the Coral platform, please keep sending us feedback at [email protected].

You can see the full release notes on the Coral site.

  • 7 Mar 2019
  • By editor
  • Categories: AWS, cloud-computing, HowTo, IoT, programming, serverless
AWS Greengrass Pro Tips

10 tips to help developers get the best out of AWS Greengrass


AWS Greengrass is a fantastic technology: it extends the AWS cloud to the edge and lets us build serverless solutions that span cloud, edge, and on-prem compute.

But as with any new tech, it’s still a bit rough around some edges and far from easy to use. While our AWS friends are working hard to make it more civilized, here are 10 tips to help you get the best out of AWS Greengrass right now.

These are PRO tips, and they assume a basic grasp of AWS Greengrass, including some hands-on experience. They are best suited for those running a real IoT project with AWS. If you are new to these services, please bookmark this and come back after reviewing the Greengrass tutorial and AWS Greengrass Troubleshooting.

Tip #1 — Nail your development setup

There are many ways to set up a dev environment for different tastes — but some approaches are more efficient than others. Greengrassing via AWS web console is fine for a first tutorial, but after you fat-finger your subscriptions a few times you’ll be looking for a better way.

I prefer old-fashioned local development, and I’m fanatical about GitOps and infrastructure-as-code. This influenced my setup:

My editor of choice, git, and the AWS CLI with named profiles to jump between testing and production accounts — all run on my Mac. The Greengrass Core software is installed on a Vagrant VM; all the prerequisites and installation steps are codified in a Vagrantfile.

I use greengo.io to operate the Greengrass Group as code. Greengo lets me define the entire group (core, subscriptions, policies, resources, etc.) as simple YAML. It then creates the group in AWS as defined and takes care of creating the Lambda functions in AWS from the local code. It also downloads the certificates and configuration file, which helper scripts put straight onto the Greengrass VM. As a bonus, greengo also knows how to clean it all up!

With all that in place, I edit Lambda functions and Greengrass definitions alike with the convenience of my favorite editor. I make changes, update, deploy, rinse, repeat. I jump onto the Greengrass VM with vagrant ssh to check on Greengrass’s well-being via the logs, start/stop the daemon, explore it, and pull the various tricks described below.

All the work is captured as code, tracked by git, goes to GitHub right off my laptop, and can be easily reproduced. A complete code example — a codified AWS ML inference tutorial — is on GitHub for you to check out and reproduce: dzimine/aws-greengrass-ml-inference.

This setup serves me well, but it is of course not the only approach. You can take the three common ways to create your Lambda functions with AWS and creatively extend them to your Greengrass development. You might like Cloud9 + Greengrass on an Amazon VM, as Jan Metzner demonstrated in the IOT402 workshop at re:Invent 2018.

If you’re a master of CloudFormation templates, check out the elegant GrassFormation and the development flow that comes with it. Whichever you choose, do yourself a favor — nail your development setup first.

Tip #2 — Greengrass in Docker: proceed with caution

I managed to run Greengrass in Docker before the official support in v1.7, using a couple of Docker tricks: the devicemapper storage driver instead of the default overlay2, and the --privileged flag. It worked, and had no limitations on Greengrass functionality.

But with newer versions of Docker, the Linux kernel, and Greengrass, it gradually stopped working. I abandoned the effort, but you can always try your luck at GitHub: dzimine/aws-greengrass-docker.

Now we can officially run AWS IoT Greengrass in a Docker container. It is a great convenience, but a close look shows that it is more of a workaround. Making Greengrass containers run inside a Docker container is admittedly difficult and unlikely to ever be suitable for production. So AWS introduced a configuration option to run Lambdas outside of a container, as OS processes, on a per-Lambda or per-group basis.

Choosing this option brings limitations: Connectors, local resource access, and local machine learning models are not supported. Hey, not a big loss! Connectors are in their infancy — and for that matter I’d much rather have them for Lambdas and Step Functions.

When Lambdas run as OS processes they can access any local resources — so there’s no need to configure local resource access. And a local ML stack can easily be custom-built into a Greengrass image to your liking.

Some concerns to consider
A bigger concern with this approach, and overall with optional containerization for Lambdas, is that it smells of hidden dependencies — making things fragile.

The IoT workflow is about defining deployments in the cloud and pushing them to a fleet of devices. Some devices don’t support containerized Lambdas: be it Greengrass running inside Docker or on a constrained OS with no containerization.

From v1.7, Greengrass says “It’s OK, it will run as long as the group is opted out of containerization.” But devices don’t advertise their capabilities, nor do they have a guard to reject an unsupported group deployment.

The knowledge of which group is safe to run on which devices must reside outside the system and inside the designer’s head — admittedly not the most reliable store.

While enabling containerization for a Lambda is a seemingly legit change in a group definition, it can break the deployment or cause function failures. Not to mention that running code with and without containerization may differ in many ways — I did hit weird bugs that manifested only when containerized.

The AWS docs warn to use this option with caution, and describe use cases where running without containerization may be worth the tradeoff. If you want to adopt Docker with Greengrass, I’d recommend going all the way: drop containerization, run in Docker in both dev and production, and use the same containers.

5 Steps for Greengrass in Docker
If the limitations outlined above don’t deter you from this approach, here’s how to run Greengrass in docker in 5 simple steps:

1. Get the zipped repo with the Dockerfile and other artifacts from CloudFront. Why not GitHub? May I clone it to my GitHub, or is it against the license? Ask AWS. For now, CloudFront:

curl https://d1onfpft10uf5o.cloudfront.net/greengrass-core/downloads/1.7.1/aws-greengrass-docker-1.7.1.tar.gz | tar -x

2. Build the docker image with docker-compose

cd aws-greengrass-docker-1.7.1
PLATFORM=x86-64 GREENGRASS_VERSION=1.7.1 docker-compose build

3. Use greengo to create a group: it will place the certificates and configuration for you into /certs/ and /config/. Or go die-hard with the AWS console, and download and sort out the certificates and config yourself. Attention: the deployment must have all Lambdas opted out of containers: or pinned=True in the API, CLI, or greengo.yaml definition.

4. Run Docker with docker-compose:

PLATFORM=x86-64 GREENGRASS_VERSION=1.7.1 docker-compose up

5. Profit. For any complications, like the need for an ARM image or the misfortune of running on Windows, open README.md and enjoy the detailed instructions.

You might ask, “why not use Docker in the development setup instead of a heavy Vagrant VM?” You may: I do it myself sometimes, for example as a testbed for greengo development.

But for a production-ready IoT deployment I prefer an environment that is most representative of the target deployment — which is not always the case given the GG Docker limitations.

Tip # 3 — Basic troubleshooting hints

When something goes wrong — which it likely will — here are the things to check:

  1. Make sure greengrassd is started. Look for any errors in the output:

/greengrass/ggc/core/greengrassd start

If it doesn’t start, it is most likely missing some prerequisites, which you can quickly check with the greengrass-dependency-checker script. You can get the checker for the most recent version from the aws-greengrass-samples repo.

There is also AWS Device Tester, a heavier option typically used by device manufacturers to certify their devices as Greengrass-capable.

2. Check the system logs for any errors.
Look under /greengrass/ggc/var/log/system/, starting with runtime.log. There are a few other logs covering various aspects of Greengrass functionality, described in the AWS docs. I mostly look at runtime.log:

tail /greengrass/ggc/var/log/system/runtime.log

It helps to set the logs to DEBUG level (the default is INFO). This is configured at the Greengrass Group level via the AWS console, CLI, or API, or greengo will do it for you. A group must be successfully deployed to apply the new log configuration.
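If you drive this via the API, a minimal boto3 sketch looks roughly like this; the definition name and Space values are placeholders, and the new logger definition still has to be attached to a new group version and deployed before it takes effect:

import boto3

gg = boto3.client('greengrass')

# Create a logger definition that sets local file logs to DEBUG
# for both the Greengrass system and user Lambdas.
resp = gg.create_logger_definition(
    Name='debug-logging',  # placeholder name
    InitialVersion={'Loggers': [
        {'Id': 'system-logs', 'Component': 'GreengrassSystem',
         'Type': 'FileSystem', 'Level': 'DEBUG', 'Space': 1024},
        {'Id': 'lambda-logs', 'Component': 'Lambda',
         'Type': 'FileSystem', 'Level': 'DEBUG', 'Space': 1024},
    ]})

# Reference this ARN from a new group version (create_group_version)
# and redeploy the group for the setting to apply.
print(resp['LatestVersionArn'])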

But what if I am troubleshooting a failure in the initial deployment? Tough luck! There is a trick: fake a deployment locally with just a logging config in it, but it is quite cumbersome; it would be much better to set it up in the config.json file. Join my ask to AWS to do so:

To troubleshoot a failing AWS Greengrass group deployment, I need DEBUG log level. To set it to DEBUG, I need to set it in a group definition and deploy. Deployment is failing. To troubleshoot a failing deployment… CATCH 22! May I please set log level in config? #awswishlist

 — @dzimine

3. Check the deployment
Once you trigger a deployment and it reaches the “In Progress” state, a blast of messages pops up in runtime.log as Greengrass handshakes, downloads the artifacts, and restarts its subsystems. The deployment artifacts are stored locally at /greengrass/ggc/deployment:

  • group/group.json - the deployment definition. Explore it on your own.
  • /greengrass/ggc/deployment/lambdas - Lambda code is deployed here. The content is an extracted Lambda zip package, just as you had it on your dev machine before shipping it up to AWS.

Key hint: these artifacts are what the local greengrass takes as an input. Toying with them is a great way to creatively mess with Greengrass.

I strongly encourage you to write thorough logging in your Lambda code and to set log levels to DEBUG in both the system and user logging configurations.

Tip #4 — Checking connection

The Greengrass daemon starts, but nothing happens. The deployment is stuck forever with status “In Progress”. What is wrong? Most likely, it is a connection problem.

Greengrass relies on MQTT (port 8883) for messaging and HTTPS (port 443) for downloading deployment artifacts. Make sure that your network connection allows the traffic.

Checking HTTPS is obvious; here is how to test your MQTT connection (source):

openssl s_client -connect YOUR_ENDPOINT.iot.eu-west-1.amazonaws.com:8883 \
  -CAfile root.ca.pem \
  -cert CERTIFICATE.pem \
  -key CERTIFICATE.key

In a typical corporate network port 8883 might be blocked, or your device may be behind a proxy. Starting from version 1.7, Greengrass can deal with a proxy. The AWS docs also describe how to reconfigure Greengrass to pass MQTT traffic through port 443, but there is a confusing note that HTTPS traffic still goes over 8443.

Source: AWS IoT Greengrass docs

Doesn’t this defeat the purpose? Greengrass needs HTTPS to get the deployment artifacts. If there is no way to switch it off 8443, how is it supposed to work? I didn’t experiment with this; if you suffer from over-secured networks please try and report your findings.

Tip #5 — Troubleshooting Lambda functions

When the deployment succeeds but nothing seems to happen as expected, it’s time to look at the Lambdas.

Check if the lambda functions are running:

ps aux | grep lambda_runtime

Look at the Lambda logs: dive under /greengrass/ggc/var/log/user/... until you find them.

A function may crash before it creates a log file: for example, when the core fails to even start the Lambda for reasons like missing dependencies or a misconfigured runtime.

Try running the Lambda manually: now that you know where it is stored locally on the Greengrass device — yes, /greengrass/ggc/deployment/lambdas/... — go there, run it in place, and observe the results.
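One way to do that from a Python shell, assuming the handler lives in function.py as in Tip #7 below (note that a function which creates a greengrasssdk client at import time may refuse to run outside the Greengrass runtime):

# Run from inside the deployed Lambda's directory under
# /greengrass/ggc/deployment/lambdas/...; module and payload are placeholders.
import importlib

module = importlib.import_module('function')      # the deployed module name
print(module.handler({'test': 'payload'}, None))   # fake event and context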

Tip #6 — Getting inside Lambda container

Greengrass runs Lambdas in containers (unless you explicitly opted out). As such, it sometimes happens that a function that runs fine when you launch it on the device manually still fails when running under Greengrass.

Find out the PID of your lambda and get inside the container with nsenter:

sudo nsenter -t $PID -m /bin/bash

Now you are in the containerized environment of your Lambda function and can see and check mounted file systems, access to resources, dependencies, environment variables — everything.

This trick is easy for pinned (long-lived) functions, but common event-triggered Lambdas don’t live long enough for you to jump into their container: you might need to temporarily pin them to be able to get in.

If modifying the lambda definitions in a Greengrass group, redeploying the group, and remembering to switch it back later feels like too much trouble, hack it on the device: set pinned=True in the function section of group.json under /greengrass/ggc/deployment/group and restart the greengrassd. The next deployment will reset your local hacks.

Tip #7 — Name your Lambda for happy troubleshooting

What convention should you use for naming Lambda functions and handlers? I used to call all my Lambda entry points handler and place them in function.py files, like this:

# MyHelloWorldLambda/function.py
def handler(event, context):
    return {'green': 'grass'}

But a check for running Lambda functions produced output like this:

ps aux | grep lambda_runtime
20170 ? Ss 0:01 python2.7 /runtime/python2.7/lambda_runtime.py --handler=function.handler
20371 ? Ss 0:01 python2.7 /runtime/python2.7/lambda_runtime.py --handler=function.handler

Not too informative, eh? Instead, consider naming either the handler or the function entry-point file after the Lambda function name, for more insightful output and easier troubleshooting. Then finding a Lambda process PID for the tricks in Tips #5 and #6 will be a breeze.
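For example, a hypothetical layout that follows this convention could be:

# MyHelloWorldLambda/my_hello_world_lambda.py
def my_hello_world_lambda(event, context):
    return {'green': 'grass'}

Now ps shows --handler=my_hello_world_lambda.my_hello_world_lambda, and the process is immediately recognizable.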

Tip #8 — Hack a quick code change in Lambda on Greengrass

How does a typical development workflow for Greengrass go?

  1. Make a change to your Lambda function
  2. Test it locally
  3. Deploy the function to the AWS Lambda
  4. Update/increment the alias and make sure the greengrass lambda definition points to this very alias
  5. Deploy the group to your greengrass device

After these steps, the lambda is ready to be called. This is great when deploying final production code to a fleet of field devices.

But during development, especially when debugging, we often need to try a quick code change, as tiny as printing out a value. The full workflow feels way too long and cumbersome. While our AWS friends are hopefully working on integrating a local debugger (why not, if Azure IoT Edge does it?), here’s a quick hack I use:

  • I ssh to my Vagrant VM with greengrass core
  • Go to the deployment folder
  • Find the lambda in question and edit it in place
  • Restart greengrassd, and test the change

If I don’t like the update, the next deploy will revert the change. If I do like it, I copy the code back over the function code in my local development environment and deploy.

Here greengo.io comes in handy again, taking care of cumbersome routines like repackaging and uploading the Lambda, incrementing versions, updating aliases, and repointing the Greengrass Lambda definitions to the right alias, all behind a simple command: greengo update-lambda MY_FUNCTION

Update and deploy Lambda with greengo — quick and easy.

Tip #9 — Lambda on-demand? Lambda long-lived? Both!

AWS Greengrass supports two lifecycle models for Lambdas: “on-demand” and “long-lived”.

Lambda On-Demand
The default behavior is “on-demand”, which is the same as Lambda in the cloud — the function is invoked by an event.

To trigger a Greengrass “on-demand” Lambda on an MQTT event, you configure a subscription on a topic with the Lambda function as the target. Greengrass spawns the function execution on each topic message and passes the message as a parameter to the function handler, which then runs for no longer than a configured timeout and shuts down. Variables and execution context are not retained between invocations.

Lambda Long-Lived
A “long-lived” function starts when the Greengrass daemon starts, which is what it means to `pin` a function — making it long-running. This is a blessing for many use cases, like video stream analysis, or a protocol proxy where the function listens for signals from BTLE or ZigBee devices and transfers them over MQTT.

The best-kept secret? You can combine both. Yes, a long-lived function can be triggered on demand. I use this for good causes, like keeping non-critical state in memory between invocations.

For example: my function runs anomaly detection on a stream of device data with the PEWMA algorithm. It must keep several previous data points to compute the averages, and these data points must persist between the function executions triggered by data messages.

To achieve this, combine “long-lived” with “on-demand”:

  1. configure a subscription to fire the function on receiving the device data: Greengrass dutifully calls the handler with the data payload as messages arrive
  2. make the function long-lived: now I can just keep the state in memory between handler invocations (see the sketch below). If the Lambda restarts and loses the state, it’s OK: the algorithm will quickly recover, so true persistence isn’t worth the trouble.
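Here is a minimal sketch of that pattern; it is illustrative rather than the actual AnomalyCatcher code, the topic name and payload field are made up, and a plain moving average stands in for PEWMA:

import json
import greengrasssdk

client = greengrasssdk.client('iot-data')

# Module-level state survives between invocations because the function
# is configured as long-lived (pinned) in the Greengrass group.
recent = []

def handler(event, context):
    value = float(event.get('value', 0))    # hypothetical payload field
    recent.append(value)
    del recent[:-10]                         # keep only the last 10 points
    average = sum(recent) / len(recent)
    if abs(value - average) > 3:             # toy threshold, not PEWMA
        client.publish(topic='device/anomaly',  # hypothetical topic
                       payload=json.dumps({'value': value, 'avg': average}))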

For a complete example, take a look at the AnomalyCatcher function code in my Greengrass IIoT prototype.

Remember that once the Lambda is configured as “long-lived”, the function won’t run in parallel: invocations are queued and the handler is invoked one at a time. Check whether that is appropriate for your message volume, and keep your handler code free of anything that might block the queue.

Tip #10 — Use systemd to manage greengrass lifecycle

When your Greengrass device reboots, you would want Greengrass to start automatically, wouldn’t you? Well, it won’t until you make it so. Use systemd to manage the lifecycle of the Greengrass daemon process.

You’ll need to modify the useSystemd flag to true in config.json, and set up a systemd script, like the one in this simple example:

[Unit]
Description=Greengrass Daemon
[Service]
Type=forking
PIDFile=/var/run/greengrassd.pid
Restart=on-failure
ExecStart=/greengrass/ggc/core/greengrassd start
ExecReload=/greengrass/ggc/core/greengrassd restart
ExecStop=/greengrass/ggc/core/greengrassd stop
[Install]
WantedBy=multi-user.target

Here are detailed step-by-step instructions for the Raspberry Pi. If your Greengrass device uses another init system — like upstart, SysV init, or runit — check its manual and configure greengrassd as a daemon accordingly.

I hope you enjoyed my 10 Greengrass tips and put them to good use in your IoT projects. It took some pain to figure them out: your claps 👏 would be the best reward. Be sure to visit the AWS Greengrass Troubleshooting page, and check the AWS IoT Greengrass Forum for more.

Please add your own tips in the comments here for the rest of us. For more stories and discussions on IoT, DevOps, and Serverless, follow me on Twitter @dzimine.


AWS Greengrass Pro Tips was originally published in A Cloud Guru on Medium.

  • 6 Mar 2019
  • By editor
  • Categories: AI, AIY, Artificial Intelligence, Coral, Edge TPU, Edge TPU Dev Board, GCP, Google, Google Coral, IoT, machine intelligence, machine learning, machine learning accelerator, maker, prototype, TensorFlow, TensorFlow Lite
Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google’s been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we’re expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train, and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you’re ready to scale to production we offer the SOM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we’ll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we’re sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

