Running Cognitive Services on Azure IoT Edge

This blog post is co-authored by Emmanuel Bertrand, Senior Program Manager, Azure IoT.

We recently announced Azure Cognitive Services in containers for Computer Vision, Face, Text Analytics, and Language Understanding. You can read more about Azure Cognitive Services containers in this blog, “Bringing AI to the edge.”

Today, we are happy to announce support for running the Azure Cognitive Services Text Analytics and Language Understanding containers on edge devices with Azure IoT Edge. This means that all your workloads can run locally, where your data is being generated, while keeping the simplicity of the cloud to manage them remotely, securely, and at scale.

Image displaying containers that align with Vision and Language

Whether you lack a reliable internet connection, want to save on bandwidth costs, have very low latency requirements, or are dealing with sensitive data that needs to be analyzed on-site, Azure IoT Edge with the Cognitive Services containers gives you consistency with the cloud: you can run your analysis on-site while using a single pane of glass to operate all your sites.

These container images are directly available to try as IoT Edge modules on the Azure Marketplace:

  • Key Phrase Extraction extracts key talking points and highlights in text either from English, German, Spanish, or Japanese.
  • Language Detection detects the natural language of text with a total of 120 languages supported.
  • Sentiment Analysis detects the level of positive or negative sentiment for input text using a confidence score across a variety of languages.
  • Language Understanding applies custom machine learning intelligence to a user’s conversational and natural language text to predict overall meaning and pull out relevant and detailed information.

Please note, the Face and Recognize Text containers are still gated behind a preview and thus are not yet available via the marketplace. However, you can deploy them manually by first signing up for the preview to get access.

In this blog, we describe how to provision the Language Detection container locally on your edge device and how to manage it through Azure IoT Hub.

Set up an IoT Edge device and its IoT Hub

Follow the first steps in this quick-start for setting up your IoT Edge device and your IoT Hub.

It first walks you through creating an IoT Hub and then registering an IoT Edge device to your IoT Hub. Here is a screenshot of a newly created edge device called “LanguageDetection” under the IoT Hub called “CSContainers”. Select the device, copy its primary connection string, and save it for later.

Screenshot of a newly created Edge device

Next, it guides you through setting up the IoT Edge device. If you don’t have a physical edge device, it is recommended to deploy the Ubuntu Server 16.04 LTS and Azure IoT Edge runtime virtual machine (VM) which is available on the Azure Marketplace. It is an Azure Virtual Machine that comes with IoT Edge pre-installed.

The last step is to connect your IoT Edge device to your IoT Hub using the connection string you saved earlier. To do that, edit the device configuration file at /etc/iotedge/config.yaml and update the connection string. After the connection string is updated, restart the edge device with sudo systemctl restart iotedge.
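For reference, the relevant section of /etc/iotedge/config.yaml looks roughly like this, assuming the hub and device names from the screenshots above (the shared access key is a placeholder; paste your full primary connection string):

```yaml
# Manual provisioning with a device connection string
provisioning:
  source: "manual"
  device_connection_string: "HostName=CSContainers.azure-devices.net;DeviceId=LanguageDetection;SharedAccessKey=<device-primary-key>"
```

After saving the file, run sudo systemctl restart iotedge so the daemon picks up the new connection string.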

Provisioning a Cognitive Service (Language Detection IoT Edge module)

The images are directly available as IoT Edge modules in the Azure Marketplace.

Screenshot of Language Detection Container offering in the Azure Marketplace

Here we’re using the Language Detection image as an example, but the other images work the same way. To download the image, search for it and select Get it now; this will take you to the Azure portal “Target Devices for IoT Edge Module” page. Select the subscription that contains your IoT Hub, select Find Device and choose your IoT Edge device, then click the Select and Create buttons.

Screenshot of target devices for IoT Edge modules

Screenshot for selecting device to deploy on and creating

Configuring your Cognitive Service

Now you’re almost ready to deploy the Cognitive Service to your IoT Edge device. But in order to run the container, you need a valid API key and billing endpoint, which you pass as environment variables in the module details.

Screenshot of setting deployment modules

Go to the Azure portal and open the Cognitive Services blade. If you don’t have a Cognitive Service that matches the container, in this case a Text Analytics service, select Add and create one. Once you have a Cognitive Service, get its endpoint and API key; you’ll need these to fire up the container:

Screenshot showing the retrieval of the endpoint and API key

The endpoint is used strictly for billing; no customer data ever flows that way. Copy your billing endpoint value to the “billing” environment variable and your API key value to the “apikey” environment variable.
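In the IoT Edge deployment manifest, these settings appear as module environment variables. A rough sketch of the relevant fragment is below; the “billing” and “apikey” names come from the text above, while the “Eula” variable and the exact endpoint path are assumptions based on the Cognitive Services container documentation and may differ for your region and service:

```json
"env": {
  "Eula": { "value": "accept" },
  "billing": { "value": "https://<your-region>.api.cognitive.microsoft.com/text/analytics/v2.0" },
  "apikey": { "value": "<your-text-analytics-api-key>" }
}
```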

Deploy the container

All the required info is now filled in, and you only need to complete the IoT Edge deployment. Select Next and then Submit. Verify that the deployment is happening properly by refreshing the IoT Edge device details section.

Screenshot of IoT Edge device details section

Trying it out

To try things out, we’ll make an HTTP call to the IoT Edge device that has the Cognitive Service container running.

For that, we’ll first need to make sure that port 5000 of the edge device is open. If you’re using the pre-built Ubuntu with IoT Edge Azure VM as your edge device, go to the VM details, then Settings, Networking, and Inbound port rules to add an inbound security rule that opens port 5000. Also copy the Public IP address of your device.

Now you should be able to query the Cognitive Service running on your IoT Edge device from any machine with a browser. Open your favorite browser and go to http://your-iot-edge-device-ip-address:5000.

Now, select Service API Description or jump directly to http://your-iot-edge-device-ip-address:5000/swagger. This will give you a detailed description of the API.
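Equivalently, you can call the container’s REST endpoint directly from code. The TypeScript sketch below shows the request and response shapes; the /text/analytics/v2.0/languages path and the field names are assumptions based on the cloud Text Analytics API (confirm them against your container’s Swagger page), and the device IP is a placeholder:

```typescript
// Shapes assumed from the Text Analytics v2 API; verify against your container's /swagger page.
interface LanguageInput { id: string; text: string; }
interface DetectedLanguage { name: string; iso6391Name: string; score: number; }
interface LanguageResult { id: string; detectedLanguages: DetectedLanguage[]; }

// Build the JSON body for POST http://<your-edge-device-ip>:5000/text/analytics/v2.0/languages
function buildRequest(texts: string[]): { documents: LanguageInput[] } {
  return { documents: texts.map((text, i) => ({ id: String(i + 1), text })) };
}

// Pick the highest-scoring detected language for one document.
function topLanguage(result: LanguageResult): string {
  return result.detectedLanguages.reduce((a, b) => (a.score >= b.score ? a : b)).name;
}

const body = buildRequest(["Hello world", "Bonjour tout le monde"]);
console.log(JSON.stringify(body));
```

Posting that body (with Content-Type: application/json) to the endpoint above returns a documents array whose entries match the LanguageResult shape.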

Screenshot showing a detailed description of the Language Detection Cognitive Service API

Select Try it out and then Execute; you can change the input value as you like.

Screenshot of executing the API

The result will show up further down on the page and should look something like the following image:

Screenshot of results after executing the API

Next steps

You are now up and running! You are running the Cognitive Services on your own IoT Edge device, remotely managed via your central IoT Hub. You can use this setup to manage millions of devices in a secure way.

You can play around with the various Cognitive Services already available in the Azure Marketplace and try out various scenarios. Have fun!

Announcing Azure Integration Service Environment for Logic Apps

A new way to integrate with resources in your virtual network

With every service, we strive to significantly improve the development experience. We’re always looking for common pain points that everybody building software in the cloud deals with, and once we find them, we build best-in-class software to address the need.

In critical business scenarios, you need confidence that your data is flowing between all the moving parts. The core Logic Apps offering is a great, multi-faceted service for integrating data sources and services, but sometimes a dedicated service is necessary to ensure that your integration processes are as performant as possible. That’s why we developed the Integration Service Environment (ISE), a fully isolated integration environment.

What is an Integration Service Environment?

An Integration Service Environment is a fully isolated and dedicated environment for all enterprise-scale integration needs. When you create a new Integration Service Environment, it is injected into your Azure virtual network, which allows you to deploy Logic Apps as a service in your VNET. An Integration Service Environment provides:

  • Direct, secure access to your virtual network resources. Enables Logic Apps to have secure, direct access to private resources, such as virtual machines, servers, and other services in your virtual network, including Azure services with service endpoints and on-premises resources via ExpressRoute or a site-to-site VPN.
  • Consistent, highly reliable performance. A dedicated runtime in which only your Logic Apps execute eliminates the noisy-neighbor issue, removing the fear of intermittent slowdowns that can impact business-critical processes.
  • Isolated, private storage. Sensitive data subject to regulation is kept private and secure, opening new integration opportunities.
  • Predictable pricing. Provides a fixed monthly cost for Logic Apps. Each Integration Service Environment includes the free usage of one Standard Integration Account and one Enterprise connector. If your Logic Apps execute more than 50 million actions per month, the Integration Service Environment could provide better value.

Integration Service Environments are available in every region that Logic Apps is currently available in, with the exception of the following locations:

  • West Central US
  • Brazil South
  • Canada East

Logic Apps is great for customers who require a highly reliable, private integration service for all their data and services. You can try the public preview by signing up for an Azure account. If you’re an existing customer, you can find out how to get started by visiting our documentation, “Connect to Azure virtual networks from Azure Logic Apps by using an integration service environment.”

Announcing Azure Monitor AIOps Alerts with Dynamic Thresholds

We are happy to announce that Metric Alerts with Dynamic Thresholds is now available in public preview. Dynamic Thresholds are a significant enhancement to Azure Monitor Metric Alerts. With Dynamic Thresholds you no longer need to manually identify and set thresholds for alerts. The alert rule leverages advanced machine learning (ML) capabilities to learn metrics’ historical behavior, while identifying patterns and anomalies that indicate possible service issues.

Metric Alerts with Dynamic Thresholds are supported through a simple Azure portal experience, and they also support Azure workload operations at scale by allowing users to configure alert rules through an Azure Resource Manager (ARM) API in a fully automated manner.

Why and when should I apply Dynamic Thresholds to my metrics alerts?

Smart metric pattern recognition – A big pain point with setting static thresholds is that you need to identify patterns on your own and create an alert rule for each pattern. With Dynamic Thresholds, we use a unique ML technology to identify the patterns and come up with a single alert rule that has the right thresholds and accounts for seasonality patterns such as hourly, daily, or weekly. Take the example of HTTP request rate. As you can see below, there is definite seasonality. Instead of setting two or more alert rules for weekdays and weekends, you can now have Azure Monitor analyze your data and come up with a single Dynamic Thresholds alert rule whose thresholds change between weekdays and weekends.

Server request (Platform) graph

Scalable alerting – Wouldn’t it be great if you could automatically apply an alert rule on CPU usage to any virtual machine (VM) or application that you create? With Dynamic Thresholds, you can create a single alert rule that can then be applicable automatically to any resource that you create. You don’t need to provide thresholds. The alert rule will identify the baseline for the resource and define the thresholds automatically for you. With Dynamic Thresholds, you now have a scalable approach that will save a significant amount of time on management and creation of alerts rules.

Domain knowledge – Setting a threshold often requires a lot of domain knowledge. Dynamic Thresholds eliminates that need with its ML algorithms. Further, we have optimized the algorithms for common use cases, such as CPU usage for a VM or request duration for an application, so you can have full confidence that the alert will capture any anomalies while still reducing noise.

Intuitive configuration – Dynamic Thresholds allow setting up metric alerts rules using high-level concepts, alleviating the need to have extensive domain knowledge about the metric. This is expressed by only requiring users to select the sensitivity for deviations (low, medium, high) and boundaries (lower, higher, or both thresholds) based on the business impact of the alert in the UI or ARM API.

Screenshot of intuitive configuration with Dynamic Thresholds

Dynamic Thresholds also allow you to configure the minimum number of deviations required within a certain time window for the system to raise an alert; the default is four deviations in 20 minutes. You can configure this and choose what you would like to be alerted on by changing the failing periods and time window.
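In the ARM API, these settings map to a dynamic threshold criterion on the metric alert rule. The fragment below is an illustrative sketch following the Microsoft.Insights metric alert schema; the metric name is a placeholder, and with a 5-minute evaluation frequency, four evaluation periods correspond to the 20-minute default window mentioned above:

```json
{
  "criterionType": "DynamicThresholdCriterion",
  "metricName": "<your-metric-name>",
  "operator": "GreaterOrLessThan",
  "alertSensitivity": "Medium",
  "failingPeriods": {
    "numberOfEvaluationPeriods": 4,
    "minFailingPeriodsToAlert": 4
  }
}
```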

Setting number of violations to trigger the alert screenshot

Metric Alerts with Dynamic Threshold is currently available for free during the public preview. To see the pricing that will be effective at general availability, visit our pricing page. To get started, please refer to the documentation, “Metric Alerts with Dynamic Thresholds in Azure Monitor (Public Preview).” We would love to hear your feedback! If you have any questions or suggestions, please reach out to us at [email protected].

Please note, Dynamic Threshold based alerts are available for all Azure Monitor based metric sources listed in the documentation, “Supported resources for metric alerts in Azure Monitor.”

Improving the TypeScript support in Azure Functions

TypeScript is becoming increasingly popular in the JavaScript community. Since Azure Functions runs Node.js, and TypeScript compiles to JavaScript, motivated users could already get TypeScript code up and running in Azure Functions. However, the experience wasn’t seamless, and things like our default folder structure made getting started a bit tricky. Today we’re pleased to announce a set of tooling improvements that address this. Azure Functions users can now easily develop with TypeScript when building their event-driven applications!

For those unfamiliar, TypeScript is a superset of JavaScript which provides optional static typing, classes, and interfaces. These features allow you to catch bugs earlier in the development process, leading to more robust software engineering. TypeScript also indirectly enables you to leverage modern JavaScript syntax, since TypeScript is compatible with ECMAScript 2015.

With this set of changes to the Azure Functions Core Tools and the Azure Functions Extension for Visual Studio Code, Azure Functions now supports TypeScript out of the box! Included with these changes are a set of templates for TypeScript, type definitions, and npm scripts. Read on to learn more details about the new experience.

Templates for TypeScript

In the latest version of the Azure Functions Core Tools and the Azure Functions Extension for VS Code, you’re given the option to use TypeScript when creating functions. To be more precise, when creating a new function app, you will now see the option to specify TypeScript during language stack selection. This opts you into default package.json and tsconfig.json files, setting up your app to be TypeScript compatible. After this, when creating a function, you will be able to select from a number of TypeScript-specific function templates. Each template represents one possible trigger, and there is a TypeScript equivalent for each template supported in JavaScript.
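As an illustration, the HTTP trigger template produces a function along these lines. This is a simplified sketch: the Context and HttpRequest interfaces below are minimal local stand-ins for the real types from @azure/functions, so the example is self-contained:

```typescript
// Minimal stand-ins for the @azure/functions Context and HttpRequest types,
// simplified here so this sketch compiles on its own.
interface HttpRequest { query: { [key: string]: string }; body?: { name?: string }; }
interface Context { res?: { status?: number; body: string }; }

const httpTrigger = async function (context: Context, req: HttpRequest): Promise<void> {
  // Read "name" from the query string or the request body.
  const name = req.query["name"] || (req.body && req.body.name);
  context.res = name
    ? { body: `Hello, ${name}!` }
    : { status: 400, body: "Please pass a name on the query string or in the request body" };
};

export default httpTrigger;
```

In a generated app, the two interfaces would instead be imported from the @azure/functions package described below.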

Select from a number of TypeScript specific function templates for different triggers

The best part of this new flow is that you don’t have to take any Functions-specific actions to transpile and run TypeScript functions. For example, when you hit F5 to start debugging in Visual Studio Code, Code will automatically run the required installation tasks, transpile the TypeScript code, and start the Azure Functions host. This local development experience is best in class and is exactly how you would start debugging any other app in VS Code.

Learn more about how to get your TypeScript functions up and running in our documentation.

Type definitions for Azure Functions

The @azure/functions package on npm contains type definitions for Azure Functions. Have you ever wondered how an Azure Function object is shaped? Or the context object that is passed into every JavaScript function? This package helps! To get the most out of TypeScript, it should be imported in every .ts function. JavaScript purists can benefit too: including this package in your code gives you a richer IntelliSense experience. Check out the @azure/functions package on npm to learn more!

Npm scripts

Included by default in TypeScript function apps is a package.json file with a few simple npm scripts. These scripts allow Azure Functions to fit directly into your typical development flow by calling specific Azure Functions Core Tools commands. For instance, npm start automatically runs func start, meaning that after creating a function app you don’t have to treat it differently than any other Node.js project.
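For example, the generated package.json includes scripts along these lines (an illustrative sketch; the exact contents may vary with your Core Tools version):

```json
{
  "scripts": {
    "build": "tsc",
    "watch": "tsc -w",
    "prestart": "npm run build && func extensions install",
    "start": "func start"
  }
}
```

Because prestart runs automatically before start, npm start builds the TypeScript code and then launches the local Functions host in one step.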

To see these in action, check out our example repo!

Try it yourself!

With either the Azure Functions Core Tools or the Azure Functions Extension for VS Code, you can try out the improved experience for TypeScript in Azure Functions on your local machine, even if you don’t have an Azure account.

Next steps

As always, feel free to reach out to the team with any feedback on our GitHub or Twitter. Happy coding!