How to be more productive as a developer: 5 app integrations for Google Chat that can help

Posted by Mario Tapia, Product Marketing Manager, Google Workspace

In today’s fast-paced and ever-changing world, it is more important than ever for developers to be able to work quickly and efficiently. With so many different tools and applications available, it can be difficult to know which ones will help you be the most productive. In this blog post, we will discuss five different DevOps application integrations for Google Chat that can help you improve your workflows and be more productive as a developer.

PagerDuty for Google Chat

PagerDuty helps automate, orchestrate, and accelerate responses to unplanned work across an organization. PagerDuty for Google Chat empowers developers, DevOps, IT operations, and business leaders to prevent and resolve business-impacting incidents for an exceptional customer experience—all from Google Chat. With PagerDuty for Google Chat, get notifications, see and share details with link previews, and act by creating or updating incidents.

How to: Use PagerDuty for Google Chat

Asana for Google Chat

Asana helps you manage projects, focus on what’s important, and organize work in one place for seamless collaboration. With Asana for Google Chat, you can easily create tasks, get notifications, update tasks, assign them to the right people, and track your progress.

How to: Use Asana for Google Chat

Jira

Jira makes it easy to manage your issues and bugs. With Jira for Google Chat, you can receive notifications, easily create issues, assign them to the right people, and track your progress while keeping everyone in the loop.

How to: Use Jira for Google Chat

Jenkins

Jenkins allows you to automate your builds and deployments. With Jenkins for Google Chat, development and operations teams can connect to their Jenkins pipeline and stay up to date by receiving software build notifications or triggering a build directly in Google Chat.

How to: Use Jenkins for Google Chat

GitHub

GitHub lets you manage your code and collaborate with your team. Integrations like GitHub for Google Chat make the entire development process fit easily into a developer’s workflow. With GitHub, teams can quickly push new commits, make pull requests, do code reviews, and provide real-time feedback that improves the quality of their code—all from Google Chat.

How to: Use GitHub for Google Chat

Next steps

These are just a few of the many application integrations that can help you be more productive as a developer. Check out the Google Workspace Marketplace for more integrations you or your team might already be using. By using the right tools and applications, you can easily stay connected with your team, manage your tasks and projects, and automate your builds and deployments.

To keep track of all the latest announcements and developer updates for Google Workspace, please subscribe to our monthly newsletter or follow us @workspacedevs.

Burst 4K encoding on Azure Kubernetes Service

Burst encoding in the cloud with Azure and the Media Excel HERO platform.

Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past few years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K formats to the mix and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack supports hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.

Finding a solution

Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises and needing to push 4K into an Azure datacenter, the immediate problem is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. It is also a best practice to ingest into multiple regions, which further increases the load on the network connection between the encoder and the Azure datacenter.

How do we ingest 4K content reliably into the public cloud?

Alternatively, we can encode the content in the cloud. If we can run 4K/UHD live encoding in Azure, its output can be ingested into Azure Media Services over the intra-Azure network backbone which provides sufficient bandwidth and reliability.

How can we reliably run and scale 4K/UHD live encoding on the Azure cloud as a containerized solution? Let’s explore below. 

Azure Kubernetes Service

With Azure Kubernetes Service (AKS), Microsoft offers customers a managed Kubernetes platform: a hosted Kubernetes cluster without the configuration burden of networking, cluster masters, and OS patching of the cluster nodes. It also comes with pre-configured monitoring that integrates seamlessly with Azure Monitor and Log Analytics, while still offering the flexibility to integrate your own tools. Furthermore, it is plain vanilla Kubernetes and as such is fully compatible with any existing tooling you might have running on any other standard Kubernetes platform.

Media Excel encoding

Media Excel is an encoding and transcoding vendor offering physical appliance and software-based encoding solutions. Media Excel has been partnering with Microsoft for many years and engaging in Azure media customer projects. It is also listed as a recommended and tested contribution encoder for Azure Media Services for fMP4. Media Excel and Microsoft have also worked together to integrate SCTE-35 timed metadata from the Media Excel encoder into an Azure Media Services origin, supporting server-side ad insertion (SSAI) workflows.

Networking challenge

With increasing picture quality like 4K and 8K, the burden on both compute and networking becomes a significant architecting challenge. In a recent engagement, we needed to architect a 4K live streaming platform for a customer with limited bandwidth capacity from the customer premises to one of our Azure datacenters. We worked with Media Excel to build a scalable, containerized encoding platform on AKS, utilizing cloud compute and minimizing network latency between the encoder and the Azure Media Services packager. Multiple bitrates of the same source, up to a top 4K rendition, are generated in the cloud and ingested into the Azure Media Services platform for further processing, including dynamic encryption and packaging. This setup enables the following benefits:

  • Instant scale to multiple AKS nodes
  • Elimination of network constraints between the customer and the Azure datacenter
  • Automated workflows for containers and easy separation of concerns with container technology
  • An increased level of security for high-quality generated content through to distribution
  • Highly redundant capability
  • Flexibility to provide various types of node pools for optimized media workloads

In this particular test, we proved that the intra-Azure network is extremely capable of shipping high-bandwidth, latency-sensitive 4K packets from a containerized encoder instance running in West Europe to both the East US and Hong Kong datacenter regions. This allows the customer to place the origin closer to them for further content conditioning.

High-level architecture of the Azure components used for 4K encoding in the Azure cloud.

Workflow:

  1. An Azure Pipelines run is triggered to deploy onto the AKS cluster. The YAML file (which you can find on GitHub) references the Media Excel container in Azure Container Registry.
  2. AKS starts the deployment and pulls the container image from Azure Container Registry.
  3. During container start, a custom PHP script is loaded and the container is added to the HMS (Hero Management Service) and placed into the correct device pool and job.
  4. The encoder loads the source and (in this case) pushes a 4K livestream into Azure Media Services.
  5. Media Services packages the livestream into multiple formats and applies DRM (digital rights management).
  6. Azure Content Delivery Network scales the livestream.

Scale through Azure Container Instances

With Azure Kubernetes Service you get the power of Azure Container Instances out of the box. Azure Container Instances are a way to instantly scale to pre-provisioned compute power at your disposal. When deploying Media Excel encoding instances to AKS you can specify where these instances will be created. This offers the flexibility to work with variables like increased density on cheaper nodes for low-cost, low-priority encoding jobs or more expensive nodes for high-throughput, high-priority jobs. With Azure Container Instances you can instantly move workloads to standby compute power without provisioning time. You only pay for the compute time, which offers full flexibility for customer demand and future changes in platform needs. With Media Excel’s flexible live/file-based encoding roles you can easily move workloads across the different compute power offered by AKS and Azure Container Instances.

Container creation on Azure Kubernetes Service (AKS).

Media Excel Hero Management System showing all Container Instances.

Azure DevOps pipeline to bring it all together

All the general benefits that come with containerized workloads apply in the following case. For this particular proof of concept, we created an automated deployment pipeline in Azure DevOps for easy testing and deployment. With a deployment YAML and a pipeline YAML we can easily automate deployment, provisioning, and scaling of a Media Excel encoding container. Once Azure DevOps pushes the deployment job onto AKS, a container image is pulled from Azure Container Registry. Although container images can be bulky, node-side caching of layers brings any additional container pull down to seconds. With the help of Media Excel, we created a YAML file containing pre- and post-container lifecycle logic that adds and removes a container from Media Excel’s management portal. This offers easy single-pane-of-glass management of multiple instances across multiple node types, clusters, and regions.

This deployment pipeline offers full flexibility to provision specific node types for certain multi-tenant customers or job priorities. This unlocks the possibility of provisioning encoding jobs on GPU-enabled nodes for maximum throughput or using cheaper generic nodes for low-priority jobs.

Deployment Release Pipeline in Azure DevOps.

Azure Media Services and Azure Content Delivery Network

Finally, we push the 4K stream into Azure Media Services. Azure Media Services is a cloud-based platform that enables you to build solutions that achieve broadcast-quality video streaming, enhance accessibility and distribution, analyze content, and much more. Whether you’re an app developer, a call center, a government agency, or an entertainment company, Media Services helps you create apps that deliver media experiences of outstanding quality to large audiences on today’s most popular mobile devices and browsers.

Azure Media Services is seamlessly integrated with Azure Content Delivery Network. With Azure Content Delivery Network we offer a true multi-CDN with a choice of Azure Content Delivery Network from Microsoft, Azure Content Delivery Network from Verizon, and Azure Content Delivery Network from Akamai, all through a single Azure Content Delivery Network API for easy provisioning and management. As an added benefit, all CDN traffic between the Azure Media Services origin and the CDN edge is free of charge.

With this setup, we’ve demonstrated that cloud encoding is ready to handle real-time 4K encoding across multiple clusters. Thanks to Azure services like AKS, Container Registry, Azure DevOps, Media Services, and Azure Content Delivery Network, we demonstrated how easy it is to create an architecture capable of meeting high-throughput, time-sensitive constraints.

Building Xbox game streaming with Site Reliability best practices

Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams at Microsoft and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.

This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (project xCloud). In order to deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked—improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we’ll review some of the key learnings the team collected along the way. To explore the full story of the team, see the journey of the xREO team.

Consistent gameplay requirements and the need to collaborate

A consistent experience is crucial to a successful game streaming session. A game streamed from the cloud has to feel like it is running on a nearby console. This means creating a globally distributed cloud solution that runs in many datacenters, close to end users. Azure’s global infrastructure makes this possible, but operating a system running on top of so many Azure regions is a serious challenge.

The Xbox developers who started architecting and building this technology understood that they could not just build this system and “throw it over the wall” to operations. Both teams had to come together and collaborate through the entire application lifecycle so that the system could be designed from the start with consideration for how it would be operated in a production environment.

Mobile device showing a racing game streamed from the cloud

Architecting a cloud solution with operations in mind

In many large organizations, it is common to see development and operations teams working in silos. Developers don’t always consider operations when planning and building a system, while operations teams are not empowered to touch code even though they deploy and operate it in production. With an SRE approach, system reliability is baked into the entire application lifecycle, and the team that operates the system in production is a valued contributor in the planning phase. Involving the xREO team in the design phase created a collaborative environment in which the teams made joint technology choices and architected a system that could meet the requirements needed to scale.

Leveraging containers to clearly define ownership

One of the first technological decisions the development and xREO teams made together was to implement a microservices architecture using container technologies. This allowed the development teams to containerize the .NET Core microservices they would own and decouple them from the cloud infrastructure running the containers, which would be owned by the xREO team.

Another technological decision both teams made early on was to use Kubernetes as the underlying container orchestration platform. This allowed the xREO team to leverage Azure Kubernetes Service (AKS), a managed Kubernetes cloud platform that simplifies the deployment of Kubernetes clusters, removing much of the operational complexity the team would otherwise face running multiple clusters across several Azure regions. These joint choices made ownership clear—the developers are responsible for everything inside the containers, and the xREO team is responsible for the AKS clusters and the other Azure services that make up the cloud infrastructure hosting these containers. Each team owns the deployment, monitoring, and operation of its respective piece in production.

This kind of approach creates clear accountability and allows for easier incident management in production, something that can be very challenging in a monolithic architecture where infrastructure and application logic have code dependencies and are hard to untangle when things go sideways.

Two members of the xREO team, seated in a conference room in front of a laptop.

Scaling through infrastructure automation

Another best practice the xREO team invested in was infrastructure automation. Deploying multiple cloud services manually in each Azure region was not scalable and would take too much time. Using a practice known as “infrastructure as code” (IaC), the team used Azure Resource Manager templates to create declarative definitions of cloud environments that allow deployment to multiple Azure regions with minimal effort.

With infrastructure managed as code, it can also be deployed using continuous integration and continuous delivery (CI/CD) to bring further automation to the process of deploying new Azure resources to existing datacenters, updating infrastructure definitions, or bringing new Azure regions online when needed. Together, IaC and CI/CD allowed the team to remain lean, avoid repetitive mundane work, and remove most of the risk of human error that comes with manual steps. Instead of spending time on manual work and checklists, the team can focus on further improving the platform and its resilience.

Site Reliability Engineering in action 

The journey of the xREO team started with a need to bring the best customer experience to gamers. This is a great example that shows how teams who want to delight customers with new experiences through cutting edge innovation must evolve the way they design, build, and operate software. Shifting their approach to operations and collaborating more closely with the development teams was the true transformation the xREO team has undergone.

With this new mindset in place, the team is now well positioned to continue building more resilience, further scale the system, and, in doing so, deliver the promise of cloud game streaming to every gamer.


Unlocking the promise of IoT: A Q&A with Vernon Turner

Vernon Turner is the Founder and Chief Strategist at Causeway Connections, an information and communications technology research firm. For nearly a decade, he’s been serving on global, national, and state steering committees, advising governments, businesses, and communities on IoT-based solution implementation. He recently talked with us about the importance of distinguishing between IoT hype and reality, and identified three steps businesses need to take to make a successful digital transformation.

What is the promise of IoT?

The promise of more and more data from more and more connected sensors boils down to unprecedented insights and efficiencies. Businesses get more visibility into their operations, a better understanding of their customers, and the ability to personalize offerings and experiences like never before, as well as the ability to cut operational costs via automation and business-process efficiencies.

But just dabbling with IoT won’t unlock real business value. To do that, companies need to change everything: how they make products, how they go to market, their strategy, and their organizational structure. They need to really transform. And to do that, they need to do three things: lead with the customer experience, migrate to offering subscription-based IoT-enabled services, and have a voice in an emergent ecosystem of partners related to their business.

Why is the customer experience so important to fulfilling the promise of IoT?

There can be a lot of hype around IoT-enabled offerings. 

I recently toured several so-called smart buildings with a friend in the construction industry. He showed me that just filling a building with IoT-enabled gadgets doesn’t make it smart. A truly smart building goes beyond connected features and addresses the specific, real-world needs of tenants, leaseholders, and building managers.

If it doesn’t radically change the customer experience, it doesn’t fulfill the promise of IoT.

What’s the disconnect? Why aren’t “smart” solution vendors delivering what customers want?

Frankly, it’s easier to sell a product than an experience.

Customer experience should be at the center of the pitch for IoT, because IoT enables customers to have much more information about the product, in real time, across the product lifecycle. But putting customer experience first requires making hard changes. It means adopting new strategies, business models, and organization charts, as well as new approaches to product development, sales and marketing, and talent management. And it means asking suppliers to create new business models to support sharing data across the product lifecycle.

Why is the second step to digital transformation, migrating to offering subscription-based, IoT-enabled services, so important?

To survive in our digitally transforming economy, it’s essential for businesses and their suppliers to move from selling static products to a subscription-based services business model.

As sensors and other connected devices become increasingly omnipresent, customers see more real-time data showing them exactly what they’re consuming, and how the providers of the services they’re consuming are performing. By moving to a subscription (or “X as a service”) model, businesses can provide more tailored offerings, grow their customer base, and position themselves for success in the digital age.

When companies embrace transformation, it can have a ripple effect across their operations. Business units can respond to market needs and create a new service by combining microservices using the rapid software development techniques of DevOps. These services drive a shift from infrequent, low-business-value interactions with customers to continuous engagement between customers and companies’ sales and business units. This improves customer relationships, staves off competition, and introduces new sales opportunities.

What challenges should companies be prepared for as they migrate to offering subscription services?

For a subscription-based services model to work, most companies need to make significant changes to their culture and organizational structure.

Financial planning needs to stop reviewing past financial statements and start focusing on future recurring revenue. Instead of concentrating on margin-based products, sales should start selling outcomes that add value for customers. Marketing must be driven by data about the customer experience and what the customer needs, rather than by what serves the branding campaign.

From now on, rapid change, responsiveness to the customer, and the ability to customize and scale services are going to be the norm in business.

You mentioned the importance of participating in an emergent ecosystem of partners. What does that mean? Why does it matter?

As digital business processes mature and subscription models become the standard, customers will demand ways to integrate their relationships with IT and business vendors in an ecosystem connected by a single platform.

Early results show that vendors who actively participate in their solution platform’s ecosystem enjoy a higher net promoter score (NPS). In the short term, they gain stickiness with customers. And in the long run, they become more relevant across their ecosystem, gain a competitive advantage over peers inside and outside their ecosystem, and deliver more value to customers.

How does ecosystem participation increase the value delivered to customers?

Because everyone’s using the same platform, customers get transparency into the performance of suppliers. Service-level management becomes the first point of contact between businesses and suppliers. Key performance indicators trigger automatic responses to customer experiences. Response times to resolve issues are mediated by the platform.

These tasks and functions are carried out within the ecosystem and orchestrated by third-party service management companies. But that’s not to say businesses in the ecosystem don’t still have an individual, separate relationship with their customers. Rather, the ecosystem acts as a gateway for IT and business suppliers to integrate their offerings into customer services. Business and product outcomes from the ecosystem feed research and development, product design, and manufacturing, leading to continual improvement in services delivery and customer experience.

To conclude, let’s go back to something we talked about earlier. For builders, a truly smart building is one that does more than just keep the right temperature. It also monitors and secures wireless networks, optimizes lighting based on tenants’ specific needs, manages energy use, and so on to deliver comfortable, customized work, living, or shopping environments. To deliver that kind of customer experience takes an ecosystem of partners, all working in concert. For companies to unlock the value of IoT, they need to participate actively in that ecosystem.

Learn how Azure helps businesses unlock the value of IoT.

Sharing the DevOps journey at Microsoft

Today, more and more organizations are focused on delivering new digital solutions to customers and finding that the need for increased agility, improved processes, and collaboration between development and operation teams is becoming business-critical. For over a decade, DevOps has been the answer to these challenges. Understanding the need for DevOps is one thing, but the actual adoption of DevOps in the real world is a whole other challenge. How can an organization with multiple teams and projects, with deeply rooted existing processes, and with considerable legacy software change its ways and embrace DevOps?

At Microsoft, we know something about these challenges. As a company that has been building software for decades, Microsoft consists of thousands of engineers around the world who deliver many different products. From Office to Azure to Xbox, we also found we needed to adapt to a new way of delivering software. The new era of the cloud unlocks tremendous potential for innovation to meet our customers’ growing demand for richer and better experiences, while our competition is not slowing down. The need to accelerate innovation and to transform how we work is real and urgent.

The road to transformation is not easy and we believe that the best way to navigate this challenging path is by following the footsteps of those who have already walked it. This is why we are excited to share our own DevOps journey at Microsoft with learnings from teams across the company who have transformed through the adoption of DevOps.

 

More than just tools

An organization’s success is achieved by providing engineers with the best tools and latest practices. At Microsoft, the One Engineering System (1ES) team drives various efforts to help teams across the company become high performing. The team initially focused on tool standardization and saw some good results—source control issues decreased, and build times and build reliability improved. But over time it became clear that a focus on tooling is not enough: to help teams, 1ES had to focus on culture change as well. Approaching culture change can be tricky. Do you start with quick wins, or try to make a fundamental change at scale? What is the right engagement model for teams of different sizes and maturity levels? Learn more about the experimental journey of the One Engineering System team.

Redefining IT roles and responsibilities

The move to the cloud can challenge the definitions of responsibilities in an organization. As development teams embrace cloud innovation, IT operations teams find that the traditional models of ownership over infrastructure no longer apply. The Manageability Platforms team in the Microsoft Core Service group (previously Microsoft IT), found that the move to Azure required rethinking the way IT and development teams work together. How can the centralized IT model be decentralized so the team can move away from mundane, day-to-day work while improving the relationship with development teams? Explore the transformation of the Manageability Platforms team.

Streamlining developer collaboration

Developer collaboration is a key component of innovation. With that in mind, Microsoft open-sourced the .NET framework to invite the community to collaborate and innovate on .NET. As the project was opened up over time, its scale and complexity became apparent. The project spanned many repositories, each with its own structure and using multiple different continuous integration (CI) systems, making it hard for developers to move between repositories. The .NET infrastructure team at Microsoft decided to invest in streamlining developer processes. The team approached that challenge by standardizing repository structure, sharing tooling, and converging on a single CI system so that both internal and external contributors to the project would benefit. Learn more about the investments made by the .NET infrastructure team.

A journey of continuous learning

DevOps at Microsoft is a journey, not a destination. Teams adapt, try new things, and continue to learn how to change and improve. As there is always more to learn, we will continue to share the transformation stories of additional teams at Microsoft in the coming months. As an extension of this continuous internal learning journey, we invite you to join us on the journey, learn how to embrace DevOps, and empower your teams to build better solutions faster and deliver them to happier customers.


October 2019 unified Azure SDK preview

Welcome back to another release of the unified Azure Data client libraries. For the most part, the API surface areas of the SDKs have been stabilized based on your feedback. Thank you to everyone who has been submitting issues on GitHub and keep the feedback coming.

Please grab the October preview libraries and try them out—throw demanding performance scenarios at them, integrate them with other services, try to debug an issue, or generally build your scenario and let us know what you find.

Our goal is to release these libraries before the end of the year, but we are driven by quality and feedback, and your participation is key.

Getting started

As we did for the last three releases, we have created four pages that unify all the key information you need to get started and give feedback. You can find them here:

For those of you who want to dive deep into the content, the release notes linked above and the changelogs they point to give more details on what has changed. Here we are calling out a few high-level items.

APIs locking down

The surface areas for the Azure Key Vault and Storage libraries are nearly API-complete based on the feedback you’ve given us so far. Thanks again to everyone who has sent feedback, and if anyone has been waiting to try things out and give feedback, now is the time.

Batch API support in Storage

You can now use batching APIs with the Storage SDKs to manipulate large numbers of items in parallel. In Java and .NET you will find a new batching library package in the release notes, while in JavaScript and Python the feature is in the core Storage library.
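For instance, in the Python Storage library the batch capability surfaces as a single call that deletes several blobs in one request. A minimal sketch, assuming the preview azure-storage-blob package, a placeholder connection string, and an existing container named "logs" that holds the listed blobs:

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("logs")

# delete_blobs sends one batched request instead of one round trip per blob.
container.delete_blobs("2019-10-01.log", "2019-10-02.log", "2019-10-03.log")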

Unified credentials

The Azure SDKs that depend on Azure Identity make getting credentials for services much easier.

Each library supports the concept of a DefaultAzureCredential and, depending on where your code runs, it will select the right credential for logging in. For example, if you’re writing code and have signed into Visual Studio or performed an az login from the CLI, the client libraries can automatically pick up the sign-in token from those tools. When you move the code to a service environment, it will attempt to use a managed identity if one is available. See the language-specific READMEs for Azure Identity for more.
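As a minimal Python sketch of that behavior (the account URL is a placeholder), the same code runs locally against your az login or Visual Studio sign-in and, once deployed, against a managed identity:

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential tries developer sign-ins locally and a managed
# identity when the code runs inside an Azure service.
credential = DefaultAzureCredential()
client = BlobServiceClient(
    account_url="https://<your-account>.blob.core.windows.net",
    credential=credential,
)

for container in client.list_containers():
    print(container.name)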

Working with us and giving feedback

So far, the community has filed hundreds of issues against these new SDKs, with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases. Please keep that coming. We work in the open on GitHub, and you can submit issues here:

In addition, we’re excited to say we’ll be attending Microsoft Ignite 2019, so please come and talk to us in person. Finally, please tweet at us at @AzureSdk.

Get started with Azure for free.

Building cloud-native applications with Azure and HashiCorp

With each passing year, more and more developers are building cloud-native applications. As developers build more complex applications, they look to innovators like Microsoft Azure and HashiCorp to reduce the complexity of building and operating these applications. HashiCorp and Azure have worked together on a myriad of innovations, including tools that connect cloud-native applications to legacy infrastructure and tools that secure and automate the continuous deployment of customer applications and infrastructure. Azure is deeply committed to being the best platform for open source software developers like HashiCorp to deliver their tools to their customers in an easy-to-use, integrated way. Azure innovations like the managed applications platform that powers the HashiCorp Consul Service on Azure are great examples of this commitment to collaboration and a vibrant open source startup ecosystem. We’re also committed to the development of open standards that help these ecosystems move forward, and we’re thrilled to have been able to collaborate with HashiCorp on both the CNAB (Cloud Native Application Bundle) and SMI (Service Mesh Interface) specifications.

Last year at HashiConf 2018, I had the opportunity to share how we had started to integrate Terraform and Packer into the Azure platform. I’m incredibly excited to return this year to show how these integrations are progressing and to share a new collaboration on cloud-native networking. With this new work, we now have collaborations that help customers connect and operate their applications on Azure using HashiCorp technology.

Connect — HashiCorp Consul Service on Azure

After containers and Kubernetes, one of the most important innovations in microservices has been the development of the service mesh concept. Earlier this year we partnered with HashiCorp and others to announce the release of the Service Mesh Interface (SMI), a collaborative, implementation-agnostic API for the configuration and deployment of service mesh technology. We collaborated with HashiCorp to produce an implementation of the traffic access control (TAC) rules using Consul Connect. Today we’re excited that Azure customers can take advantage of HashiCorp Consul Service on Azure, powered by the Azure Managed Applications platform. HashiCorp Consul provides a solution to simplify and secure service networking, and with this new managed offering, our joint customers can focus on the value of Consul while being confident that the experts at HashiCorp are taking care of managing the service. This reduces complexity for customers and enables them to focus on cloud-native innovation.

Provision — HashiCorp Terraform on Azure

HashiCorp Terraform is a great tool for doing declarative deployments to Azure. We’re seeing great momentum with the adoption of HashiCorp Terraform on Azure: the number of customers has doubled since the beginning of the year, and customers are using Terraform to automate Azure infrastructure deployment and operation in a variety of scenarios.

The momentum is fantastic on the contribution front as well, with nearly 180 unique contributors to the Terraform provider for Azure Resource Manager. The involvement from the community, combined with our increased three-week release cadence (currently at version 1.32), ensures more coverage of Azure services by Terraform. Additionally, after customer and community feedback regarding the need for additional Terraform modules for Azure, we’ve been working hard at adding high-quality modules and have now doubled the number of Azure modules in the Terraform Registry, bringing it to over 120 modules.

We believe all these additional integrations enable customers to manage infrastructure as code more easily and simplify managing their cloud environments. Learn more about Terraform on Azure.

Microsoft and HashiCorp are working together to provide integrated support for Terraform on Azure. Customers using Terraform on Microsoft’s Azure cloud are mutual customers, and both companies are united to provide troubleshooting and support services. This joint entitlement process provides collaborative support across companies and platforms while delivering a seamless customer experience. Customers using the Terraform provider for Azure can file support tickets with Microsoft support, and customers using Terraform on Azure can file support tickets with either Microsoft or HashiCorp.

Deploy — Collaborating on Cloud Native Application Bundles specification

One of the critical problems solved by containers is the hermetic packaging of a binary into a package that is easy to share and deploy around the world. But a cloud-native application is more than a binary, and this is what led to the co-development, with HashiCorp and others, of the Cloud Native Application Bundle (CNAB) specification. CNABs allow you to package container images alongside configuration tools like Terraform and other artifacts so that a user can seamlessly deploy an application from a single package. I’ve been excited to see the community work together to bring the specification to a 1.0 release, which shows CNAB is ready for all of the world’s deployment needs. Congratulations to the team on the work and the fantastic partnership.

If you want to learn more about the ways in which Azure and HashiCorp collaborate to make cloud-native development easier, please check out the links below:

Preview of custom content in Azure Policy guest configuration

Today we are announcing a preview of a new feature of Azure Policy. The guest configuration capability, which audits settings inside Linux and Windows virtual machines (VMs), is now ready for customers to author and publish custom content.

The guest configuration platform has been generally available for built-in content provided by Microsoft. Customers are using this platform to audit common scenarios such as who has access to their servers, what applications are installed, if certificates are up to date, and whether servers can connect to network locations.

An image of the Definitions page in Azure Policy.

Starting today, customers can use new tooling published to the PowerShell Gallery to author, test, and publish their own content packages both from their developer workstation and from CI/CD platforms such as Azure DevOps.

For example, if you are running an application on an Azure virtual machine that was developed by your organization, you can audit the configuration of that application in Azure and be notified when one of the VMs in your fleet is not compliant.

This is also an important milestone for compliance teams who need to audit configuration baselines. There is already a built-in policy to audit Windows machines using Microsoft’s recommended security configuration baseline. Custom content expands the scenario to a popular source of configuration details: group policy. Tooling is available to convert from group policy format to the desired state configuration syntax used by Azure Policy guest configuration. Group policy is a common format used by organizations that publish regulatory standards, and a popular tool for enterprise organizations to manage servers in private datacenters.

Finally, customers who publish custom content packages can include third-party tooling. Many customers already have tools for auditing settings inside virtual machines before they are released to production. As an example, the gcInSpec module is published as an open source project with maintainers from Microsoft and Chef. Customers can include this module in their content package to audit Windows virtual machines using their existing investment in Chef InSpec.

For more information, and to get started using custom content in Azure Policy guest configuration, see the documentation page “How to create Guest Configuration policies.”

Announcing the general availability of Python support in Azure Functions

Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions 2.0 runtime. These can be published as code or Docker containers to a Linux-based serverless hosting platform in Azure. This stack powers the solution innovations of our early adopters, with customers such as General Electric Aviation and TCF Bank already using Azure Functions written in Python for their serverless production workloads. Our thanks to them for their continued partnership!

In the words of David Havera, blockchain Chief Technology Officer of the GE Aviation Digital Group, “GE Aviation Digital Group’s hope is to have a common language that can be used for backend Data Engineering to front end Analytics and Machine Learning. Microsoft have been instrumental in supporting this vision by bringing Python support in Azure Functions from preview to life, enabling a real world data science and Blockchain implementation in our TRUEngine project.”

Throughout the Python preview for Azure Functions we gathered feedback from the community to build easier authoring experiences, introduce an idiomatic programming model, and create a more performant and robust hosting platform on Linux. This post is a one-stop summary for everything you need to know about Python support in Azure Functions and includes resources to help you get started using the tools of your choice.

Bring your Python workloads to Azure Functions

Many Python workloads align very nicely with the serverless model, allowing you to focus on your unique business logic while letting Azure take care of how your code is run. We’ve been delighted by the interest from the Python community and by the productive solutions built using Python on Functions.

Workloads and design patterns

While this is by no means an exhaustive list, here are some examples of workloads and design patterns that translate well to Azure Functions written in Python.

Simplified data science pipelines

Python is a great language for data science and machine learning (ML). You can leverage the Python support in Azure Functions to provide serverless hosting for your intelligent applications. Consider a few ideas:

  • Use Azure Functions to deploy a trained ML model along with a scoring script to create an inferencing application (see the sketch after this list).

Azure Functions inferencing app

  • Leverage triggers and data bindings to ingest, move, prepare, transform, and process data using Functions.
  • Use Functions to introduce event-driven triggers to re-training and model update pipelines when new datasets become available.
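To make the first idea concrete, here is a hypothetical HTTP-triggered scoring function. It is only a sketch: the model file, feature payload, and packages (scikit-learn and joblib listed in requirements.txt) are assumptions, not part of the original example.

import json
import pathlib

import joblib
import azure.functions as func

# Load the serialized model once per worker process, not on every invocation.
MODEL = joblib.load(pathlib.Path(__file__).parent / "model.pkl")

def main(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_json()  # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    prediction = MODEL.predict(payload["features"]).tolist()
    return func.HttpResponse(
        json.dumps({"prediction": prediction}),
        mimetype="application/json",
    )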

Automated resource management

As an increasing number of assets and workloads move to the cloud, there’s a clear need to provide more powerful ways to manage, govern, and automate the corresponding cloud resources. Such automation scenarios require custom logic that can be easily expressed using Python. Here are some common scenarios:

  • Process Azure Monitor alerts generated by Azure services.
  • React to Azure events captured by Azure Event Grid and apply operational requirements on resources (a sketch follows this list).

Event-driven automated resource management

  • Leverage Azure Logic Apps to connect to external systems like IT service management, DevOps, or monitoring systems while processing the payload with a Python function.
  • Perform scheduled operational tasks on virtual machines, SQL Server, web apps, and other Azure resources.
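As a sketch of the Event Grid idea above, an Event Grid-triggered Python function can inspect the incoming event and apply custom logic. The event type check and log messages are illustrative only, and the function assumes an Event Grid subscription is configured to route events to it.

import logging

import azure.functions as func

def main(event: func.EventGridEvent):
    data = event.get_json()
    logging.info("Received %s for %s", event.event_type, event.subject)

    # Illustrative rule: flag successful resource writes for follow-up automation.
    if event.event_type == "Microsoft.Resources.ResourceWriteSuccess":
        logging.warning("Resource changed: %s", data.get("resourceUri", "<unknown>"))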

Powerful programming model

To power accelerated Python development, Azure Functions provides a productive programming model based on event triggers and data bindings. The programming model is supported by a world class end-to-end developer experience that spans from building and debugging locally to deploying and monitoring in the cloud.

The programming model is designed to provide a seamless experience for Python developers so you can quickly start writing functions using code constructs that you’re already familiar with, or import existing .py scripts and modules to build the function. For example, you can implement your functions as asynchronous coroutines using the async def qualifier or send monitoring traces to the host using the standard logging module. Additional dependencies to pip install can be configured using the requirements.txt file.

Azure Functions programming model
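As a small illustration of that model, an HTTP-triggered function can be written as an async coroutine that logs through the standard logging module (a minimal sketch; the "name" query parameter is just an example):

import logging

import azure.functions as func

async def main(req: func.HttpRequest) -> func.HttpResponse:
    # Traces written with the standard logging module flow to the Functions host.
    logging.info("Python HTTP trigger function processed a request.")
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")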

With the event-driven programming model in Functions, based on triggers and bindings, you can easily configure the events that will trigger the function execution and any data sources the function needs to orchestrate with. This model helps increase productivity when developing apps that interact with multiple data sources by reducing the amount of boilerplate code, SDKs, and dependencies that you need to manage and support. Once configured, you can quickly retrieve data from the bindings or write back using the method attributes of your entry-point function. The Python SDK for Azure Functions provides a rich API layer for binding to HTTP requests, timer events, and other Azure services, such as Azure Storage, Azure Cosmos DB, Service Bus, Event Hubs, or Event Grid, so you can use productivity enhancements like autocomplete and Intellisense when writing your code. By leveraging the Azure Functions extensibility model, you can also bring your own bindings to use with your function, so you can also connect to other streams of data like Kafka or SignalR.

Azure Functions queue trigger example
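In the same spirit, here is a hedged sketch of a queue-triggered function with a blob output binding. It assumes function.json declares a queueTrigger binding named "msg" and a blob output binding named "outblob"; the names and the transformation are illustrative.

import azure.functions as func

def main(msg: func.QueueMessage, outblob: func.Out[str]) -> None:
    body = msg.get_body().decode("utf-8")
    # Write the processed message to the bound blob without touching a storage SDK.
    outblob.set(body.upper())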

Easier development

As a Python developer, you can use your preferred tools to develop your functions. The Azure Functions Core Tools will enable you to get started using trigger-based templates, run locally to test against real-time events coming from the actual cloud sources, and publish directly to Azure, while automatically invoking a server-side dependency build on deployment. The Core Tools can be used in conjunction with the IDE or text editor of your choice for an enhanced authoring experience.

You can also choose to take advantage of the Azure Functions extension for Visual Studio Code for a tightly integrated editing experience to help you create a new app, add functions, and deploy, all within a matter of minutes. The one-click debugging experience enables you to test your functions locally, set breakpoints in your code, and evaluate the call stack, simply with the press of F5. Combine this with the Python extension for Visual Studio Code, and you have an enhanced Python development experience with auto-complete, Intellisense, linting, and debugging.

Azure Functions Visual Studio Code development

For a complete continuous delivery experience, you can now leverage the integration with Azure Pipelines, one of the services in Azure DevOps, via an Azure Functions-optimized task to build the dependencies for your app and publish them to the cloud. The pipeline can be configured using an Azure DevOps template or through the Azure CLI.

Advanced observability and monitoring through Azure Application Insights is also available for functions written in Python, so you can monitor your apps using the live metrics stream, collect data, query execution logs, and view distributed traces across a variety of services in Azure.

Host your Python apps with Azure Functions

Host your Python apps with the Azure Functions Consumption plan or the Azure Functions Premium plan on Linux.

The Consumption plan is now generally available for Linux-based hosting and ready for production workloads. This serverless plan provides event-driven dynamic scale and you are charged for compute resources only when your functions are running. Our Linux plan also now has support for managed identities, allowing your app to seamlessly work with Azure resources such as Azure Key Vault, without requiring additional secrets.

Azure Functions Linux Consumption managed identities
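A minimal Python sketch of what this enables from inside a function app (the vault URL and secret name are placeholders, and the app’s managed identity must be granted access to the vault’s secrets):

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The managed identity assigned to the function app authenticates the call;
# no connection string or client secret is stored in app settings.
credential = ManagedIdentityCredential()
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=credential,
)

secret = client.get_secret("db-password")  # hypothetical secret name
print(secret.name)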

The Consumption plan for Linux hosting also includes a preview of integrated remote builds to simplify dependency management. This new capability is available as an option when publishing via the Azure Functions Core Tools and enables you to build in the cloud on the same environment used to host your apps as opposed to configuring your local build environment in alignment with Azure Functions hosting.

Python remote build with Azure Functions

Workloads that require advanced features such as more powerful hardware, the ability to keep instances warm indefinitely, and virtual network connectivity can benefit from the Premium plan with Linux-based hosting now available in preview.

Azure Functions Premium plan virtual network integration

With the Premium plan for Linux hosting you can choose between bringing only your app code or bringing a custom Docker image to encapsulate all your dependencies, including the Azure Functions runtime as described in the documentation “Create a function on Linux using a custom image.” Both options benefit from avoiding cold start and from scaling dynamically based on events.

Azure Functions Premium plan hosting for code or containers

Next steps

Here are a few resources you can leverage to start building your Python apps in Azure Functions today:

On the Azure Functions team, we are committed to providing a seamless and productive serverless experience for developing and hosting Python applications. With so much being released now and coming soon, we’d love to hear your feedback and learn more about your scenarios. You can reach the team on Twitter and on GitHub. We actively monitor StackOverflow and UserVoice as well, so feel free to ask questions or leave your suggestions. We look forward to hearing from you!

Azure Stream Analytics now supports MATCH_RECOGNIZE

MATCH_RECOGNIZE in Azure Stream Analytics significantly reduces the complexity and cost associated with building, modifying, and maintaining queries that match sequences of events for alerts or further data computation.

What is Azure Stream Analytics?

Azure Stream Analytics is a fully managed, serverless PaaS offering on Azure that enables customers to analyze and process fast-moving streams of data and deliver real-time insights for mission-critical scenarios. Developers can use a simple SQL language, extensible with custom code, to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.

Traditional way to incorporate pattern matching in stream processing

Many customers use Azure Stream Analytics to continuously monitor massive amounts of data, detect sequences of events, and derive alerts or aggregate data from those events. This, in essence, is pattern matching.

For pattern matching, customers traditionally relied on multiple joins, each one detecting a single event. These joins are combined to find a sequence of events, compute results, or create alerts. Developing pattern matching queries this way is complex and error prone, and the queries are difficult to maintain and debug. There are also limitations when trying to express more complex patterns involving Kleene star, Kleene plus, or wildcards.

To address these issues and improve the customer experience, Azure Stream Analytics provides a MATCH_RECOGNIZE clause to define patterns and compute values from the matched events. The MATCH_RECOGNIZE clause increases user productivity because it is easy to read, write, and maintain.

Typical scenario for MATCH_RECOGNIZE

Event matching is an important aspect of data stream processing. The ability to express and search for patterns in a data stream enables users to create simple yet powerful algorithms that can trigger alerts or compute values when a specific sequence of events is found.

An example scenario would be a food preparation facility with multiple cookers, each with its own temperature monitor. A shutdown operation for a specific cooker needs to be generated if its temperature doubles within five minutes. In that case, the cooker must be shut down because its temperature is increasing too rapidly and could either burn the food or cause a fire hazard.

Query
SELECT * INTO ShutDown from Temperature
MATCH_RECOGNIZE (
     LIMIT DURATION (minute, 5)
     PARTITION BY cookerId
     AFTER MATCH SKIP TO NEXT ROW
     MEASURES
         1 AS shouldShutDown
     PATTERN (temperature1 temperature2)
     DEFINE
         temperature1 AS temperature1.temp > 0,
         temperature2 AS temperature2.temp > 2 * MAX(temperature1.temp)
) AS T

In the example above, MATCH_RECOGNIZE defines a limit duration of five minutes, the measures to output when a match is found, the pattern to match, and how each pattern variable is defined. Once a match is found, an event containing the MEASURES values is output into ShutDown. The match is partitioned over all the cookers by cookerId, and each partition is evaluated independently of the others.

MATCH_RECOGNIZE brings an easier way to express pattern matching, decreases the time spent writing and maintaining pattern matching queries, and enables richer scenarios that were practically impossible to write or debug before.

Get started with Azure Stream Analytics

Azure Stream Analytics enables the processing of fast-moving streams of data from IoT devices, applications, clickstreams, and other data streams in real-time. To get started, refer to the Azure Stream Analytics documentation.

When to use Azure Service Health versus the status page

If you’re experiencing problems with your applications, a great place to start investigating solutions is through your Azure Service Health dashboard. In this blog post, we’ll explore the differences between the Azure status page and Azure Service Health. We’ll also show you how to get started with Service Health alerts so you can stay better informed about service issues and take action to improve your workloads’ availability.

How and when to use the Azure status page

The Azure status page works best for tracking major outages, especially if you’re unable to log into the Azure portal or access Azure Service Health. Many Azure users visit the status page regularly. It predates Azure Service Health and has a friendly format that shows the status of all Azure services and regions at a glance.


The Azure status page, however, doesn’t show all the information about the health of your Azure services and regions. The status page isn’t personalized, so you need to know exactly which services and regions you’re using and locate them in the grid. The status page also doesn’t include information about non-outage events that could affect your availability, such as planned maintenance events and health advisories (think service retirements and misconfigurations). Finally, the status page doesn’t have a way to notify you automatically in the event of an outage or a planned maintenance window that might affect you.

For all of these use cases, we created Azure Service Health.

How and when to use Azure Service Health

At the top of the Azure status page, you’ll find a button directing you to your personalized dashboard. One common misunderstanding is that this button allows you to personalize the status page grid of services and regions. Instead, the button takes you into the Azure portal to Azure Service Health, the best option for viewing Azure events that may impact the availability of your resources.


In Service Health, you’ll find information about everything from minor outages that affect you to planned maintenance events and other health advisories. The dashboard is personalized, so it knows which services and regions you’re using and can even help you troubleshoot by offering a list of potentially impacted resources for any given event.


Service Health’s most useful feature is Service Health alerts. With Service Health alerts, you’ll proactively receive notifications via your preferred channel—email, SMS, push notification, or even webhook into your internal ticketing system like ServiceNow or PagerDuty—if there’s an issue with your services and regions. You don’t have to keep checking Service Health or the status page for updates and can instead focus on other important work.


Set up your Service Health alerts today

Feel free to keep using the status page for quick updates on major outages. However, we highly encourage you to make it a habit to visit Service Health to stay informed of all potential impacts to your availability and to take advantage of rich features like automated alerting.

Set up your Azure Service Health alerts today in the Azure portal. For more in-depth guidance, visit the Azure Service Health documentation. Let us know if you have a suggestion by submitting an idea here.

Automate MLOps workflows with Azure Machine Learning service CLI

This blog was co-authored by Jordan Edwards, Senior Program Manager, Azure Machine Learning

This year at Microsoft Build 2019, we announced a slew of new releases as part of the Azure Machine Learning service focused on MLOps. These capabilities help you automate and manage the end-to-end machine learning lifecycle.


Historically, the management plane for the Azure Machine Learning service has been its Python SDK. To make our service more accessible to IT and app development customers who are unfamiliar with Python, we have delivered an extension to the Azure CLI focused on interacting with Azure Machine Learning.

While it’s not a replacement for the Azure Machine Learning service Python SDK, it is a complementary tool optimized to handle highly parameterized tasks that lend themselves well to automation. With this new CLI, you can easily perform a variety of automated tasks against the machine learning workspace, including:

  • Datastore management
  • Compute target management
  • Experiment submission and job management
  • Model registration and deployment

Combining these commands enables you to train a model, register it, package it, and deploy it as an API. To help you quickly get started with MLOps, we have also released a predefined template in Azure Pipelines. This template allows you to easily train, register, and deploy your machine learning models. Data scientists and developers can work together to build a custom application for their scenario, built from their own data set.
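The CLI commands themselves are shell commands; as a rough comparison, here is the register-and-deploy portion of that flow sketched with the Python SDK mentioned above (the file names, environment file, and service name are illustrative):

from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads config.json for the target workspace

# Register a previously trained model file with the workspace.
model = Model.register(workspace=ws, model_path="outputs/model.pkl", model_name="demo-model")

# Package the model with a scoring script and deploy it as a web service (API).
inference_config = InferenceConfig(entry_script="score.py", runtime="python", conda_file="env.yml")
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "demo-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)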

The Azure Machine Learning service command-line interface is an extension to the Azure CLI, the command-line interface for the Azure platform. This extension provides commands for working with the Azure Machine Learning service from the command line and allows you to automate your machine learning workflows. Some key scenarios include:

  • Running experiments to create machine learning models
  • Registering machine learning models for customer usage
  • Packaging, deploying, and tracking the lifecycle of machine learning models

To use the Azure Machine Learning CLI, you must have an Azure subscription. If you don’t have an Azure subscription, you can create a free account before you begin. Try the free or paid version of Azure Machine Learning service to get started today.

Next steps

Learn more about the Azure Machine Learning service.

Get started with a free trial of the Azure Machine Learning service.