Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team


We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features and content that help ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we’re making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

Demonstrating our commitment to protecting user privacy and student data

Assessing third-party vendors for security risks and data privacy policies is a crucial responsibility of any higher education institution, and this can be a time-consuming and burdensome task for campus IT professionals. To help ease these challenges, the higher education information security community, EDUCAUSE, Internet2, and the Research & Education Networks Information Sharing & Analysis Center (REN-ISAC) created the Higher Education Cloud Vendor Assessment Toolkit (HECVAT). Today we’re demonstrating our core commitment to protecting user data, documenting our extensive platform security capabilities by completing this comprehensive security assessment for Google Cloud Platform (GCP) and G Suite.

Google and a team of campuses recently completed the NET+ service validation to launch a NET+ GCP offering which provides Internet2 higher education members key enhancements to our standard GCP education terms. As part of that rigorous peer-driven review, four universities examined multiple components of GCP, including security, identity management, networking, accessibility, and legal terms, to validate its capabilities. By completing the HECVAT process, we’ve strengthened our support for EDUCAUSE, Internet2, the REN-ISAC, and the global research community.

The HECVAT self-assessments for GCP and G Suite cover our existing certifications (from the ISO 27000 standards, for example) and compliance with industry standards and detail authentication, data encryption methods, disaster recovery plans, and more. By completing this rigorous self-survey, we’re demonstrating our commitment to transparency and documenting the strict security protocols built into our infrastructure.

“The Higher Education Cloud Vendor Assessment Tool (HECVAT) was created by a Higher Education Information Security Council working group, in collaboration with campus participants, EDUCAUSE, Internet2, and REN-ISAC, to help institutions rapidly assess cloud services and reduce the resources needed for assessments,” said Nick Lewis, Program Manager for Security and Identity at Internet2. “Google’s adoption of HECVAT as part of the Internet2 NET+ Google Cloud Platform offering assures campuses of Google’s ongoing commitment to higher education’s unique security needs, advanced higher education information security, and supporting collaborative work.”

You can find Google’s HECVAT self-assessments on REN-ISAC’s Cloud Broker Index. To learn more about how HECVAT works, read the recent blog, What’s Next for HECVAT, from EDUCAUSE.

Thinking about cloud security? Join us for a new round of Google Cloud Security Talks

As more and more organizations migrate to the cloud, it’s vital to understand the resources at your disposal to protect your users, applications, and data. To help you navigate the latest thinking in cloud security, we hope you’ll join us for the Google Cloud Security Talks, a live online event on June 10.

You’ll get a variety of expert insights on some of the most pressing cloud security topics, on Google Cloud and beyond, including:

  • Security essentials in the cloud
  • Enabling BeyondCorp in your organization today
  • Protecting yourself from bleeding-edge phishing and malware attacks
  • Best practices around shared security
  • Solving security use-cases in G Suite
  • A deep dive into Cloud Security Command Center
  • Unifying user, device, and app management with Cloud Identity
  • Preventing data exfiltration on GCP

You can view the full agenda and register for the event at no cost to you on our website. We hope you can join us!

Scan BigQuery for sensitive data using Cloud DLP

Preventing the exposure of sensitive data is critically important for many businesses, particularly those that operate in industries with substantial compliance needs, such as finance and healthcare. Cloud Data Loss Prevention (DLP) can help meet those needs and protect sensitive data through data discovery, classification, and redaction. But in some cases, you might need more awareness of, and quicker access to, Cloud DLP capabilities in the context of other GCP services such as BigQuery. Today, we’re making it easier to discover and classify sensitive data in BigQuery with the Scan with DLP button. This new feature makes it possible to run DLP scans with just a few clicks, directly from the BigQuery UI.

Cloud DLP in action

Here’s what you can do:

  • Detect common sensitive data types such as credit card numbers or custom sensitive data types to highlight intellectual property or proprietary business information.
  • Create triggers for automatic Cloud DLP scan scheduling.
  • Publish Cloud DLP scan findings to BigQuery and Cloud Security Command Center for further analysis and reporting.
  • De-identify and obfuscate sensitive data.
  • Use the Cloud DLP UI (Beta) to create, manage, and trigger DLP scans across multiple GCP services, such as BigQuery, Cloud Storage, and Datastore.
  • Scan a subset of your entire dataset using the sampling feature to keep your Cloud DLP costs under control.

Today, BigQuery customers can start using Cloud DLP to scan for sensitive data with a few clicks, following these simple steps:

1. Browse to a particular BigQuery table and choose Scan with DLP from the Export menu.


2. Complete the Cloud DLP scan job creation with a click, or specify custom configurations such as information types to scan, sampling versus full scanning, post-scan actions, and more.


3. Once a Cloud DLP scan completes, you will receive an email with links to the scan details page, where you can analyze findings and take further action.
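
If you prefer to kick off the same kind of scan programmatically, for example from a scheduled pipeline, the sketch below shows roughly what that looks like with the google-cloud-dlp Python client. The project, dataset, table, info types, and sampling limit are placeholders, not a prescribed configuration.

# Minimal sketch: create a Cloud DLP inspect job over a BigQuery table.
# Assumes the google-cloud-dlp client library; all IDs below are placeholders.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"

inspect_job = {
    "storage_config": {
        "big_query_options": {
            "table_reference": {
                "project_id": "my-project",
                "dataset_id": "my_dataset",
                "table_id": "my_table",
            },
            # Sample a subset of rows to keep Cloud DLP costs under control.
            "rows_limit": 1000,
            "sample_method": "RANDOM_START",
        }
    },
    "inspect_config": {
        "info_types": [{"name": "CREDIT_CARD_NUMBER"}, {"name": "EMAIL_ADDRESS"}],
        "min_likelihood": "POSSIBLE",
    },
    "actions": [{
        # Publish findings back to BigQuery for further analysis.
        "save_findings": {
            "output_config": {
                "table": {
                    "project_id": "my-project",
                    "dataset_id": "dlp_results",
                    "table_id": "findings",
                }
            }
        }
    }],
}

job = client.create_dlp_job(request={"parent": parent, "inspect_job": inspect_job})
print("Started DLP job:", job.name)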

You can also quickly scan your other cloud-based data repositories with the Data Loss Prevention (DLP) user interface, now available in beta. Through this new interface, you can run DLP scans with just a few clicks—no code required, and no hardware or VMs to manage. Get started today in the GCP console.

To learn more, check out our Cloud DLP documentation.

Cloud Scheduler, a fully managed cron job service from Google Cloud

At Google Cloud Next, we announced the general availability of Cloud Scheduler, a fully managed cron job service that allows any application to invoke batch, big data and cloud infrastructure operations. Since then, we have added an important new feature that allows you to trigger any service, running anywhere: on-prem, on Google Cloud or any third party datacenter.

Invoke any service with Cloud Scheduler
Now, you can securely invoke HTTP targets on a schedule to reach services running on Google Kubernetes Engine (GKE), Compute Engine, Cloud Run, Cloud Functions, or on on-prem systems or elsewhere with a public IP using industry-standard OAuth/OpenID Connect authentication.


With Cloud Scheduler, you get the following benefits:

  • Reliable delivery: Cloud Scheduler offers at-least-once delivery of a job to the target, guaranteeing that mission-critical jobs are invoked for execution.
  • Secure invocation: Use industry-standard OAuth/OpenID Connect tokens to invoke your HTTP/S schedules securely. (NEW)
  • Fault-tolerant execution: Cloud Scheduler automates retries and executes jobs in a fault-tolerant manner by deploying to different regions, eliminating the single point of failure found in traditional cron services.
  • Unified management experience: Cloud Scheduler lets you invoke your schedules through the UI, CLI, or API within a single-pane-of-glass management experience. It also supports the familiar Unix cron format for defining job schedules.

Better yet, Cloud Scheduler does all this in a fully managed, serverless fashion: there is no need to provision the underlying infrastructure or intervene manually, since it automatically retries failed jobs. You also pay only for the operations you run; GCP takes care of all the resource provisioning, replication, and scaling required to operate Cloud Scheduler. As a developer, you simply create your schedules and Cloud Scheduler handles the rest.

How Cloud Scheduler works
To schedule a job, you can use the Cloud Scheduler UI, CLI or API to invoke your favorite HTTP/S endpoint, Cloud Pub/Sub topic or App Engine application. Cloud Scheduler runs a job by sending an HTTP request or Cloud Pub/Sub message to a specified target destination on a recurring schedule. The target handler executes the job and returns a response. If the job succeeds, a success code (2xx for HTTP/App Engine and 0 for Pub/Sub) is returned to Cloud Scheduler. If a job fails, an error is sent back to Cloud Scheduler, which then retries the job until the maximum number of attempts is reached. Once the job has been scheduled, you can monitor it on the Cloud Scheduler UI and check the status of the job.
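
As a concrete example, here is a minimal sketch of creating such a job with the google-cloud-scheduler Python client, using an OIDC token to authenticate the HTTP target. The endpoint URL, service account, schedule, and location are placeholder values.

# Minimal sketch: create a Cloud Scheduler job with an OIDC-authenticated HTTP target.
# Assumes the google-cloud-scheduler client library; all values below are placeholders.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/us-central1"

job = {
    "name": f"{parent}/jobs/nightly-report",
    "schedule": "0 2 * * *",              # Unix cron format: every day at 02:00
    "time_zone": "America/Los_Angeles",
    "http_target": {
        "uri": "https://example.com/run-report",   # any publicly reachable HTTP/S endpoint
        "http_method": scheduler_v1.HttpMethod.POST,
        "oidc_token": {
            "service_account_email": "scheduler-invoker@my-project.iam.gserviceaccount.com"
        },
    },
    "retry_config": {"retry_count": 3},   # retried when the target does not return success
}

created = client.create_job(request={"parent": parent, "job": job})
print("Created job:", created.name)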

Glue together an end-to-end solution
Cloud Scheduler can be used to architect interesting solutions like wiring together a reporting system on a schedule using Cloud Functions, Compute Engine, Cloud Pub/Sub and Stackdriver. Here’s an example from Garrett Kutcha from Target at Cloud Next 2019.

Cloud Scheduler architecture

Wondering how organizations are using serverless tools in real life? Step beyond hello world and learn about real-life serverless systems: from terabyte-level retail databases to IoT-powered smart cities, customers are using the power of serverless to increase productivity and revolutionize how their systems are architected and maintained.

You can also use Cloud Scheduler to do things like schedule database updates and push notifications, trigger CI/CD pipelines, schedule tasks such as image uploads, and invoke cloud functions. Tightly integrated with most Google Cloud Platform (GCP) products, the sky’s the limit with Cloud Scheduler!

Get started today
With Cloud Scheduler, you now have a modern, serverless solution to your job scheduling needs. To try out Cloud Scheduler today, check out the quickstart guide. Then, create and configure your own schedules using the documentation or start a free trial on GCP!

Build your own event-sourced system using Cloud Spanner

When you’re developing a set of apps or services that are coordinated by, or dependent on, an event, you can take an event-sourced approach to model that system in a thorough way. Event-sourced systems are great for solving a variety of complex development problems, such as triggering a series of tasks based on an event or creating an ordered list of events to process into audit logs. But there isn’t really an off-the-shelf solution for getting an event-sourced system up and running.

With that in mind, we are pleased to announce a newly published guide to Deploying Event-Sourced Systems with Cloud Spanner, our strongly consistent, scalable database service that’s well suited to this kind of project. This guide walks you through the why and the how of bringing an event-sourced system into being, and how to tackle some key challenges, like keeping messages in order, creating a schema registry, and triggering workloads based on published events. Based on the feedback we got while talking through this guide with teammates and customers, we went one step further and published working versions of the apps described in the guide in our GitHub repo, so you can easily introduce event-sourced development into your own environment.

Getting started deploying event-sourced systems
In the guide, you’ll find out how to make an event-sourced system that uses Cloud Spanner as the ingest sink, then automatically publishes each record to Cloud Pub/Sub. Cloud Spanner as a sink solves two key challenges of creating event-sourced systems: performing multi-region writes and adding global timestamps.

Multi-region writes are necessary for systems that run in multiple places—think on-prem and cloud, east and west coast for disaster recovery, etc. Multi-region writes are also great for particular industries, like retailers that want to send events to a single system from each of their stores for things like inventory tracking, rewards updates, and real-time sales metrics.

Cloud Spanner also has the benefit of TrueTime, letting you easily add a globally consistent timestamp for each of your events. This establishes a ground truth of the order of all your messages for all time. This means you can make downstream assumptions anchored in that ground truth, which solves all sorts of complexity for dependent systems and services.
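
As a small illustration, the sketch below uses the google-cloud-spanner Python client to insert an event and let Cloud Spanner stamp it with its TrueTime commit timestamp. The instance, database, table, and column names are hypothetical, and the timestamp column is assumed to have been created with the allow_commit_timestamp option.

# Minimal sketch: insert an event stamped with Cloud Spanner's TrueTime commit timestamp.
# Assumes the google-cloud-spanner client library; instance, database, table, and column
# names are hypothetical. The event_timestamp column must be created with
# OPTIONS (allow_commit_timestamp = true).
from google.cloud import spanner

client = spanner.Client()
database = client.instance("events-instance").database("events-db")

with database.batch() as batch:
    batch.insert(
        table="events",
        columns=("event_id", "payload", "event_timestamp"),
        values=[("evt-001", '{"sku": "123", "qty": 2}', spanner.COMMIT_TIMESTAMP)],
    )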

In the guide, we use Avro as a serialization format. The two key reasons for this are that the schema travels with the message, and BigQuery supports direct uploading of Avro records. So even if your events change over time, you can continue to process them using the same systems without having to maintain and update a secondary schema registry and versioning system. Plus, you get an efficient record format on the wire and for durable storage.
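
As a rough illustration of that idea, the sketch below serializes a single event to the Avro object container format, which embeds the schema alongside the data. It uses the fastavro library, and the InventoryEvent schema is a made-up example, not the schema used in the guide.

# Minimal sketch: serialize one event to Avro container bytes with an embedded schema.
# Uses the fastavro library; the InventoryEvent schema is hypothetical.
import io
import fastavro

schema = fastavro.parse_schema({
    "type": "record",
    "name": "InventoryEvent",
    "fields": [
        {"name": "event_id", "type": "string"},
        {"name": "store_id", "type": "string"},
        {"name": "quantity", "type": "int"},
        {"name": "commit_timestamp", "type": "string"},
    ],
})

def to_avro_bytes(event: dict) -> bytes:
    """Write a single event as an Avro container file in memory.

    Because the container embeds the schema, BigQuery (or any Avro reader)
    can load the bytes without consulting a separate schema registry.
    """
    buf = io.BytesIO()
    fastavro.writer(buf, schema, [event])
    return buf.getvalue()

payload = to_avro_bytes({
    "event_id": "evt-001",
    "store_id": "store-42",
    "quantity": 2,
    "commit_timestamp": "2019-05-17T12:00:00Z",
})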

Finally, we discuss storing each record in Cloud Storage for archiving and replay. With this strategy, you can create highly reliable long-term storage and your system of record in Cloud Storage, while also allowing your system to replay records from any timestamp in the past. This concept can build the foundation for a backup and restore pattern for Cloud Spanner, month-over-month analysis of events, and even end-of-month reports or audit trails.

Here’s a look at the architecture described in the guide, and details about the services and how you can deploy them.

architecture.png
  • Poller app: Polls Cloud Spanner, converts the record format to Avro, and publishes to Cloud Pub/Sub.
  • Archiver: Gets events triggered by messages published to a Cloud Pub/Sub topic and writes those records to a global archive in Cloud Storage.
  • BQLoader: Gets triggered by records written to Cloud Storage and loads those records into a corresponding BigQuery table.
  • Janitor: Reads all entries written to the global archive at a fixed rate, then compresses them for long-term storage.
  • Replayer: Reads the records in order from long-term storage, decompresses them, and loads them into a new Cloud Pub/Sub stream.
  • Materializer: Filters records written to Cloud Pub/Sub, then loads them to a corresponding Redis (materialized view) database for easy query access.

Building your own event-sourced system
It can be a lot of work to build out each of these components, test them and then maintain them. The set of services we released in GitHub do just that. You can use these services out of the box, or clone the repo and use them as examples or starting points for more sophisticated, customized systems for your use cases. We encourage you to file bugs and to add feature requests for things you would like to see in our services. Here’s a bit more detail on each of the services:

Poller
The core service, Spez, is a polling system for Cloud Spanner. Spez is intended to be deployed in a Kubernetes cluster (we suggest Google Kubernetes Engine, naturally) and is a long-running service. Spez polls Cloud Spanner at a fixed interval, looks for any newly written records, serializes them to Avro, and then publishes them to Cloud Pub/Sub. It also populates the table name and the Cloud Spanner TrueTime timestamp as metadata on the Cloud Pub/Sub record. All of the configurable bits live in Kubernetes ConfigMaps and are loaded into the service via environment variables.
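
The code in the repo is the authoritative implementation, but the shape of the polling loop can be sketched roughly as follows in Python. This is a simplified stand-in, not Spez itself: the instance, database, table, and topic names are placeholders, and the real service handles Avro serialization, ordering, and delivery guarantees much more carefully.

# Rough sketch of the polling loop (not the actual Spez implementation).
# Assumes the google-cloud-spanner and google-cloud-pubsub client libraries;
# instance, database, table, and topic names are placeholders.
import datetime
import time

from google.cloud import pubsub_v1, spanner
from google.cloud.spanner_v1 import param_types

spanner_client = spanner.Client()
database = spanner_client.instance("events-instance").database("events-db")
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "events-topic")

# High-water mark based on the commit timestamp column.
last_seen = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

while True:
    with database.snapshot() as snapshot:
        rows = snapshot.execute_sql(
            "SELECT event_id, payload, event_timestamp FROM events "
            "WHERE event_timestamp > @last ORDER BY event_timestamp",
            params={"last": last_seen},
            param_types={"last": param_types.TIMESTAMP},
        )
        for event_id, payload, commit_ts in rows:
            # Spez publishes Avro-serialized records; a raw payload is used here
            # for brevity. The table name and TrueTime timestamp ride along as
            # Cloud Pub/Sub message attributes.
            publisher.publish(
                topic_path,
                payload.encode("utf-8"),
                table="events",
                commit_timestamp=commit_ts.isoformat(),
            )
            last_seen = commit_ts
    time.sleep(2)  # fixed polling interval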

Archiver
The repo also includes a Cloud Function that is triggered whenever a record is written to Cloud Pub/Sub, and stores that record in Cloud Storage. It creates a unique name for the record and populates the table name and TrueTime timestamp as metadata on the blob. This makes it possible to order, filter, and replay the records without having to download each one first.
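
A background Cloud Function along those lines might look like the following sketch. It is a simplified stand-in for the function in the repo, assuming the Python background-function signature for Pub/Sub triggers and the google-cloud-storage client; the bucket name and object-naming scheme are illustrative.

# Rough sketch of the archiver (not the actual function from the repo).
# Assumes the Python background Cloud Functions runtime and the
# google-cloud-storage client library; the bucket name is a placeholder.
import base64

from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket("events-archive")

def archive_event(event, context):
    """Triggered by a message published to a Cloud Pub/Sub topic."""
    record = base64.b64decode(event["data"])
    attributes = event.get("attributes") or {}
    table = attributes.get("table", "unknown")
    commit_ts = attributes.get("commit_timestamp", context.timestamp)

    # Unique, sortable object name; the table and timestamp are also kept as
    # blob metadata so records can be ordered and filtered without downloading.
    blob = bucket.blob(f"{table}/{commit_ts}-{context.event_id}.avro")
    blob.metadata = {"table": table, "commit_timestamp": commit_ts}
    blob.upload_from_string(record)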

Replayer
We’ve added a feature in Spez that allows you to replay records from one timestamp to another. Replayer lets you choose whether to replay to Cloud Pub/Sub, a Cloud Spanner table, or a Spez queue. You can use this to back up and restore a Cloud Spanner table, fan out Cloud Pub/Sub ledgers, create analytics platforms, or run monthly audits. We’re very excited about this one!

Spez queue
So, what is a Spez queue? Spez queues are Cloud Spanner-backed queues with a Java client listener that triggers a function each time a record is added to the queue. Spez queues can guarantee exact ordering across the world (thanks to Cloud Spanner) as well as exactly-once delivery (as long as there is only one subscriber). Spez queues give you a high-performance, feature-rich alternative to Kafka or Cloud Pub/Sub as your event ledger or as a general coordination queue.

We’re very excited to share these resources with you. To get started, read the guide and download the services.

Announcing service monitor alliances for Azure Deployment Manager

Azure Deployment Manager is a new set of features for Azure Resource Manager that greatly expands your deployment capabilities. If you have a complex service that needs to be deployed to several regions, if you’d like greater control over when your resources are deployed in relation to one another, or if you’d like to limit your customers’ exposure to bad updates by catching them while in progress, then Deployment Manager is for you. Deployment Manager allows you to perform staged rollouts of resources, meaning they are deployed region by region in an ordered fashion.

During Microsoft Build 2019, we announced that Deployment Manager now supports integrated health checks. This means that as your rollout proceeds, Deployment Manager will integrate with your existing service health monitor, and if during deployment unacceptable health signals are reported from your service, the deployment will automatically stop and allow you to troubleshoot.

In order to make health integration as easy as possible, we’ve been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you’re not already using a health monitor, these are great solutions to start with:

  • Datadog, the leading monitoring and analytics platform for modern cloud environments. See how Datadog integrates with Azure Deployment Manager.
  • Site24x7, the all-in-one private and public cloud services monitoring solution. See how Site24x7 integrates with Azure Deployment Manager.
  • Wavefront, the monitoring and analytics platform for multi-cloud application environments. See how Wavefront integrates with Azure Deployment Manager.

These service monitors provide a simple copy/paste solution for integrating with Azure Deployment Manager’s health-integrated rollout feature, allowing you to easily prevent bad updates from having a far-reaching impact across your user base. Stay tuned for Azure Monitor integration, which is coming soon.

Additionally, Azure Deployment Manager no longer requires sign-up for use, and is now completely open to the public!

To get started, check out the tutorial “Use Azure Deployment Manager with Resource Manager templates (Public preview)” or the documentation “Enable safe deployment practices with Azure Deployment Manager (Public preview)”. If you want to try out the health integration feature, check out the tutorial “Use health check in Azure Deployment Manager (Public preview)” for an end-to-end walkthrough.

We’re excited to have you give Azure Deployment Manager a try, and, as always, we are listening to your feedback.

Azure Cost Management updates – May 2019

Whether you’re a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We’re always looking for ways to learn more about your challenges and how Cost Management can help you better understand how and where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback.

Let’s dig into the details…

 

Expanded general availability (GA): Pay-as-you-go and Azure Government

Azure Cost Management is now generally available for the following account types:

Public cloud

  • Enterprise Agreements (EA)
  • Microsoft Customer Agreements (MCA)
  • Pay-as-you-go (PAYG) and dev/test subscriptions

Azure Government

  • Enterprise Agreements

Stay tuned for more information about preview support for additional account types and clouds, like Cloud Solution Providers (CSP) and Sponsorship subscriptions. We know how critical it is for you to have a rich set of cost management tools for every account across every cloud, and we hear you loud and clear.

 

New preview: Manage AWS and Azure costs together in the Azure portal

Many organizations are adopting multi-cloud strategies for additional flexibility, but with increased flexibility comes increased complexity. From different cost models and billing cycles to underlying cloud architectures, having a single cross-cloud cost management solution is no longer a luxury, but a fundamental requirement to efficiently and effectively monitor, control, and optimize costs. This is where Azure Cost Management can help.

Start by creating a new AWS cloud connector from the Azure portal. From the home page of the Azure portal select the Cost Management tile. Then, select Cloud connectors (preview) and click the “Add” command. Simply specify a name, pick the management group you want AWS costs to be rolled up to, and configure the AWS connection details.

An image of the "Create an AWS connector" options screen.

Cost Management will start ingesting AWS costs as soon as the AWS cost and usage report is available. If you created a new cost and usage report, AWS may take up to 24 hours to start exporting data. You can check the latest status from the cloud connectors list.

An image showing a cost report for AWS in the Cost Management tool.

Once available, open cost analysis and change the scope to the management group you selected when creating the connector. Group by provider to see a breakdown of AWS and Azure costs. If you connected multiple AWS accounts or have multiple Azure billing accounts, group by billing account to see a breakdown by account.

In addition to seeing AWS and Azure costs together, you can also change the scope to your AWS consolidated or linked accounts to drill into AWS costs specifically. Create budgets for your AWS scopes to get notified as costs hit important thresholds.

Managing AWS costs is free during the preview. If you would like to automatically upgrade when AWS support is generally available, navigate to the connector, select the Automatically charge the 1 percent at general availability option, and then select the desired subscription to charge.

For more information about managing AWS costs, see the documentation “Manage AWS costs and usage in Azure.”

 

New getting started videos

Learning a new service can take time. Reading through documentation is great, but you’ve told us that sometimes you just want a quick video to get you started. Well, here are eight:

If you’re looking for something a little more in-depth, try these:

 

Monitor costs based on your pay-as-you-go billing period

As you know, your pay-as-you-go and dev/test subscriptions are billed based on the day you signed up for Azure. They don’t map to calendar months, like EA and MCA billing accounts. This has made reporting on and controlling costs for each bill a little harder, but now you have the tools you need to effectively manage costs based on your specific billing cycle.

When you open cost analysis for a PAYG subscription, it defaults to the current billing period. From there, you can switch to a previous billing period or select multiple billing periods. More on the extended date picker options later.

An image showing how to choose which billing period to view.

If you want to get notified before your bill hits a specific amount, create a budget for the billing month. You can also specify if you want to track a quarterly or yearly budget by billing period.

An image showing the budget creation screen.

Sometimes you need to export data and integrate it with your own datasets. Cost Management offers the ability to automatically push data to a storage account on a daily, weekly, or monthly basis. Now you can export your data as it is aligned to the billing period, instead of the calendar month.

An image showing the data export page.

We love hearing your suggestions, so let us know if there’s anything else that would help you better manage costs during your personalized billing period.

 

More comprehensive scheduled exports

Scheduled exports enable you to react to new data being pushed to you instead of periodically polling for updates. As an example, a daily export of month-to-date data will push a new CSV file every day from January 1-31. These daily month-to-date exports have been updated to continue to push data on the configured schedule until they include the full dataset for the period. For example, the same daily month-to-date export would continue to push new January data on February 1 and February 2 to account for any data that may have been delayed. This update guarantees you will receive a full export for every period, starting April 2019.

For more information about how cost data is processed, see the documentation “Understand Cost Management data.”

 

Extended date picker in cost analysis

You’ve told us that analyzing cost trends and investigating spending anomalies sometimes requires a broad set of date ranges. You may want to look at the current billing period to keep an eye on your next bill or maybe you need to look at the last 30 days in a monthly status meeting. Some teams are even looking at the last 7 days on a weekly or even daily basis to identify spending anomalies and react as quickly as possible. Not to mention the need for longer-term trend analysis and fiscal planning.

Based on all the great feedback you’ve shared around needing a rich set of one-click date options, cost analysis now offers an extended date picker with more options to make it easier than ever for you to get the data you need quickly.

We also noticed trends in how you navigate between periods. To simplify this, you can now quickly navigate backward and forward in time using the < PREVIOUS and NEXT > links at the top of the date picker. Try it yourself and let us know what you think.

An image of the date picker screen.

 

Share links to customized views

We’ve heard you loud and clear about how important it is to save and share customized views in cost analysis. You already know you can pin a customized view to the Azure portal dashboard, and you already know you can share dashboards with others. Now you can share a direct link to that same customized view. If somebody who doesn’t have access to the scope opens the link, they’ll get an access-denied message, but they can change the scope to keep the customizations and apply them to their own scope.

An image showing the ability to create share links for customized views.

You can also customize the scope to share a targeted URL. Here’s the format of the URL:

https://portal.azure.com#[@{domain}]/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis[/scope/{url-encoded-scope}]/view/{view-config}

The domain is optional. If you remove that, the user’s preferred domain will be used.

The scope is also optional. If you remove that, the user’s default scope will be the first billing account, management group, or subscription found. If you specify a custom scope, remember to URL-encode (e.g. “/” → “%2F”) the scope, otherwise cost analysis will not load correctly.

The view configuration is a gzipped, URL-encoded JSON object. As an example, here’s how you can decode a customized view:

  1. Copy URL from the portal:
    • https://portal.azure.com#@domain.onmicrosoft.com/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis/scope/%2Fsubscriptions%2F00000000-0000-0000-0000-000000000000/view/H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D
  2. Trim down to the view configuration after “/view/”:
    • H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D
  3. URL decode the view configuration:
    • H4sIAAAAAAAA/41QS0sDMRD+L3Peha4oam/SgnhQilYvpYchOxuDu8k6mVRL2f/upC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn/vjCRsJyHKQQnjDci6J/18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi+OSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc/VuFST+jZ/+j3+knDMUpziHivPMOI/6UOuM4QcE8nHtJAIAAA==
  4. Gzip decompress the decoded string to get the customized view (note that some tools may require Base64-decoding the URL-decoded string first; a short Python sketch after this list automates these steps):
    • {
        "version":"2019-04-01-preview",
        "queryVersion":"2019-04-01-preview",
        "metric":"ActualCost",
        "query":{
          "type":"Usage",
          "timeframe":"Custom",
          "timePeriod":{"from":"2019-04-18","to":"2019-05-17"},
          "dataset":{
            "granularity":"Daily",
            "aggregation":{"totalCost":{"name":"PreTaxCost","function":"Sum"}},
            "grouping":[{"type":"dimension","name":"ResourceGroupName"}],
            "filter":{"and":[]}
          }
        },
        "chart":"StackedColumn",
        "accumulated":false,
        "pivots":[
          {"type":"Dimension","name":"Meter"},
          {"type":"Dimension","name":"ResourceType"},
          {"type":"Dimension","name":"ResourceId"}
        ]
      }
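
If you would rather script those steps, the same decoding takes only a few lines of standard-library Python. The string passed in below is truncated; use the full segment copied after “/view/” in your own URL.

# Decode a cost analysis /view/{view-config} segment: URL decode, then Base64
# decode, then gzip decompress. Standard library only.
import base64
import gzip
import json
import urllib.parse

def decode_view_config(view_segment: str) -> dict:
    url_decoded = urllib.parse.unquote(view_segment)   # step 3: URL decode
    compressed = base64.b64decode(url_decoded)         # Base64 layer noted in step 4
    return json.loads(gzip.decompress(compressed))     # step 4: gzip decompress

# view = decode_view_config("H4sIAAAAAAAA%2F41QS0sDMRD...")  # truncated example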

Understanding how the view configuration works means you can:

  1. Link to cost analysis from your own apps
  2. Build out and automate the creation of custom dashboards via ARM deployment templates
  3. Copy the query property and use it to get the same data used to render the main chart (or table, if using the table view)

You’ll hear more about the view configuration soon, so keep an eye out.

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Want to keep an eye on all documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select “Edit” at the top of the doc and submit a quick pull request.

What’s next?

These are just a few of the big updates from the last month. We’re always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter for updates, tips, and tricks throughout the week!

The FA is creating a better future for football with Google Cloud

The Football Association of England (The FA) is the custodian of English football and the oldest football association in the world. Driven by new ways of thinking and a commitment to inclusivity, diversity, and innovation through technology, The FA has recently seen a resurgence of success on and off the pitch across the youth, men’s, and women’s games.

As part of this transformation journey, The FA has chosen Google Cloud as the official cloud and data analytics partner for the England Teams and St George’s Park, which is their centre of technology excellence.

Through this multi-year partnership, the first step in the transformation has been to put G Suite at the heart of everything, shifting from working in silos to fostering collaboration between coaches of all teams. The FA’s distributed support team shares information through G Suite applications and uses Hangouts as its number one communication tool.

“The first step in our transformation at St. George’s Park was to unify the way our coaches train and develop our 28 national teams to increase productivity,” says Craig Donald, CIO at The FA. “We needed the ability to collaborate and share across the coaches and team managers. G Suite allowed us to do that and was the first part of our Google Cloud partnership.”

Going forward, The FA wants to continue to transform and develop the game further by applying advanced analytics on Google Cloud to find meaningful insights from its data. “Google Cloud was our preferred cloud of choice as we embarked on our digital transformation journey to better support the national teams,” said Mark Bullingham, FA Chief Commercial & Football Development Officer and incoming CEO.

Over the coming years our partnership will focus on three areas:

  • Success: Enabling the England men’s and women’s senior teams to be ready to win in 2022 and 2023.
  • Diversity: Doubling female participation in the game.
  • Inclusivity: Making football more inclusive and open to all.

Using Google Cloud technology, we will work together to help The FA solve these big challenges using our smart analytics tools combined with our unique capabilities in machine learning and AI.

The FA has many terabytes of data stored in Google Cloud, which its analysis teams will use alongside Google Cloud products such as BigQuery to extract relevant information for the teams at St. George’s Park. The next step will be to expand The FA’s proprietary tool, the Player Profile System (PPS), built on Google Cloud, to measure the performance, fitness, training, and form of players at all levels.

Together, The FA and Google Cloud will work to supercharge the PPS to automate near real-time data analysis, allowing The FA to better compare and analyze team and player performance and make data-driven decisions. PPS will be further enhanced by Google Cloud smart analytics, data management solutions, and machine learning capabilities to analyze even more player data signals. “Smart analytics and data management plays a critical part for our PPS. Everything we do at St George’s Park for this workload is built on Google Cloud,” says Nick Sewell, FA Head of Application Development.

Dave Reddin, The FA’s Head of Team Strategy and Performance, further added: “We believe technology is a key area of potential competitive advantage for our 28 teams and everything we do at St George’s Park. We have progressively built a systematic approach to developing winning England teams and through the support of Google Cloud technology we wish to accelerate our ability to translate insight and learning into performance improvements.” 

The FA also looks to focus on the societal impact of football in the wider community, and we look forward to building on this, as well as helping drive success on the pitch, starting with the England senior men’s and women’s teams. As Baroness Sue Campbell, FA Director of Women’s Football, said: “The FA’s mission is to develop the game for all. I am looking forward to partnering with Google Cloud to see how technology can tackle some of these societal challenges.”

The FA has teamed up with Google Cloud as the official cloud provider of the England teams, kickstarting a revolutionary, data-focused transformation of English football on and off the pitch for years to come. We're excited about what lies ahead but right now, we're just looking forward to cheering on the Lionesses this summer.

Over the coming years, we are excited to support and tackle these challenges together. To find out more about this partnership, read here.