Service control policies in AWS Organizations enable fine-grained permission controls

Starting today, you can use Service Control Policies (SCPs) to set permission guardrails with the same fine-grained controls used in AWS Identity and Access Management (IAM) policies, making it easier to meet the specific requirements of your organization’s governance rules. The new policy editor in the AWS Organizations console also simplifies authoring SCPs by guiding you to add actions, resources, and conditions.
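
For teams that automate policy management, here is a minimal sketch (using the AWS SDK for Python; the policy name, approved region, and description are placeholders) of how actions, resources, and a condition combine into a fine-grained guardrail:

  # A minimal sketch of authoring a fine-grained SCP programmatically with boto3.
  # The policy name, approved region, and description are placeholders.
  import json

  import boto3

  # Deny EC2 actions outside an approved region: one statement combining an
  # action, a resource, and a condition into a single guardrail.
  scp_document = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "DenyEC2OutsideApprovedRegion",
              "Effect": "Deny",
              "Action": "ec2:*",
              "Resource": "*",
              "Condition": {
                  "StringNotEquals": {"aws:RequestedRegion": "us-east-1"}
              },
          }
      ],
  }

  client = boto3.client("organizations")
  response = client.create_policy(
      Name="deny-ec2-outside-us-east-1",      # placeholder policy name
      Description="Region guardrail for EC2",
      Type="SERVICE_CONTROL_POLICY",
      Content=json.dumps(scp_document),
  )
  print(response["Policy"]["PolicySummary"]["Id"])

Because SCPs act as guardrails rather than grants, a Deny statement like this constrains every account the policy is attached to, regardless of the IAM policies inside those accounts.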

Introducing Google Cloud security training and certification: skills that could save your company millions

Information security is a top priority for global businesses, large and small. Security breaches can cost organizations millions of dollars, cause irreparable brand damage, and result in lost customers. As data and apps move to the cloud, cloud security is increasingly crucial for organizational success.

According to the Breach Level Index, a global database of public data breaches, 3.3 billion data records were compromised worldwide in the first half of 2018, an increase of 72% compared to the same period in 2017. Further, the average cost of a data breach globally increased to $3.86 million, according to the Ponemon Institute’s 2018 Cost of a Data Breach Study.

While these statistics are eye-popping, concern about security in the cloud is no longer hampering cloud adoption for organizations. In fact, a 2017 global survey of more than 500 IT decision-makers found that three-quarters of respondents have become more confident in cloud security. Businesses are now more concerned about the shortage of talent with the right skills to manage cloud technology, make sure the right security controls are in place, manage cloud-based access, and ensure data protection.

The cost and stakes of a security breach are too high, and organizations are realizing they need a skilled team able to handle the ever-increasing workload. A recent CIO.com article, “The top 4 IT security hiring priorities and in-demand skills for 2019,” said that “when it comes to cloud security hiring, the most in-demand role for 2019 is the Cloud Security Engineer.”

To address these current and future needs, we recently launched the Security in GCP specialization, our latest on-demand training course. It introduces learners to Google’s approach to security in the cloud and how to deploy and manage the components of a secure GCP solution. With focus areas such as Cloud Identity, security keys, Cloud IAM, Google Virtual Private Cloud firewalls, Google Cloud load balancing, and many more, participants will learn about securing Google Cloud deployments as well as mitigating many of the vulnerabilities, attacks, and risks mentioned above in a GCP-based infrastructure. These include distributed denial-of-service (DDoS) attacks, phishing, and data exfiltration threats involving content classification and use.

While this new training is a broad study of security controls and techniques on GCP, it provides a good framework for a job role that is becoming more important to organizations: the Cloud Security Engineer. To offer organizations a way to benchmark and measure the proficiency of their team’s Google Cloud security skills, we’ve also recently completed the beta version of the Professional Cloud Security Engineer certification. This certification, which will be available to the public at Next ‘19, validates an individual’s aptitude for security best practices and industry security requirements while demonstrating an ability to design, develop, and manage a secure infrastructure that uses Google security technologies.

At a time when many businesses are more vulnerable to cyber attacks, developing cloud security skills with this new training and certification can bring you greater confidence that your data is in safe hands.

To learn more about this new training and certification, join our webinar on March 29 at 9:45am PST.

9 mustn’t-miss machine learning sessions at Next ‘19

From predicting appliance usage from raw power readings, to medical imaging, machine learning has made a profound impact on many industries. Our AI and machine learning sessions are amongst our most popular each year at Next, and this year we’re offering more than 30 on topics ranging from building a better customer service chatbot to automated visual inspection for manufacturing.

If you’re joining us at Next, here are nine AI and machine learning sessions you won’t want to miss.

1. Automating Visual Inspections in Energy and Manufacturing with AI
In this session, you can learn from two global companies that are aggressively shaping practical business solutions using machine vision. AES is a global power company that strives to build a future that runs on greener energy. To serve this mission, they are rigorously scaling the use of drones in their wind farm operations with Google’s AutoML Vision to automatically identify defects and improve the speed and reliability of inspections. Our second presenter joins us from LG CNS, a global subsidiary of LG Corporation and Korea’s largest IT service provider. LG’s Smart Factory initiative is building an autonomous factory to optimize productivity, quality, cost, and delivery. By using AutoML Vision on edge devices, they are detecting defects in various products during the manufacturing process with their visual inspection solution.

2. Building Game AI for Better User Experiences
Learn how DeNA, a mobile game studio, is integrating AI into its next-generation mobile games. This session will focus on how DeNA built its popular mobile game Gyakuten Othellonia on Google Cloud Platform (GCP) and how they’ve integrated AI-based assistance. DeNA will share how they designed, trained, and optimized models, and then explain how they built a scalable and robust backend system with Cloud ML Engine.

3. Cloud AI: Use Case Driven Technology (Spotlight)
More than ever, today’s enterprises are relying on AI to reach their customers more effectively, deliver the experiences they expect, increase efficiency and drive growth across their organizations. Join Andrew Moore and Rajen Sheth in a session with three of Google Cloud’s leading AI innovators, Unilever, Blackrock, and FOX Sports Australia, as they discuss how GCP and Cloud AI services, like the Vision API, Video Intelligence API, and Cloud Natural Language have made their products more intelligent, and how they can do the same for yours.

4. Fast and Lean Data Science With TPUs
Google’s Tensor Processing Units (TPUs) are revolutionizing the way data scientists work. Week-long training times are a thing of the past, and you can now train many models in minutes, right in a notebook. Agility and fast iterations are bringing neural networks into regular software development cycles and many developers are ramping up on machine learning. Machine learning expert Martin Görner will introduce TPUs, then dive deep into their microarchitecture secrets. He will also show you how to use them in your day-to-day projects to iterate faster. In fact, Martin will not just demo but train most of the models presented in this session on stage in real time, on TPUs.

5. Serverless and Open-Source Machine Learning at Sling Media
This session covers Sling’s incremental adoption strategy of Google Cloud’s serverless machine learning platforms that enable data scientists and engineers to build business-relevant models quickly. Sling will explain how they use deep learning techniques to better predict customer churn, develop a traditional pipeline to serve the model, and enhance the pipeline to be both serverless and scalable. Sling will share best practices and lessons learned deploying Beam, tf.transform, and TensorFlow on Cloud Dataflow and Cloud ML Engine.

6. Understanding the Earth: ML With Kubeflow Pipelines
Petabytes of satellite imagery contain valuable indicators of scientific and economic activity around the globe. In order to turn its geospatial data into conclusions, Descartes Labs has built a data processing and modeling platform for which all components run on Google Cloud. Descartes leverages tools including Kubeflow Pipelines as part of their model-building process to enable efficient experimentation, orchestrate complicated workflows, maximize repeatability and reuse, and deploy at scale. This session will explain how you can implement machine learning workflows in Kubeflow Pipelines, and cover some successes and challenges of using these tools in practice.

7. Virtual Assistants: Demystify and Deploy
In this session, you’ll learn how Discover built a customer service solution around Dialogflow. Discover’s data science team will explain how to execute on your customer service strategy, and how you can best configure your agent’s Dialogflow “model” before you deploy it to production.

8. Reinventing Retail with AI
Today’s retailers must have a deep understanding of each of their customers to earn and maintain their loyalty. In this session, Nordstrom and Disney explain how they’ve used AI to create engaging and highly personalized customer experiences. In addition, Google partner Pitney Bowes will discuss how they’re predicting credit card fraud for luxury retail brands. This session will discuss new Google products for the retail industry, as well as how they fit into a broader data-driven strategy for retailers.

9. GPU Infrastructure on GCP for ML and HPC Workloads
ML researchers want a GPU infrastructure they can get started with quickly, run consistently in production, and dynamically scale as needed. Learn about GCP’s various GPU offerings and features often used with ML. From there, we will discuss a real-world customer story of how they manage their GPU compute infrastructure on GCP. We’ll cover the new NVIDIA Tesla T4 and V100 GPUs, the Deep Learning VM Image for quickly getting started, preemptible GPUs for low cost, GPU integration with Kubernetes Engine (GKE), and more.

If you’re looking for something that’s not on our list, check out the full schedule. And, don’t forget to register for the sessions you plan to attend—seats are limited.

Future of cloud computing: 5 insights from new global research

Research shows that cloud computing will transform every aspect of business, from logistics to customer relationships to the way teams work together, and today’s organizations are preparing for this seismic shift. A new report from Google on the future of cloud computing combines an in-depth look at how the cloud is shaping the enterprise of tomorrow with actionable advice to help today’s leaders unlock its benefits. Along with insights from Google luminaries and leading companies, the report includes key findings from a research study that surveyed 1,100 business and IT decision-makers from around the world. Their responses shed light on the rapidly evolving technology landscape at a global level, as well as variations in cloud maturity and adoption trends across individual countries. Here are five themes that stood out to us from this brand-new research.

1. Cloud computing will move to the forefront of enterprise technology over the next decade, backed by strong executive support.

Globally, 47 percent of survey participants said that the majority of their companies’ IT infrastructures already use public or private cloud computing. When we asked about predictions for 2029, that number jumped 30 percentage points. C-suite respondents were especially confident that the cloud will reign supreme within a decade: More than half anticipate that it will meet at least three-quarters of their IT needs, while only 40 percent of their non-C-suite peers share that view. What’s the takeaway? The cloud already plays a key role in enterprise technology, but the next 10 years will see it move to the forefront—with plenty of executive support. Here’s how that data breaks down around the world.

Figure: Companies adopting cloud computing, today and by 2029.

2. The cloud is becoming a significant driver of revenue growth.

Cloud computing helps businesses focus on improving efficiency and fostering innovation, not simply maintaining systems and status quos. So it’s not surprising that 79 percent of survey respondents already consider the cloud an important driver of revenue growth, while 87 percent expect it to become one within a decade. C-suite respondents were just as likely as their non-C-suite peers to anticipate that the cloud will play an important role in driving revenue growth in 2029. This tells us that decision-makers across global organizations believe their future success will hinge on their ability to effectively apply cloud technology.

Figure: High expectations for the cloud as a driver of revenue growth.

3. Businesses are combining cloud capabilities with edge computing to analyze data at its source.

Over the next decade, the cloud will continue to evolve as part of a technology stack that increasingly includes IoT devices and edge computing, in which processing occurs at or near the data’s source. Thirty-three percent of global respondents said they use edge computing for a majority of their cloud operations, while 55 percent expect to do so by 2029. The United States lags behind in this area, with only 18 percent of survey participants currently using edge computing for a majority of their cloud operations, but that figure grew by a factor of 2.5 when respondents looked ahead to 2029. As more and more businesses extend the power and intelligence of the cloud to the edge, we can expect to see better real-time predictions, faster responses, and more seamless customer experiences.

Figure: The role of edge computing in cloud operations.

4. Tomorrow’s businesses will prioritize openness and interoperability.

In the best cases, cloud adoption is part of a larger transformation in which new tools and systems positively affect company culture. Our research suggests that businesses will continue to place more value on openness over the next decade. By 2029, 41 percent of global respondents expect to use open-source software (OSS) for a majority of their software platform, up 14 percentage points from today. Predicted OSS use was nearly identical between IT decision-makers and their business-oriented peers, implying that technology and business leaders alike recognize the value of interoperability, standardization, freedom from vendor lock-in, and continuous innovation.

Figure: Current and expected use of open source.

5. On their journey to the cloud, companies are using new techniques to balance speed and quality.

To stay competitive in today’s streaming world, businesses face growing pressure to innovate faster—and the cloud is helping them keep pace. Sixty percent of respondents said their companies will update code weekly or daily by 2029, while 37 percent said they’ve already adopted this approach. This tells us that over the next 10 years, we’ll see an uptick in the use of continuous integration and delivery techniques, resulting in more frequent releases and higher developer productivity.

Figure: More frequent code updates.

As organizations prepare for the future, they will need to balance the need for speed with maintaining high quality. Our research suggests that they’ll do so by addressing security early in the development process and assuming constant vulnerability so they’re never surprised. More than half of respondents said they already implement security pre-development, and 72 percent plan to do so by 2029.

Figure: Security shifting earlier in the development process.

Cloud-based enterprises will also rely on automation to maintain quality and security as their operations become faster and more continuous. Seventy percent of respondents expect a majority of their security operations to be automated by 2029, compared to 33 percent today.

Figure: The move toward automated security operations.

Our Future of Cloud Computing report contains even more insights from our original research, as well as a thorough analysis of the cloud’s impact on businesses and recommended steps for unlocking its full potential. You can download it here.

Migrating your traditional data warehouse platform to BigQuery: announcing the data warehouse migration offer

Today, we’re announcing a data warehousing migration offer for qualifying customers, one that makes it easier for them to move from traditional data warehouses such as Teradata, Netezza to BigQuery, our serverless enterprise data warehouse.

For decades, enterprises have relied on traditional on-premises data warehouses to collect and store their most valuable data. But these traditional data warehouses can be costly, inflexible, and difficult to maintain, and for many, they no longer meet today’s business needs. Enterprises need an easy, scalable way to store all that data, as well as to take advantage of advanced analytic tools that can help them find valuable insights. As a result, many are turning to cloud data warehousing solutions like BigQuery.

BigQuery is Google Cloud’s serverless, highly scalable, low-cost enterprise data warehouse designed to make all data analysts productive. There’s no infrastructure to manage, so you can focus on finding meaningful insights using familiar Standard SQL. Leading global enterprises like 20th Century Fox, Domino’s Pizza, Heathrow Airport, HSBC, Lloyds Bank UK, The New York Times, and many others rely on BigQuery for their data analysis needs, helping them do everything from break down data silos to jump-start their predictive analytics journey—all while greatly reducing costs.

Here’s a little more on the benefits of BigQuery in contrast to traditional on-premises data warehouses.

Figure: Benefits of BigQuery compared with traditional on-premises data warehouses.

Recently, independent analyst firm Enterprise Strategy Group (ESG) released a report examining the economic advantages of migrating enterprise data warehouse workloads to BigQuery. They developed a three-year total-cost-of-ownership (TCO) model that compared the expected costs and benefits of upgrading an on-premises data warehouse, migrating to a cloud-based solution provided by the same on-premises vendor, or redesigning and migrating data warehouse workloads to BigQuery. ESG found that an organization could potentially reduce its overall three-year costs by 52 percent versus the on-premises equivalent, and by 41 percent when compared to an AWS deployment.

Figure: BigQuery expected total cost of ownership.

You can read more about the above total cost of ownership (TCO) analyses in ESG’s blog post.

How to begin your journey to a modern data warehouse

While many businesses understand the value of modernizing, not all know where to start. A typical data warehouse migration requires three distinct steps:

  1. Data migration: the transfer of the actual data contents of the data warehouse from the source to the destination system (see the sketch after this list).
  2. Schema migration: the transfer of metadata definitions and topologies.
  3. Workload migration: the transfer of workloads that include ETL pipelines, processing jobs, stored procedures, reports, and dashboards.
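
As an illustration of the first step, the sketch below uses the BigQuery Python client library to load files exported from a source warehouse into a BigQuery table via Cloud Storage; the project, bucket, dataset, and table names are placeholders.

  # Minimal sketch of step 1 (data migration): loading files exported from a
  # traditional warehouse into BigQuery via Cloud Storage.
  # The project, bucket, dataset, and table names are placeholders.
  from google.cloud import bigquery

  client = bigquery.Client(project="my-project")

  job_config = bigquery.LoadJobConfig(
      source_format=bigquery.SourceFormat.CSV,
      skip_leading_rows=1,   # skip the exported header row
      autodetect=True,       # infer the schema; supply an explicit schema in practice
      write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
  )

  load_job = client.load_table_from_uri(
      "gs://my-export-bucket/orders/*.csv",   # placeholder export location
      "my-project.analytics.orders",          # placeholder destination table
      job_config=job_config,
  )
  load_job.result()  # wait for the load job to finish

  table = client.get_table("my-project.analytics.orders")
  print(f"Loaded {table.num_rows} rows.")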

Today, we’re also pleased to announce the launch of BigQuery’s data warehouse migration utility. Based on the existing migration experience, we have built this data warehouse migration service to automate migrating data and schema to BigQuery, and significantly reduce the migration time.

How to get started with our data warehousing migration offer

Our data warehousing migration offer and tooling equip you with architecture and design guidance from Google Cloud engineers, proof-of-concept funding, free training, and usage credits to help speed up your modernization process.

Here’s how it works: 

Step 1: Planning consultation

You’ll receive expert advice, examples, and proof-of-concept funding support from Google Cloud, and you’ll work with our professional services or a specialized data analytics partner on your proof of concept.

Step 2: Complimentary training

You’ll get free training from Qwiklabs, Coursera, or Google Cloud-hosted classroom courses to deepen your understanding of BigQuery and related GCP services. 

Step 3: Expert design guidance

Google Cloud engineers will provide you with architecture design guidance, through personalized deep-dive workshops at no additional cost.

Step 4: Migration support

Google Cloud’s professional services organization—along with our partners—have helped enterprises all over the world migrate their traditional data warehouses to BigQuery. And as part of this offer, qualified customers may also be eligible to receive partner funding support to offset the migration and BigQuery implementation costs.

Interested in learning more? Contact us.

New updates to Azure AI expand AI capabilities for developers

As companies increasingly look to transform their businesses with AI, we continue to improve Azure AI to make it easy for developers and data scientists to deploy, manage, and secure AI capabilities directly within their applications, with a focus on the following solution areas:

  1. Leveraging machine learning to build and train predictive models that improve business productivity with Azure Machine Learning.
  2. Applying an AI-powered search experience and indexing technologies that quickly find information and glean insights with Azure Search.
  3. Building applications that integrate pre-built and custom AI capabilities like vision, speech, language, search, and knowledge to deliver more engaging and personalized experiences with our Azure Cognitive Services and Azure Bot Service.

Today, we’re pleased to share several updates to Azure Cognitive Services that continue to make Azure the best place to build AI. We’re introducing a preview of the new Anomaly Detector Service which uses AI to identify problems so companies can minimize loss and customer impact. We are also announcing the general availability of Custom Vision to more accurately identify objects in images. 

From speech recognition, translation, and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have already discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Anomaly detection as an AI service

Anomaly Detector is a new Cognitive Service that lets you detect unusual patterns or rare events in your data that could translate to identifying problems like credit card fraud.

Today, over 200 teams across Azure and other core Microsoft products rely on Anomaly Detector to boost the reliability of their systems by detecting irregularities in real time and accelerating troubleshooting. Through a single API, developers can easily embed anomaly detection capabilities into their applications to ensure high data accuracy, and automatically surface incidents as soon as they happen.
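
As a rough illustration of what that single API looks like, the sketch below sends a small time series to the batch detection endpoint with Python’s requests library. The endpoint URL, key, and data are placeholders, and the exact URL path is an assumption that depends on how your resource is provisioned, so confirm it against the service documentation.

  # Rough sketch of calling the Anomaly Detector batch detection endpoint.
  # The endpoint URL, subscription key, and time series are placeholders, and
  # the URL path is an assumption -- confirm it for your own resource.
  import requests

  ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
  API_URL = f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect"
  SUBSCRIPTION_KEY = "<your-key>"

  # A small daily series with one obvious spike (batch detection typically
  # requires at least 12 points).
  series = [
      {"timestamp": f"2019-03-{day:02d}T00:00:00Z", "value": value}
      for day, value in enumerate(
          [32.5, 31.9, 33.1, 32.8, 31.5, 33.4, 32.2, 31.8, 95.0, 32.6, 33.0, 32.1],
          start=1,
      )
  ]

  response = requests.post(
      API_URL,
      headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
      json={"granularity": "daily", "series": series},
  )
  response.raise_for_status()
  result = response.json()

  # The response flags each point in the series as anomalous or not.
  for point, flagged in zip(series, result.get("isAnomaly", [])):
      if flagged:
          print(f"Anomaly at {point['timestamp']}: {point['value']}")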

Common use case scenarios include identifying business incidents and text errors, monitoring IoT device traffic, detecting fraud, responding to changing markets, and more. For instance, content providers can use Anomaly Detector to automatically scan video performance data specific to a customer’s KPIs, helping to identify problems in an instant. Alternatively, video streaming platforms can apply Anomaly Detector across millions of video data sets to track metrics. A missed second in video performance can translate to significant revenue loss for content providers that monetize on their platform.

Custom Vision: automated machine learning for images

With the general availability of Custom Vision, organizations can also transform their business operations by quickly and accurately identifying objects in images.

Powered by machine learning, Custom Vision makes it easy and fast for developers to build, deploy, and improve custom image classifiers to quickly recognize content in imagery. Developers can train their own classifier to recognize what matters most in their scenarios, or export these custom classifiers to run offline and in real time on iOS (in CoreML), Android (in TensorFlow), and many other edge devices. The exported models are optimized for the constraints of a mobile device, providing incredible throughput while still maintaining high accuracy.

Today, Custom Vision can be used for a variety of business scenarios. Minsur, the largest tin mine in the western hemisphere, located in Peru, applies Custom Vision to detect treatment foam levels, helping to create a sustainable mining practice by ensuring that water used in the mineral extraction process is properly treated for reuse in agriculture and livestock. They used a combination of Cognitive Services Custom Vision and Azure video analytics to replace a highly manual process so that employees can focus on more strategic projects within the operation.

Screenshot of the Custom Vision platform, where you can train the model to detect unique objects in an image, such as your brand’s logo.

Starting today, Custom Vision delivers the following improvements:

  • High quality models – Custom Vision features advanced training with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget and Custom Vision will experimentally identify the best training and augmentation settings.
  • Iterate with ease – Custom Vision makes it simple for developers to integrate computer vision capabilities into applications with 3.0 REST APIs and SDKs (see the sketch after this list). The end-to-end pipeline is designed to support the iterative improvement of models, so you can quickly train a model, prototype in real-world conditions, and use the resulting data to improve the model, getting models to production quality faster.
  • Train in the cloud, run anywhere – The exported models are optimized for the constraints of a mobile device, providing incredible throughput while still maintaining high accuracy. Now, you can also export classifiers for the ARM architecture, for use on Raspberry Pi 3 and the Vision AI Dev Kit.
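
To make the 3.0 REST API mentioned above concrete, here is a minimal sketch of calling a published classifier’s prediction endpoint from Python. The endpoint, project ID, published iteration name, prediction key, and image path are all placeholders, and the URL shape should be confirmed in the Custom Vision portal for your own project.

  # Rough sketch of calling a published Custom Vision classifier through the
  # 3.0 prediction REST API. The endpoint, project ID, published iteration
  # name, prediction key, and image path are placeholders -- confirm the exact
  # prediction URL in the Custom Vision portal for your project.
  import requests

  ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
  PROJECT_ID = "<project-guid>"
  PUBLISHED_ITERATION = "<published-iteration-name>"
  PREDICTION_KEY = "<your-prediction-key>"

  url = (
      f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
      f"/classify/iterations/{PUBLISHED_ITERATION}/image"
  )

  with open("sample.jpg", "rb") as image_file:
      response = requests.post(
          url,
          headers={
              "Prediction-Key": PREDICTION_KEY,
              "Content-Type": "application/octet-stream",
          },
          data=image_file.read(),
      )
  response.raise_for_status()

  # Each prediction carries a tag name and a probability score.
  for prediction in response.json().get("predictions", []):
      print(f"{prediction['tagName']}: {prediction['probability']:.2%}")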

For more information, visit the Custom Vision Service Release Notes.

Get started today

Today’s milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development, and industry leading security and compliance for protecting customers’ data.

To get started building vision and search intelligent apps, please visit the Cognitive Services site.

Azure Data Box family meets customers at the edge

Today I am pleased to announce the general availability of Azure Data Box Edge and the Azure Data Box Gateway. You can get these products today in the Azure portal.

Compute at the edge

We’ve heard your need to bring Azure compute power closer to you – a trend increasingly referred to as edge computing. Data Box Edge answers that call and is an on-premises anchor point for Azure. Data Box Edge can be racked alongside your existing enterprise hardware or live in non-traditional environments, from factory floors to retail aisles. With Data Box Edge, there’s no hardware to buy; you sign up and pay as you go, just like any other Azure service, and the hardware is included.

This 1U rack-mountable appliance from Microsoft brings you the following:

  • Local Compute – Run containerized applications at your location. Use these to interact with your local systems or to pre-process your data before it transfers to Azure.
  • Network Storage Gateway – Automatically transfer data between the local appliance and your Azure Storage account. Data Box Edge caches the hottest data locally and speaks file and object protocols to your on-premises applications.
  • Azure Machine Learning utilizing an Intel Arria 10 FPGA – Use the on-board Field Programmable Gate Array (FPGA) to accelerate inferencing of your data, then transfer it to the cloud to re-train and improve your models. Learn more about the Azure Machine Learning announcement.
  • Cloud managed – Easily order your device and manage these capabilities for your fleet from the cloud using the Azure Portal.

Since we announced the preview at Ignite 2018 just a few months ago, it has been amazing to see how customers across different industries are using Data Box Edge to unlock innovative scenarios:

Kroger

Sunrise Technology, a wholly owned division of The Kroger Co., plans to use Data Box Edge to enhance the Retail as a Service (RaaS) platform for Kroger and the retail industry to enable the features announced at NRF 2019: Retail’s Big Show, including personalized, never-before-seen shopping experiences like at-shelf product recommendations, guided shopping and more. The live video analytics on Data Box Edge can help store employees identify and address out-of-stocks quickly and enhance their productivity. Such smart experiences will help retailers provide their customers with more personalized, rewarding experiences.

Esri

Esri, a leader in location intelligence, is exploring how Data Box Edge can help those responding to disasters in disconnected environments. Data Box Edge will allow teams in the field to collect imagery captured from the air or ground and turn it into actionable information that provides updated maps. The teams in the field can use updated maps to coordinate response efforts even when completely disconnected from the command center. This is critical in improving the response effectiveness in situations like wildfires and hurricanes.

Data Box Gateway – Hardware not required

Data Box Edge comes with a built-in storage gateway. If you don’t need the Data Box Edge hardware or edge compute, then the Data Box Gateway is also available as a standalone virtual appliance that can be deployed anywhere within your infrastructure.

You can provision it in your hypervisor, using either Hyper-V or VMware, and manage it through the Azure Portal. Server Message Block (SMB) or Network File System (NFS) shares will be set up on your local network. Data landing on these shares will automatically upload to your Azure Storage account, supporting Block Blob, Page Blob, or Azure Files. We’ll handle the network retries and optimize network bandwidth for you. Multiple network interfaces mean the appliance can either sit on your local network or in a DMZ, giving your systems access to Azure Storage without having to open network connections to Azure.

Whether you use the storage gateway inside of Data Box Edge or deploy the Data Box Gateway virtual appliance, the storage gateway capabilities are the same.

More solutions from the Data Box family

In addition to Data Box Edge and Data Box Gateway, we also offer three sizes of Data Box for offline data transfer:

  • Data Box – a ruggedized 100 TB transport appliance
  • Data Box Disk – a smaller, more nimble transport option with individual 8 TB disks and up to 40 TB per order
  • Data Box Heavy Preview – a bigger version of Data Box that can scale to 1 PB.

All Data Box offline transport products are available to order through the Azure Portal. We ship them to you, you fill them up, and you ship them back to our data center for upload and processing. To make Data Box useful for even more customers, we’re enabling partners to write directly to Data Box with little required change to their software via our new REST API feature, Blob Storage on Data Box, which has just reached general availability!
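
Because the feature exposes the familiar blob REST API, existing blob SDK code can be pointed at the device with minimal change. The sketch below, using the azure-storage-blob Python package, is illustrative only: the endpoint format, account name, key, and container name are placeholders standing in for the values your device’s local configuration would supply.

  # Illustrative sketch of writing to Blob Storage on Data Box with the
  # azure-storage-blob package. The endpoint format, account name, key, and
  # container name are placeholders -- the real values come from your device's
  # local configuration.
  from azure.storage.blob import BlobServiceClient

  DATA_BOX_BLOB_ENDPOINT = "https://<storage-account>.blob.<device-serial>.microsoftdatabox.com"
  ACCOUNT_KEY = "<account-key-from-the-device>"

  service = BlobServiceClient(account_url=DATA_BOX_BLOB_ENDPOINT, credential=ACCOUNT_KEY)
  container = service.get_container_client("migration-data")  # placeholder container

  # Upload a local file exactly as you would against Azure Blob Storage.
  with open("archive-2019-03.tar", "rb") as data:
      container.upload_blob(name="archive-2019-03.tar", data=data, overwrite=True)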

Get started

Thank you for partnering with us on our journey to bring Azure to the edge. We are excited to see how you use these new products to harness the power of edge computing for your business. Here’s how you can get started:

  • Order Data Box Edge or the Data Box Gateway today via the Azure portal.
  • Review server hardware specs on the Data Box Edge datasheet.
  • Learn more about our family of Azure Data Box products.

12 tantalizing sessions for data management at Next ’19

Data is the backbone of many an enterprise, and when cloud is in the picture it becomes especially important to store, manage and use all that data effectively. At Next ‘19, you’ll find plenty of sessions that can help you understand ways to manage your Google Cloud data, and tips to store and manage it efficiently. For an excellent primer on Google Cloud Platform (GCP) data storage, sign yourself up for this spotlight session for the basics and demos. Here are some other sessions to check out:

1. Tools for Migrating Your Databases to Google Cloud

You can choose different ways to migrate your database to the cloud, whether lift-and-shift to use fully managed GCP or a total rebuild to move onto cloud-native databases. This session will explain best practices for database migration and tools to make it easier.

2. Migrate Enterprise Workloads to Google Cloud Platform

There is a whole range of essential enterprise workloads you can move to the cloud, and in this session you’ll learn specifically about Accenture Managed Services for GCP, which makes it easy for you to run Oracle databases and software on GCP.

3. Migrating Oracle Databases to Cloud SQL PostgreSQL

Get the details in this session on migrating on-prem Oracle databases to Cloud SQL PostgreSQL. You’ll get a look at all the basics, from assessing your source database and doing schema conversion to data replication and performance tuning.

4. Moving from Cassandra to Auto-Scaling Bigtable at Spotify

This migration story illustrates the real-world considerations that Spotify used to decide between Cassandra and Cloud Bigtable, and how they migrated workloads and built an auto-scaler for Cloud Bigtable.

5. Optimizing Performance on Cloud SQL for PostgreSQL

In this session, you’ll hear about the database performance tuning we’ve done recently to considerably improve Cloud SQL for PostgreSQL. We’ll also highlight Cloud SQL’s use of Google’s Regional Persistent Disk storage layer. You’ll learn about PostgreSQL performance tuning and how to let Cloud SQL handle mundane, yet necessary, tasks.

6. Spanner Internals Part 1: What Makes Spanner Tick?

Dive into Cloud Spanner with Google Engineering Fellow Andrew Fikes. You’ll learn about the evolution of Cloud Spanner and what that means for the next generation of databases, and get technical details about how Cloud Spanner ensures strong consistency.

7. Thinking Through Your Move to Cloud Spanner

Find out how to use Cloud Spanner to its full potential in this session, which will include best practices, optimization strategies and ways to improve performance and scalability. You’ll see live demos of how Cloud Spanner can speed up transactions and queries, and ways to monitor its performance.

8. Technical Deep Dive Into Storage for High-Performance Computing

High-performance computing (HPC) storage in the cloud is still an emerging area, particularly because complexity, price and performance have caused concern. This session will look at companies that are using HPC storage in the cloud across multiple industries. You’ll also see how HPC storage uses GCP tools like Compute Engine VMs and Persistent Disk.

9. Driving a Real-Time Personalization Engine With Cloud Bigtable

See how one company, Segment, built its own Lambda architecture for customer data using Cloud Bigtable to handle fast random reads and BigQuery to process large analytics datasets. Segment’s CTO will also describe the decision-making process around choosing these GCP products vs. competing options, and their current setup, with tens of terabytes stored in multiple systems at very low latency.

10. Building a Global Data Presence

Come take a look at how Cloud Bigtable’s new multi-regional replication works using Google’s SD-WAN. This new feature makes it possible for a single instance of data, up to petabytes in size, to be accessed within or across five different continents in up to four regions. Your users can access data globally with low latency, and get a fast disaster recovery option for essential data.

11. Worried About Application Performance? Cache It!

In-memory caching can help speed up application performance, but it brings challenges too. Take a closer look in this session to learn about cache sizing, API considerations and latency troubleshooting.

12. How Twitter Is Migrating 300 PB of Hadoop Data to GCP

This detailed look at Twitter’s complex Hadoop migration will cover their use of the Cloud Storage Connector and open-source tools. You’ll hear from Twitter engineers on how they planned and managed the migration to GCP and how they solved some of their unique data management challenges.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there.

Clean up files by built-in delete activity in Azure Data Factory

Azure Data Factory (ADF) is a fully managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. As part of the data integration process, you will need to periodically clean up files from on-premises or cloud storage servers when the files become out of date. For example, you may have a staging area or landing zone, an intermediate storage area used for data processing during your ETL process. The data staging area sits between the data source stores and the data destination store. Given that data in the staging area is transient by nature, you need to periodically clean it up after the ETL process has been completed.

We are excited to share the ADF built-in Delete activity, which can be part of your ETL workflow to delete undesired files without writing code. You can use ADF to delete folders or files from Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, File System, FTP Server, SFTP Server, and Amazon S3.

To get started, you can find the Delete activity under the “Move & Transform” section of the ADF UI.

1. You can choose either to delete files or to delete an entire folder. The deleted file and folder names can be logged in a CSV file.

2. The file or folder name to be deleted can be parameterized, so that you have the flexibility to control the behavior of the Delete activity in your data integration flow.

3. You can delete only expired files rather than deleting all the files in a folder. For example, you may want to delete only the files that were last modified more than 30 days ago (see the sketch after this list).

4. You can start from the ADF template gallery to quickly deploy common use cases involving the Delete activity.
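
Although the authoring UI generates the activity definition for you, the sketch below (a Python dict mirroring the pipeline JSON) shows roughly how the pieces fit together. The dataset and linked-service names are placeholders, and the placement of the modified-date filter is an assumption, so copy the exact JSON from the authoring UI for a real pipeline.

  # Rough sketch of a Delete activity definition, expressed as a Python dict
  # that mirrors the pipeline JSON. Dataset and linked-service names are
  # placeholders, and the modified-date filter placement is an assumption --
  # copy the exact JSON from the ADF authoring UI for a real pipeline.
  delete_activity = {
      "name": "CleanUpStagingArea",
      "type": "Delete",
      "typeProperties": {
          "dataset": {
              "referenceName": "StagingFolderDataset",  # placeholder dataset
              "type": "DatasetReference",
          },
          "recursive": True,          # delete the contents of the folder
          "enableLogging": True,      # log deleted files and folders
          "logStorageSettings": {
              "linkedServiceName": {
                  "referenceName": "LoggingStorage",    # placeholder linked service
                  "type": "LinkedServiceReference",
              },
              "path": "delete-activity-logs",
          },
          # Assumed placement of the age filter: delete only files last
          # modified more than 30 days ago.
          "storeSettings": {
              "modifiedDatetimeEnd": "@adddays(utcnow(), -30)",
          },
      },
  }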

You are encouraged to give these additions a try and provide us with feedback. We hope you find them helpful in your scenarios. Please post your questions on Azure Data Factory forum or share your thoughts with us on Data Factory feedback site.

Why IoT is not a technology solution—it’s a business play

Enterprise leaders understand how important the Internet of Things (IoT) will be to their companies—in fact, according to a report by McKinsey & Company, 92 percent of them believe IoT will help them innovate products and improve operations by 2020. However, like many business-enabling systems, IoT is not without its growing pains. Even early adopters have concerns about the cost, complexity, and security implications of applying IoT to their businesses.

These growing pains can make it daunting for organizations to pick an entry point for applying IoT to their business.

Many companies start by shifting their thinking. It’s easy to get lost in technology, opting for platforms with the newest bells and whistles, and then leaning on those capabilities to drive projects. But sustainable change doesn’t happen that way—it happens when you consider business needs first, and then look at how the technology can fulfill those needs better than current processes can.

You might find it helpful to start by thinking about how IoT can transform your business. You know connected products will be an important part of your business model, but before you start building them, you need to make sure you understand where the market is headed so you can align innovation with emerging needs. After all, the biggest wins come when you can use emerging technology to create a “platform” undergirding products and services that can be extended into new opportunity areas.

To help you plan your IoT journey, we’re rolling out a four-part blog series. In the upcoming posts, we’ll cover how to create an IoT business case, overcome capability gaps, and simplify execution; all advice to help you maximize your gains with IoT.

Let’s get started by exploring the mindset it takes to build IoT into your business model.

Make sure business sponsors are invested

With any business-enabling system, organizations instinctively engage in focused exploration before they leap in. The business and IT can work together to develop a business case, identifying and prioritizing areas that can be optimized and provide real value. Involving the right business decision-makers will ensure you have sponsorship, budget, and commitment when it comes to applying IoT to new processes and systems, and that you can make the necessary course corrections as the implementation grows and scales. Put your business leaders at the center of the discussion and keep them there.

Seize early-mover advantage

Organizations that are early in and commit to developing mastery in game-changing technologies may only see incremental gains in the beginning. But their leadership position often becomes propulsive, eventually creating platform business advantages that allow them to outdistance competitors for the long term. Don’t believe it? Just look at the history of business process automation, operational optimization, ecommerce, cloud services, digital business, and other tech-fueled trends to see how this has played out.

Consider manufacturing. The industry was a leader in operational optimization, using Six Sigma and other methodologies to strip cost and waste out of processes. After years of these efforts, process improvements became a game of inches with ever-smaller benefits.

Enter IoT. Companies have used IoT to achieve significant gains with improving production and streamlining operations in both discrete and process manufacturing. IoT can help companies predict changing market demand, aligning output to real needs. In addition, many manufacturing companies use IoT to help drive throughput of costly production equipment. Sensors and advanced analytics predict when equipment needs preventive maintenance, eliminating costly downtime.

How companies are using IoT today

Johnson Controls makes building-automation solutions, enabling customers to fine-tune energy use, lowering costs and achieving sustainability goals. The company built out its connected platform in the cloud with the Microsoft Azure IoT Suite to be able to aggregate, manage, and make sense of the torrent of facility data it receives. Through its Smart Connected Chillers initiative, Johnson Controls was able to identify a problem with a customer’s chiller plant, take corrective action, and prevent unplanned downtime that would have cost the customer $300,000 an hour.

IoT can also enable new business models. Adobe used to run its business with one-time software sales, but the company pivoted to online subscription sales as part of its drive to create a digital business, according to Forbes. While the move seemed risky at the time (and initially hurt revenue), Adobe’s prescient move has enabled it to dominate digital marketing, creative services, and document management. Now, of course, the software industry is predominantly Software as a Service (SaaS). Adobe is building on its success by pushing even deeper into analytics. The Adobe Experience Cloud uses Microsoft Azure and Microsoft Dynamics 365 to provide businesses with a 360-degree customer view and the tools to create, deliver, and manage digital experiences on a globally scalable cloud platform.

When connections with customers are constant and always adding value, everyone wins.

Think about IoT as a business enabler

It’s helpful to constantly stress to your team that IoT is a new way to enable a business strategically. IoT isn’t a bolt-on series of technology applications to incrementally optimize what you have. That may seem at odds with the prevailing wisdom to optimize current services. Yes, organizations should seek efficiencies, but only after they have considered where the business is headed, how things need to change to support targeted growth, and whether current processes can be improved or need to be totally transformed. In the case of Johnson Controls, service optimization is a core part of the company’s value proposition to its customers.

Yes, IoT can reinvent your business. And yes, constant, restless innovation can be expected until IoT is fully embedded in your business in a way that’s organic and self-sustaining. Adobe has used its digital platform to extend further and further into its customers’ businesses, providing value they can measure.

If you haven’t started with IoT, now is the right time, because your competitors are grappling with these very issues and creating smart strategies to play the IoT long game. Disruption is here and will only get more pronounced as platform leaders rocket ahead.

As you plan your journey with IoT, there’s help. In forthcoming blogs, we’ll be looking at:

  • Building a business case for IoT—Why this is the moment to be thinking about IoT and developing a solid business case to capture market opportunity and future-proof your organization.
  • Paving the way for IoT—Understanding and addressing what gaps need to be overcome to achieve IoT’s promise.
  • Becoming an IoT leader now—It’s simpler than you think to start with IoT. You can use what you have and SaaS-ify your approach with technology that makes it easy to connect, monitor, and manage your IoT assets at scale.

Need inspiration? Watch this short video to hear insights from IoT leaders Henrik Fløe of Grundfos, Doug Weber from Rockwell Automation, Michael MacKenzie from Schneider Electric, and Alasdair Monk of The Weir Group.