Women Techmakers Summit Europe: Supporting Diversity & Inclusion in Tech

Posted By Franziska Hauck and Katharina Lindenthal, Google Developer Relations Europe

Once a year, we invite community organizers and influencers from developer groups that support diversity and inclusion in their local tech ecosystem to the Women Techmakers Summit Europe. The Women Techmakers Summit is designed to provide training opportunities, share best practices, showcase success stories and build meaningful relationships. The fourth edition of the WTM Summit in Europe took place in Warsaw, home to one of Europe’s most innovative tech and startup ecosystems.

Such positive energy! All 120 attendees of the WTM Summit Europe 2019

Expertise from the Community for the Community

The Women Techmakers Summit hosted 120 people, women and men who lead tech communities across Europe. With more than half of the sessions delivered by community influencers, the group came together to share best practices, learn from each other and discuss all things related to diversity & inclusion. As one attendee put it: “A fantastic opportunity to meet other community organizers across Europe and learn from each other.”

We also invited role models to draw inspiration and motivation from. Head of Google for Startups, Agnieszka Hryniewicz-Bieniek, and Cloud Engineer, Ewa Maciaś, demonstrated that stepping out of our comfort zone is something we should do more often. No one has the right answers from the start, but by trying out new ways, we can carve our individual paths. Fear of failure is real. It should not keep us from experimenting, though.

Google’s Natalie Villalobos, head of the Women Techmakers program, and Emma Haruka Iwao, record breaker for calculating the most accurate value of Pi with Google Cloud, gave a glimpse into their personal stories. Their insight? Sometimes we need to go through hard times. They equipped us with the right mindset to push through, become our own bosses and succeed.

This left the attendees with the right motivation to get back to their communities: “This was my first WTM Summit, and it was an incredible experience. I met some amazing ladies and role models, and will be happy to share the inspiration I got with my local community.”

Googler Emma Haruka Iwao sharing her journey to break the world record for calculating the most accurate value of Pi

Building the Basis for Diversity and Inclusion

“Being at the WTM Summit felt like being inside a family. I felt really included, like at no conference before.” To make everyone feel welcome, a code of conduct was visible to all attendees, and prayer and parent spaces were provided. The Summit itself needed to become the inspiration for community organizers and influencers to carry the learnings back to their communities.

Organizers working together to develop best practices to foster diversity and inclusion in their tech communities

Women Techmakers: Changing the Narrative

One of the core elements of Women Techmakers is creating and providing community for women in tech. Women Techmakers Ambassadors drive diversity and inclusion initiatives in their local tech communities to help bring more women into the industry. In Europe, more than 150 WTM Ambassadors from 25 countries support their local tech communities to close the gap between the number of women and men in the industry. Meetup organizers and community advocates who want to achieve parity can join the Women Techmakers program. As members, they are given the tools and opportunities to change the narrative.

If you are interested in joining the WTM Ambassadors Program, reach out to [email protected]

Admin Insider: What’s new in Chrome Enterprise, Release 76

Updates in Chrome 76 are focused on increasing security for Chrome Browser and Chrome devices in your organization. For the full list of what’s new, and more detailed descriptions, be sure to read the release notes.

Flash is now blocked by default
Last year, Adobe announced that it will stop updating and distributing Flash Player at the end of 2020. As part of our commitment to security, and our transition plan, Adobe Flash is blocked by default starting with Chrome 76. Administrators can manually switch back to ASK (“ask first before running Flash”) so that users are prompted before Flash runs. This change won’t impact existing policy settings for Flash. You can still control Flash behavior using DefaultPluginsSetting, PluginsAllowedForUrls, and PluginsBlockedForUrls.
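
As a rough illustration, on a Linux fleet managed with JSON policy files, switching back to ask-before-running might look like the sketch below. This is a minimal sketch: the three policy names are the real policies named above, but the intranet URL is a hypothetical placeholder and the values should be checked against the policy documentation for your fleet.

```python
# A minimal sketch: write a managed Chrome policy file that restores the
# "ask before running Flash" behavior. Requires root; the intranet URL is
# a hypothetical placeholder.
import json
import pathlib

policy = {
    "DefaultPluginsSetting": 3,  # 3 = "click to play" (ask before running)
    "PluginsAllowedForUrls": ["https://intranet.example.com"],
    "PluginsBlockedForUrls": ["https://*"],
}

# Standard directory Chrome reads managed policies from on Linux.
policy_dir = pathlib.Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "flash.json").write_text(json.dumps(policy, indent=2))
```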

For more information on the Flash transition plan, see the Flash Roadmap. Enterprises using Flash applications today should be looking for alternatives to those applications, as Flash will be removed from Chrome in late 2020. 

Privately-hosted extensions should now be packaged with CRX3 for added security
We know that some enterprises prefer to privately host (self-host) internally developed or third-party extensions outside of the Chrome Web Store for many business reasons, the most common being compliance.

If your self-hosted extensions are still packaged in the CRX2 format, they will stop updating in Chrome 76, and new installations of those extensions will fail. Privately hosted extensions that were packaged using a custom script or a version of Chrome prior to Chrome 64 must be repackaged in CRX3.

As we’ve been discussing since Chrome 68, we are moving from CRX2 to CRX3. CRX2 uses SHA-1 to secure updates to an extension or app; because breaking SHA-1 is technically possible, an attacker could intercept an extension update and inject arbitrary code into it. CRX3 uses a stronger algorithm, avoiding these risks and helping to protect against such attacks.
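
Repackaging with a current version of Chrome produces CRX3. A minimal sketch of that step is below; the paths are hypothetical, and reusing your existing .pem key preserves the extension ID so updates keep working.

```python
# Repack an unpacked extension directory as CRX3 by invoking Chrome's
# built-in packer (Chrome 64+ emits CRX3). Paths are hypothetical.
import subprocess

subprocess.run(
    [
        "google-chrome",
        "--pack-extension=/path/to/my_extension",          # unpacked source dir
        "--pack-extension-key=/path/to/my_extension.pem",  # existing private key
    ],
    check=True,
)
# Result: /path/to/my_extension.crx, packaged in the CRX3 format.
```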

It’s now even easier to discover Chrome Enterprise policies
As part of our ongoing efforts to make discovering and setting Chrome Enterprise policies even easier, we have created a new site that details our Chrome Enterprise policies. The new site lets you filter by platform and Chrome version, making it faster and easier to see which policies are available for your fleet.


Built-in FIDO security key is now supported
Starting with version 76, all latest-generation Chromebooks (produced from 2018 onward) gain support for built-in FIDO security keys backed by the Titan M chip. For supported services, end users can now use the power button on these devices for second-factor authentication. This feature is disabled by default; administrators can enable it by changing the DeviceSecondFactorAuthentication policy in the Admin console.

To stay in the know, bookmark and visit our Help Center, or sign up to receive new release details as they become available.

Google Cloud Partner Specializations expand with more than 90 new partners

At Google Cloud, partnering is a key part of how we serve our customers, so we are committed to building an ecosystem of innovation and services that can drive digital transformation for customers.

To that end, we are extremely excited about the launch of our new Google Cloud Partner Advantage Program. One thing that really stands out about our partner ecosystem is how incredibly passionate and engaged our partners are with our products and solutions, and the levels of technical and differentiated business know-how that they are building. With the launch of Partner Advantage, we are growing our support for these partners and continuing to expand our Partner Specialization initiative.

Specializations allow our partners to differentiate their business in the market and to build knowledge and experience in specific, high-priority product and solution areas, like application development, machine learning or cloud migration. At Next ‘19, we announced three additional Specialization areas: Marketing Analytics, IoT and Security Training, and acknowledged several partners that achieved this highest level of designation.

Today, we’re excited to announce the 90-plus partners that successfully achieved specializations in the first half of 2019. To achieve a specialization, these partners demonstrated compelling customer success stories, a consistent practice, and passed a rigorous capability assessment.

It’s exciting to see these partners growing their businesses with us, and even more exciting to see these partners driving real business value for our customers with Google Cloud.

New partners achieving specializations include:

Application Development: These partners can help you use the best of Google Cloud Platform to build and manage cloud-native business apps.

Cloud Migration: These partners deliver a seamless transition to Google Cloud Platform—from building the foundational architecture to the actual mechanics of migration.

Data Analytics: These partners have demonstrated success turning huge amounts of data into insights that drive your business forward.

Education: Get proven help implementing Google for Education solutions in the classroom. Whether it’s technical deployment, professional development for educators, or transformation services for school leaders, these partners can help you find creative solutions.

Work Transformation – Enterprise: These partners are fluent in deploying G Suite across large organizations and can help you streamline workflows, ensure governance, and get everyone on the same page (even when they’re not in the same room).

Infrastructure: These partners have proven success building customer infrastructure and workflows on Google Cloud Platform.

Internet of Things: These partners have demonstrated the ability to connect, process, store, and analyze device data both at the edge and in the cloud, helping you thrive in a connected world and drive new business value.

Location-Based Services: These partners have a successful track record of building and managing applications using the best of Google Maps Platform and Google Cloud Platform, in both web and mobile environments.

Machine Learning: These partners can help you use Google Cloud AI and machine learning services for your own data analysis, speech and image recognition applications, and more.

Marketing Analytics: These partners help customers collect, transform, analyze, and visualize data, and then use the insights gained to optimize marketing strategy and activations.

Security: Keep your organization (and reputation) safe. These partners are experienced in securing customer data and workflows through Google Cloud Platform.

Work Transformation: These partners have demonstrated significant success deploying G Suite to SMB organizations, including providing services across all the project workstreams (governance, technical, people, process and support).

Explore the full list in our Google Cloud Partner Directory. And if you’re a partner interested in participating, visit Google Cloud Partner Specialization.

Beyond the Map: How we build the maps that power your apps and business

Editor’s Note: Today’s post comes from Andrew Lookingbill and Ethan Russell, two longtime Googlers whose goal is to map the world and make our maps universally accessible and useful. Over the next several months, we’ll give you a closer look at how we build maps that keep up with the ever-evolving world, give developers the data they need to deliver innovative experiences, and provide companies with location-based insights to help optimize their business operations. Today, we’ll start with an overview of the mapping basics.

As more than a billion people have come to rely on Google Maps to explore the world and millions of apps and experiences have been built on top of our data, we’re often asked how we build the map that serves such a wide set of users and use cases. The answer is that it’s taken more than a decade of laying the groundwork and an obsessive commitment to refining our techniques to be able to meet increasing user expectations for fresh and accurate data and insights.

An early investment in imagery
Just a couple of years after launching Google Maps and Google Maps Platform (formerly Google Maps APIs), we launched Street View. It lets consumers virtually explore the entire world from their own homes. For as long as our Street View program has operated, we’ve made this rich imagery data set available to businesses so they can provide real-world context in their applications. Our Street View APIs allow real estate sites like Trulia to help homebuyers discover a place they’ll love to live by virtually exploring neighborhoods right from their website and apps.
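
As a rough sketch of how businesses consume that imagery, a single call to the Street View Static API returns an image for a location. The size, location, and API key below are illustrative placeholders.

```python
# A minimal sketch: fetch a Street View image via the Street View Static API.
# "YOUR_API_KEY" is a hypothetical placeholder for your own key.
import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/streetview",
    params={
        "size": "640x400",                                    # image dimensions
        "location": "1600 Amphitheatre Parkway, Mountain View, CA",
        "key": "YOUR_API_KEY",
    },
)
with open("streetview.jpg", "wb") as f:
    f.write(resp.content)
```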

Street View exploration of a neighborhood on Trulia

At Google, Street View gave us the foundation for the future of our mapping process. Advances in our machine learning technology, combined with the more than 170 billion Street View images across 87 countries, enable us to automate the extraction of information from those images and keep data like street names, addresses, and business names up to date for our customers. If a picture is worth a thousand words, then a high-res, panoramic image is worth a billion. So we’re committed to developing our own hardware, like our newest trekker equipped with higher-resolution sensors and an increased aperture, to deliver the highest quality imagery and insights to our customers. 

Partnering with authoritative sources 
Providing reliable and up-to-date information is essential for enterprises looking to build mission critical applications on our platform. So we also use data from more than 1,000 authoritative data sources around the world like the United States Geological Survey, the National Institute of Statistics and Geography (INEGI) in Mexico, local municipalities, and even housing developers.

Combining our imagery analysis with third-party data gives customers the most accurate and reliable data to power their businesses. For instance, we’re able to provide ridesharing companies such as Lyft and mytaxi with convenient pickup and dropoff locations for their passengers, and traffic-aware routing so their drivers can take the fastest route possible. We understand that one wrong route or delayed pick-up can have an impact on whether a customer comes back, so we make it easy for third-party authoritative sources to share their data with us. From there, we quickly ingest it and turn it into the features that are helping ridesharing companies all over the world improve their customer experience and business efficiencies.

mytaxi navigation using Google Maps Platform

Real people, real insights 
Data and imagery are key components of mapmaking. But they’re static and don’t always give us the context we want about a specific place. If you think of Street View as helping you contextualize where you are on a street, you can think about user contributed content as helping you contextualize a specific place like a restaurant or coffee shop. With the help of a passionate community of Local Guides, active Google users, and business owners via Google My Business, we receive more than 20 million contributions from users every day–from road closures, to details about a place’s atmosphere, to new businesses, and more. To ensure this contributed info is helpful, we publish it only if we have a high degree of confidence in its accuracy.

This has enabled us to build a data set of more than 150 million places around the world, which we make available to developers through our Places API. The Places API includes rich data on location names, addresses, ratings, reviews, contact information, business hours, and atmosphere–helping companies empower their users not just to find a restaurant, but to find a restaurant that’s good for kids with vegetarian menu items. 
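
For example, a text search against the Places API can express exactly that kind of query. The sketch below is minimal and assumes the standard Text Search HTTP endpoint; the API key is a hypothetical placeholder.

```python
# A minimal sketch: query the Places API Text Search endpoint and print
# basic place details. "YOUR_API_KEY" is a hypothetical placeholder.
import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/textsearch/json",
    params={
        "query": "kid-friendly vegetarian restaurant",
        "key": "YOUR_API_KEY",
    },
)
for place in resp.json().get("results", []):
    print(place["name"], place.get("rating"), place.get("formatted_address"))
```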

Keeping up with the speed of innovation and growth with machine learning  
The mapmaking process we’ve shared so far builds a useful and reliable map, but it presents one major challenge–speed. To empower our customers to move fast and innovate, we need to map the world more quickly than ever before. And as regions of the world rapidly develop, we need to be able to quickly get that information into our maps and products. To increase the rate at which we map the world, we turn to machine learning to automate mapping processes, while maintaining high levels of accuracy and precision. 

Here’s an example of how we used machine learning to solve what we dubbed “fuzzy buildings”. Our team was frustrated with fuzzy building outlines caused by an algorithm that tried to guess whether part of an image was a building or not. To fix this, we worked with our data operations team to trace common building outlines manually. Now that’s a solution in itself. But tracing all the common building outlines in the world by hand isn’t a scalable or quick process. So once our team traced the common building outlines, they used this information to teach our machine learning algorithms what shapes buildings tend to have in the real world, and which parts of images correspond with building edges and outlines. Using this technique, we were able to map as many buildings in one year as we mapped in the previous 10–vastly improving the maps we share with our customers.


Left (before): buildings with no outlines. Right (after): clear building polygons outlined on the map

Providing addresses where the streets have no name
Earlier we mentioned extracting information from Street View imagery. Using machine learning and Street View imagery together, we’re able to automatically identify house numbers almost anywhere in the world, and it’s helped us make immense progress in mapping more than 220 countries and regions worldwide.  

But not everyone actually has an address. This is why Google Maps and Google Maps Platform support plus codes, which give everyone in the world an address they can share with friends and delivery services, use to send and receive mail, and more. Plus codes are open source, available for any developer to use, and also incorporated into our Places and Geocoding APIs.
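
For a sense of how simple the encoding is, here is a minimal sketch using the open-source Open Location Code library for Python (pip install openlocationcode); the coordinates are illustrative.

```python
# A minimal sketch: convert coordinates to a shareable plus code and back.
from openlocationcode import openlocationcode as olc

lat, lng = 35.6595, 139.7005   # illustrative coordinates in Tokyo
code = olc.encode(lat, lng)    # full global plus code
area = olc.decode(code)        # decode back to a bounding area
print(code, area.latitudeCenter, area.longitudeCenter)
```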

By helping to map these regions, giving everyone an address, and providing access to our API products, we enable businesses and local authorities to better serve their communities, and there’s increased opportunity for new location-based ecosystems to grow. Parties interested in using plus codes can contact us here.

We’re in it for the long haul 
After more than a dozen years at Google (25 years combined), we’re still excited about building maps that keep up with the real world. As the world changes, we’ll keep innovating to not only help people, but to provide location-based tools that power businesses and transform their operations, industries, and the world. 

Next month, we’ll look at the mapping challenges in different regions and explain how imagery helps us overcome them–and what that means for businesses and developers building with our products. In the meantime, to learn more about Google Maps Platform, visit our website.

Acknowledgements: Dave McClusky, Global Head of Customer Engineering, contributed to this post.

Reaching for the sky: Japanese businesses embrace Google Cloud for digital transformation

Since our last Google Cloud Next Tokyo in September 2018, we’ve been busy growing and expanding our commitment to Japanese businesses. In May, we launched our Osaka cloud region, complementing our existing cloud region in Tokyo, and were recognized as a Leader in the 2019 Gartner Magic Quadrant for Cloud Infrastructure as a Service, Japan. We’ve also invested in a new undersea cable system that will connect Japan to Guam and Australia. When it goes online in early 2020, it will be the third cable system that Google has invested in to land in Japan. It’s a key part of our cloud network in Asia Pacific, helping to bring greater agility, flexible capacity and better performance to our customers throughout the region.

Demand from Japanese customers who want to build and scale their business on Google Cloud continues to grow, and we’ve been thrilled to welcome new customers in the past year like Asahi Group Holdings, Kyocera Communications Systems, SHARP, and Yamaha. Since launching our Advanced Solutions Lab (ASL) in 2018, we’ve been working side-by-side with leaders from many different industries to develop AI-powered solutions to solve business challenges, and we’ve heard from customers like Fast Retailing who’ve already benefited from ASL immersive training. And to better support all our Japanese customers, earlier this year we launched 24×7 Japanese language support across all channels for Platinum and Enterprise customers.

We continue to be inspired by all the ways Japanese businesses are transforming in the cloud, and are thrilled to welcome so many as we kick off Google Cloud Next in Tokyo this week. Here are just a few of their stories. 

Fast connections: East Japan Railway evolves and grows its services with Google Cloud
Serving 17.9 million passengers on 12,000 trains each day, East Japan Railway Company, or JR-EAST, is the nation’s largest railway company. Beyond transportation, it is also essential to daily life in Japan.

Safety is a top priority for JR-EAST’s management and a key tenet of its corporate vision “Move UP 2027.” In addition to pursuing forward-looking projects like mobility as a service (MaaS), JR-EAST has started working with Google Cloud to deliver the highest possible level of transportation safety and deepen trust with customers and the community.

“It’s been one year since we launched our corporate vision ‘Move UP 2027’ with an aim to create ‘customer-centric values and services’. This vision promotes future-oriented programs including MaaS and places ‘ultimate safety’ as the top priority,” said Masaki Ogata, Vice Chairman, JR-EAST. “For example, we will be working with Google Cloud to revolutionize maintenance works of lines and rails. This collaboration with Google Cloud will be a trigger to innovate maintenance operations within our railway business and also to spur innovation essential to our non-rail business.”

Applying smart analytics and AI for better customer experiences and business outcomes
The opportunity to unlock transformative business insights continues to be a driver for cloud adoption in Japan. In fact, we’ve seen a number of Japanese businesses turn to Google Cloud for their data management and smart analytics needs.

Scalability has been a huge priority for Recruit Group, which serves millions of customers with services like housing information, hotel and restaurant reservations, and job listings. With Google Cloud, it has the improved management and scalability it was looking for.

“Volume and complexity of data processing in our services is increasingly challenging,” says Sogo Ohishi, Corporate Executive Officer at Recruit Technologies Co., Ltd., who leads infrastructure management across the group’s multiple products. “We’re very satisfied with the high stability and performance improvement powered by GCP. I’m particularly excited about the recent migration of our EOSL (End of Service Life) Hadoop cluster to BigQuery and Cloud Dataproc, which enabled us to create an integrated data mart 14 times faster. In an era of ever growing data, we look forward to continue improving agility, scalability and operational efficiency by leveraging robust cloud native architecture.”

Organizations that have robust data processing and analysis are frequently the most successful in applying AI and machine learning (ML). Accordingly, we’ve seen growth in the number of Japanese enterprises adopting AI to transform their business.

One of Japan’s most popular mobile-based internet service companies, DeNA is using ML to improve the new player onboarding experience for its game “Gyakuten Othellonia”. With AI, the game provides new players recommendations for in-game strategies and offers scenarios for new players to practice and gain experience before challenging skilled players. As a result, it saw an increase in new player activity and the win rate for beginners grew by five percentage points. Plus, players that used the new recommendation service have shown higher lifetime value (LTV) than those that did not, which has the potential to positively impact the game’s bottom line.

“We view AI as an important element of transforming the gaming experience on our platform,” says Kenshin Yamada, Director of AI Dept, DeNA Co., Ltd. “By collaborating with Google Cloud, we have been able to leverage Google’s expertise in AI as well as building and serving different components in our game. We are also able to leverage Google Cloud’s open and serverless technologies to host our AI models without worrying about scalability of infrastructure or portability of code.”

For Zozo, a popular Japanese fashion retailer, the product search capabilities on its website “ZOZOTOWN” are essential to meeting customer needs. To provide the best search experience possible, it relies on ML. But managing and optimizing its ML model requires frequent updates with new data. To speed up model training, Zozo turned to Cloud TPUs.

“Visual search for apparel is very important for our users, and training useful machine learning models that produce accurate search results is critical for our user experience,” says Imamura Masayuki, VPoE of ZOZO Technologies, Inc. “With Cloud TPU, we are now able to train our TensorFlow models 55x faster, going from one week to under three hours of training time. The combination of running TensorFlow on Google Cloud using Cloud TPU has helped us consistently test, improve, and serve better models that delight our users.”

One of Japan’s leading insurance companies, Sompo Holdings is using AI to speed up the estimation of insurance premiums for customers. 

“The ability to provide an instant quote is very important for a good customer experience,” says Koichi Narasaki, Group Chief Digital Officer, Executive Vice President and Executive Officer, Sompo Holdings, Inc. “With the help of the Google Cloud Vision API, we are able to use a smart device to extract information from insurance documents and feed the data into our premium calculators in real-time. This process lets users receive instant premium estimates.”

Looking ahead
These are just a few of the many stories we’ve heard from customers in Japan and Asia Pacific as they embrace the cloud to modernize their infrastructure, develop new applications, manage their data, gain insights through smart analytics, and increase productivity and collaboration.  In addition to the stories here, we’ve also heard from customers like NTT Communications and ANZ Bank that are using Anthos, our new hybrid and multi-cloud platform, to accelerate application development and take advantage of transformational technologies like containers, service mesh and microservices. You can learn more on that in this blog post.

We look forward to continuing to work with more and more businesses wherever they may be to put the power of the cloud to work for them for meaningful transformation. To find more stories from Google Cloud customers in Japan and Asia Pacific, visit our website.

New GCP database options to power your enterprise workloads

Databases power critical applications and workloads for enterprises across every industry, and we want to make it easier for businesses to use and manage those databases. Our goal is to provide you the capabilities to run any workload, existing and future, on Google Cloud. That’s why we offer secure, reliable and highly available database services, and have been working to deeply integrate open-source partner services running on Google Cloud Platform (GCP) to give you the freedom of choice when it comes to how to manage your data.

Today, we’re announcing a number of enhancements to our database portfolio:

  • Cloud SQL for Microsoft SQL Server in alpha 
  • Federated queries from BigQuery to Cloud SQL
  • Elastic Cloud on GCP now available in Japan and coming soon to Sydney 

Open the window to the cloud with Cloud SQL for Microsoft SQL Server 
When you move to the cloud, you shouldn’t have to reinvent the wheel. Our goal at GCP is to make it easy for everything you use on-premises to work as expected once you move to the cloud. Cloud SQL for Microsoft SQL Server (currently in alpha) lets you bring existing SQL Server workloads to GCP and run them in a fully managed database service. 

We’ve heard great feedback from our early access customers using Cloud SQL for SQL Server. For enterprises, this option means they can now experience fully managed SQL Server with built-in high availability and backup capability. You can lift and shift SQL Server workloads without changing apps, then use the data from these apps with other GCP services like BigQuery and AI tools to create more intelligent applications.

Federated queries from BigQuery to Cloud SQL
Data can only create value for your business when you put it to work, and businesses need secure and easy-to-use methods to explore and manage data that is stored in multiple locations. To help, we’re continuing to expand federated queries to more GCP products so you can bring analysis to data, wherever it is and right within BigQuery. We currently support querying non-BigQuery native storage systems like Cloud Storage, Cloud Bigtable and Sheets, and today we’re extending the federated query capability to include Cloud SQL. This is just part of our continuing efforts to integrate our services across products to provide a seamless customer experience and build strong ecosystems around our products.   
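
As a sketch of what that looks like in practice, EXTERNAL_QUERY executes the inner statement in Cloud SQL and returns the result for use inside BigQuery. The connection ID, table, and columns below are hypothetical placeholders.

```python
# A minimal sketch: run a federated query against Cloud SQL from BigQuery.
# The connection ID and schema are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT customer_id, total
FROM EXTERNAL_QUERY(
  'my-project.us.my-cloudsql-connection',
  'SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id;')
WHERE total > 100
"""
for row in client.query(sql).result():
    print(row.customer_id, row.total)
```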

Elastic Cloud on GCP available in Japan; Sydney coming soon
Migrating to the cloud can be challenging. That’s something we keep in mind as we develop our database products and integrate them with the rest of GCP. We do this with GCP-built services like Cloud SQL and Cloud Spanner, as well as through deeply integrated partner services running on GCP. These open source-centric strategic partners are in line with our belief that open source is a critical component of the public cloud. 

We’re pleased to announce the expanded availability of Elastic Cloud in Google Cloud’s Japan region, with Sydney region availability coming soon. As the creators of the Elastic Stack, built on Elasticsearch, Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for search use cases, like logging, security, and analytics. With more integration to come, you will soon be able to use your GCP commits toward Elastic Cloud, with a single bill from Google Cloud. 

How customers are using GCP databases
We’ve heard from customers here in Japan that they’ve found new flexibility and scale with GCP databases, along with strong consistency and less operational overhead.   

Merpay is a provider of secure online payment technology. Merpay’s Mercari platform has about 13 million monthly active users and depends on the performance and scalability of GCP database technology to run smoothly. “We adopted Cloud Spanner with little experience in order to store the data of a new smartphone payment service, to keep consistent with our culture of ‘Go Bold,’ where we encourage employees to take on challenges,” says Keisuke Sogawa, CTO of Merpay, Inc. “Additionally, adopting a microservices architecture based on Google Kubernetes Engine (GKE) made it possible for teams to freely and quickly develop services and maintain their service levels even as the organization grew larger. As a result, we released the Merpay service within a short span of 15 months, and today our provision of the service remains reliable for our customers.”

Learn more about GCP databases, and get details about Forrester naming Google Cloud a Leader in The Forrester Wave Database-as-a-Service and The Forrester Wave Big Data NoSQL.

Check out the Tokyo Cloud Next ’19 sessions for more about GCP databases.

Bringing hybrid and multi-cloud to our APAC customers with Anthos

At Google Cloud Next ‘19 in April, we announced Anthos, Google Cloud’s new open platform that lets you run applications anywhere—simply, flexibly and securely. Embracing open standards, Anthos lets you run your applications unmodified on existing on-prem hardware investments or in the public cloud. Anthos also includes capabilities to help you automate policy and security at scale across your deployments using Anthos Config Management. Anthos accelerates application development by giving teams only one set of modern tools to learn, equips your business with transformational technologies like containers, service mesh and microservices, and prevents you from being locked in to a single cloud provider. 

To support these transformations, customers frequently tell us they need to be able to modernize. Having the power to modernize your workloads in place from existing on-prem data centers or with a move to the cloud is key to Google meeting you where you are. To continue supporting that flexibility, today, we are happy to report the beta availability of Migrate for Anthos, which allows you to take VMs from on-prem or Google Compute Engine and move them directly into containers running in Google Kubernetes Engine (GKE). We’re also expanding the list of supported sources so that you can migrate VMs directly from Amazon EC2 and Microsoft Azure into containers in GKE. 

With this launch, customers can now use Migrate for Anthos to automatically modernize their VMs and move them to containers without any of the complex, manual processes of traditional container modernization strategies. Our new approach gives you more flexibility to modernize your existing infrastructure investments with ease, even for VMs you’d previously written off as not being able to modernize. By making the move to containers in GKE you gain a wealth of benefits and automation, like no longer having to manually maintain and patch your OS, as one example. 

Atos, a systems integrator with a large presence in Japan, has been using Migrate for Anthos to accelerate its hybrid cloud journey. “Containers are already a part of our cloud landscape, giving us a powerful way to manage and maintain our systems as well as customer environments. At the same time, we have a lot of VMs in production and we are always looking for optimized ways of migrating these over to hybrid cloud delivery models,” said Michael Kollar, SVP for Cloud Engineering at Atos. “Migrate for Anthos gives us an additional fantastic tool in transformational projects and it will further accelerate our cloud success.” 

Anthos Adoption in Asia Pacific

Gartner predicts that by 2021, more than 75% of midsize and large organizations will have adopted a multi-cloud and/or hybrid IT strategy.1 Organizations are taking this approach to prevent vendor lock-in and benefit from better server density, reduced management overhead, and improved management through integration with services like Stackdriver for logging, monitoring, and debugging.

NTT Communications is one of the world’s largest telecommunications providers, and has been one of the earliest adopters of Anthos in Asia Pacific. 

“NTT Communications Corporation was one of the early customers of Anthos in Japan when it was announced in April this year. Since then, we have embarked on an interesting proof of concept (POC) with medical institutions and system integrators to analyze clinical data to see if we can improve the quality of rehabilitation programs offered to patients. Using GKE On-Prem which is part of the Anthos platform, we securely store clinical data on our Enterprise Cloud which comprises physically distributed servers connected through a closed and secure network, and access smart analysis tools to process the data,” said Akiko Kudo, Senior Vice President, Head of Fifth Sales Division.

We’re thrilled to work with our customers in Asia Pacific to help them bring their environments into the digital future. Modernizing your IT environment is a dynamic journey and will likely involve multiple strategies. Technologies like our Anthos platform are here to help simplify that path. For more information on migrating and modernizing with Google Cloud, be sure to visit our Anthos and Migrate for Anthos pages. Sign up here if you are interested in trying out Anthos.

1. Smarter With Gartner, 5 Approaches to Cloud Applications Integration, May 14, 2019

Driving enterprise modernization with Google Cloud infrastructure

Organizations are adopting modern cloud architectures to deliver the best experience to their customers and benefit from greater agility and faster time to market. Google Cloud Platform (GCP) is at the center of this shift, from enabling customers to adopt hybrid and multi-cloud architectures to modernizing their services. Today, we’re announcing important additions to our migration and networking portfolios to help you with your modernization journey:

Migrate to GCP from more clouds
Businesses migrate virtual machines from on-prem to Google Cloud all the time and, increasingly, they also want to move workloads between clouds. That’s why we’re announcing today that Migrate for Compute Engine is adding beta support for migrating virtual machines directly out of Microsoft Azure into Google Compute Engine (GCE). This complements Migrate for Compute Engine’s existing support for migrating VMs out of Amazon EC2. As a result, whether you’re migrating between clouds for better agility, to save money, or to increase security, you now have a way to lift and shift into Google Cloud—quickly, easily and cost-effectively.

Trax, which uses GCP to digitize brick-and-mortar retail stores, has significantly accelerated its migration and freed up developer time thanks to the ease of use and flexibility of Migrate for Compute Engine. 

“Migrate for Compute Engine allowed our DevOps team to successfully move dozens of servers within a few hours and without utilizing developers or doing any manual setup,” said Mark Serdze, director of cloud infrastructure at Trax. “Previous migration sprints were taking as long as three weeks, so getting sprints down to as little as three hours with Migrate for Compute Engine was a huge time and energy saver for us. And being able to use the same solution to move VMs from on-prem, or from other cloud providers, will be very beneficial as we continue down our migration path.”

Simplify transformation with enterprise-ready Service Mesh and modern load balancing 
As enterprises break monoliths apart and start modernizing services, they need solutions for consistent service and traffic management at scale. Organizations want to invest time and resources in building applications and innovating, not on the infrastructure and networking required to deploy and manage these services. Service mesh is rapidly growing in popularity because it solves these challenges by decoupling applications from application networking and service development from operations. To ease service mesh deployment and management, we’re announcing two enterprise-ready solutions that make it easier to adopt microservices and modern load balancing: general availability of Traffic Director and beta availability of Layer 7 Internal Load Balancer (L7 ILB).

Traffic Director, now available in Anthos, is our fully managed, scalable, resilient service mesh control plane that provides configuration, policy and intelligence to Envoy or similar proxies in the data plane using open APIs, so customers are not locked in. Originally built at Lyft, Envoy is an open-source high-performance proxy that runs alongside the application to deliver common platform-agnostic networking capabilities, and together with Traffic Director, abstracts away application networking. Traffic Director delivers global resiliency, intelligent load balancing and advanced traffic control like traffic splitting, fault injection and mirroring to your services. You can bring your own Envoy builds or use certified Envoy builds from Tetrate.io.


“Service mesh technologies are integral to the evolution from monolithic, closed architectures to cloud-native applications,” said Vishal Banthia, software engineer at Mercari, a leading online marketplace in Japan. “We are excited to see Traffic Director deliver fully-managed service mesh capabilities by leveraging Google’s strengths in global infrastructure and multi-cloud service management.”

We’re also taking the capabilities of Traffic Director a step further for customers who want to modernize existing applications. With L7 ILB, currently in beta, you can now bring powerful load balancing features to legacy environments. Powered by Traffic Director and Envoy, L7 ILB allows you to deliver rich traffic control to legacy services with minimal toil—and with the familiar experience of using a traditional load balancer. Deploying L7 ILB is also a great first step toward migrating legacy apps to service mesh. 

“L7 ILB makes it simple for enterprises to deploy modern load balancing,” said Matt Klein, creator of Envoy Proxy. “Under the hood, L7 ILB is powered by Traffic Director and Envoy, so you get advanced traffic management simply by placing L7 ILB in front of your legacy apps.”


Both L7 ILB and Traffic Director work out-of-the-box with virtual machines (Compute Engine) and containers (Google Kubernetes Engine or self-managed) so you can modernize services at your own pace.

Deliver resilient connectivity for hybrid environments 
Networking is the foundation of hybrid cloud, and fast, reliable connectivity is critical, whether it’s with a high performance option like Cloud Interconnect or Cloud VPN for lower bandwidth needs. For mission-critical requirements, High Availability VPN and 100Gbps Dedicated Interconnect will soon be generally available, providing resilient connectivity with industry leading SLAs for deploying and managing multi-cloud services.

We look forward to hearing how you use these new features. Please visit our website to learn more about our networking and migration solutions, including Migrate for Anthos.

Run Windows Server and SQL Server workloads seamlessly across your hybrid environments

In recent weeks, we’ve been talking about the many reasons why Windows Server and SQL Server customers choose Azure. Security is a major concern when moving to the cloud, and Azure gives you the tools and resources you need to address those concerns. Innovation in data can open new doors as you move to the cloud, and Azure offers the easiest cloud transition, especially for customers running on SQL Server 2008 or 2008 R2 with concerns about end of support. Today we’re going to look at another critical decision point for customers as they move to the cloud. How easy is it to combine new cloud resources with what you already have on-premises? Many Windows Server and SQL Server customers choose Azure for its industry leading hybrid capabilities.

Microsoft is committed to enabling a hybrid approach to cloud adoption. Our commitment and passion stems from a deep understanding of our customers and their businesses over the past several decades. We understand that customers have business imperatives to keep certain workloads and data on premises, and our goal is to meet them where they are and prepare them for the future by providing the right technologies for every step along the way. That’s why we designed and built Azure to be hybrid from the beginning and have been delivering continuous innovation to help customers operate their hybrid environments seamlessly across on-premises, cloud and edge. Enterprise customers are choosing Azure for their Windows Server and SQL Server workloads. In fact, in a 2019 Microsoft survey of 500 enterprise customers, when those customers were asked about their migration plans for Windows Server, they were 30 percent more likely to choose Azure.

Customers trust Azure to power their hybrid environments

Take Komatsu as an example. Komatsu achieved a 49 percent cost reduction and nearly 30 percent performance gain by moving on-premises applications to Azure SQL Database Managed Instance and building a holistic data management and analytics solution across its hybrid infrastructure.

Operating a $15 billion enterprise, Smithfield Foods slashed datacenter costs by 60 percent and accelerated application delivery from two months to one day using a hybrid cloud model built on Azure. Smithfield has factories and warehouses, often in rural areas, with less than ideal internet bandwidth. It relies on Azure ExpressRoute to connect its major office locations globally to Azure and gain the flexibility and speed it needs.

The government of Malta built a complete hybrid cloud ecosystem powered by Azure and Azure Stack to modernize its infrastructure. This hybrid architecture, combined with a robust billing platform and integrated self-service backup, brings a new level of flexibility and agility to Maltese government operations, while also providing citizens and businesses more efficient services that they can access whenever they want.

Let’s look at some of Azure’s unique built-in hybrid capabilities.

Bringing the cloud to local datacenters with Azure Stack

Azure Stack, our unparalleled hybrid offering, lets customers build and run cloud-native applications with Azure services in their local datacenters or in disconnected locations. Today, it’s available in 92 countries and customers like Airbus Defense & Space, iMOKO, and KPMG Norway are using Azure Stack to bring cloud benefits on-premises.

We recently introduced Azure Stack HCI solutions so customers can run virtualized applications on-premises in a familiar way and enjoy easy access to off-the-shelf Azure management services such as backup and disaster recovery.

With Azure, Azure Stack, and Azure Stack HCI, Microsoft is the only cloud provider in the market that offers a comprehensive set of hybrid solutions.

Modernizing server management with Windows Admin Center

Windows Admin Center, a modern browser-based application available free of charge, allows customers to manage Windows Servers on-premises, in Azure, or in other clouds. With Windows Admin Center, customers can easily access Azure management services to perform tasks such as disaster recovery, backup, patching, and monitoring. Since its launch just over a year ago, Windows Admin Center has seen tremendous momentum, managing more than 2.5 million server nodes each month.

Screenshot of the Windows Admin Center - Azure Hybrid Center

Easily migrating on-premises SQL Server to Azure

Azure SQL Database is a fully managed and intelligent database service. SQL Database is evergreen, so it’s always up to date: no more worrying about patching, upgrades or end of support. Azure SQL Database Managed Instance has the full surface area of the SQL Server database engine in Azure. Customers use Managed Instance to migrate SQL Server to Azure without changing their application code. Because the service is consistent with on-premises SQL Server, customers can continue using familiar features, tools and resources in Azure.
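
As an illustration of that consistency, an application connecting to Managed Instance typically only changes the server name in its connection string. The sketch below is minimal: the endpoint, database, and credentials are hypothetical, and it assumes the Managed Instance public endpoint (which listens on port 3342) and an installed ODBC Driver for SQL Server.

```python
# A minimal sketch: connect to a Managed Instance with the same driver and
# code used against on-premises SQL Server. All values are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-instance.public.abc123.database.windows.net,3342;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```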

With SQL Database Managed Instance, customers like Komatsu, Carlsberg Group, and AllScripts were able to quickly migrate SQL databases to Azure with minimal downtime and benefit from built-in PaaS capabilities such as automatic patching, backup, and high availability.

Connecting hybrid environments with fast and secure networking services

Customers build extremely fast private connections between Azure and local infrastructure, with access both to and through Azure, using Azure ExpressRoute at bandwidths up to 100 Gbps. Azure Virtual WAN makes it possible to quickly add and connect thousands of branch sites by automating configuration and connectivity to Azure, and provides global transit across customer sites using the Microsoft global network.

Customers are also taking full advantage of services like Azure Firewall, Azure DDoS Protection, and Azure Front Door Service to secure virtual networks and deliver the best application performance experience to users.

Managing anywhere access with a single identity platform

Over 90 percent of enterprise customers use Active Directory on-premises. With Azure, customers can easily connect on-premises Active Directory with Azure Active Directory to provide seamless directory services for all Office 365 and Azure services. Azure Active Directory gives users a single sign-on experience across cloud, mobile and on-premises applications, and secures data from unauthorized access without compromising productivity.

Innovating continuously at the edge

Customers are extending their hybrid environments to the edge so they can take on new business opportunities. Microsoft has been leading the innovation in this space. The following are some examples.

Azure Data Box Edge provides a cloud managed compute platform for containers at the edge, enabling customers to process data at the edge and accelerate machine learning workloads. Data Box Edge also enables customers to transfer data over the internet to Azure in real-time for deeper analytics, model re-training at cloud scale or long-term storage.

At Microsoft Build 2019, we announced the preview of Azure SQL Database Edge, bringing the SQL engine to the edge. Developers will now be able to adopt a consistent programming surface area to develop on a SQL database and run the same code on-premises, in the cloud, or at the edge.

Get started – Integrate your hybrid environments with Azure

Check out the resources on Azure hybrid such as overviews, videos, and demos so you can learn more about how to use Azure to run Windows Server and SQL Server workloads successfully across your hybrid environments.

Cloud providers unite on frictionless health data exchange

This post was co-authored by Heather Jordan Cartwright, General Manager, Microsoft Healthcare

Cloud computing is rapidly becoming a bigger and more central part of the infrastructure of healthcare. We see this as a historic shift that motivates us to think hard about how to ensure that, in this cloud-based future, interoperable health data is available as needed and without friction.

Microsoft continues to build health data interoperability into the core of the Azure cloud, empowering developers and partners to easily build data-rich health apps with the Azure API for FHIR®. We are also actively contributing to the healthcare community with open source software like the FHIR Server for Azure, bringing developers together on collaborative solutions that move the industry forward.
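
Because the Azure API for FHIR speaks the standard FHIR REST interface, a client search is just an HTTP request. The sketch below is minimal: the service URL is a hypothetical placeholder, and OAuth2 token acquisition is omitted.

```python
# A minimal sketch: a standard FHIR search returning a Bundle resource.
# The base URL and bearer token are hypothetical placeholders.
import requests

base = "https://myfhirservice.azurehealthcareapis.com"
headers = {"Authorization": "Bearer <token>", "Accept": "application/fhir+json"}

bundle = requests.get(f"{base}/Patient", params={"family": "Smith"},
                      headers=headers).json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["id"], patient.get("name"))
```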

We take interoperability seriously. At last summer’s CMS Blue Button Developer Conference, we made a public commitment to promote the frictionless exchange of health data with our counterparts at AWS, Google, IBM, Salesforce and Oracle. That commitment remains strong.

Today, at the same conference of health IT community leaders, we are sharing a joint announcement that showcases how we have moved from principles and commitment to actions. Our activities over the past year include open-source software releases, development of new standards and implementation guides, and deployment of services that support U.S. federal interoperability mandates.

Here’s the full text of our joint announcement:


As healthcare evolves across the globe, so does our ability to improve the health and wellness of communities. Patients, providers, and health plans are striving for more value-based care, more engaging user experiences, and broader application of machine learning to assist clinicians in diagnosis and patient care.

Too often, however, patient data are inconsistently formatted, incomplete, unavailable, or missing – which can limit access to the best possible care. Equipping patients and caregivers with information and insights derived from raw data has the potential to yield significantly better outcomes. But without a robust network of clinical information, even the best people and technology may not reach their potential.

Interoperability requires the ability to share clinical information across systems, networks, and care providers. Barriers to data interoperability sit at the core of many process problems. We believe that better interoperability will unlock improvements in individual and population-level care coordination, delivery, and management. As such, we support efforts from ONC and CMS to champion greater interoperability and patient access.

This year’s proposed rules focus on the use of HL7® FHIR® (Fast Healthcare Interoperability Resources) as an open standard for electronically exchanging healthcare information. FHIR builds on concepts and best-practices from other standards to define a comprehensive, secure, and semantically-extensible specification for interoperability. The FHIR community features multidisciplinary collaboration and public channels where developers interact and contribute.

We’ve been excited to use and contribute to many FHIR-focused, multi-language tools that work to solve real-world implementation challenges. We are especially proud to highlight a set of open-source tools including: Google’s FHIR protocol buffers and Apigee Health APIx, Microsoft’s FHIR Server for Azure, Cerner’s FHIR integration for Apache Spark, a serverless reference architecture for FHIR APIs on AWS, Salesforce/Mulesoft’s Catalyst Accelerator for Healthcare templates, and IBM’s Apache Spark service.

Beyond the production of new tools, we have also proudly participated in developing new specifications including the Bulk Data $export operation (and recent work on an $import operation), Subscriptions, and analytical SQL projections. All of these capabilities demonstrate the strength and adaptability of the FHIR specification. Moreover, through connectathons, community events, and developer conferences, our engineering teams are committed to the continued improvement of the FHIR ecosystem. Our engineering organizations have previously supported the maturation of standards in other fields and we believe FHIR version R4 — a normative release — provides an essential and appropriate target for ongoing investments in interoperability.

We have seen the early promise of standards-based APIs from market leading Health IT systems, and are excited about a future where such capabilities are universal. Together, we operate some of the largest technical infrastructure across the globe serving many healthcare and non-healthcare systems alike. Through that experience, we recognize the scale and complexity of the task at hand. We believe that the techniques required to meet the objectives of ONC and CMS are available today and can be delivered cost-effectively with well-engineered systems.

As a technology community, we believe that a forward-thinking API strategy as outlined in the proposed rules will advance the ability for all organizations to build and deploy novel applications to the benefit of patients, care providers, and administrators alike. ONC and CMS’s continued leadership, thoughtful rules, and embrace of open standards help move us decisively in that direction.

Signed,
Amazon, Google, IBM, Microsoft, Oracle, and Salesforce


The positive collaboration on open FHIR standards and the urgency for data interoperability have strengthened our commitment to an open-source-first approach in healthcare technology. We continue to incorporate feedback from the community to develop new features, and are actively identifying new places where open source software can help accelerate interoperability.

Support from the ONC and CMS in 2019 to adopt FHIR APIs as a foundation for clinical data interoperability will have a profound and positive effect on the industry. Looking forward, the application of FHIR to healthcare financial data including claims, explanation of benefit, insurance coverage, and network participation will continue to accelerate interoperability at scale and open new pathways for machine learning.

While it’s still early, we’ve seen our partners leveraging FHIR to better coordinate care, to develop innovative global health tracking systems for super-bacteria, and to proactively prevent the need for patients undergoing chemotherapy to be admitted to the emergency room. FHIR is providing a foundational platform on which our partners can drive rapid innovation, and it inspires us to work even harder to deliver technology that makes interoperable data a reality.

We’re just beginning to see what is possible in this new world of frictionless health data exchange, and we’d love for you to join us. If you want to participate, comment or learn more about FHIR, you can reach our FHIR Community chat here.

Amazon Polly Launches Neural Text-to-Speech and Newscaster Voices

Amazon Polly is a service that turns text into lifelike speech. Today, we are excited to announce the general availability of Neural Text-to-Speech (NTTS) technology, which delivers ground-breaking improvements in speech quality through a new machine learning approach. The 8 US English and 3 UK English voices in the Polly portfolio are now available in both the previous Standard technology and the new Neural TTS technology. In addition, 2 of the US English voices also feature a Newscaster speaking style, which sounds like a TV or radio newscaster.
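
For example, selecting the neural engine and the Newscaster style from the AWS SDK for Python looks roughly like the sketch below; the SSML “news” domain applies the Newscaster style on voices that support it, such as Joanna or Matthew.

```python
# A minimal sketch: synthesize speech with the Neural TTS engine and the
# Newscaster style via SSML, then save the audio to a local MP3 file.
import boto3

polly = boto3.client("polly")
ssml = ('<speak><amazon:domain name="news">'
        'Neural Text-to-Speech is now generally available.'
        '</amazon:domain></speak>')

resp = polly.synthesize_speech(
    Engine="neural",      # use the new Neural TTS engine
    VoiceId="Joanna",     # a US English voice with a Newscaster style
    TextType="ssml",
    Text=ssml,
    OutputFormat="mp3",
)
with open("newscast.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```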