Google Cloud Security – continuing to give good the advantage

Cloud security is a top enterprise IT priority as organizations modernize their critical business systems both in-place and in the cloud. Our mission is to provide advanced security solutions that help give good the advantage, from building the most secure cloud platform to delivering products that bring the power of Google’s global infrastructure and threat intelligence directly to your data centers.

Today at the RSA Conference we’re introducing new capabilities that offer security wherever our customers’ systems and data may reside, including threat detection and timeline capabilities in Chronicle, threat response integration between Chronicle and Palo Alto Networks’ Cortex XSOAR, and online fraud prevention services. 

New advanced threat detection and automatic timelines in Chronicle

Detection rule to find a PowerShell download

Chronicle launched its security analytics platform in 2019 to help any business quickly, efficiently, and affordably investigate alerts and threats in their organization. At RSA this year, as part of Google Cloud, we’ll show how customers can detect threats using YARA-L, a new rules language built specifically for modern threats and behaviors, including the types described in MITRE ATT&CK. This advanced threat detection provides massively scalable, real-time and retroactive rule execution.

We’re also introducing Chronicle’s intelligent data fusion, a combination of a new data model and the ability to automatically link multiple events into a single timeline. Palo Alto Networks, with Cortex XSOAR, is our first partner to integrate with this new data structure to enable even more powerful threat response. We’ll be demonstrating this integrated capability in the Google Cloud/Chronicle booth at RSA.

“Cortex XSOAR offers automated enrichment, response and case management to enterprise-wide threats,” said Rishi Bhargava, VP, Product Strategy at Palo Alto Networks. “The integration with Chronicle’s new detection capabilities and event timelines, across months or years of data, enhances that response and enables comprehensive threat management for our mutual customers.”

Prevent fraud and abuse with reCAPTCHA Enterprise and Web Risk

To protect your business, you need to protect your users. To help, we’re announcing the general availability of reCAPTCHA Enterprise and Web Risk API. These products are underpinned by two Google security technologies that have been protecting billions of web users and millions of websites for more than a decade—reCAPTCHA and Google Safe Browsing. 

reCAPTCHA Enterprise helps protect websites from fraudulent activities like scraping, credential misuse, and automated account creation. Protecting the web from bots has become increasingly important with the rise of threats like credential stuffing attacks, where malicious actors can test large volumes of breached passwords against legitimate sites. reCAPTCHA Enterprise recently added a new wave of commercial-grade bot defense capabilities to help ensure that a login attempt is being made by a legitimate user and not a bot. Google Nest is using reCAPTCHA Enterprise to help prevent automated attacks by actors seeking to obtain unauthorized access to accounts and devices.

Overview of reCAPTCHA Enterprise protections
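As a sketch of how this works in practice, a backend can ask reCAPTCHA Enterprise to score a login attempt by creating an assessment through the REST API. The project ID, site key, and token below are placeholders, and the v1beta1 surface shown is the one available at launch:

```shell
# Score a reCAPTCHA token generated on the login page.
# PROJECT_ID, SITE_KEY, and TOKEN are placeholders for your own values.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://recaptchaenterprise.googleapis.com/v1beta1/projects/PROJECT_ID/assessments" \
  -d '{
    "event": {
      "token": "TOKEN",
      "siteKey": "SITE_KEY",
      "expectedAction": "login"
    }
  }'
# The response includes a risk score between 0.0 (likely a bot) and
# 1.0 (likely a legitimate user) that your backend can act on.
```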

Using Web Risk API, enterprise customers can enable client applications to check URLs against Google’s constantly updated lists of unsafe web resources to prevent access to or inclusion of malicious content. Web Risk API alerts on, and includes information about, more than a million unsafe URLs that we keep up-to-date by examining billions of URLs each day in Google Safe Browsing.
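A minimal sketch of such a lookup follows; the API key is a placeholder, and the v1beta1 endpoint shown is the one available at launch:

```shell
# Check a URL against Google's lists of unsafe web resources.
# API_KEY is a placeholder for your own key.
curl -s "https://webrisk.googleapis.com/v1beta1/uris:search?key=API_KEY&threatTypes=MALWARE&threatTypes=SOCIAL_ENGINEERING&uri=http%3A%2F%2Fexample.com%2F"
# An empty response means the URL was not found on the requested lists;
# otherwise the response describes the matching threat type(s).
```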

Web Risk API and reCAPTCHA Enterprise are now both globally generally available and can be purchased separately. 

Google Cloud security in 2020 and beyond

When it comes to security, our work will never be finished. In addition to the capabilities announced today, we’ll continue to empower our customers with products that help organizations modernize their security capabilities in the cloud or in-place. To learn more about our entire portfolio of security capabilities, visit us at booth #2233 Moscone South, and check out our Trust & Security Center.

Now generally available: Managed Service for Microsoft Active Directory (AD)

A few months ago, we launched Managed Service for Microsoft Active Directory (AD) in public beta. Since then, our customers have created more than a thousand domains to evaluate the service in their pre-production environments. We’ve used the feedback from these customers to further improve the service and are excited to announce that Managed Service for Microsoft AD is now generally available for everyone and ready for your production workloads.

Simplifying Active Directory management

As more AD-dependent apps and servers move to the cloud, you might face heightened challenges to meet latency and security goals, on top of the typical maintenance challenges of configuring and securing AD Domain Controllers. Managed Service for Microsoft AD can help you manage authentication and authorization for your AD-dependent workloads, automate AD server maintenance and security configuration, and connect your on-premises AD domain to the cloud. The service delivers many benefits, including:

  • Compatibility with AD-dependent apps. The service runs real Microsoft AD Domain Controllers, so you don’t have to worry about application compatibility. You can use standard Active Directory features like Group Policy, and familiar administration tools such as Remote Server Administration Tools (RSAT), to manage the domain. 

  • Virtually maintenance-free. The service is highly available, automatically patched, configured with secure defaults, and protected by appropriate network firewall rules.

  • Seamless multi-region deployment. You can deploy the service in a specific region to enable your apps and VMs in the same or other regions to access the domain over a low-latency Virtual Private Cloud (VPC). As your infrastructure needs grow, you can simply expand the service to additional regions while continuing to use the same managed AD domain.

  • Hybrid identity support. You can connect your on-premises AD domain to Google Cloud or deploy a standalone domain for your cloud-based workloads.
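As a sketch, creating a managed domain takes a single gcloud command. The domain name, region, network, and IP range below are placeholders, and flag names may vary slightly by release:

```shell
# Create a managed AD domain; all values are example placeholders.
gcloud active-directory domains create ad.example.com \
  --region=us-central1 \
  --reserved-ip-range=10.0.0.0/24 \
  --authorized-networks=projects/my-project/global/networks/default
```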

You can use the service to simplify and automate familiar AD tasks like automatically “domain joining” new Windows VMs by integrating the service with Cloud DNS, hardening Windows VMs by applying Group Policy Objects (GPOs), controlling Remote Desktop Protocol (RDP) access through GPOs, and more. For example, one of our customers, OpenX, has been using the service to reduce their infrastructure management work:

“Google Cloud’s Managed AD service is exactly what we were hoping it would be. It gives us the flexibility to manage our Active Directory without the burden of having to manage the infrastructure,” said Aaron Finney, Infrastructure Architecture, OpenX. “By using the service, we are able to solve for efficiency, reduce costs, and enable our highly-skilled engineers to focus on strategic business objectives instead of tactical systems administration tasks.”

And our partner, itopia, has been leveraging Managed AD to make the lives of their customers easier: “itopia makes it easy to migrate VDI workloads to Google Cloud and deliver multi-session Windows desktops and apps to users on any device. Until now, the customer was responsible for managing and patching AD. With Google Cloud’s Managed AD service, itopia can deploy cloud environments more comprehensively and take away one more piece of the IT burden from enterprise IT staff,” said Jonathan Lieberman, CEO, itopia. “Managed AD gives our customers even more incentive to move workloads to the cloud along with the peace of mind afforded by a Google Cloud managed service.”

Getting started

To learn more about getting started with Managed Service for Microsoft AD now that it’s generally available, check out the quickstart, read the documentation, review pricing, and watch the webinar.

Exploring Container Security: Run what you trust; isolate what you don’t

From vulnerabilities to cryptojacking to, well, more cryptojacking, there were plenty of security events to keep container users on their toes throughout 2019. With Kubernetes being used to manage most container-based environments (and increasingly hybrid ones too), it’s no surprise that Forrester Research, in their 2020 predictions, called out the need for “securing apps and data in an increasingly hybrid cloud world.”

On the Google Cloud container security team, we want your containers to be well protected, whether you’re running in the cloud with Google Kubernetes Engine or hybrid with Anthos, and for you to be in-the-know about container security. As we kick off 2020, here’s some advice on how to protect your Kubernetes environment, plus a breakdown of recent GKE features and resources.

Run only what you trust, from hardware to services

Many of the vulnerabilities we saw in 2019 compromised the container supply chain or escalated privileges through another overly-trusted component. It’s important that you trust what you run, and that you apply defense-in-depth principles to your containers. To help you do this, Shielded GKE Nodes is now generally available, and will be followed shortly by the general availability of Workload Identity, a way to authenticate your GKE applications to other Google Cloud services that follows best-practice security principles like defense-in-depth.

Let’s take a deeper look at these features.

Shielded GKE Nodes
Shielded GKE Nodes ensures that a node running in your cluster is a verified node in a Google data center. By extending the concept of Shielded VMs to GKE nodes, Shielded GKE Nodes improves baseline GKE security in two respects:

  • Node OS provenance check: A cryptographically verifiable check to make sure the node OS is running on a virtual machine in a Google data center

  • Enhanced rootkit and bootkit protection: Secure and measured boot, virtual trusted platform module (vTPM), UEFI firmware, and integrity monitoring

You can now turn on these Shielded GKE Nodes protections when creating a new cluster or upgrading an existing cluster. For more information, read the documentation.
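As a sketch (the cluster name is a placeholder), enabling the protection is a single flag at creation or upgrade time:

```shell
# Create a new cluster with Shielded GKE Nodes enabled...
gcloud container clusters create my-cluster --enable-shielded-nodes

# ...or enable it on an existing cluster.
gcloud container clusters update my-cluster --enable-shielded-nodes
```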

Workload Identity
Your GKE applications probably use another service, like a data warehouse, to do their job. For example, in the vein of “running only what you trust,” when an application interacts with a data warehouse, that warehouse will require your application to authenticate. Historically, the approaches to doing this haven’t been in line with security principles: they were overly permissive, or had the potential for a large blast radius if they were compromised.

Workload Identity helps you follow the principle of least privilege and reduce that blast radius potential by automating workload authentication through a Google-managed service account, with short-lived credentials. Learn more about Workload Identity in the beta launch blog and the documentation. We will soon be launching general availability of Workload Identity.
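As a rough sketch (all names are placeholders, and the exact flag surface changed between the beta and GA releases; the GA form is shown), you enable Workload Identity on the cluster and then bind a Kubernetes service account to a Google service account:

```shell
# Enable Workload Identity on a new cluster (PROJECT_ID is a placeholder).
gcloud container clusters create my-cluster \
  --workload-pool=PROJECT_ID.svc.id.goog

# Allow the Kubernetes service account my-ksa in namespace my-ns
# to act as the Google service account my-gsa.
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[my-ns/my-ksa]"

# Annotate the Kubernetes service account so GKE knows about the mapping.
kubectl annotate serviceaccount my-ksa --namespace my-ns \
  iam.gke.io/gcp-service-account=my-gsa@PROJECT_ID.iam.gserviceaccount.com
```

Pods running as `my-ksa` then receive short-lived credentials for `my-gsa` automatically, with no exported service account keys to manage.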

Stronger security for the workloads you don’t trust
But sometimes, you can’t confidently vouch for the workloads you’re running. For example, an application might use code that originated outside your organization, or it might be a software-as-a-service (SaaS) application that ingests input from an unknown user. For these untrusted workloads, adding a second layer of isolation between the workload and the host resources follows the defense-in-depth security principle. To help you do this, we’re releasing the general availability of GKE Sandbox.

GKE Sandbox
GKE Sandbox uses the open source container runtime gVisor to run your containers with an extra layer of isolation, without requiring you to change your application or how you interact with the container. gVisor uses a user-space kernel to intercept and handle syscalls, reducing the direct interaction between the container and the host, and thereby reducing the attack surface. However, as a managed service, GKE Sandbox abstracts away these internals, giving you single-step simplicity for multiple layers of protection. Get started with GKE Sandbox.
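Sandboxing is configured per node pool; a minimal sketch, with placeholder names:

```shell
# Create a node pool whose pods run inside the gVisor sandbox.
gcloud container node-pools create sandboxed-pool \
  --cluster=my-cluster \
  --sandbox type=gvisor
# Pods opt in to the sandboxed pool by setting
# runtimeClassName: gvisor in their pod spec.
```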

Up your container security knowledge

As more companies use containers and Kubernetes to modernize their applications, decision makers and business leaders need to understand how they apply to their business—and how they will help keep them secure.  

Core concepts in container security
Written specifically for readers who are new to containers and Kubernetes, Why Container Security Matters to Your Business takes you through the core concepts of container security, for example, supply chain and runtime security. Whether you’re running Kubernetes yourself or through a managed service like GKE or Anthos, this white paper will help you connect the dots between how open-source software like Kubernetes responds to vulnerabilities and what that means for your organization.

New GKE multi-tenancy best practices guide
Multi-tenancy, where one or more clusters are shared between tenants, is often implemented as a cost-saving or productivity mechanism. However, incorrectly configuring clusters for multiple tenants, or the corresponding compute or storage resources, can not only negate these cost savings, but also open organizations to a variety of attack vectors. We’ve just released a new guide, GKE Enterprise Multi-tenancy Best Practices, that takes you through setting up multi-tenant clusters with an eye towards reliability, security, and monitoring. Read the new guide, see the corresponding Terraform modules, and improve your multi-tenancy security.

Learn how Google approaches cloud-native security internally
Just as the industry is transitioning from an architecture based on monolithic applications to distributed “cloud-native” microservices, Google has also been on a journey from perimeter-based security to cloud-native security.

In two new whitepapers, we released details about how we did this internally, including the security principles behind cloud-native security. Learn more about BeyondProd, Google’s model for cloud-native security; and about Binary Authorization for Borg, which discusses how we ensure code provenance and use code identity.

Let 2020 be your year for container security

Security is a continuous journey. Whether you’re just getting started with GKE or are already running clusters across clouds with Anthos, stay up to date with the latest in Google’s container security features and see how to implement them in the cluster hardening guide.

10 ways Chrome Enterprise helps protect employees and businesses

Editor’s note: Security is top of mind for many businesses as they shift to increasingly digital workplaces. In honor of Safer Internet Day, we thought we’d do a quick overview of the security capabilities available in Chrome Enterprise that can help businesses better protect their end users.

As organizations increase their productivity through cloud and SaaS apps, managing business risk makes IT security more imperative than ever. IT teams have to guard the organization against external attacks and internal vulnerabilities, while keeping the business moving. According to PurpleSec’s Cyber Security Statistics for 2019, malware and web-based attacks are the two most costly attack types—and companies spend an average of $2.4 million to ward off these threats. Cybercrime as a whole, including ransomware attacks, is expected to cost an estimated $6 trillion annually by 2021.

Each year, we introduce new security features to ensure that organizations and their employees have the resources to stay safe and secure online. Chromebooks help IT administrators protect employees from harmful attacks.

Here’s a brief look at how Chrome Enterprise keeps your employees and business better protected.

Phishing prevention 

Safe Browsing 
Unidentified dangerous sites can harm devices or cause problems when your employees are browsing online. With Google Chrome Safe Browsing, your employees are warned about malicious sites before they navigate to them, helping to deter negligent behavior. 

Password protection
Help keep your organization’s data safe by mitigating phishing attacks that stem from password theft. Password Alert Policy requires employees to reset their password when it is used on an unauthorized site, reducing the risk of a potential security breach.

Security keys
Google’s Titan Security Key helps to prevent hackers from logging into accounts when login details and passwords are compromised. By providing a second step of authentication after your password, security keys help prevent phishing and keep out attackers that could steal restricted information.

Protection from ransomware and other malicious software

Background auto-updates 
Seamless background auto-updates address vulnerabilities before they affect employees and your business—without interrupting workflows.

Low on-device data footprint
Chromebooks are cloud-native by design. Unlike traditional laptops, your files and customizations are primarily stored safely in the cloud, protected from bad actors by Google’s infrastructure.

ClusterFuzz
Chrome OS uses ClusterFuzz to help rapidly find potential security vulnerabilities before they affect users. Employees can focus on what matters most without worrying about breaches and external attacks.

Prevent OS tampering
All Chromebooks use verified boot to confirm the operating system is an authentic, safe, Google-distributed build. With two versions of Chrome OS on every device, Chromebooks can proceed with boot-up even if one OS has been tampered with.

Google Security Modules
Chrome devices encrypt stored user data by default—no configuration required—with keys stored on tamper-resistant hardware called the Google Security Module (learn more in our ebook, “Cloud-Native Security for Endpoints”).

Ephemeral mode
With Chrome Enterprise, Chromebooks can be set up to wipe all data from the device at the end of a session. Enabling ephemeral mode reduces the chances of any browsing information being left behind on a user’s device.

Block malicious apps & URLs

Blacklisting URLs 
By blacklisting specific URLs or sets of websites through the Google Admin console, you can restrict employee access to malicious sites. 

Google Play Protect
Google Play Protect helps detect potentially harmful applications in a variety of ways (including static analysis, dynamic analysis, and machine learning) and prevents your employees from downloading them.

Our vision with Chrome Enterprise is to secure cloud entry so that every enterprise can work smarter and stay safe—and make work easier and more meaningful for everyone. Learn more about Chrome Enterprise security at cloud.google.com/chrome-enterprise/security.

Announcing new G Suite partner integrations for eDiscovery and archiving

At Google, we’re committed to giving our G Suite customers as much choice as possible when it comes to their workflows and business processes. Google Vault plays a key role serving customer needs in the areas of eDiscovery and archiving, but sometimes additional features, management, and workflows are required. Today, we’re highlighting new technical integrations with partners to enable deeper eDiscovery and archiving capabilities.

Existing G Suite capabilities

Google Vault, an add-on to G Suite Basic (and included at no additional cost in G Suite Business or G Suite Enterprise), provides G Suite customers with the ability to retain data in place as well as perform search and export for eDiscovery across many core G Suite products, like Gmail, Drive, and Hangouts Chat.

Additionally, Google Vault APIs enable customers and partners to provide more complex workflows to users when needed. Some customers choose to use a third-party archiving solution to support their compliance and regulatory requirements. Partner integrations with Drive, and Gmail Enterprise journaling, help G Suite customers leverage these third-party archiving solutions easily and seamlessly with G Suite.

Daniel Mandon, VP, Information Governance & eDiscovery Operations at News Corp., said, “We are excited that Google is working actively with our eDiscovery partners to create successful integrations and enhance Vault’s capabilities. I am personally excited to see how they continue to improve and evolve the product.”

Extending G Suite capabilities with technical integrations

We’ve been working hard with a number of partners to help them build and deliver technical integrations in the eDiscovery and archiving space for joint customers. As we dive into the first day of the 2020 Legaltech conference in New York, we’re sharing a look at some of the partner integrations we’ve accomplished in the last year.

Veritas

Veritas now has an integration with Gmail Enterprise journaling to enable native Gmail archiving in Enterprise Vault and EV.cloud. G Suite Enterprise and G Suite Enterprise for Education customers can now set up journaling to deliver copies of email messages sent and received by Gmail to Enterprise Vault and use Enterprise Vault’s robust email retention, classification, supervision, and eDiscovery capabilities on Gmail and any other data sources stored in Enterprise Vault.

Zapproved

Zapproved integrates its ZDiscovery platform with Google Vault APIs to enable users to seamlessly preserve G Suite data. This integration streamlines the process when responding to litigation by enabling in-place preservations directly from ZDiscovery’s Legal Hold Pro when issuing a litigation hold. By directly connecting the Google Vault APIs to ZDiscovery, users can simply click to preserve data in many G Suite applications like Gmail, Hangouts Chat, and Drive.

Congruity360

Congruity360 integrates Hold360 (formerly SaGo Legal Hold), a litigation hold solution that reduces timelines and cost associated with litigation hold workflows, with Google Vault APIs. This enables Hold360 customers to manage holds in many G Suite applications like Gmail, Hangouts Chat and Drive, as well as create and track hold notifications from a single platform.

Logikcull

Logikcull just announced an integration for search and export of G Suite data for eDiscovery, leveraging Google Vault APIs. Logikcull customers can now export data relevant to disputes and investigations from G Suite directly into Logikcull’s Instant Discovery software for further culling, searching, and review.

Globanet

Globanet’s Merge1 platform integrates with G Suite Drive APIs to enable third-party capture and archiving support for Drive files. This allows G Suite customers to use third-party archiving for Google Drive to meet advanced archiving and compliance needs. Key features include capturing comments and the latest versions of shared documents.

Learn more about Vault, or if you have advanced eDiscovery or archiving needs, reach out to one of our partners listed above.

Introducing Google Cloud’s Secret Manager

Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication. Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.

Secret Manager is a new Google Cloud service that provides a secure and convenient method for storing API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. 

Secret Manager offers many important features:

  • Global names and replication: Secrets are project-global resources. You can choose between automatic and user-managed replication policies, so you control where your secret data is stored.

  • First-class versioning: Secret data is immutable and most operations take place on secret versions. With Secret Manager, you can pin a secret to a specific version, like 42, or use a floating alias, like latest.

  • Principle of least privilege: Only project owners have permissions to access secrets. Other roles must be explicitly granted permissions through Cloud IAM.

  • Audit logging: With Cloud Audit Logging enabled, every interaction with Secret Manager generates an audit entry. You can ingest these logs into anomaly detection systems to spot abnormal access patterns and alert on possible security breaches.  

  • Strong encryption guarantees: Data is encrypted in transit with TLS and at rest with AES-256-bit encryption keys. Support for customer-managed encryption keys (CMEK) is coming soon.

  • VPC Service Controls: Enable context-aware access to Secret Manager from hybrid environments with VPC Service Controls.
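Following the principle of least privilege, a single service account can be granted access to a single secret and nothing more. A sketch, with placeholder names (at launch these commands lived under the beta gcloud surface):

```shell
# Grant one service account read access to one secret.
gcloud beta secrets add-iam-policy-binding my-api-key \
  --member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```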

The Secret Manager beta is available to all Google Cloud customers today. To get started, check out the Secret Manager Quickstarts. Let’s take a deeper dive into some of Secret Manager’s functionality.

Global names and replication

Early customer feedback identified that regionalization is often a pain point in existing secrets management tools, even though credentials like API keys or certificates rarely differ across cloud regions. For this reason, secret names are global within their project.

While secret names are global, the secret data is regional. Some enterprises want full control over the regions in which their secrets are stored, while others do not have a preference. Secret Manager addresses both of these customer requirements and preferences with replication policies.

  • Automatic replication: The simplest replication policy is to let Google choose the regions where Secret Manager secrets should be replicated.

  • User-managed replication: With a user-managed replication policy, Secret Manager replicates secret data into all the user-supplied locations. You don’t need to install any additional software or run additional services—Google handles data replication to your specified regions. Customers who want more control over the regions where their secret data is stored should choose this replication strategy.

First-class versioning

Versioning is a core tenet of reliable systems to support gradual rollout, emergency rollback, and auditing. Secret Manager automatically versions secret data using secret versions, and most operations—like access, destroy, disable, and enable—take place on a secret version.

Production deployments should always be pinned to a specific secret version. Updating a secret should be treated in the same way as deploying a new version of the application. Rapid iteration environments like development and staging, on the other hand, can use Secret Manager’s latest alias, which always returns the most recent version of the secret.

Integrations

In addition to the Secret Manager API and client libraries, you can also use the Cloud SDK to create secrets:
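A sketch, using the beta gcloud surface available at launch (the secret name and file path are placeholders):

```shell
# Create a secret with automatic replication, seeding it with a
# first version read from a local file.
gcloud beta secrets create my-secret \
  --replication-policy="automatic" \
  --data-file=/tmp/my-secret.txt
```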

and to access secret versions:
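Again a sketch with placeholder names, on the beta surface available at launch:

```shell
# Access the latest version of the secret's data...
gcloud beta secrets versions access latest --secret=my-secret

# ...or pin to a specific version, as recommended for production.
gcloud beta secrets versions access 1 --secret=my-secret
```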

Discovering secrets

As mentioned above, Secret Manager can store a variety of secrets. You can use Cloud DLP to help find secrets using infoType detectors for credentials and secrets. The following command will search all files in a source directory and produce a report of possible secrets to migrate to Secret Manager:
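One possible sketch calls the DLP `content:inspect` REST endpoint once per file. The project ID, source directory, and infoType list are placeholders, and this assumes `jq` 1.6+ is installed:

```shell
# Report likely credentials in ./src using Cloud DLP's built-in infoTypes.
PROJECT_ID="my-project"
for f in ./src/*; do
  curl -s -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://dlp.googleapis.com/v2/projects/${PROJECT_ID}/content:inspect" \
    -d "$(jq -n --rawfile body "$f" '{
      item: {value: $body},
      inspectConfig: {infoTypes: [{name: "AUTH_TOKEN"},
                                  {name: "GCP_CREDENTIALS"}]}
    }')" \
    | jq -r --arg file "$f" '.result.findings[]? | "\($file): \(.infoType.name)"'
done
```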

If you currently store secrets in a Cloud Storage bucket, you can configure a DLP job to scan your bucket in the Cloud Console. 

Over time, native Secret Manager integrations will become available in other Google Cloud products and services.

What about Berglas?

Berglas is an open source project for managing secrets on Google Cloud. You can continue to use Berglas as-is and, beginning with v0.5.0, you can use it to create and access secrets directly from Secret Manager using the sm:// prefix.

If you want to move your secrets from Berglas into Secret Manager, the berglas migrate command provides a one-time automated migration.

Accelerating security

Security is central to modern software development, and we’re excited to help you make your environment more secure by adding secrets management to our existing Google Cloud security product portfolio. With Secret Manager, you can easily manage, audit, and access secrets like API keys and credentials across Google Cloud. 

To learn more, check out the Secret Manager documentation and Secret Manager pricing pages.

Exploring container security: Announcing the CIS Google Kubernetes Engine Benchmark

If you’re serious about the security of your Kubernetes operating environment, you need to build on a strong foundation. The Center for Internet Security’s (CIS) Kubernetes Benchmark gives you just that: a set of Kubernetes security best practices that will help you build an operating environment that meets the approval of both regulators and customers.

The CIS Kubernetes Benchmark v1.5.0 was recently released, covering environments up to Kubernetes v1.15. Written as a series of recommendations rather than a must-do checklist, the Benchmark follows the upstream version of Kubernetes. But for users running managed distributions such as our own Google Kubernetes Engine (GKE), not all of its recommendations are applicable. To help, we’ve released, in conjunction with CIS, a new CIS Google Kubernetes Engine (GKE) Benchmark, available under the CIS Kubernetes Benchmark. It takes the guesswork out of figuring out which CIS Benchmark recommendations you need to implement, and which ones Google Cloud handles as part of the GKE shared responsibility model.

Read on to find out what’s new in the v1.5.0 CIS Kubernetes Benchmark, how to use the CIS GKE Benchmark, and how you can test if you’re following recommended best practices.

Exploring the CIS Kubernetes Benchmark v1.5.0

The CIS Kubernetes Benchmark v1.5.0 was published in mid-October, and has a significantly different structure than the previous version. Whereas the previous version split up master and worker node configurations at a high level, the new version separates controls by the components to which they apply: control plane components, etcd, control plane configuration, worker nodes, and policies. This should make it easier to apply the guidance to a particular distribution, since some components may be neither under your control nor your responsibility.

In terms of specific controls, you’ll see additional recommendations for: 

  • Secret management. New recommendations include Minimize access to secrets (5.1.2), Prefer using secrets as files over secrets as environment variables (5.4.1), and Consider external secret storage (5.4.2).

  • Audit logging. In addition to an existing recommendation on how to ensure audit logging is configured properly with the control plane’s audit log flags, there are new recommendations to Ensure that a minimal audit policy is created (3.2.1), and Ensure that the audit policy covers key security concerns (3.2.2).

  • Preventing unnecessary access, by locking down permissions in Kubernetes following the principle of least privilege. Specifically, you should Minimize wildcard use in Roles and ClusterRoles (5.1.3).

Introducing the new CIS GKE Benchmark

What does this mean if you’re using a managed distribution like GKE? As we mentioned earlier, the CIS Kubernetes Benchmark is written for the open-source Kubernetes distribution. And while it’s intended to be as universally applicable as possible, it doesn’t fully apply to hosted distributions like GKE.

The new CIS GKE Benchmark is a child of the CIS Kubernetes Benchmark specifically designed for the GKE distribution. This is the first distribution-specific CIS Benchmark to draw from the existing benchmark, removing items that can’t be configured or managed by the user. The CIS GKE Benchmark also includes additional controls that are Google Cloud-specific, and that we recommend you apply to your clusters, for example, as defined in the GKE hardening guide. Altogether, it means that you have a single set of controls for security best practice on GKE.

There are two kinds of recommendations in the CIS GKE Benchmark. Level 1 recommendations are meant to be widely applicable—you should really be following these, for example enabling Stackdriver Kubernetes Logging and Monitoring. Level 2 recommendations, meanwhile, result in a more stringent security environment, but are not necessarily applicable to all cases. These should be implemented with more care to avoid potential conflicts in more complicated environments. For example, Level 2 recommendations such as using GKE Sandbox to run untrusted workloads may be more relevant to multi-tenant workloads than single-tenant ones.

The CIS GKE Benchmark recommendations are listed as “Scored” when they can be easily tested using an automated method (like an API call or the gcloud CLI), and the setting has a value that can be definitively evaluated, for example, ensuring node auto-upgrade is enabled. Recommendations are listed as “Not Scored” when a setting cannot be easily assessed using automation or the exact implementation is specific to your workload—for example, using firewall rules to restrict ingress and egress traffic to your nodes—or they use a beta feature that you might not want to use in production.
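As an illustration of how a "Scored" item can be checked automatically, node auto-upgrade status can be read with a single gcloud call. This is a sketch; the cluster, node pool, and zone names are placeholders for your own:

```shell
# Check whether node auto-upgrade is enabled on a node pool.
# Prints "True" or "False" depending on the pool's configuration.
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format="value(management.autoUpgrade)"
```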

If you want to suggest a new recommendation or a change to an existing one, please contribute directly to the CIS Benchmark in the CIS Workbench community.

Applying and testing the CIS Benchmarks

There are actually several CIS Benchmarks that are relevant to GKE, and there are tools available to help you test whether you’re following their recommendations. For the CIS Kubernetes Benchmark, you can use a tool like kube-bench to test your existing configuration; for the CIS GKE Benchmark, there’s Security Health Analytics, a security product that integrates into Security Command Center and that has built-in checks for several CIS GCP and GKE Benchmark items. By enabling Security Health Analytics, you’ll be able to discover, review, and remediate any cluster configurations you have that aren’t up to par with best practices from the CIS Benchmarks in the Security Command Center vulnerabilities dashboard.
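For instance, the kube-bench project documents running its checks as a Kubernetes Job. A rough sketch, where the manifest URL follows the kube-bench README at the time of writing and may change:

```shell
# Run kube-bench as a Job and read back the benchmark results.
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/master/job.yaml
kubectl logs job/kube-bench
```

The output lists each benchmark check as PASS, FAIL, or WARN alongside its CIS identifier.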

Security Health Analytics scan results for CIS Benchmarks.png
Security Health Analytics scan results for CIS Benchmarks

Documenting GKE control plane configurations

The new CIS GKE Benchmark should help make it easier for you to implement and adhere to Kubernetes security best practices. And for components it doesn't cover, we've documented where the GKE control plane implements the new CIS Kubernetes Benchmark, where we are working to improve our posture, and the existing mitigating controls we have in place. We hope this helps you make an informed decision about what controls to put in place yourself, and better understand your existing threat model.

Check out the new CIS GKE Benchmark, the updated CIS Kubernetes Benchmark, and understand how GKE performs according to the CIS Kubernetes Benchmark. If you’re already using the GKE hardening guide, we’ve added references to the corresponding CIS Benchmark recommendations so you can easily demonstrate that your hardened clusters meet your requirements.

The CIS GKE Benchmark was developed in concert with Control Plane and the Center for Internet Security (CIS) Kubernetes community.

3 ways retailers improve the customer experience with help from Chromebooks

In an increasingly competitive market, delivering outstanding customer experiences is top of mind for retailers. Seventy percent of global business and IT decision-makers in retail say that improving the customer experience is a top business priority over the next 12 months, according to a commissioned study conducted by Forrester on behalf of Google.

One big way to improve the customer experience is to give retail associates shared devices, like Chromebooks, that enable them to work securely from wherever they are, whether on the store floor or in the back office. Here are three ways retailers are using Chromebooks to offer better customer experiences.

1. Ensuring customers receive high-quality products

Providing consistent quality is critical to delivering superior customer experiences. Take Panda Express, for instance. The company wants customers to enjoy a consistent taste experience for its signature dishes like Original Orange Chicken and Broccoli Beef. That’s why in almost 400 locations, Panda Express workers take training courses on these recipes with the help of Chromebooks. Restaurant associates watching training videos used to be frequently interrupted by other workers. Now, they have access to touchscreen, flip-capable Asus Chromebooks for training purposes.

“Our restaurant associates work hard to show customers that our food isn’t from an assembly line, and cook it to order each day,” said Dorothy Shih, IS Senior Project Manager, and Young Kim, IS Network Administrator, at Panda Restaurant Group. “With the help of Chromebooks in our training, we’re making sure that our Original Orange Chicken tastes great, no matter which restaurant guests visit.”

2. Reducing customer wait times

More than 3,000 workers at eCommerce company Mercado Libre take calls from customers, helping them place an order or resolve an issue. When contact center employees used Windows PCs, a power outage or transit strike could stop them from helping customers since there was no way to quickly enable employees to work from outside the office. 

In contrast, Chromebooks make it easy for them to work from anywhere because unlike traditional laptops, employee files and customizations are primarily stored in the cloud. As a result, employees can move seamlessly between different Chrome devices to stay productive. And the fast boot up of Chromebooks saves 250 productivity hours each shift, according to a company review of the number of logged customer cases. Contact center employees have gained more time to take orders and answer callers’ questions.  

In another example, at family-owned grocery store chain Schnucks Markets, associates staffing the meat and produce departments had to leave the counter to use Windows PCs in back storerooms to check email or follow up on orders. This left customers waiting for assistance as associates walked back and forth between the back room and the counter. With 100 locations across five states in the United States, this impaired the chain's customer service.

To help, Schnucks rolled out about six Acer Spin Chromebooks per store. Now, associates can check email and orders right behind the counter, keeping them front and center when customers approach, speeding their customer service and saving about eight hours a week.

“As a grocer, we have to be ‘best in fresh.’ That means the customer experience has to be 100% efficient and quick, which Chromebooks has helped us accomplish,” said Mike Kissel, Senior Manager of Endpoint and Cloud Security, at Schnucks Markets. “People still want to go to a brick-and-mortar grocery store, but the last thing they want to do is stand in line and not be served quickly.” 

3. Offering personalized customer experiences

Eighty percent of shoppers say they’re more likely to do business with a company that offers personalized customer experiences, and the right technology can make this possible both in storefronts and as part of product development.

That’s the case for pet food retailer NomNomNow, which uses Chromebooks to customize pet meal portions. Workers on the company’s kitchen, packing, and inventory teams prepare, pack and ship personalized pet food based on online profile details from pet parents, such as a pet’s name, age, weight, and breed. 

“Based on these [pet] details, we’re able to make NomNomNow’s personalized customer experience possible,” said Lynn Hubbard, Vice President of Operations, and Dan Massey, Vice President of Data, Product, and Engineering, at NomNomNow. “And for an extra personal touch, every NomNomNow shipment gets a packing slip with the pet’s name and food details.” 

Learn how Chrome Enterprise can benefit your retail business

Many retail businesses use Chrome Enterprise to securely power better customer experiences. To learn more, visit our website. Or if you’re attending NRF 2020, stop by the Chrome Enterprise booth #5065 on Level 3 and check out our session on Monday, Jan. 13, 2020 on the benefits of cloud-powering retail associates.

Exploring container security: Navigate the security seas with ease in GKE v1.15

Your container fleet, like a flotilla, needs ongoing maintenance and attention to stay afloat—and stay secure. In the olden days of seafaring, you grounded your ship at high tide and turned it on its side to clean and repair the hull, essentially taking it “offline.” We know that isn’t practical for your container environment, however, as uptime is as important as security for most applications. 

Here on the Google Kubernetes Engine (GKE) team, we’re always hard at work behind the scenes to provide you with the latest security patches and features, so you can keep your fleet safe while retaining control and anticipating disruptions.

As GKE moved from v1.12 to v1.15 over the past year, here’s an overview of the security changes we’ve made to the platform: hardening behind the scenes, stronger defaults, and new advice added to the GKE hardening guide.

Behind-the-scenes hardening in GKE

A lot of our security recommendations come down to a simple principle: implement and expose fewer items in your infrastructure, so there’s less for you to secure, maintain, and patch. In GKE, this means paring down controls to only what your application actually needs and removing older implementations or defaults. Let’s take a deeper look at the changes we made this year.

Distroless images

Behind the scenes, we’re continually hardening and improving GKE. A major undertaking in the past several months has been rebasing GKE master and daemonset containers on top of distroless base images. Distroless images are limited to only the application and its runtime dependencies—they’re not a full Linux distribution, so there are no shells or package managers. And because these images are smaller, they’re faster to load and have a smaller attack surface. Moving almost all Kubernetes components to distroless images in Kubernetes 1.15 and 1.16 helps reduce noise in vulnerability scanning and makes Kubernetes components simpler to maintain. By the way, you should also consider moving your own container application images to distroless images!
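To give a flavor of what this looks like for your own applications, here's a hedged multi-stage Dockerfile sketch: a Go binary is compiled in a full build image, then copied into a distroless runtime image. The gcr.io/distroless/static base image is real; the application itself is hypothetical:

```dockerfile
# Build stage: a full image with a toolchain and shell.
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app

# Final stage: distroless, with no shell or package manager.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Note that because the final image has no shell, debugging happens via your build stage or sidecar tooling rather than `docker exec`.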

Locking down system:unauthenticated access to clusters

Kubernetes authentication allows certain cluster roles to have access to cluster information by default, for example, to gather metrics about cluster performance. This specifically allows unauthenticated users (who could be from anywhere on the public internet!) to read some unintended information if they gain access to the cluster API server. We worked in open-source to change this in Kubernetes 1.14, and introduced a new discovery role system:public-info-viewer explicitly meant for unauthenticated users. We also removed system:unauthenticated access to other API server information. 
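If you want to audit your own clusters for this, one rough way (assuming kubectl and jq are available; this is a sketch, not an official check) is to list the ClusterRoleBindings that still grant access to unauthenticated users:

```shell
# List ClusterRoleBindings with system:unauthenticated as a subject.
kubectl get clusterrolebindings -o json \
  | jq -r '.items[]
           | select(.subjects[]?.name == "system:unauthenticated")
           | .metadata.name'
```

On recent versions you should see little beyond the intentionally public system:public-info-viewer binding.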

Ongoing patching and vulnerability response

Our security experts are part of the Kubernetes Product Security Committee, and help manage, develop patches for, and address newly discovered Kubernetes vulnerabilities. On GKE, in addition to Kubernetes vulnerabilities, we handle other security patches—in the past year, these included critical patches to the Linux kernel, runc, and the Go programming language—and, when appropriate, publish a security bulletin detailing the changes.

Better defaults in GKE

Among the more visible changes, we’ve also changed the defaults for new clusters in GKE to more secure options, to allow newer clusters to more easily adopt these best practices. In the past several releases, this has included enabling node auto-upgrade by default, removing the Kubernetes dashboard add-on, removing basic authentication and client certs, and removing access to legacy node metadata endpoints. These changes apply to any new GKE clusters you create, and you can still opt to use another option if you prefer.

new clusters in GKE.png
Defaults for new clusters in GKE have been improving over releases in the past several years, to improve security

Enabling node auto-upgrade

Keeping your version of Kubernetes up to date is one of the simplest things you can do to improve your security. According to the shared responsibility model, we patch and upgrade GKE masters for you, but upgrading the nodes remains your responsibility. Node auto-upgrade automatically applies security patches, bug fixes, and other upgrades to your node pools, and ensures alignment with your master version to avoid unsupported version skew. As of November, node auto-upgrade is enabled by default for new clusters. Nothing has changed for pre-existing clusters, though, so please consider enabling node auto-upgrade manually, or upgrade yourself regularly and watch the Security Bulletins for information on recommended security patches. With release channels, you can subscribe your cluster to a channel that meets your business needs and infrastructure requirements. Release channels take care of both the masters and nodes, and ensure your cluster is up to date with the latest patch version available in the chosen channel.
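As a sketch, enabling node auto-upgrade on an existing node pool, or creating a new cluster subscribed to a release channel, looks roughly like the following. All names are placeholders, and flags may vary across gcloud releases (release channels may require the beta component):

```shell
# Enable node auto-upgrade on an existing node pool.
gcloud container node-pools update default-pool \
  --cluster my-cluster --zone us-central1-a \
  --enable-autoupgrade

# Or create a new cluster subscribed to the regular release channel,
# which manages both master and node versions for you.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --release-channel regular
```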

Locking down the Kubernetes Dashboard

The open-source Kubernetes web UI (Dashboard) is an add-on which provides a web-based interface to interact with your Kubernetes deployment, including information on the state of your clusters and errors that may have occurred. Unfortunately, it is sometimes left publicly accessible or granted sensitive credentials, making it susceptible to attack. Since the Google Cloud Console provides much of the same functionality for GKE, we’ve further locked down the Dashboard to better protect your clusters. For new clusters created with:

  • GKE v1.7, the Dashboard does not have admin access by default.
  • GKE v1.10, the Dashboard is disabled by default.
  • GKE v1.15 and higher, the Kubernetes web UI add-on Dashboard is no longer available in new GKE clusters.

You can still run the dashboard if you wish, following the Kubernetes web UI documentation to install it yourself.

Improving authentication

There are several methods of authenticating to the Kubernetes API server. In GKE, the supported methods are OAuth tokens, x509 client certificates, and static passwords (basic authentication). GKE manages authentication via gcloud for you using the OAuth token method: setting up the Kubernetes configuration, getting an access token, and keeping it up to date. Enabling additional authentication methods that your application doesn't use presents a wider attack surface. Starting in GKE v1.12, we disabled basic authentication and legacy client certificates by default for new clusters, so these credentials are not created for your cluster. For older clusters, make sure to remove the static password if you aren’t using it.
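For a new cluster, you can make these defaults explicit at creation time. A hedged sketch with a placeholder cluster name:

```shell
# Create a cluster with basic authentication disabled and no
# legacy client certificate issued.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --no-enable-basic-auth \
  --no-issue-client-certificate
```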

Disabling metadata server endpoints

Some attacks against Kubernetes use access to the VM’s metadata server to extract the node’s credentials; this is particularly true for legacy metadata server endpoints. For new clusters starting with GKE v1.12, we disabled these endpoints by default. Note that Compute Engine is in the process of turning down these legacy endpoints. If you haven’t already, you can use the check-legacy-endpoint-access tool to discover whether your apps should be updated and migrated to the GA v1 metadata endpoints, which include an added layer of security that can help protect against vulnerabilities.
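For older clusters, the legacy endpoints can be disabled explicitly when adding a node pool. A sketch with placeholder names; this matches the default for clusters created with GKE v1.12 and later:

```shell
# Create a node pool with legacy metadata endpoints disabled.
gcloud container node-pools create hardened-pool \
  --cluster my-cluster --zone us-central1-a \
  --metadata disable-legacy-endpoints=true
```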

Our latest and greatest hardening guide

Even though we keep making more and more of our security recommendations the default in GKE, they primarily apply to new clusters. This means that even if you’ve been continuously updating an older cluster, you’re not necessarily benefitting from these best practices. To lock down your workloads as best as possible, make sure to follow the GKE hardening guide. We’ve recently updated this with the latest features, and made it more practical, with recommendations for new clusters, as well as recommendations for GKE On-Prem.

It’s worth highlighting some of the newer recommendations in the hardening guide for Workload Identity and Shielded GKE Nodes.

Workload Identity

Workload Identity is a new way to manage credentials for workloads you run in Kubernetes, automating best practices for workload authentication, and removing the need for service account private keys or node credential workarounds. We recommend you use Workload Identity over other options, as it replaces the need to use metadata concealment, and protects sensitive node metadata.
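A rough sketch of enabling Workload Identity and binding a Kubernetes service account to a Google service account follows. All names here are placeholders, and the enablement flag has been renamed across gcloud releases, so check your version's documentation:

```shell
# Enable Workload Identity on an existing cluster.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --workload-pool=my-project.svc.id.goog

# Annotate a Kubernetes service account so its pods authenticate
# as the mapped Google service account, with no exported keys.
kubectl annotate serviceaccount my-ksa \
  --namespace my-namespace \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```

A matching IAM policy binding on the Google service account is also required; see the Workload Identity documentation for the full setup.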

Shielded GKE Nodes

Shielded GKE Nodes is built upon Shielded VMs and further protects node metadata, providing strong, verifiable node identity and integrity for all the GKE nodes in your cluster. If you’re not using third-party kernel modules, we also recommend you enable secure boot to verify the validity of components running on your nodes and get enhanced rootkit and bootkit protections.
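As a sketch, with placeholder names and flags that may require a recent gcloud release:

```shell
# Create a cluster with Shielded GKE Nodes and secure boot enabled.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --enable-shielded-nodes \
  --shielded-secure-boot
```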

The most secure GKE yet

We’ve been working hard on hardening, updating defaults, and delivering new security features to help protect your GKE environment. For the latest and greatest guidance on how to bolster the security of your clusters, we’re always updating the GKE hardening guide.

Google Cloud: Supporting our customers with the California Consumer Privacy Act (CCPA)

The California Consumer Privacy Act (CCPA) is a data privacy law that imposes new requirements on businesses and gives consumers in California the right to access, delete, and opt-out of the “sale” of their personal information. Businesses that collect California residents’ personal information and meet certain thresholds (for example, revenue) will need to comply with these obligations. 

Google Cloud is committed to supporting CCPA compliance across G Suite and Google Cloud products when the law takes effect on January 1, 2020. Google Cloud will support you in meeting your CCPA obligations by offering convenient tools alongside the robust data privacy and security protections in our services and contracts. 

How does Google Cloud support CCPA compliance?
The security and privacy of customer data is our highest priority, and we’re committed to supporting your efforts to comply with the CCPA by: 

Providing tools and support to enable you to comply with CCPA requirements around your consumers’ rights. You can use G Suite and Google Cloud’s administrative consoles and services to help access, export, or delete data that you and your users put into our systems. This functionality will help you fulfill your obligations to respond to requests from consumers who exercise their rights under CCPA.

Offering security products and features that will help you to protect personal data. Google operates global infrastructure engineered for security from the start. You can rest assured knowing that we have designed for the secure deployment of services and data storage. We’ve implemented end-user privacy safeguards, secure communications between services, secure and private communication with customers over the Internet, and granular operational controls by administrators. Google Cloud runs on this infrastructure, and our products and features provide capabilities for data governance, access control, export, encryption, and security management that can help organizations with their CCPA readiness.

Providing documentation and resources to assist you in your privacy assessment of our services. We want to ensure that Google Cloud customers can confidently use our services in light of the CCPA. When you use Google Cloud, we support your efforts by providing detailed documentation and resources, such as our new Google Cloud and the CCPA whitepaper.

Continuing to monitor the regulatory landscape, and evolving as needed. Our cross-functional teams of privacy advocates, user experience researchers, public policy, and privacy legal experts regularly engage with customers, industry stakeholders, and supervisory authorities to shape our Google Cloud services in order to help customers meet their compliance needs. As the regulatory landscape shifts, we evolve to support our customers’ changing compliance needs. 

Offering a team dedicated to addressing Google Cloud customers’ data protection-related inquiries. For more information, refer to Google’s Businesses and Data website or visit our support pages for Google Cloud and G Suite.

Where do you stand?
As a current or future customer of Google Cloud, there are many ways to begin preparing for the CCPA. Consider these tips:

  • Familiarize yourself with the text of the CCPA and its regulations. 

  • Create a data inventory that describes how your business collects, uses, and shares personal information. We have tools such as Cloud Data Loss Prevention and Data Catalog that can help identify and classify data.

  • Review the current controls, policies, and processes that govern your use of personal information to assess whether they meet CCPA requirements, and build a plan to address any gaps.

  • Consider the best process for your business to accept and verify a California consumer request.

  • Review our Google Cloud third-party audit and certification materials, as well as our guidance documents and mappings, to see how they may help with this exercise. 

  • Consider how you can leverage existing data protection features on Google Cloud to support your CCPA compliance.

  • Monitor the latest regulatory guidance as it becomes available, and consult a lawyer to obtain legal advice tailored to your business’s circumstances.   

What’s next?
We’re carefully monitoring developments around this new legislation, and constructively engaging with our customers and partners throughout this process. We’ve also created a CCPA Compliance page on our Compliance resource center to assist with your efforts as you prepare for CCPA.

For information on Google Cloud privacy practices, please visit our Google Cloud Trust Principles.


This blog post is intended to be for informational purposes only. You should seek independent legal advice relating to your status and obligations under the CCPA, as only a lawyer can provide you with tailored legal advice for your situation. Nothing in this blog post is intended to provide you with or should be used as a substitute for legal advice.

Use third-party keys in the cloud with Cloud External Key Manager, now beta

At Google Cloud Next UK last month, we announced the alpha version of Google Cloud’s External Key Manager (Cloud EKM). Today, Cloud EKM is available in beta, so we wanted to provide a deeper look at what Cloud EKM is and how it can be valuable for your organization. 

In a first for any public cloud, Cloud EKM will let you achieve full separation between your data and your encryption keys. At its heart, Cloud EKM lets you protect data at rest in BigQuery and Compute Engine using encryption keys that are stored and managed in a third-party key management system that’s deployed outside Google’s infrastructure.

Cloud EKM.png
Cloud EKM provides the bridge between Cloud KMS and an external key manager.

This approach offers several unique security benefits: 

  • Maintain key provenance over your third-party keys. You have strict control over the creation, location, and distribution of your keys.  

  • Full control over who accesses your keys. Because keys are always stored outside Google Cloud, you can enforce that access to data at rest for BigQuery and Compute Engine requires an external key. 

  • Centralized key management. Use one key manager for both on-premises and cloud-based keys, ensuring a single policy point and allowing enterprises to easily take advantage of hybrid deployments. 
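To give a concrete feel for the workflow, creating a Cloud KMS key backed by an external key manager looks roughly like the following. This is a sketch: the key ring, location, and external key URI are placeholders, and the exact flag names may differ in your gcloud version, so consult the Cloud EKM documentation:

```shell
# Create a key whose key material lives in an external key manager.
gcloud kms keys create my-ekm-key \
  --keyring my-keyring --location us-east1 \
  --purpose encryption \
  --protection-level external \
  --external-key-uri "https://my-ekm-partner.example.com/v0/keys/abc123"
```

Google Cloud never holds the key material itself; each use of the key calls out to the external manager at the referenced URI.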

To make Cloud EKM easy to implement, we are working with five industry-leading key management vendors: Equinix, Fortanix, Ionic, Thales, and Unbound. (The Ionic and Fortanix integrations are ready today; Equinix, Thales, and Unbound are coming soon.) Check out the videos below to learn more.

Equinix and Cloud EKM

In collaboration with Equinix, Google Cloud brings customers the next level of control for their cloud environments with External Key Manager. Check out the video to learn more.

Fortanix and Cloud EKM

In collaboration with Fortanix, Google Cloud brings customers the next level of control for their cloud environments with External Key Manager. Check out the video to learn more.

Ionic and Cloud EKM

In collaboration with Ionic, Google Cloud brings customers the next level of control for their cloud environments with External Key Manager. Check out the video to learn more.

Thales and Cloud EKM

In collaboration with Thales, Google Cloud brings customers the next level of control for their cloud environments with External Key Manager. Watch the video to learn more.

Unbound and Cloud EKM

In collaboration with Unbound, Google Cloud brings customers the next level of control for their cloud environments with External Key Manager. Check out the video to learn more.

For more information on Cloud EKM, including how to get started, check out the documentation.

BeyondProd: How Google moved from perimeter-based to cloud-native security

At Google, our infrastructure runs on containers, orchestrated by Borg, the precursor to Kubernetes. Google’s architecture is the inspiration and template for what’s widely known as “cloud-native” today—using microservices and containers to enable workloads to be split into smaller, more manageable units for maintenance and discovery.

Google’s cloud-native architecture was developed with security prioritized as part of every evolution. Today, we’re introducing a whitepaper about BeyondProd, which explains the model for how we implement cloud-native security at Google. As many organizations seek to adopt cloud-native architectures, we hope security teams can learn how Google has been securing its own architecture, and simplify their adoption of a similar security model.

BeyondProd: A new approach to cloud-native security

Modern security approaches have moved beyond a traditional perimeter-based security model, where a wall protects the perimeter and any users or services on the inside are fully trusted. In a cloud-native environment, the network perimeter still needs to be protected, but this security model is not enough—if a firewall can’t fully protect a corporate network, it can’t fully protect a production network either. In the same way that users aren’t all in the same physical location or using the same device, developers don’t all deploy code to the same environment. 

In 2014, Google introduced BeyondCorp, a network security model for users accessing the corporate network. BeyondCorp applied zero-trust principles to define corporate network access. At the same time, we also applied these principles to how we connect machines, workloads, and services. The result is BeyondProd.

In BeyondProd, we developed and optimized for the following security principles:

  • Protection of the network at the edge

  • No inherent mutual trust between services

  • Trusted machines running code with known provenance

  • Choke points for consistent policy enforcement across services, for example, ensuring authorized data access

  • Simple, automated, and standardized change rollout, and

  • Isolation between workloads

BeyondProd applies concepts like mutually authenticated service endpoints, transport security, edge termination with global load balancing and denial-of-service protection, end-to-end code provenance, and runtime sandboxing.

Altogether, these controls mean that containers and the microservices running inside them can be deployed, communicate with one another, and run next to each other, securely, without burdening individual microservice developers with the security and implementation details of the underlying infrastructure.

Applying BeyondProd

Over the years, we designed and developed internal tools and services to protect our infrastructure following these BeyondProd security principles. That transition to cloud-native security required changes to both our infrastructure and our development process. Our goal is to address security issues as early in the development and deployment lifecycle as possible—when addressing them is less costly—and to do so in a way that is standardized and consistent. It was critical to build shared components, so that the burden was not on individual developers to meet common security requirements. Rather, security functionality requires little to no integration into each individual application, and is instead provided as a fabric that envelops and connects all microservices. The end result is that developers spend less time on security while achieving more secure outcomes.

If you’re looking to apply the principles of BeyondProd in your own environment, there are many components, through Google Kubernetes Engine, Anthos, and open source, that you can leverage to achieve a similar architecture.

In the same way that BeyondCorp helped us to evolve beyond a perimeter-based security model, BeyondProd represents a similar leap forward in our approach to production security. By applying the security principles in the BeyondProd model to your own cloud-native infrastructure, you can benefit from our experience, strengthen the deployment of your workloads, know how your communications are secured, and how they affect other workloads.

To learn more about BeyondProd, as well as Binary Authorization for Borg, one of the controls we use in the BeyondProd model, head on over to the Google security blog.