Admin Insider: These 6 steps can help you block security threats in G Suite

Businesses are increasingly at risk of phishing and malware attacks, with email continuing to be one of the leading attack vectors. Because phishers are constantly changing their tactics, these threats can be hard to detect, and many times, admins don’t have the required controls in place to protect their organizations. 

We believe that security solutions need to be three things in order to be effective: 1.) proactive, and have some default protections in place, 2.) intelligent, and adaptable to the ever-changing threat landscape, and 3.) simple to use, for both end users and admins. G Suite was designed with these principles in mind; we’ve built protections to help admins and users detect and remediate attacks. Here’s an example of how you can recognize and help stop threats in their tracks using G Suite security tools.

1. Recognize the problem. 

Managing a large organization isn’t easy. It’s not uncommon for IT admins to receive thousands of alerts each day! With this volume, it’s no surprise that almost half of these alerts are not investigated. It can be hard to separate the signal from the noise, which is where G Suite can help. The G Suite Alert Center provides admins with a single, comprehensive view of the most important security updates.

Within the alert center, admins can receive alerts and actionable insights about the latest security threats including phishing, malware, suspicious account, and suspicious device activity. For example, a suspicious device activity alert may include specifics like device ID and serial number. This lets the admin figure out that, say, a Pixel 2 owned by an employee has displayed some suspicious behavior.


2. Evaluate and take action right away.

Security remediation workflows can be quite cumbersome. At the end of the day, security tools are only useful if you can easily deploy them—if they are difficult or complicated, they’re of limited use. In G Suite, the alert center and the security center workflows operate together seamlessly.

In this example, we can click “Investigate alert,” which directly loads the investigation tool within the security center. The investigation tool allows admins to identify, triage, and take action on security issues in their domain. As a next step, admins can perform bulk actions (organization-wide) to delete malicious email and examine file sharing to spot and stop potential data exfiltration. Let’s walk through what that might look like for an organization.


3. Investigate potential causes. 

It’s important to try to understand the root of the problem, so that you can avoid future breaches. Remember what we said about email being the leading attack vector? Often attacks can come via email, and sometimes users can click on links despite warning banners. 

By doing a quick pivot and search within the investigation tool, one can see that the suspicious email came in with the subject line “Time sensitive – update your 401k contact details” which the employee clicked on. That could be a potential reason for the device being compromised.


4. Determine the reach of your security incident. 

Now that you’ve figured out how this user got compromised, your next worry should be whether more users have been affected. The investigation tool is the place to figure this out. Within the same window, you can pivot and find other users within the organization who have received the same suspicious email. In this instance, it looks like we have a bigger problem at hand.


5. Limit attack proliferation.

Now that we know that 18 other employees received the same suspicious email, we need to do some firefighting. Since we caught this attack early (thanks to the visibility that G Suite controls provide), it’s likely that most employees haven’t opened the email yet. Within the investigation tool, super admins can proactively delete suspicious emails from user inboxes. This action is logged and can be audited by the organization if needed. Another option might be to suspend those user accounts just to be safe, until the admin has more time to investigate.


6. Finally, explore other apps or data that have been affected. 

Having access to accurate information quickly is critical in these situations. For example, an admin may want to find out right away if any data has been exfiltrated because of these attacks. The investigation tool allows the admin to look at all files that have changed visibility from “internal only” to “externally visible” in just a few clicks. Once this is known, the admin can limit file sharing of these files within the organization or remove access for a certain set of users.


Next steps

This is one of the many ways you can help solve security issues using G Suite’s proactive, intelligent tools. Learn more security tips from Google experts by watching these G Suite security videos, or join us for our quarterly Google Cloud Security Talks coming up in November.

AWS Client VPN is now available in Asia Pacific (Seoul), Canada (Central), EU (Stockholm), North America (Northern California) regions

AWS Client VPN is now available in the Asia Pacific (Seoul), Canada (Central), EU (Stockholm), and North America (Northern California) regions. These additions expand on existing region support in Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), EU (Frankfurt, Ireland, London), and North America (N. Virginia, Ohio, Oregon). With this launch, AWS Client VPN is now available in a total of 14 regions.

AWS IoT Device Tester v1.5.0 for Amazon FreeRTOS now supports Amazon FreeRTOS 201910.00

AWS IoT Device Tester for Amazon FreeRTOS now supports Amazon FreeRTOS 201910.00. With this release, silicon vendors can qualify their development boards with secure elements for the AWS Partner Device Catalog using the latest Amazon FreeRTOS. The latest AWS IoT Device Tester also brings improvements to the SecureSockets and WiFi test groups by providing the ability to configure the preferred port for the echo server.

Beyond the Map: How we optimize maps data for our customers

Editor’s Note: This post comes to you from Eli Danziger as part of the Beyond the Map series, which gives you a behind-the-scenes look at how we map the world to help our customers build businesses and experiences for their end users.

Over the past few months we’ve highlighted a variety of ways that we map the world to help people explore and get things done–and how that same data enables customers to build location-based experiences and businesses. As we hope you’ve noticed, we’re constantly working on new ways of mapping the world for our users. But at the same time we’re also working directly with our customers to identify areas where we can improve our maps data to better power their businesses. In this Beyond the Map installment, we’re taking a closer look at the work we’ve done in Southeast Asia to ensure our rides and delivery customers are able to meet their business objectives and exceed their users’ expectations.

Data quality is important to all our customers, but it’s particularly important to customers that power on-demand services. Whether it’s a ride or a food delivery, end users expect accurate ETAs, to be picked up where they actually are, and to have their dinner delivered without a hassle. An inaccurate ETA that makes someone late for work, or a delivery snafu that puts a cold dinner on someone’s table, can be the difference between whether a user returns to the service or opts for an alternative. That’s why we worked with rides and deliveries customers in Southeast Asia to understand their unique needs, and why we’ve applied many of the techniques described in Beyond the Map to improve our regional maps data and help our partners drive immediate business results.

Adding more roads to power more rides 
The most obvious and impactful way to help our customers power more rides is to add more roads to our data. We’ve doubled down on road creation in Southeast Asia, focusing not just on adding more roads, but on adding roads that are accurate, connected, and highly local, so that trips can be completed down to the last mile and to the many new points of interest and addresses we’ve added to the region. So far we’ve added more than 80,000 kilometers of these local roads, with much of that coming from 2-wheeler routes. Previously, we tailored our road network for cars. But when we saw the need to support 2-wheeler networks, we made sure our maps cover these roads that are so important for getting around locally and completing the last mile of local trips. As we shared in our last Beyond the Map post, we’re committed to continuing to add 2-wheeler roads throughout the region.

Before (left) and after (right) adding new roads in Pekanbaru, Indonesia

Improving geocodes for more efficient pickups 
Through customer feedback, we came to understand that our reverse geocoding results did not always return all of the small businesses and points of interest that were located in certain regions. Using a variety of the technologies and tactics we’ve explained so far in Beyond the Map, we’ve added millions of addresses and points of interest in Southeast Asian countries. By incorporating those into our reverse geocoding database, we were able to significantly increase coverage in areas where addresses are sparse. This coverage improvement has helped us ensure that the vast majority–more than 95 percent of reverse geocoding calls–return a result that’s close to the request. With improved reverse geocoding results, users can better find the places they’re looking for, making it easy for them to get picked up, dropped off, or have food delivered to the right address.

Before (left) and after (right) adding more addresses and places of interest in Mandalay, Myanmar. The more places mapped, the greater the likelihood we can reverse geocode a position to the right place.

In the on-demand rides and deliveries space and beyond, we’re always working to understand our customers’ unique needs and challenges so we can capture new maps data, improve existing data, and build the right products to help them succeed. From mapping narrow roads with Street View 3-wheelers to extracting information from imagery using machine learning, we’re committed to mapping the world and reflecting real-world changes with the shortest possible latency.

For more information on Google Maps Platform, visit our website.

How GCP helps you take command of your threat detection

Why do we keep talking about security all the time? Why hasn’t anyone just gone and fixed it?

You’ve probably heard these questions, whether from your leadership, a board member, or just from friends. Then you labor to explain why security in the cloud is so complex and challenging, and the constant arms race, and eventually their eyes glaze over. But you’re right: It is complex, and it can be hard.

As a security leader today, you likely spend a lot of your time focused on getting information: what’s going on, what new vulnerability just surfaced, what threats are present in your environment, and how to remediate them. And you probably already have a few dozen tools in place to measure, analyze, collect, and search through your data. 

Right now, with your set of existing tools, you hopefully have a good sense of your on-prem systems and your overall attack surface. But across all those datasets, you’re juggling incongruous access patterns, stale data, and cluttered information coming from disparate tools—it’s not organized by topic, risk, or project. So unifying and reconciling the data sources to really give you a full picture just doesn’t happen.

Then you add cloud systems to the mix, and it’s a whole ‘nother ball of wax. To wrap up National Cybersecurity Awareness Month, we wanted to detail a few security features we’ve developed—most recently Event Threat Detection, available today in beta—and highlight some information that can help you reduce the complexity of your organization’s security, and improve your security posture.

Gain visibility and control, and prevent threats
With Cloud Security Command Center (Cloud SCC), Google brings a flexible platform to give you wide visibility and rapid response capabilities. Beyond just risk and vulnerability management, Cloud SCC focuses on active defense, showing you threats that have been detected and the path to greater holistic security in your cloud resources. It integrates with existing partner security solutions you already use and Google Cloud security tools. And its API is accessible to you and your vendors, so any additional data is easy to integrate.


Cloud SCC provides a centralized dashboard for threat prevention, detection, and response, with views of your current state that you can change based on your needs. For example, you can focus on assets to get a comprehensive list of every firewall, network, disk, bucket, and so on in your organization. 

You can also orient your view based on findings (results) of what’s wrong in your Google Cloud Platform (GCP) environment. We recently launched the Vulnerabilities dashboard to show findings from Security Health Analytics, an integrated security product that helps you identify misconfigurations and compliance violations in your GCP resources and take action.

Reduce threat exposure with Event Threat Detection
Reducing your exposure to threats goes hand-in-hand with being able to respond quickly to those threats that are present in your environment. Today, we’re excited to announce the beta of Event Threat Detection, a security product that integrates into Cloud SCC, and was inspired by how Google protects itself. We wanted to extend our scale and threat intelligence to help you protect your environment and improve your security posture. 

Event Threat Detection helps you detect threats in your logs and send high-risk threats to your SIEM (Security Information and Event Management system) for further investigation. It also can help you save time and money by focusing your attention on the most worrisome cloud-based threats. 

Due to the growth in cloud computing, we’ve seen an increase in the number of customers running VPC Flow logs, Cloud DNS logs, Cloud Audit logs, and syslog delivered via the fluentd agent on GCP. Event Threat Detection uses Google’s threat intelligence to surface threats present in these logs, including anomalous IAM grants, malware, cryptomining, outgoing DDoS, and brute-force SSH. 

When Event Threat Detection finds a threat in your logs, it shows up as a finding on the Cloud SCC dashboard. If you need to further analyze any of these threats, you can send them to your SIEM, saving time and money because Event Threat Detection has already determined the high-risk logs you need to investigate further. 

Event Threat Detection integrates with Cloud Functions to make it easier for you to export findings to your SIEM of choice. You can also use Cloud Functions to automate responses and changes to Event Threat Detection findings. See the video below for more information.

In this video, learn about Event Threat Detection, a service within Cloud Security Command Center that can alert you when a threat is detected in your logs running in GCP.
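As a rough illustration of the Cloud Functions integration mentioned above, here is a minimal sketch of a background function that forwards findings to a SIEM. It assumes findings have been routed to a Pub/Sub topic that triggers the function; the endpoint URL and category names are hypothetical:

```python
import base64
import json
import os

import requests  # add "requests" to the function's requirements.txt

# Hypothetical HTTP collector endpoint exposed by your SIEM.
SIEM_ENDPOINT = os.environ.get("SIEM_ENDPOINT", "https://siem.example.com/collect")

# Illustrative set of categories you might consider high risk.
HIGH_RISK_CATEGORIES = {"Malware", "Cryptomining", "Brute Force SSH", "Anomalous IAM Grant"}


def forward_finding(event, context):
    """Pub/Sub-triggered Cloud Function that relays high-risk findings to a SIEM."""
    finding = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    if finding.get("category") in HIGH_RISK_CATEGORIES:
        requests.post(SIEM_ENDPOINT, json=finding, timeout=10)
```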

Respond to threats
Once threats have been detected, the final step is, obviously, responding to them. To help speed up your response, you can set up automated actions for when threats are detected. When Cloud SCC detects an anomaly or an active threat, you can have it change a VM configuration, perhaps cutting the VM off from other parts of your network. You can also change firewall rules automatically. Using these events to trigger Cloud Functions, you can set up any response you like, fully automated. At the same time, you can send incident metrics and data to Stackdriver or your own SIEM to make sure your incident response team has everything they need.

Together, these features give you the power to structure and organize the data you gather, which is key to making cloud security work for large, mature organizations. Cloud SCC lets you create tags for items to assist with incident response or project-based inquiry, and to aid in custom dashboard creation. The constant goal: give you the information you need quickly, so you can take the necessary action.

Cloud SCC details
Now that you have a high-level view of how Cloud SCC and Event Threat Detection can help your organization become more secure, here are some other resources highlighting Google security features that integrate into Cloud Security Command Center, how they work, and how they can help you improve your security posture:

These blogs feature step-by-step instructions with screenshots, and each has a companion video. Check them out, and let us know if there are any other issues and solutions you’d like us to detail.  

Get started
To get started with Cloud Security Command Center, watch our video below.

In this video, learn the five-step process of setting up Cloud Security Command Center to prevent, detect, and respond to threats.

If you’re new to GCP and want to give these products a try, simply start your free GCP trial, enable Cloud SCC, and turn on our integrated security products, like Event Threat Detection. If you’re an existing Cloud SCC customer, just enable Event Threat Detection and our other security products from Security Sources in Cloud SCC. For more information on Event Threat Detection, read our documentation.

Keep Parquet and ORC from the data graveyard with new BigQuery features

Parquet and ORC are popular columnar open source formats for large-scale data analytics. As you make your move to the cloud, you may want to use the power of BigQuery to analyze data stored in these formats. Choosing between keeping these files in Cloud Storage and loading your data into BigQuery can be a difficult decision, leading to your data platform looking more like a spooky data graveyard where data goes to disappear. However, it’s now possible to merge the worlds of the living and the undead: your old columnar-format files in Cloud Storage with BigQuery’s Standard SQL interface.

We’re pleased to announce that BigQuery has conjured up (OK, launched) beta support for querying Parquet and ORC file formats in Cloud Storage. This new feature joins other federated querying capabilities from within BigQuery, including storage systems such as Cloud Bigtable, Google Sheets, and Cloud SQL, as well as Avro, CSV, and JSON file formats in Cloud Storage—all part of BigQuery’s commitment to building an open and accessible data warehouse.

Federated queries allow you to access real-time data from many different sources with one query, helping you do advanced analytics faster, thus bringing you the power of BigQuery analysis to your data, wherever it is. You don’t have to move any data, and you can be sure of the integrity of the data you’re querying—no evil twin copies lurking about. In addition, you can now query and load Hive partitioned tables stored in Cloud Storage from within BigQuery.

This video demonstrates a newly-released set of BigQuery features! BigQuery now supports querying Parquet and ORC files stored in GCS, and BigQuery is now able to understand Hive-partitioned tables in GCS.

You’ll find that using these new features builds a bridge between your datasets and can help you be more flexible. Your data stays in your preferred open source formats in Cloud Storage and you can use BigQuery’s ANSI Standard SQL for analytics and data processing. This means: 

  • BigQuery is able to take full advantage of the columnar nature of Parquet and ORC to efficiently project columns. 

  • BigQuery’s support for understanding Hive Partitions scales to 10 levels of partitioning and millions of partition permutations. 

  • BigQuery is able to efficiently prune partitions for Hive partitioned tables.
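To make those capabilities concrete, here is a minimal sketch that uses the BigQuery Python client to define a permanent external table over Hive-partitioned Parquet files in Cloud Storage and query it with Standard SQL. The project, dataset, and bucket names are hypothetical, and Hive partitioning options require a reasonably recent google-cloud-bigquery client:

```python
from google.cloud import bigquery

client = bigquery.Client()

# External table definition over Parquet files in Cloud Storage.
external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://my-data-lake/events/*"]

# Tell BigQuery the files use a Hive-style partition layout, e.g.
# gs://my-data-lake/events/dt=2019-10-01/country=id/part-000.parquet
hive_opts = bigquery.external_config.HivePartitioningOptions()
hive_opts.mode = "AUTO"
hive_opts.source_uri_prefix = "gs://my-data-lake/events/"
external_config.hive_partitioning = hive_opts

# Register a permanent external table, then query it like any other table;
# filtering on a partition key lets BigQuery prune partitions instead of scanning everything.
table = bigquery.Table("my-project.my_dataset.events_external")
table.external_data_configuration = external_config
client.create_table(table, exists_ok=True)

query = """
    SELECT country, COUNT(*) AS event_count
    FROM `my-project.my_dataset.events_external`
    WHERE country = "id"
    GROUP BY country
"""
for row in client.query(query).result():
    print(row.country, row.event_count)
```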

Using federated queries to avoid data graveyards
We were fortunate to have nearly two hundred customers participate in the alpha release of this feature, and their feedback and input were invaluable throughout the development and release process. In this blog post, you’ll hear about the early impact on three of those customers: Pandora, Truecaller, and Cardinal Health.


“At Pandora, we have petabytes of data spread across multiple Google Cloud storage services; accordingly, we expect BigQuery’s federated query capability to be a useful tool for integrating our diverse data assets into a unified analytics ecosystem,” says Greg Kurzhals, product manager at Pandora. “The support for Parquet and other external data source formats will give us the ability to choose the best underlying storage option for each use case, while still surfacing all our data within a centralized, BigQuery-based data lake optimized for analytics and insights.” Gaining this flexibility eliminated some difficult architectural trade-offs, helping to simplify the design process and ultimately facilitate the creation of an efficient, accessible data structure in the cloud for the music services company.

When Cardinal Health started their journey to the cloud, they chose a lift-and-shift strategy, migrating all of their Hadoop jobs to run in Cloud Dataproc. “We also wanted to leverage cloud-native options like BigQuery but without necessarily rewriting our entire ingestion pipeline,” says Ken Flannery, senior enterprise architect at Cardinal Health. “We needed a quick and cost-effective way to allow our users the flexibility of using different compute options (BigQuery or Hive) without necessarily sacrificing performance or data integrity. Adding ORC federation support to BigQuery was exactly what we needed and was timed perfectly for our migration.” 

As soon as Cardinal Health started migrating jobs to Cloud Dataproc, the same datasets that users were already querying from Cloud Dataproc were now simultaneously available to them in BigQuery. “ORC federation helped us take advantage of BigQuery much sooner than otherwise possible and gave us the needed flexibility of choosing when and how much of BigQuery we would use,” says Flannery.

Software company Truecaller was using Hive/Spark to query data before it tested external table support on the columnar format—but it was slower and cost twice as much. They were working on onboarding teams to BigQuery quickly, so they decided to try external tables instead of managed tables. “We were impressed by how convenient it was: There is zero setup cost and it is incredibly simple,” says Juliana Araújo, data product manager at Truecaller. “All we had to do was set the Cloud Storage URL path to our data and make a permanent table that references the data source. Now we can have our EDWH and data lake under the same stack.

“The greatest benefit of using BigQuery external tables for Truecaller is that it provides unprecedented opportunity to do ad-hoc analysis on enormous datasets that we don’t want to store in BigQuery and are too big for usual Hadoop processing.” This has saved hours of time for Truecaller. For example, in one use case, querying external tables was 30 times faster than querying Hive/Spark in the Truecaller data platform.

With the release of querying Parquet and ORC files in Cloud Storage, you can continue to use Cloud Storage as your storage system and take advantage of BigQuery’s data processing capabilities. Moreover, BigQuery’s managed storage is able to provide a higher level of automation, performance, security, and capability—something to consider as you move forward.

Loading Hive partitioned data into BigQuery
In addition to the native functionality provided by BigQuery, you may take advantage of the convenient command-line open source utility Hive External Table Loader for BigQuery. This utility aids in loading Hive partitioned data into BigQuery.

You may want to use this tool if:

  • Your Hive partitioned data does not have a default Hive partitioned layout encoding all partition keys and values

  • Your Hive partitioned data does not share a common source URI prefix for all URIs and requires metastore for partition locations

  • Your Hive partitioned data relies on metastore positional column matching for schema detection

Commitment to open data warehousing
BigQuery’s original columnar file format ColumnIO inspired the open source ecosystem to develop open columnar file formats, including Parquet. Today, dozens of exabytes are stored in Parquet across organizations of all shapes and sizes. This data format has come full circle: Parquet is now a first-class citizen of the BigQuery ecosystem. We’re pleased to be able to continue our commitment to open source with this integration. 

“In 2012, I worked on a side project that was going to become the basis for Apache Parquet: I implemented the column-striping algorithm from ColumnIO based on the Dremel paper,” says Julien Le Dem, vice president, Apache Parquet. “At the time, Google had recently made that technology available through BigQuery. I didn’t imagine that one day they would support Parquet, integrating the work of its contributors. That’s the magic of open source!”

Learn more about staying in the land of the living with BigQuery 
For more information and practical examples on how to take advantage of Parquet, ORC, and Hive partitioned data, head over to the documentation. As always, you can try BigQuery with our free perpetual tier of 1TB of data processed and 10GB of data stored per month. Keep your data well away from the land of the undead with our rich ecosystem across different file formats and storage types.

Use these Chrome Enterprise security resources to better secure endpoints

Editor’s note: We’re nearing the end of Cybersecurity Awareness Month, which is a great time to reflect on your organization’s upcoming security goals, particularly when it comes to endpoint security. In this post, we’ll hear about some great (and free!) resources that can be helpful if you’re looking to establish an informed, endpoint security strategy for your organization.

According to IDC, approximately 70% of security breaches originate from endpoints. Companies that lead the way in modern OS and browser security are the ones that understand and adopt tools that provide built-in, proactive management controls. Google has always prioritized endpoint security with Chrome Enterprise, which is why businesses like HackerOne and Blue Cross Blue Shield of North Carolina use Chrome devices and Chrome Browser—they’re secure by design.


Gartner recently conducted a comprehensive security review of leading operating systems and device platforms in its “May 2019 Mobile OSs and Device Security: A Comparison of Platforms” report. Based on an evaluation of “the core OS security features that are built into various mobile device platforms, as well as enterprise management capabilities,” Chrome OS received strong ratings for 27 out of 30 criteria.


We’ve put proactive protections in place with tools like Safe Browsing to help deter users from falling for harmful attacks. And with features like sandboxing, admins can rest assured that endpoints in their fleet can mitigate the impact of an attack if one occurs. With Chrome Enterprise, businesses have access to hundreds of user, browser, and device policies that give administrators the oversight they need to keep their business data secure.

If you are looking to evaluate your endpoint security strategy, there are some important considerations to bear in mind. 

  • First, are your endpoints secure by design? In a modern computing landscape with threats coming from every direction, it’s important that your browser and OS both proactively deter end users from falling for attacks and limit the impact if one does succeed.

  • Next, do your endpoints work with diverse application ecosystems to ensure applications are trusted? Apps can require permissions and gain unintended access to corporate data, so it is important to provide IT with control to help ensure harmful apps stay out of the hands of users.

  • Finally, are your endpoints positioned to eliminate and protect against current threats? Security breaches are on the rise, increasing by as much as 27% in recent years, and bad actors continue to use common methods prevalent on legacy tools such as malware, ransomware and phishing attacks. 

Improve your organization’s security with Chrome Enterprise 
Chrome OS and Chrome Browser have built-in security features that give admins the control they need to help keep endpoints secure and users productive. As you consider what endpoint option is best for your business, here are some great resources that can help.

Chrome Enterprise’s innovative cloud-native approach to security proactively helps to protect enterprise data, while keeping your business safe. To learn more, visit our website or check out the Chrome Enterprise release notes for the latest details on recent Chrome Enterprise security enhancements.

Protecting your GCP infrastructure at scale with Forseti Config Validator part three: Writing your own policy

No two Google Cloud environments are the same, and how you protect them isn’t either. In previous posts, we showed you how to use the Config Validator scanner in Forseti to look for violations in your GCP infrastructure by writing policy constraints and scanning for labels. These constraints are a good way for you to translate your security policies into code and can be configured to meet your granular requirements. And because policy constraints are based on Config Validator templates, it’s easy to reuse the same code base to implement similar, but distinct constraints.

In this post, you’ll learn how to write your own custom template (and test it with sample constraints) to get you started writing your own security policies as code.

A closer look at template constraints 

First, let’s examine a sample constraint that implements the GCPStorageLocationConstraintV1 template. This template lets you define where in your cloud environment your Cloud Storage buckets should live.

Let’s take a look at this constraint:
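Here is a minimal sketch of what such a constraint file can look like. The field layout follows the policy-library conventions, and the match target syntax and location value are illustrative rather than canonical:

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPStorageLocationConstraintV1
metadata:
  name: allow_some_storage_location
spec:
  severity: high
  match:
    target: ["organizations/**"]   # the entire organization
  parameters:
    mode: "allowlist"
    locations:
      - "asia-southeast1"
    exemptions: []
```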

As you can see, this constraint implements the GCPStorageLocationConstraintV1 template (kind). Here is what we can tell from this constraint file:

  • Its name is allow_some_storage_location. This is what will show in your reports for each identified violation.

  • The violations it raises will be marked as high severity.

  • It applies to the entire organization (target).

  • It has three parameters (mode, locations and exemptions).

Another important point is the target object for the constraint. This lets you specify which resources in your organization hierarchy should comply with the constraint. In this example, all resources should comply, but in some cases you may want to limit the constraint to specific folders and/or projects. 

You can specify more than one target (it’s an array), and by the same logic, you can use the exclude object to specifically prevent the constraint from targeting certain resources.

Now, what about templates?

Let’s keep digging into this example and look at the GCPStorageLocationConstraintV1 template. For simplicity, we’ll look at it in two main parts.

GCPStorageLocationConstraintV1 Template (top part):
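The following is a simplified sketch of the top of that template. The nesting is abridged for readability; the actual file in the policy-library nests some of these keys a level or two deeper:

```yaml
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: gcp-storage-location-v1
spec:
  names:
    kind: GCPStorageLocationConstraintV1          # the kind referenced by constraint files
    plural: gcpstoragelocationconstraintsv1
  validation:
    openAPIV3Schema:
      properties:
        # template parameters are declared here (see below); use {} if none are needed
```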

Here you can see that there are several top-level keys that describe the template. The most important ones to focus on at this point are:

  • kind: indicates that this file is a constraint template

  • metadata > name: the template’s common name

  • spec: the definition and documentation of your specific template

  • spec > names > kind and spec > names > plural: which template to use (the kind in your constraint file)

  • spec > validation > openAPIV3Schema > properties: the location of your template’s parameters (if no parameters are needed, use {} as the value)

Now let’s take a look at how to define your parameters in this template file (again, under the properties section):
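A sketch of those properties might look like this (the descriptions are paraphrased, and the locations parameter is left out to keep the example short):

```yaml
properties:
  mode:
    type: string
    enum: ["denylist", "allowlist"]
    description: "Whether the listed locations are treated as a denylist or an allowlist."
  exemptions:
    type: array
    items:
      type: string
    description: "Bucket names that are exempt from this constraint."
  # the locations parameter (also an array of strings) is omitted here for brevity
```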

You can use the OpenAPI v3 format to describe your parameters, meaning you can be quite precise about them. Here, for instance, you can see that:

  • The first parameter is “mode”, which is a string and, more specifically, an enum (a fixed list of valid values). Its value can be either “denylist” or “allowlist”. Note that no default value is specified, so you should always pass a value when using this template, just to be safe.

  • The second parameter is “exemptions”, which is an array (list) of strings.

In all cases, the description field lets you know what values should be passed to these parameters when calling this template in your constraint.

Finally, the last part of the template is the rego rule, written in the language that lets you write custom policies for the Config Validator tools, including terraform-validator:
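Structurally, that last part is just a rego block embedded in the template YAML, along these lines (the target key name follows the policy-library convention and should be treated as illustrative):

```yaml
  targets:
    validation.gcp.forsetisecurity.org:
      rego: |
        # the contents of validator/storage_location.rego end up here,
        # copied in by the "make build" command described below
```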

For the template to be valid, you need to include the actual rego rule that Config Validator will evaluate when you deploy your constraint (either in Forseti or with terraform-validator, as we discuss in the next article). 

You can find the code for the rego rule in the validator/storage_location.rego file. Then, use the “make build” command to automatically copy this rule over to your template when the code is ready.

Now you have a clearer sense of what a template is: a YAML file that describes the template itself, the inputs it needs (if any), and finally the rego rule that should be applied whenever the template is called by Config Validator. 

Next, let’s go over rego (and OPA), and how to get started writing your own rules that will become the core of your template.

Introduction to rego and OPA

The Open Policy Agent (OPA) is a framework that lets you write policies that can be reused across tools. This is a good standard to use if you want to ensure that your policies only need to be written once, regardless of what will consume them in the end. 

In our case, this is the main reason why the template rules discussed in this post (a.k.a. policies) can be interpreted the same way by both the Forseti config_validator scanner and the terraform-validator tool, as we will see in the next article.

The most challenging part about rego for most developers is that it’s a declarative language, but it looks/feels like an imperative one. This can lead to some confusion when debugging rules that you need to write.

There are a lot of good resources to help you get started writing rego:

One tool that I use often to collaborate with other developers is the online rego sandbox that lets you write rego code and test the output based on your inputs. You can also share your examples with others easily.

So, how does this relate to our template? Well, if you look at all the other templates in the policy-library, you will notice that they all define a special rego rule in their definition:
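The rule looks roughly like this; the placeholder message and metadata bindings are shown only so the sketch is complete:

```rego
deny[{
    "msg": message,
    "details": metadata,
}] {
    # some rego logic that evaluates to true when the asset is in violation

    message := sprintf("%v is in violation.", [input.asset.name])
    metadata := {"resource": input.asset.name}
}
```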

This is where the magic happens. This deny rule is what lets Config Validator know whether or not a given asset (like a GCP resource) should be marked in violation. 

If the body of the rule (# some rego logic) evaluates to true given the input it receives (your parameters plus the asset to evaluate), then the resource will be marked as a violation: the deny rule will be evaluated to true.

When it evaluates to true, the deny rule returns some metadata (msg and details). In our case, it will pass along the values of the message and metadata variables if they have been set (or error out if not).

Some key points to remember when writing rego:

  • This is not an imperative language. The rule will be evaluated in parallel as much as is feasible, and the dependencies between the instructions are discovered and followed at runtime. 

  • Some operators have special behaviors, like “=”, “:=” and “==” (learn more here and here)—make sure you understand the difference. When in doubt, use “:=” for assignments (unless you really mean to use “=”).

  • There is a limited number of functions available, but do use them, as they will save you a lot of time.

  • The deny rule can be tricky because it works seemingly backward to how our brains usually process information (e.g., this deny rule will be true if its body is evaluated to true). Most programmers find it easier to write a positive logic function and use the “not” operator when calling the function to reverse its outcome in the rule.

  • You can use the trace and sprintf functions to debug your rego logic, but the trace output only shows up if one of your tests fails. If there are errors in your code (such as syntax or runtime errors), your traces only show up if they were evaluated before the error, which might be tricky to predict.

I hope I did not scare you too much about rego, but my point is that it’s best to go slowly when writing your rule and to validate that it behaves as expected as early as possible (do not write hundreds of lines at once and only start testing at the end).

Writing your own custom rule

For this section, I will use a template that I recently published to the policy-library repository. The goal of this template is to allow a user to specify which resource types are allowed (whitelist) or denied (blacklist) in their GCP infrastructure (for instance, within a folder). This kind of policy is quite in demand by financial or insurance companies that require additional guardrails.

Let’s get started by writing your new rego rule in the validator folder, in a file named allowed_resource_types.rego. It should look like this:

validator/allowed_resource_types.rego:
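Here is a sketch of that starting point. The package name and the lib import follow the policy-library conventions and may differ slightly from the published file:

```rego
package templates.gcp.GCPAllowedResourceTypesConstraintV1

import data.validator.gcp.lib as lib

deny[{
    "msg": message,
    "details": metadata,
}] {
    constraint := input.constraint
    lib.get_constraint_params(constraint, params)
    asset := input.asset

    # no filtering logic yet: every asset is reported as a violation

    message := sprintf("%v is in violation.", [asset.name])
    metadata := {"resource": asset.name, "parameters": params}
}
```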

You can see we have some basic logic already in the rule. You retrieve the constraint object that was passed by Forseti or terraform-validator as an input to your rule, and you get the constraint parameters using the get_constraint_params function. 

This function is defined in the validator library, which is also in the policy-library repository (under the lib folder). The parameters that you retrieve from the constraint are accessible in the params variable.

At the same time, make sure that you have an input.asset object passed to your rule, which is the GCP resource that you need to evaluate in the rule. This asset object should reflect the GCP Cloud Asset Inventory export format, as mentioned in earlier articles.

Finally, set the message and metadata variables. These will be used only if the body of the deny rule evaluates to true.

Writing your first test for your template

Now it’s time to test your template with a brand new constraint, in a new folder: validator/test/fixtures/allowed_resource_types

Following the contributing guidelines, create two subfolders for your tests:

  • assets: this will contain all of your test data that you get from a Cloud Asset Inventory export, or from other template test data (these are both json objects)

  • constraints: this will contain all of the test cases for your template. This is a way for you to test various inputs to your template and make sure it behaves as expected against your test data.

Now, create a new test constraint in the constraints folder:

validator/test/fixtures/allowed_resource_types/constraints/basic/data.yaml:
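A bare-bones test constraint for this template might look like this (the metadata name and match target are illustrative):

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPAllowedResourceTypesConstraintV1
metadata:
  name: allowed-resource-types-basic
spec:
  severity: high
  match:
    target: ["organizations/**"]
  parameters: {}
```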

This example simply uses the template you just wrote, with no special parameters. Now you can test your almost-empty rule by creating a test file in the validator folder (with _test.rego as a suffix):

validator/allowed_resource_types_test.rego:
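A sketch of that test file follows; the import paths are assumptions based on the fixture layout described above:

```rego
package templates.gcp.GCPAllowedResourceTypesConstraintV1

import data.test.fixtures.allowed_resource_types.assets as fixture_assets
import data.test.fixtures.allowed_resource_types.basic.constraints as fixture_constraint

# Evaluate the deny rule against every test asset using the basic constraint.
find_violations[violation] {
    asset := fixture_assets[_]
    issues := deny with input.asset as asset
                   with input.constraint as fixture_constraint
    violation := issues[_]
}
```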

The key points to remember for this test file are:

  • Only the functions with the “test_” prefix will be executed as tests

  • You can import as many data sets and test constraints as you want (see the import statements at the top)

  • When you import a constraint (YAML) or a test data set (JSON), you can retrieve it using the directory structure where it lives (for instance, for the above constraint, you can import data.test.fixtures.allowed_resource_types.basic.constraints, which maps to the data.yaml file location).

Getting your mock data

As mentioned earlier, Config Validator only supports resources exported by Cloud Asset Inventory. So a good way to get mock/testing data for a new template is to run a Cloud Asset Inventory export on your existing infrastructure (assuming you already have resources against which you want to test your template). Cloud Asset Inventory supports these resource types. Another option is to use mock data from existing policies.

For my test data, I use only one data.json file, but feel free to have separate data sets for separate use cases. You can find my latest test data set here.

For the context of this article, I have one resource of each of the following:

  • storage.googleapis.com/Bucket

  • compute.googleapis.com/Instance

  • compute.googleapis.com/Disk

  • google.bigtable.Instance

  • sqladmin.googleapis.com/Instance

Now you’re ready to test your new template. The test function verifies that you have five violations at this point (count == 5), since the dataset currently comprises five resources (the rule flags everything as a violation at this point):
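Using the find_violations helper sketched above, the test can be as simple as:

```rego
test_initial_rule_flags_all_five_assets {
    violations := find_violations
    count(violations) == 5
}
```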

Adding logic to your rule

Ultimately, your goal is to allow (whitelist) or deny (blacklist) resources in your infrastructure, based on a list of resource types that you’ve passed to your template.

First, add two parameters to your template:

  • resource_types_list: the list of resource types to consider in the template (list of strings)

  • mode: whether the list is a whitelist or a blacklist (enum: whitelist or blacklist)

Now you can add some more logic to your rule to use these parameters to evaluate the asset that was passed as an input to the template:
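Here is a sketch of what that logic can look like. The asset_type field comes from the Cloud Asset Inventory export format, and the helper names mirror those described below:

```rego
deny[{
    "msg": message,
    "details": metadata,
}] {
    constraint := input.constraint
    lib.get_constraint_params(constraint, params)
    asset := input.asset

    # raise a violation only when the asset's type is NOT valid for the given mode
    not resource_type_is_valid(params.mode, params.resource_types_list, asset.asset_type)

    message := sprintf("%v is of a type that is not allowed: %v", [asset.name, asset.asset_type])
    metadata := {"resource": asset.name, "asset_type": asset.asset_type}
}

# whitelist mode: the asset type must appear in the list
resource_type_is_valid(mode, types, asset_type) {
    mode == "whitelist"
    types[_] == asset_type
}

# blacklist mode: the asset type must NOT appear in the list
resource_type_is_valid(mode, types, asset_type) {
    mode == "blacklist"
    not list_contains(types, asset_type)
}

list_contains(list, item) {
    list[_] == item
}
```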

As discussed earlier, I used a positive logic function (resource_type_is_valid) to make my life easier. If a resource is valid (i.e., its type is part of the list passed in whitelist mode, or absent from that list in blacklist mode), then my function returns true. This is why, in the main deny rule, I use the not operator on it to raise a violation only when the scanned asset is not valid.

Note: As you can see in the rego, you can define the same function multiple times, with the same prototype. At runtime, all of them are evaluated and OR’d together to determine the result. As a programmer, this is convenient, since you can write the same xyz_is_valid function for multiple use cases and call it once in your top-level rule (here deny), as long as each function tests for a different scenario.

You can also test your changes the same way you did earlier (i.e., run “make test”). In the current version of this template, I added more resource types and more test constraints for it, but the rule itself is unchanged (at least for now). Feel free to take a look at the current version here.

Here is the updated test constraint (to pass values for parameters in our test function):

validator/test/fixtures/allowed_resource_types/constraints/basic/data.yaml:
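For example, the constraint can now pass a whitelist of the resource types used in the test data (values illustrative):

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPAllowedResourceTypesConstraintV1
metadata:
  name: allowed-resource-types-basic
spec:
  severity: high
  match:
    target: ["organizations/**"]
  parameters:
    mode: "whitelist"
    resource_types_list:
      - "storage.googleapis.com/Bucket"
      - "compute.googleapis.com/Instance"
```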

Publishing your new template

Now that you have a stable (and tested) version of your template rule, you can generate the template using the “make build” command. This runs “make format” and “make build_templates”, which updates the template file to include the latest version of your rego rule automatically. 

Here is your brand new template file, before running the “make build” command to populate your rule:

policies/templates/gcp_allowed_resource_types_v1.yaml:
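A simplified sketch of that file follows, with the nesting abridged as before; the rego section stays empty until “make build” fills it in, and the target key name is illustrative:

```yaml
# Restricts which resource types are allowed (whitelist) or denied
# (blacklist) in the targeted part of your GCP resource hierarchy.
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: gcp-allowed-resource-types-v1
spec:
  names:
    kind: GCPAllowedResourceTypesConstraintV1
    plural: gcpallowedresourcetypesconstraintsv1
  validation:
    openAPIV3Schema:
      properties:
        mode:
          type: string
          enum: ["whitelist", "blacklist"]
          description: "Whether resource_types_list is a whitelist or a blacklist."
        resource_types_list:
          type: array
          items:
            type: string
          description: "Resource types to allow or deny, e.g. storage.googleapis.com/Bucket."
  targets:
    validation.gcp.forsetisecurity.org:
      rego: |
        # populated from validator/allowed_resource_types.rego by "make build"
```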

Note: There is a brief description of the whole template in the comments at the top, as well as a discussion of its parameters in the properties section.

Once you’re ready to finalize your template, run the “make build” command.

Check your template file again. The command should have populated the rego section of your template for you from the validator/allowed_resource_types.rego file. 

I encourage you to double check that you have a valid YAML file, or you could encounter issues later on when you deploy your template with Forseti or terraform-validator.

Finally, it’s best practice to reference a sample usage of your template in the samples/ folder, so feel free to copy your test constraint in there before pushing your changes to your repositories.

Conclusion

In this article, we reviewed how you can write your own policies for Config Validator that you can use out-of-the-gate with both Forseti and terraform-validator. You can now commit new files into your repository. Feel free to make a pull request if you would like to publish your new policy to the community (take a look at the contributor guidelines for the Config Validator policy-library).

The policy we just created is quite useful when you want to use a multi-pipeline strategy for your deployments. For instance, you could have highly specific pipelines, each deploying specialized Terraform templates, with separate pipelines for network, applications/GKE, or IAM resources. You could also use this policy to ensure each pipeline cannot deploy resources beyond its scope, and use different service accounts for each pipeline, with only the minimal permissions it needs to do its job.

In a follow-up article, we’ll discuss how to use terraform-validator in your terraform deployments, so you can prevent bad resources from being deployed in your environment in the first place!

Useful links 

OPA / rego:

Repositories:

Iron Mountain fosters a culture of collaboration with Google Cloud

When I speak to enterprises, many share with me their desire to grow an internal culture of collaboration that accelerates business transformation. No matter what industry they’re in, from traditional enterprises to digital natives, working together—better, faster, and from anywhere—is a top priority.

Iron Mountain is an example of a company doing exactly this. We’ve worked with Iron Mountain extensively over the years, including launching jointly developed solutions like Iron Mountain InSight, which customers can use to digitize documents in the cloud, analyze them and apply machine learning for previously untapped business insights.

Today, we’re thrilled that Iron Mountain is taking the next step in its digital transformation journey by adopting G Suite. The company will be rolling out Google Cloud’s productivity and collaboration solutions—including Gmail, Google Docs, and Drive—to 26,000 employees across more than 1,450 facilities in 50 countries. 

Iron Mountain saw great potential in G Suite’s powerful mobile experience to offer an environment for engagement and knowledge sharing across Iron Mountain’s workforce, both in its global offices and out in the field. With G Suite, Iron Mountain will continue on its multi-year transformational journey to bring the best technology to its employees and customers, making it easier for them to work together and enable faster and more effective decision-making. 

“Enabling digital transformation for our customers and our own company is a core tenet of our technology strategy,” says Kimberly Anstett, senior vice president and chief information officer, Iron Mountain Incorporated. “We are extending the power of Google Cloud as a platform of change and transformation for the entire company with G Suite. We have a global business that we’ve scaled to meet the needs of our customers and drive continued growth. Our employees need to collaborate to drive that growth and the best possible customer experience around the world, and with G Suite, we’re bringing together every associate and business function in all parts of the world with a single initiative.” 

Iron Mountain is a trusted partner to more than 225,000 organizations and 95 percent of the FORTUNE 1000, offering solutions that include information management, digital transformation, secure storage, secure destruction, as well as data centers and cloud services. Since its founding in 1951, Iron Mountain has been a global leader for storage and information management services, helping  customers lower cost and risk, comply with regulations, recover from disaster, and enable a more digital way of working. 

Iron Mountain’s focus on digital transformation is as much about accessing technology solutions within its business as it is about putting technology in the hands of its customers. Transforming its own culture creates higher levels of customer intimacy and responsiveness, as well as opportunities for collaboration to capitalize on innovation. With Google Cloud, employees at Iron Mountain will be able to work together more efficiently and have more meaningful interactions, enabling them to be more agile, responsive, and in tune with their clients’ needs.

AWS Global Accelerator now supports EC2 instance endpoints

We are pleased to announce that, starting today, applications running on Amazon EC2 instances can be fronted directly by AWS Global Accelerator. Previously, you had to use an Application Load Balancer, a Network Load Balancer, or an Elastic IP address to front an EC2 instance with Global Accelerator. Now, you can use Global Accelerator directly as the single, internet-facing access point for your EC2 instances, improving the availability and performance of applications with local or global users.