Consolidating storage into a centrally managed infrastructure resource can make life as a storage architect much easier. But the path to consolidation is fraught with complexity. Data flows into your organization constantly from live sources, whether from your company’s customers, employees, partners or the devices and hardware you maintain. All this data sits in different locations owned by different business units, inside various types of storage technologies that aren’t necessarily available the moment you need them.
But one thing is a constant for most businesses today: The amount of data to be stored just keeps growing. Today we’re announcing the Storage Growth Plan for Google Cloud Storage, a way to provide flexible, ready-when-you-need-it data storage that won’t result in unexpected bills.
Cloud Storage is great at solving the consolidation and capacity problems of data storage today. It is the unified object storage that powers many Google Cloud Platform (GCP) customers, letting you store and move data as needed. You can use Transfer Appliance to get petabytes into Cloud Storage quickly, you can stream data into Cloud Storage with Dataflow, and you can even move data from AWS S3 to Cloud Storage with Storage Transfer Service. You pay by the gigabyte-month for the data you store in Cloud Storage, and you can store petabytes, exabytes or more. And once it’s in Cloud Storage, integrations across the platform make it easy to expose your data to services like BigQuery, Dataproc and CloudML.
It’s easy to store and use data in Cloud Storage, but data is still being created at an astonishing and unpredictable rate. And unpredictable creation means unpredictable cost. We’ve developed the Storage Growth Plan to help enterprise customers manage storage costs and meet the forecasting and predictability expectations often placed on IT organizations. It’s a new way to commit to Cloud Storage that protects you from the cost volatility associated with your data storage behavior. Here’s how it works:
You commit to at least $10,000 spending per month for 12 months of Cloud Storage usage. This is a fixed amount you will pay each month.
You can grow stored data, with no extra charges for usage over your commitment, during those 12 months.
At the end of 12 months, you have two choices for renewal.
Commit to the next 12 months at whatever your peak usage was. If that peak is within 30% of your original commitment, all of your previous year’s overage is free. If it is more than 30% above, you repay the remainder of the overage over the next year.
Or, leave the plan and pay for the past year’s overage.
Repeat 12 months at a time for as long as you like.
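The renewal arithmetic above can be sketched in a few lines. This is our own reading of the rules as stated (the function name and the interpretation of "remainder" as the overage beyond the 30% band are assumptions; consult sales for the authoritative terms):

```python
def renewal(commitment, peak_usage):
    """Sketch of the renewal rule described above.

    commitment: monthly commitment (USD) for the past 12 months.
    peak_usage: peak monthly usage (USD) during those 12 months.
    Returns (new_commitment, overage_to_repay_over_next_year).
    """
    overage = max(peak_usage - commitment, 0)
    if peak_usage <= commitment * 1.30:
        # Peak within 30% of the commitment: all overage is forgiven.
        return peak_usage, 0
    # Peak more than 30% above: repay the portion beyond the 30% band.
    forgiven = commitment * 0.30
    return peak_usage, overage - forgiven

# Example: a $10,000/month commitment that peaked at $12,000 renews at
# $12,000 with nothing to repay; a peak of $15,000 renews at $15,000
# with $2,000 of overage repaid over the following year.
```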
We heard from customers that data growth can be unpredictable, but costs can’t be. We’ve also heard that data can have unpredictable life cycles. A legacy image archive might become relevant again as a Cloud Vision API training set, or an analytics workload might only sit in hot storage for a month. Storage Growth Plan applies to any storage class, enabling you to move your data freely between hot and cold classes of storage and maintain cost predictability.
Storage Growth Plan helps companies like Recursion set storage costs as they build the world’s largest biological image dataset. Recursion currently manages a dataset growing by more than 2 million new images a week. “This dataset enables the company to train neural networks and use other sophisticated computational techniques to identify changes in thousands of cellular and subcellular features in response to various tests,” says Ben Mabey, Vice President of Engineering at Recursion. “This approach, which we call ‘Phenomics,’ helps us pursue novel biology, drug targets, or drug candidates with more data and less bias.”
You can take advantage of this new commitment structure today by contacting sales.
Adding geo-redundancy and price drops for Cloud Storage
In addition to introducing this new way to buy Cloud Storage, we’re passing on continued technical innovation to our customers in the form of price drops. We recently announced that Cloud Storage Coldline in multi-regional locations is now geo-redundant. This means that Coldline data, the lowest-access tier of Cloud Storage, is protected from regional failure: another copy of your data is stored at least 100 miles away in a different region. This image illustrates how your data is stored in different types of locations:
We’ve added this redundancy to Coldline storage without raising the price. Instead, we’re dropping prices for our Coldline class of storage in regional locations by 42%. Data stored in Coldline in regional locations is now as low as $0.004 per GB per month. As with all Cloud Storage classes, the data is still accessible to users in milliseconds.
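As a back-of-the-envelope estimate of what that rate means in practice (our own arithmetic, using the $0.004 per GB-month figure above and ignoring retrieval, early-deletion and operation charges):

```python
# Rough monthly storage cost for Coldline in a regional location,
# at the $0.004 per GB-month price quoted above. Retrieval and
# operation charges are not included in this sketch.
PRICE_PER_GB_MONTH = 0.004  # USD, regional Coldline

def monthly_cost(gigabytes):
    """Storage-only monthly cost in USD for the given capacity."""
    return gigabytes * PRICE_PER_GB_MONTH

# 100 TB (100,000 GB) of archived data:
print(f"${monthly_cost(100_000):,.2f} per month")  # prints "$400.00 per month"
```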
We often hear from customers that they take advantage of all of our classes of storage as their data ages. What starts in the Standard class of storage when it’s accessed frequently eventually moves to Nearline and then Coldline as it’s accessed less frequently. You can turn on object lifecycle management to move data among storage classes automatically based on a policy you set. Or, for use cases like digital archives, backups or content under a retention requirement where you won’t be accessing the data, you can start in a colder class. Regardless of which class you start with, Cloud Storage maintains the redundancy of that data per the bucket’s location as it is tiered. And you’ll have a consistent experience across tiers no matter how often data is being accessed.
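A lifecycle policy of the kind described above can be set with the gsutil CLI. For example, the following sketch transitions objects to Nearline after 30 days and to Coldline after a year (the bucket name and the age thresholds are illustrative; adjust them to your own access patterns):

```shell
# Write a lifecycle policy: Standard -> Nearline at 30 days,
# Nearline -> Coldline at 365 days.
cat > lifecycle.json <<'EOF'
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365}
      }
    ]
  }
}
EOF

# Apply the policy to a bucket (replace with your bucket name).
gsutil lifecycle set lifecycle.json gs://my-example-bucket
```

Once set, Cloud Storage applies the transitions automatically; no application changes are needed to read the objects afterward.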
Take advantage of these new features and options to create flexible storage infrastructure for your cloud deployments. Learn more about GCP storage here.
Thanks to contributions from Chris Talbott.