Google Cloud Storage is an object storage service that allows you to upload files to a virtual bucket, providing quick and easy file storage for your applications. It competes with AWS’s S3 storage service on both price and features.
How Much Does GCP Cloud Storage Cost?
Overall, GCP Cloud Storage is priced similarly to AWS S3. There are a few different storage classes with different prices; the following prices are per GB per month, based on the us-east1 region, one of the bigger (and cheaper) regions:
- Standard Storage costs $0.020, and is used for general-purpose file storage.
- Nearline Storage costs $0.010, and is used for infrequently accessed data. It has a 30-day minimum storage duration and additional costs for accessing data.
- Coldline Storage costs $0.004, and is used for data that isn't accessed often (about once a quarter).
- Archive Storage costs $0.0012, and is used for long-term archival. It has a one-year minimum storage policy and high costs for retrieving data. However, unlike AWS Glacier Deep Archive, your data is accessible in milliseconds rather than hours or days.

For example, 1 TB in Standard Storage runs about $20 per month, while the same data in Archive Storage costs around $1.20 per month, before any retrieval charges.
You can also choose to have your data spread out across multiple regions. This improves redundancy, but the main reason you’d want this is to lower the access latency for end-user accessible content. Having multiple copies of your data in many different places means the average latency to any user will be low.
Of course, storing data in multiple places costs extra money, but not as much as you'd think: for the US multi-region, Standard Storage costs $0.026 per GB, compared to $0.020 for the us-east1 region. This is because even when you're only using one region, your data is already stored across multiple zones within that region for redundancy and the lowest possible internal latency. Multi-region storage doesn't keep a copy in every zone of every region, so the costs end up relatively similar.
Creating a Bucket
From the GCP Console, find “Storage” in the sidebar, and click on “Browser”.
From here, you can create a new bucket, or edit your existing ones.
Give it a name, which must be globally unique.
You have a few options for the location. The default is multi-region, which spans a large area and will provide the best performance for end users. If you're only accessing data from one region, the single-region option is cheaper. Dual-region is much more expensive than either, and is mainly useful for high-availability deployments where low-latency in-region access is key.
Choose the default storage class for the bucket. If you upload data and don't specify a class, it will default to whatever you choose here. You can, of course, mix Standard and Nearline objects in the same bucket.
The next option controls the level of access to each object. If the entire bucket is used for the same purpose, such as a bucket of publicly accessible images, you can set this to Uniform to simplify access. Otherwise, leave it on Fine-Grained. There is no pricing difference.
Click create, and you should see a new bucket in the list.
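If you prefer the command line, you can create an equivalent bucket with gsutil once it's installed (installation is covered below). A minimal sketch, using a placeholder bucket name:

# Create a Nearline bucket in us-east1 with uniform bucket-level access
# ("my-example-bucket" is a placeholder; bucket names must be globally unique)
gsutil mb -c nearline -l us-east1 -b on gs://my-example-bucket/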
If you want to upload items to test it out, you can do so from the console.
However, this won’t be how you access it most of the time. If you want to access it from the command line, you’ll need to install gsutil, a Python utility for accessing Cloud Storage. It’s installed by default on Compute Engine instances, but if you want to use it from your personal computer or another machine, you’ll need to install the Google Cloud SDK:
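On Linux and macOS, one way to do this is Google's interactive installer script; check the Cloud SDK install documentation for package-manager options on other platforms:

# Downloads and runs the interactive Cloud SDK installer, then reloads the
# shell so gcloud and gsutil are on your PATH
curl https://sdk.cloud.google.com | bash
exec -l $SHELL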
Then run gcloud init to link your account:
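# Starts an interactive flow to sign in and choose a default project
gcloud init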
This will give you a link which you can open in your browser to choose your Google account.
Once your account is linked, you should be able to upload items with gsutil cp:
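# Copy a local file into your bucket ("my-example-bucket" is a placeholder name)
gsutil cp ./photo.jpg gs://my-example-bucket/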
If you want to access Cloud Storage from within an application, you can use the Cloud Storage Client Library for your language, or simply use the REST API.
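For example, the JSON REST API lets you upload an object from any HTTP client, using an access token from gcloud for authentication. A minimal sketch with curl (the bucket and file names are placeholders):

# Simple media upload of a local file via the Cloud Storage JSON API
curl -X POST --data-binary @photo.jpg \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: image/jpeg" \
  "https://storage.googleapis.com/upload/storage/v1/b/my-example-bucket/o?uploadType=media&name=photo.jpg"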
If you’re migrating from S3, Google provides a tool, the Storage Transfer Service, for easily moving your data over to the new bucket.
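If you'd rather handle a one-off migration yourself, gsutil can also read s3:// URLs directly once your AWS credentials are in its ~/.boto configuration file. A rough sketch, with placeholder bucket names:

# -m parallelizes transfers; rsync -r mirrors the S3 bucket into the GCS bucket
gsutil -m rsync -r s3://my-old-s3-bucket gs://my-example-bucket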