Integrate Google Cloud Storage with ClickHouse

ClickHouse recognizes that GCS represents an attractive storage solution for users seeking to separate storage and compute. To help achieve this, support is provided for using GCS as the storage for a MergeTree engine. This will enable users to exploit the scalability and cost benefits of GCS, and the insert and query performance of the MergeTree engine.

GCS Backed MergeTree

note

This page is not applicable to ClickHouse Cloud. The feature documented here is not available in ClickHouse Cloud services. See the ClickHouse Cloud Compatibility guide for more information.

Creating a Disk

To utilize a GCS bucket as a disk, we must first declare it within the ClickHouse configuration in a file under conf.d. An example of a GCS disk declaration is shown below. This configuration includes multiple sections to configure the GCS "disk", the cache, and the policy that is specified in DDL queries when tables are to be created on the GCS disk. Each of these are described below.

storage_configuration > disks > gcs

This part of the configuration is shown below and specifies that:

  • Batch deletes are not to be performed. GCS does not currently support batch deletes, so the autodetect is disabled to suppress error messages.
  • The type of the disk is s3 because the S3 API is in use.
  • The endpoint as provided by GCS
  • The service account HMAC key and secret
  • The metadata path on the local disk
<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <support_batch_delete>false</support_batch_delete>
                <type>s3</type>
                <endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
                <access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
                <secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
                <metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
            </gcs>
        </disks>
        <policies>
            <gcs_main>
                <volumes>
                    <main>
                        <disk>gcs</disk>
                    </main>
                </volumes>
            </gcs_main>
        </policies>
    </storage_configuration>
</clickhouse>

storage_configuration > disks > cache

The example configuration below enables a 10Gi local-disk cache for the disk gcs.

<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <support_batch_delete>false</support_batch_delete>
                <type>s3</type>
                <endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
                <access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
                <secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
                <metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
            </gcs>
            <gcs_cache>
                <type>cache</type>
                <disk>gcs</disk>
                <path>/var/lib/clickhouse/disks/gcs_cache/</path>
                <max_size>10Gi</max_size>
            </gcs_cache>
        </disks>
        <policies>
            <gcs_main>
                <volumes>
                    <main>
                        <disk>gcs_cache</disk>
                    </main>
                </volumes>
            </gcs_main>
        </policies>
    </storage_configuration>
</clickhouse>

storage_configuration > policies > gcs_main

Storage configuration policies allow choosing where data is stored. The policy highlighted below allows data to be stored on the disk gcs by specifying the policy gcs_main. For example, CREATE TABLE ... SETTINGS storage_policy='gcs_main'.

<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <support_batch_delete>false</support_batch_delete>
                <type>s3</type>
                <endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
                <access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
                <secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
                <metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
            </gcs>
        </disks>
        <policies>
            <gcs_main>
                <volumes>
                    <main>
                        <disk>gcs</disk>
                    </main>
                </volumes>
            </gcs_main>
        </policies>
    </storage_configuration>
</clickhouse>

A complete list of settings relevant to this disk declaration can be found here.
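
After the configuration file is in place and the server has been restarted, you can confirm that the disk was registered. A minimal check, assuming the disk was named gcs as above:

SELECT name, type, path
FROM system.disks
WHERE name = 'gcs'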

Creating a table

Assuming you have configured your disk to use a bucket with write access, you should be able to create a table such as in the example below. For purposes of brevity, we use a subset of the NYC taxi columns and stream data directly to the GCS-backed table:

CREATE TABLE trips_gcs
(
    `trip_id` UInt32,
    `pickup_date` Date,
    `pickup_datetime` DateTime,
    `dropoff_datetime` DateTime,
    `pickup_longitude` Float64,
    `pickup_latitude` Float64,
    `dropoff_longitude` Float64,
    `dropoff_latitude` Float64,
    `passenger_count` UInt8,
    `trip_distance` Float64,
    `tip_amount` Float32,
    `total_amount` Float32,
    `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
SETTINGS storage_policy='gcs_main';

INSERT INTO trips_gcs
SELECT
    trip_id,
    pickup_date,
    pickup_datetime,
    dropoff_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    tip_amount,
    total_amount,
    payment_type
FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames')
LIMIT 1000000;

Depending on the hardware, this latter insert of 1m rows may take a few minutes to execute. You can confirm the progress via the system.processes table. Feel free to adjust the row count up to the limit of 10m and explore some sample queries.
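
For example, from another client session you could watch the insert with a query along these lines (the ILIKE filter on the query text is just an illustrative way to find the statement):

SELECT query_id, elapsed, formatReadableQuantity(read_rows) AS rows_read, formatReadableSize(memory_usage) AS memory
FROM system.processes
WHERE query ILIKE 'INSERT INTO trips_gcs%'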

SELECT passenger_count, avg(tip_amount) as avg_tip, avg(total_amount) as avg_amount FROM trips_gcs GROUP BY passenger_count;
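
A further example of the kind of query you might try, this one grouping by the payment_type column defined earlier:

SELECT payment_type, count() AS trips, round(avg(trip_distance), 2) AS avg_distance
FROM trips_gcs
GROUP BY payment_type
ORDER BY trips DESC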

Handling Replication

Replication with GCS disks can be accomplished by using the ReplicatedMergeTree table engine. See the replicating a single shard across two GCP regions using GCS guide for details.

Learn More

The Cloud Storage XML API is interoperable with some tools and libraries that work with services such as Amazon Simple Storage Service (Amazon S3).

For further information on tuning threads, see Optimizing for Performance.

Using Google Cloud Storage (GCS)

tip

Object storage is used by default in ClickHouse Cloud; you do not need to follow this procedure if you are running in ClickHouse Cloud.

Plan the deployment

This tutorial is written to describe a replicated ClickHouse deployment running in Google Cloud and using Google Cloud Storage (GCS) as the ClickHouse storage disk "type".

In the tutorial, you will deploy ClickHouse server nodes on Google Compute Engine VMs, each with an associated GCS bucket for storage. Replication is coordinated by a set of ClickHouse Keeper nodes, also deployed as VMs.

Sample requirements for high availability:

  • Two ClickHouse server nodes, in two GCP regions
  • Two GCS buckets, deployed in the same regions as the two ClickHouse server nodes
  • Three ClickHouse Keeper nodes, two of them are deployed in the same regions as the ClickHouse server nodes. The third can be in the same region as one of the first two Keeper nodes, but in a different availability zone.

ClickHouse Keeper requires a quorum of two nodes to function, hence the requirement for three nodes for high availability.

Prepare VMs

Deploy five VMs in three regions:

Region | ClickHouse Server | Bucket            | ClickHouse Keeper
1      | chnode1           | bucket_regionname | keepernode1
2      | chnode2           | bucket_regionname | keepernode2
3 *    |                   |                   | keepernode3

* This can be a different availability zone in the same region as 1 or 2.

Deploy ClickHouse

Deploy ClickHouse on two hosts; in the sample configurations these are named chnode1 and chnode2.

Place chnode1 in one GCP region and chnode2 in a second. In this guide, us-east1 and us-east4 are used for the Compute Engine VMs and also for the GCS buckets.

note

Do not start clickhouse server until after it is configured. Just install it.

Refer to the installation instructions when performing the deployment steps on the ClickHouse server nodes.

Deploy ClickHouse Keeper

Deploy ClickHouse Keeper on three hosts; in the sample configurations these are named keepernode1, keepernode2, and keepernode3. keepernode1 can be deployed in the same region as chnode1, keepernode2 with chnode2, and keepernode3 in either region, but in a different availability zone from the ClickHouse node in that region.

Refer to the installation instructions when performing the deployment steps on the ClickHouse Keeper nodes.

Create two buckets

The two ClickHouse servers will be located in different regions for high availability. Each will have a GCS bucket in the same region.

In Cloud Storage > Buckets choose CREATE BUCKET. For this tutorial two buckets are created, one in each of us-east1 and us-east4. The buckets are single region, standard storage class, and not public. When prompted, enable public access prevention. Do not create folders, they will be created when ClickHouse writes to the storage.
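
If you prefer the command line, roughly equivalent buckets can be created with gcloud. The bucket names below are just the examples used in this guide; adjust names and locations to your deployment:

# Example only: single-region buckets with the Standard storage class
gcloud storage buckets create gs://ch_bucket_us_east1 --location=us-east1 --default-storage-class=STANDARD
gcloud storage buckets create gs://ch_bucket_us_east4 --location=us-east4 --default-storage-class=STANDARD
# Then enable public access prevention on both buckets (for example from the console), as described above.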

If you need step-by-step instructions to create buckets and an HMAC key, then expand Create GCS buckets and an HMAC key and follow along:

Create GCS buckets and an HMAC key

The two buckets used in this guide are named ch_bucket_us_east1 and ch_bucket_us_east4, one in each region.

Generate an Access key

Create a service account HMAC key and secret

Open Cloud Storage > Settings > Interoperability and either choose an existing Access key, or CREATE A KEY FOR A SERVICE ACCOUNT. This guide covers the path for creating a new key for a new service account.


Add a new service account

If this is a project with no existing service account, CREATE NEW ACCOUNT.


There are three steps to creating the service account. In the first step, give the account a meaningful name, ID, and description.


In the Interoperability settings dialog, the IAM role Storage Object Admin is recommended; select that role in step two.


Step three is optional and not used in this guide. You may allow users to have these privileges based on your policies.


The service account HMAC key will be displayed. Save this information, as it will be used in the ClickHouse configuration.

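As a command-line alternative, a service account, role binding, and HMAC key can be created roughly as follows. The account name clickhouse-gcs and the project ID MY_PROJECT are placeholders, not values from this guide:

# Example only: create a service account, grant Storage Object Admin, and generate an HMAC key
gcloud iam service-accounts create clickhouse-gcs --display-name="ClickHouse GCS access"
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member="serviceAccount:clickhouse-gcs@MY_PROJECT.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
gcloud storage hmac create clickhouse-gcs@MY_PROJECT.iam.gserviceaccount.com
# The last command prints the HMAC access ID and secret; save them for the ClickHouse configuration.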

Configure ClickHouse Keeper

All of the ClickHouse Keeper nodes have the same configuration file except for the server_id line. Modify the file with the hostnames for your ClickHouse Keeper servers, and on each of the servers set the server_id to match the appropriate server entry in the raft_configuration. The example below has server_id set to 3, matching the entry for keepernode3 in the raft_configuration.

  • Edit the file with your hostnames, and make sure that they resolve from the ClickHouse server nodes and the Keeper nodes
  • Copy the file into place (/etc/clickhouse-keeper/keeper_config.xml on each of the Keeper servers)
  • Edit the server_id on each machine, based on its entry number in the raft_configuration

/etc/clickhouse-keeper/keeper_config.xml

<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>

        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>keepernode1.us-east1-b.c.clickhousegcs-374921.internal</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>keepernode2.us-east4-c.c.clickhousegcs-374921.internal</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>keepernode3.us-east5-a.c.clickhousegcs-374921.internal</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

Configure ClickHouse Server

best practice

Some of the steps in this guide will ask you to place a configuration file in /etc/clickhouse-server/config.d/. This is the default location on Linux systems for configuration override files. When you put these files into that directory ClickHouse will merge the content with the default configuration. By placing these files in the config.d directory you will avoid losing your configuration during an upgrade.

Networking

By default, ClickHouse listens on the loopback interface; in a replicated setup, networking between machines is necessary. Listen on all interfaces:

/etc/clickhouse-server/config.d/network.xml

<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>

Remote ClickHouse Keeper servers

Replication is coordinated by ClickHouse Keeper. This configuration file identifies the ClickHouse Keeper nodes by hostname and port number.

  • Edit the hostnames to match your Keeper hosts

/etc/clickhouse-server/config.d/use-keeper.xml

<clickhouse>
    <zookeeper>
        <node index="1">
            <host>keepernode1.us-east1-b.c.clickhousegcs-374921.internal</host>
            <port>9181</port>
        </node>
        <node index="2">
            <host>keepernode2.us-east4-c.c.clickhousegcs-374921.internal</host>
            <port>9181</port>
        </node>
        <node index="3">
            <host>keepernode3.us-east5-a.c.clickhousegcs-374921.internal</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

Remote ClickHouse servers

This file configures the hostname and port of each ClickHouse server in the cluster. The default configuration file contains sample cluster definitions; in order to show only the clusters that are completely configured, the tag replace="true" is added to the remote_servers entry so that when this configuration is merged with the default it replaces the remote_servers section instead of adding to it.

  • Edit the file with your hostnames, and make sure that they resolve from the ClickHouse server nodes

/etc/clickhouse-server/config.d/remote-servers.xml

<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <shard>
                <replica>
                    <host>chnode1.us-east1-b.c.clickhousegcs-374921.internal</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>chnode2.us-east4-c.c.clickhousegcs-374921.internal</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>
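
Once the servers are started (a later step), the merged cluster definition can be verified with a query along these lines:

SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'cluster_1S_2R'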

Replica identification

This file configures settings related to the ClickHouse Keeper path, specifically the macros used to identify which replica the data is part of. On one server the replica should be specified as replica_1, and on the other server as replica_2. The names can be changed; based on our example of one replica being stored in South Carolina and the other in Northern Virginia, the values could be carolina and virginia. Just make sure that they are different on each machine.

/etc/clickhouse-server/config.d/macros.xml

<clickhouse>
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <macros>
        <cluster>cluster_1S_2R</cluster>
        <shard>1</shard>
        <replica>replica_1</replica>
    </macros>
</clickhouse>
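
Once the servers are started, you can confirm that the macros were picked up and that replica differs between the two machines, for example:

SELECT macro, substitution
FROM system.macros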

Storage in GCS

ClickHouse storage configuration includes disks and policies. The disk being configured below is named gcs and is of type s3. The type is s3 because ClickHouse accesses the GCS bucket as if it were an AWS S3 bucket. Two copies of this configuration will be needed, one for each of the ClickHouse server nodes.

These substitutions should be made in the configuration below.

These substitutions differ between the two ClickHouse server nodes:

  • REPLICA 1 BUCKET should be set to the name of the bucket in the same region as the server
  • REPLICA 1 FOLDER should be changed to replica_1 on one of the servers, and replica_2 on the other

These substitutions are common across the two nodes:

  • The access_key_id should be set to the HMAC key generated earlier
  • The secret_access_key should be set to the HMAC secret generated earlier

/etc/clickhouse-server/config.d/storage.xml

<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <support_batch_delete>false</support_batch_delete>
                <type>s3</type>
                <endpoint>https://storage.googleapis.com/REPLICA 1 BUCKET/REPLICA 1 FOLDER/</endpoint>
                <access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
                <secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
                <metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
            </gcs>
            <cache>
                <type>cache</type>
                <disk>gcs</disk>
                <path>/var/lib/clickhouse/disks/gcs_cache/</path>
                <max_size>10Gi</max_size>
            </cache>
        </disks>
        <policies>
            <gcs_main>
                <volumes>
                    <main>
                        <disk>gcs</disk>
                    </main>
                </volumes>
            </gcs_main>
        </policies>
    </storage_configuration>
</clickhouse>
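
After starting the servers, the resulting policy can also be inspected from SQL; a minimal check might look like this:

SELECT policy_name, volume_name, disks
FROM system.storage_policies
WHERE policy_name = 'gcs_main'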

Start ClickHouse Keeper

Use the commands for your operating system, for example:

sudo systemctl enable clickhouse-keeper
sudo systemctl start clickhouse-keeper
sudo systemctl status clickhouse-keeper

Check ClickHouse Keeper status

Send commands to the ClickHouse Keeper with netcat. For example, mntr returns the state of the ClickHouse Keeper cluster. If you run the command on each of the Keeper nodes you will see that one is a leader, and the other two are followers:

echo mntr | nc localhost 9181
zk_version v22.7.2.15-stable-f843089624e8dd3ff7927b8a125cf3a7a769c069
zk_avg_latency 0
zk_max_latency 11
zk_min_latency 0
zk_packets_received 1783
zk_packets_sent 1783
zk_num_alive_connections 2
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count 135
zk_watch_count 8
zk_ephemerals_count 3
zk_approximate_data_size 42533
zk_key_arena_size 28672
zk_latest_snapshot_size 0
zk_open_file_descriptor_count 182
zk_max_file_descriptor_count 18446744073709551615
zk_followers 2
zk_synced_followers 2
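
ClickHouse Keeper also responds to other four-letter commands; for example, ruok is a simple liveness check and should return imok:

echo ruok | nc localhost 9181
imok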

Start ClickHouse server

On chnode1 and chnode2 run:

sudo service clickhouse-server start
sudo service clickhouse-server status

Verification

Verify disk configuration

system.disks should contain records for each disk:

  • default
  • gcs
  • cache
SELECT *
FROM system.disks
FORMAT Vertical
Row 1:
──────
name: cache
path: /var/lib/clickhouse/disks/gcs/
free_space: 18446744073709551615
total_space: 18446744073709551615
unreserved_space: 18446744073709551615
keep_free_space: 0
type: s3
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 1
is_broken: 0
cache_path: /var/lib/clickhouse/disks/gcs_cache/

Row 2:
──────
name: default
path: /var/lib/clickhouse/
free_space: 6555529216
total_space: 10331889664
unreserved_space: 6555529216
keep_free_space: 0
type: local
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 0
is_broken: 0
cache_path:

Row 3:
──────
name: gcs
path: /var/lib/clickhouse/disks/gcs/
free_space: 18446744073709551615
total_space: 18446744073709551615
unreserved_space: 18446744073709551615
keep_free_space: 0
type: s3
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 1
is_broken: 0
cache_path:

3 rows in set. Elapsed: 0.002 sec.

Verify that tables created on the cluster are created on both nodes

create table trips on cluster 'cluster_1S_2R' (
`trip_id` UInt32,
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_datetime` DateTime,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`tip_amount` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4))
ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
SETTINGS storage_policy='gcs_main'
┌─host───────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2.us-east4-c.c.gcsqa-375100.internal │ 9000 │ 0 │ │ 1 │ 1 │
└────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode1.us-east1-b.c.gcsqa-375100.internal │ 9000 │ 0 │ │ 0 │ 0 │
└────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2 rows in set. Elapsed: 0.641 sec.

Verify that data can be inserted

INSERT INTO trips SELECT
trip_id,
pickup_date,
pickup_datetime,
dropoff_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tip_amount,
total_amount,
payment_type
FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames')
LIMIT 1000000

Verify that the storage policy gcs_main is used for the table.

SELECT
engine,
data_paths,
metadata_path,
storage_policy,
formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'trips'
FORMAT Vertical
Row 1:
──────
engine: ReplicatedMergeTree
data_paths: ['/var/lib/clickhouse/disks/gcs/store/631/6315b109-d639-4214-a1e7-afbd98f39727/']
metadata_path: /var/lib/clickhouse/store/e0f/e0f3e248-7996-44d4-853e-0384e153b740/trips.sql
storage_policy: gcs_main
formatReadableSize(total_bytes): 36.42 MiB

1 row in set. Elapsed: 0.002 sec.
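
As an additional check, assuming the trips table was created in the default database, row counts on both replicas can be compared in a single query using the clusterAllReplicas table function:

SELECT hostName() AS host, count() AS row_count
FROM clusterAllReplicas('cluster_1S_2R', default.trips)
GROUP BY host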

Verify in Google Cloud Console

Looking at the buckets you will see that a folder was created in each bucket with the name that was used in the storage.xml configuration file. Expand the folders and you will see many files, representing the data partitions.



