Use hybrid storage for the DoubleCloud Managed ClickHouse® clusters

DoubleCloud Managed ClickHouse® clusters allow you to store data in tiers: frequently accessed hot data resides on local disk storage, while cold data is kept in object storage (S3).

This scenario shows how to create a Managed ClickHouse® cluster, a database, and a table with the specified TTL policy. After that, we'll upload data that the service will automatically split between the disk and object storage.

Create a Managed ClickHouse® cluster

  1. Go to the Clusters overview page in the console.

  2. Click Create cluster in the upper-right corner of the page.

  3. Select ClickHouse®.

  4. Choose a provider and a region.

  5. Under Resources:

    1. Select the s1-c2-m4 preset for CPU, RAM capacity, and storage space to create a cluster with minimal configuration.

      Understand your ClickHouse® resource preset

      A resource preset has the following structure:

      <CPU platform and generation>-c<number of CPU cores>-m<number of gigabytes of RAM>
      

      There are three available CPU platforms:

      • g - ARM Graviton

      • i - Intel (x86)

      • s - AMD (x86)

      For example, the i1-c2-m8 preset means an Intel platform with a 2-core CPU and 8 gigabytes of RAM.

      You can see the availability of CPU platforms across our Managed Service for ClickHouse® areas and regions.
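As an illustrative sketch (not a DoubleCloud tool), the preset name can be decomposed with plain shell string operations:

```shell
# Decompose a resource preset name such as "i1-c2-m8" into its parts.
preset="i1-c2-m8"
platform="${preset%%-*}"                      # "i1": CPU platform and generation
cores="${preset#*-c}"; cores="${cores%%-*}"   # "2": number of CPU cores
ram_gb="${preset##*-m}"                       # "8": gigabytes of RAM
echo "platform=${platform} cores=${cores} ram_gb=${ram_gb}"
```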

    2. Choose the number of replicas. Let's keep it as is with a single replica.

    3. Select the number of shards. Keep a single shard.

      Understand shards and replicas

      Shards refer to the servers that contain different parts of the data (to read all the data, you must access all the shards). Replicas are duplicating servers (to read all the data, you can access the data on any of the replicas).

  6. Under Basic settings:

    1. Enter the cluster Name, in this scenario - tutorial-cluster.

    2. From the Version drop-down list, select the ClickHouse® version the Managed ClickHouse® cluster will use. For most clusters, we recommend using the latest version.

  7. Under Advanced:

    1. Under Maintenance settings, select the scheduling type:

      • Arbitrary to delegate maintenance window selection to DoubleCloud. Usually, your cluster will perform the maintenance procedure at the earliest available time slot.

        Warning

        We recommend against this scheduling type for single-host clusters, as it can make your cluster unavailable at an unpredictable time.

      • By schedule to set the weekday and time (UTC) when DoubleCloud may perform maintenance on your cluster.

    2. Under Networking → VPC, specify in which DoubleCloud VPC to locate your cluster. Use the default value in the previously selected region if you don't need to create this cluster in a specific network.

    3. Select the allocation for the ClickHouse Keeper service - embedded or dedicated.

      We recommend using dedicated hosts for high-load production clusters. Dedicated ClickHouse Keeper hosts ensure that your production cluster's performance remains unaffected under heavy loads - they don't use its CPU or memory.

      ClickHouse Keeper host location is irreversible

      After creating the cluster, you won't be able to change the ClickHouse Keeper deployment type.

      For dedicated ClickHouse Keeper hosts, select the appropriate resource preset. Please note that this resource preset will apply to all three hosts and will be billed accordingly.

    4. Specify or adjust your cluster's DBMS settings. For more information, see the Settings reference.

  8. Click Submit.

Your cluster will appear with the Creating status on the Clusters page in the console. Setting everything up may take some time. When the cluster is ready, it changes its state to Alive.

Click the cluster to open its information page.


Tip

The DoubleCloud service creates the superuser admin and its password automatically. You can find both the User and the Password in the Overview tab on the cluster information page.

To create users for other roles, see Manage ClickHouse® users.

Install clickhouse-client

Use one of the following ways to install the CLI client:

Docker

  1. Open your terminal.

  2. (Optional) Start Docker if needed:

    sudo service docker start
    
  3. Pull the clickhouse-client Docker image:

    docker pull clickhouse/clickhouse-client
    
DEB-based Linux

  1. Open your terminal.

  2. Connect to the ClickHouse® official DEB repository from your Linux system:

    sudo apt update && sudo apt install -y apt-transport-https ca-certificates dirmngr && \
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754 && \
    echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list

  3. Refresh the package list and install clickhouse-client:

    sudo apt update && sudo apt install -y clickhouse-client
    
RPM-based Linux

  1. Open your terminal.

  2. Connect to the ClickHouse® official RPM repository from your Linux system:

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo

  3. Install clickhouse-client:

    sudo yum install -y clickhouse-client
    

Warning

If you run a Red Hat 7-based Linux distribution, including CentOS 7, Oracle Linux 7, and others, you need to download and install trusted certificates and manually add their path to the clickhouse-client configuration file as follows:

  1. Install the root certificate:

    sudo curl https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem \
    -o /etc/pki/ca-trust/source/anchors/isrg-root-x2-cross-signed.pem
    
  2. Install the intermediate certificate:

    sudo curl https://letsencrypt.org/certs/lets-encrypt-r3-cross-signed.pem \
    -o /etc/pki/ca-trust/source/anchors/lets-encrypt-r3-cross-signed.pem
    
  3. Update the list of trusted certificates:

    sudo update-ca-trust
    
  4. Locate your clickhouse-client configuration file (by default, you can find it at /etc/clickhouse-client/config.xml) and add the path to the certificates into the <openSSL> section:

    <client> <!-- Used for connection to the server's secure TCP port -->
       <loadDefaultCAFile>true</loadDefaultCAFile>
       <cacheSessions>true</cacheSessions>
       <disableProtocols>sslv2,sslv3</disableProtocols>
       <preferServerCiphers>true</preferServerCiphers>
       <caConfig>/etc/ssl/certs/ca-bundle.crt</caConfig>
       <!-- Use for self-signed certificates: <verificationMode>none</verificationMode> -->
       <invalidCertificateHandler>
          <!-- Use for self-signed certificates: <name>AcceptCertificateHandler</name> -->
          <name>RejectCertificateHandler</name>
       </invalidCertificateHandler>
    </client>
    

Connect to the cluster

  1. Select Clusters from the list of services on the left.

  2. Select the name of your cluster to open its information page. By default, you will see the Overview tab.

  3. Under Connection strings, find the Native interface string and click Copy.

  4. Open your terminal and run a command to connect to your cluster:

    docker run --network host --rm -it clickhouse/<Native interface connection string>
    
    The complete Docker command structure
    docker run --network host --rm -it \
                clickhouse/clickhouse-client \
                --host <FQDN of your cluster> \
                --secure \
                --user <cluster user name> \
                --password <cluster user password> \
                --port 9440 
    
    If you use the natively installed clickhouse-client, paste the copied Native interface connection string into your terminal and run it:

    <Native interface connection string>
    
  5. Once connected, you can also check if your cluster has all the required policies. Get the available storage policies with the following query:

    SELECT * FROM system.storage_policies
    

    Look at the policy_name column. The output should display the following policies for the above-mentioned disks: default, hybrid_storage, local (an alias for default), and object_storage:

    ┌─policy_name────┬─volume_name────┬─volume_priority─┬─disks──────────────┬─────┬─prefer_not_to_merge─┐
    │ default        │ default        │               1 │ ['default']        │.....│                   0 │
    │ hybrid_storage │ default        │               1 │ ['default']        │.....│                   0 │
    │ hybrid_storage │ object_storage │               2 │ ['object_storage'] │.....│                   1 │
    │ local          │ default        │               1 │ ['default']        │.....│                   0 │
    │ object_storage │ object_storage │               1 │ ['object_storage'] │.....│                   1 │
    └────────────────┴────────────────┴─────────────────┴────────────────────┴─────┴─────────────────────┘
    

Create a database and a table with a non-default storage policy

When you've confirmed that all the required policies are present, you can proceed to creating a database.

  1. Type the following command to create a database:

    CREATE DATABASE IF NOT EXISTS "db_hits"
    
  2. You have the database, and now it's time to create a table for the data you'll upload later. You'll need to define a TTL expression, which sets the lifetime of a table row. In this case, the interval is the number of days between the last date in the dataset (December 31, 2016) and the current date. Rows with dates inside this interval stay on the local disk storage, while all rows recorded before it move to object storage.

    Send the following query to create a table that will automatically split data by months:

    CREATE TABLE db_hits.hybrid_storage_table ON CLUSTER default
       (
          Hit_ID Int32, 
          Date Date, 
          Time_Spent Float32, 
          Cookie_Enabled Int32, 
          Region_ID Int32, 
          Gender String, 
          Browser String, 
          Traffic_Source String, 
          Technology String
       )
       ENGINE = ReplicatedMergeTree
       PARTITION BY Date
       ORDER BY (Hit_ID, Date)
       TTL Date + toIntervalDay(dateDiff('day', toDate('2016-12-31'), now())) TO DISK 'object_storage'
       SETTINGS storage_policy = 'hybrid_storage'
    

    Warning

    The TTL expression above is tailored to the selected test dataset: it splits data collected long ago into parts placed at different storage levels. For most tables, which are constantly updated with new data, a simpler TTL expression will suffice.

    For example, to move the data older than 5 days to the object storage, execute the following query:

    TTL Date + INTERVAL 5 DAY TO DISK 'object_storage'
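    To see why the tutorial's longer expression pins the boundary at the end of 2016: a row moves once Date + (today - 2016-12-31) has passed, which holds exactly when Date is on or before 2016-12-31. A quick sketch of that arithmetic, assuming a Linux shell with GNU date:

```shell
# Illustrative check: a row's partition moves to object storage once
# row_date + (today - 2016-12-31) <= today, i.e. when row_date <= 2016-12-31.
cutoff=$(date -d "2016-12-31" +%s)
for row_date in 2016-06-15 2017-03-01; do
  if [ "$(date -d "$row_date" +%s)" -le "$cutoff" ]; then
    echo "$row_date -> object_storage"
  else
    echo "$row_date -> default (local disk)"
  fi
done
```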
    

Upload data to your cluster

Docker

  1. Now, you can open another terminal instance or exit from your cluster by typing the exit command.

  2. In a separate terminal, run the following command to fetch the data from our S3 bucket and upload it with an INSERT query:

    curl https://doublecloud-docs.s3.eu-central-1.amazonaws.com/data-sets/hits_sample.csv \
    | docker run --network host --rm -i \
    clickhouse/<Native interface connection string> \
    --query="INSERT INTO db_hits.hybrid_storage_table FORMAT CSVWithNames" \
    --format_csv_delimiter=";"
    
    The complete Docker command structure
    curl https://doublecloud-docs.s3.eu-central-1.amazonaws.com/data-sets/hits_sample.csv \
    | docker run --network host --rm -i \
       clickhouse/clickhouse-client \
       --host <FQDN of your cluster> \
       --port 9440 --secure \
       --user <your cluster username> \
       --password <your cluster password> \
       --query="INSERT INTO db_hits.hybrid_storage_table FORMAT CSVWithNames" \
       --format_csv_delimiter=";"
    
If you use the natively installed clickhouse-client:

  1. Now, open another terminal instance or exit from your cluster by typing the exit command.

  2. Run the following query that will fetch the data from our S3 bucket and upload it with the INSERT query:

    curl https://doublecloud-docs.s3.eu-central-1.amazonaws.com/data-sets/hits_sample.csv \
    | <Native interface connection string> \
    --query="INSERT INTO db_hits.hybrid_storage_table FORMAT CSVWithNames" \
    --format_csv_delimiter=";"
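    The commands above assume the dataset is semicolon-delimited and starts with a header row, which is what FORMAT CSVWithNames together with --format_csv_delimiter=";" expresses. A local illustration with a made-up two-row stand-in (not the real dataset):

```shell
# Tiny stand-in for hits_sample.csv: semicolon-delimited, header row first.
printf 'Hit_ID;Date;Time_Spent\n1;2016-01-10;12.5\n2;2017-01-06;3.2\n' > /tmp/sample.csv
# CSVWithNames consumes the first line as column names; the rest are data rows.
header=$(head -n 1 /tmp/sample.csv)
rows=$(tail -n +2 /tmp/sample.csv | wc -l | tr -d ' ')
echo "header: $header"
echo "data rows: $rows"
```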
    

If the upload was successful, run a query that selects the partitions, their names, and the disks where these partitions are located:

Docker

  1. Now, you can open another terminal instance or exit from your cluster by typing the exit command.

  2. In a separate terminal, run the following command:

    docker run --network host --rm -it \
    clickhouse/<Native interface connection string> \
    --query="SELECT partition, name, disk_name FROM system.parts WHERE table = 'hybrid_storage_table'"
    
    The complete Docker command structure
    docker run --network host --rm -it \
       clickhouse/clickhouse-client \
       --host <FQDN of your cluster> \
       --port 9440 --secure \
       --user <your cluster username> \
       --password <your cluster password> \
       --query="SELECT partition, name, disk_name FROM system.parts WHERE table = 'hybrid_storage_table'"
    
If you use the natively installed clickhouse-client:

  1. Now, open another terminal instance or exit from the current clickhouse-client session with the exit command.

  2. Run the following query:

    <Native interface connection string> \
    --query="SELECT partition, name, disk_name FROM system.parts WHERE table = 'hybrid_storage_table'"
    

If you configured your table and uploaded data correctly, the terminal output will show that your data is divided between the local disk and object storage. A fragment of this output looks as follows:

┌─partition──┬─name───────────┬─disk_name──────┐
│ 2016-01-10 │ 20160110_0_0_0 │ object_storage │
│ 2016-01-30 │ 20160130_0_0_0 │ object_storage │
│ .......... │ .............. │ .............. │
│ 2016-12-25 │ 20161225_0_0_0 │ object_storage │
│ 2016-12-31 │ 20161231_0_0_0 │ object_storage │
│ 2017-01-06 │ 20170106_0_0_0 │ default        │
│ 2017-01-12 │ 20170112_0_0_0 │ default        │
│ 2017-01-18 │ 20170118_0_0_0 │ default        │
│ .......... │ .............. │ .............. │
│ 2017-09-09 │ 20170909_0_0_0 │ default        │
└────────────┴────────────────┴────────────────┘

See also