Your quick-start into the DoubleCloud world

Take your first steps into the exciting new world of data management with DoubleCloud!

To understand how our service works and to start using it in your day-to-day work, let's learn the fundamentals:

  • Create a cluster and a database for data storage.
  • Transfer the data from a remote data warehouse to a DoubleCloud cluster.

By following this step-by-step tutorial, you will learn how to:

  1. Prepare to acquire the data

    1. Create a Managed ClickHouse® cluster

    2. Create a ClickHouse® database

  2. Transfer the data

    1. Create a source endpoint

    2. Create a target endpoint

    3. Create and activate a transfer

Prepare to acquire the data

First, you need to create a cluster and a database to store the data:

  1. Create a Managed ClickHouse® cluster

    This is your resource allocation tool. It allows you to acquire CPU, memory, and storage quotas to operate your databases.

  2. Create a ClickHouse® database

    This section will explain how to talk to your cluster directly from the terminal and use the ClickHouse® CLI toolkit.

Create a Managed ClickHouse® cluster

Warning

During the trial period, you can create clusters with up to 8 cores, 32 GB RAM, and 400 GB storage. If you need to raise the quotas, don't hesitate to contact our support.

  1. Go to the console.

  2. Log in to DoubleCloud if you already have an account, or create one if you're opening the console for the first time.

  3. Select Clusters from the list of services on the left.

  4. Click Create cluster in the upper-right corner of the page.

    1. Select ClickHouse®.

    2. Choose a provider and a region closest to your geographical location.

    3. Under Resources:

      • Select a preset for CPU, RAM capacity, and storage space. The minimal s2-c2-m4 preset will be more than enough for this tutorial.

        A resource preset has the following structure:

        <CPU platform>-c<number of CPU cores>-m<number of gigabytes of RAM>
        

        There are three available CPU platforms:

        • g - ARM Graviton

        • i - Intel (x86)

        • s - AMD (x86)

        For example, the i1-c2-m8 preset means a 2-core CPU on the Intel (x86) platform with 8 gigabytes of RAM.

        You can see the availability of CPU platforms across our Managed Service for ClickHouse® areas and regions.

      • Keep 1 replica and 1 shard.

    4. Under Basic settings:

      • Enter the cluster Name: doublecloud-quickstart.

      • Keep the Version as is - this is the latest stable version of ClickHouse®.

    5. Under Networking, in VPC, specify in which DoubleCloud VPC to locate your cluster. Use the default value in the previously selected region if you don't need to create this cluster in a specific network.

    6. Click Submit.

    Your cluster will appear with the Creating status on the Clusters page. Setting everything up may take some time. You can safely go to the next section of the tutorial while the cogs are moving in the background.

  5. When the cluster is ready to operate, its state in the console will change to Alive.

    Tip

    The DoubleCloud service creates the superuser admin and its password automatically. You can find both the User and the Password in the Overview tab on the cluster information page.

    To create users for other roles, see Manage ClickHouse® users.

Create a ClickHouse® database

This section gives you a glimpse into talking directly to your Managed ClickHouse® cluster from your terminal.

Tip

This tutorial shows how to connect to a cluster with a native CLI client and Docker, but you can use other tools of your choice. Refer to the following article to see other connection options: Connect to a ClickHouse® cluster.
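
If you don't have a ClickHouse® client on your machine yet, one quick option is the official standalone binary (a minimal sketch, assuming Linux or macOS; the install step below covers other methods):

    # Download the self-contained clickhouse binary (official quick-install script)
    curl https://clickhouse.com/ | sh

    # The same binary bundles the client; check that it runs
    ./clickhouse --version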

  1. Install the ClickHouse® client.

  2. Let's connect to your new cluster:

    1. Select Clusters from the list of services on the left.

    2. Select the name of your cluster to open its information page. By default, you will see the Overview tab.

    3. Under Connection strings, find the Native interface string and click Copy.

    4. Run the following command in your terminal:

    docker run --network host --rm -it clickhouse/<Native interface connection string>
    
    The complete Docker command structure:

    docker run --network host --rm -it \
        clickhouse/clickhouse-client \
        --host <FQDN of your cluster> \
        --secure \
        --user <cluster user name> \
        --password <cluster user password> \
        --port 9440
    
  3. You are now connected to your cluster via the clickhouse-client. It's time to create a database. Let's call it start_db:

    CREATE DATABASE IF NOT EXISTS "start_db"
    
  4. Let's test if the database was created successfully. Type SHOW DATABASES. You should see start_db in the readout:

    ┌─name───────────────┐
    │ INFORMATION_SCHEMA │
    │ _system            │
    │ default            │
    │ information_schema │
    │ start_db           │
    │ system             │
    └────────────────────┘
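
You can also run a single query without opening the interactive shell: clickhouse-client accepts a --query flag. A minimal sketch using the same Docker image (replace the placeholders with the values from your cluster's Overview tab):

    # Run a single query non-interactively and exit
    docker run --network host --rm -it clickhouse/clickhouse-client \
        --host <FQDN of your cluster> \
        --secure \
        --port 9440 \
        --user admin \
        --password <cluster user password> \
        --query 'SHOW DATABASES'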
    

Transfer the data

Now it's time to set up the tools to get the data from a remote source and transfer it to your start_db ClickHouse® database. To accomplish this, you need to complete the following steps:

  • Create a source endpoint

    This is your data fetcher. It will connect to a remote source and send the data to your Managed ClickHouse® cluster.

  • Create a target endpoint

    This is your receiver. It will acquire the data sent by the source endpoint and write it to the database on your Managed ClickHouse® cluster.

  • Create and activate a transfer

    This is your data pipeline tool. It will connect your endpoints and ensure the integrity of the data.

Create a source endpoint

  1. In the list of services, select Transfer.

  2. Select the Endpoints tab, click Create endpoint and choose Source.

  3. Select S3 as the Source type.

  4. Under Basic settings:

    1. Enter the Name of the endpoint: s3-source-quickstart.

    2. (optional) Enter a Description of the endpoint.

  5. Specify endpoint parameters under Endpoint settings:

    1. Specify the Dataset: bookings.

    2. Provide the Path pattern: data-sets/bookings.csv.

    3. Auto-infer the Schema by typing {}.

    4. Select the data format - CSV.

  6. Under CSV, specify the Delimiter - ;. Keep the rest of the fields with their default values.

  7. Under S3: Amazon Web Services, enter the name of the Bucket: doublecloud-docs. As the bucket is public, leave the rest of the fields blank.

  8. Optionally, you can test your source endpoint:

    After configuring your endpoint, click Test. An endpoint test dialog will open.

    You can use two runtime types for connection testing - Dedicated and Serverless.

    Runtime compatibility warning

    Don't use endpoints with different runtime types in the same transfer - this will cause it to fail.

    Dedicated

    The Transfer service uses this runtime to connect to your data source via an internal or external network.

    This runtime is useful when you need to use a specific network - it may be an external network or an internal one with a peer connection.

    To run the test with this runtime:

    1. Under Runtime, select Dedicated.

    2. From the drop-down list, select the network to use to connect to your data source.

    3. Click Test connection.

      After completing the test procedure, you'll see a list of the endpoint's data sources. For Apache Kafka® endpoints, you'll also see data samples for each data source.

    Serverless

    The Transfer service uses this runtime to connect to your data sources available from the internet via an automatically defined network.

    Use this runtime to test an endpoint to a data source located outside isolated networks.

    To run the test with this runtime:

    1. Under Runtime, select Serverless.

    2. Click Test.

      After completing the test procedure, you'll see a list of the endpoint's data sources. For Apache Kafka® endpoints, you'll also see data samples for each data source.

    Warning

    Please be patient - testing may take up to several minutes.

  9. Click Submit. The new endpoint will appear in your Endpoints list.

The transmitter is ready to go. Next, we need an endpoint to receive the data and write it to your cluster.
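
If you'd like to preview the data you're about to move, the source file lives in a public S3 bucket. A sketch using the AWS CLI (assuming you have it installed; --no-sign-request lets you read the public bucket without credentials):

    # Stream the public CSV to stdout and show the first few rows
    aws s3 cp s3://doublecloud-docs/data-sets/bookings.csv - \
        --no-sign-request | head -n 5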

Create a target endpoint

  1. In the list of services, select Transfer.

  2. Select the Endpoints tab, click Create endpoint and choose Target.

  3. Select ClickHouse® as the Target type.

  4. Under Basic settings:

    1. Enter the Name of the endpoint: clickhouse-target-quickstart.

    2. (optional) Enter a Description of the endpoint.

  5. Specify endpoint parameters under Endpoint settings:

    1. Select the connection type. For this tutorial, select Managed cluster.

    2. Specify the connection properties:

      • Under Managed cluster, select your cluster name (doublecloud-quickstart) from the drop-down list.

      • Specify the User of the database: admin.

      • Enter the Password of the database user.

      • Specify the Database name you want to transfer the data to: start_db.

    3. Under Cleanup policy, select Drop.
  6. Leave all the other fields blank or with their default values.

  7. Click Submit. The new endpoint will appear in your Endpoints list.

Good work! We've now created an endpoint that will receive the data and write it to your ClickHouse® database. All that's left is a tool to connect both endpoints and transfer the data.

Create and activate a transfer

  1. In the list of services, select Transfer.

  2. Click Create transfer.

  3. Under Endpoints:

    1. Select s3-source-quickstart from the Source drop-down menu.

    2. Select clickhouse-target-quickstart from the Target drop-down menu.

  4. Under Basic settings:

    1. Enter the transfer Name: transfer-quickstart.

    2. (optional) Enter the transfer Description.

  5. Under Transfer settings, select the Transfer type. In this use case, we choose Snapshot to make the transfer process as fast as possible.

  6. Leave all the other fields blank or with their default values.

  7. Click Submit. The new transfer will appear in your Transfers tab.

  8. After you've created a transfer, click Activate.

  9. Wait until your transfer status changes to Done.

  10. Check the data transferred to your ClickHouse® database:

    1. Open your terminal.

    2. Connect to your cluster and run the following query:

      SELECT * FROM "start_db".bookings LIMIT 100
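
      A couple of optional sanity checks you can run in the same client session (a sketch; the bookings table name comes from the Dataset you specified on the source endpoint):

      -- Count the transferred rows
      SELECT count(*) FROM "start_db".bookings;

      -- Inspect the table schema created by the transfer
      DESCRIBE TABLE "start_db".bookings;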
      

Nice work! You've transferred all the data from a remote source and replicated it with complete integrity in your own ClickHouse® database.

Keep exploring

For more information on what you can do with DoubleCloud, see the links below and continue exploring!