Create an Apache Kafka® cluster

Warning

During the trial period, you can create clusters with up to 8 cores, 32 GB RAM, and 400 GB storage. If you need to raise the quotas, don't hesitate to contact our support.

  1. Go to the Clusters page in the console.

  2. Click Create cluster in the upper-right corner of the page.

  3. Select Apache Kafka®.

  4. Choose a provider and a region.

  5. Specify the resource settings:

    1. Under Resources:

      • Select a preset for CPU and RAM.

        Understand your Apache Kafka® resource preset

        A resource preset has the following structure:

        <CPU platform>-c<number of CPU cores>-m<number of gigabytes of RAM>
        

        There are three available CPU platforms:

        • g - ARM Graviton

        • i - Intel (x86)

        • s - AMD (x86)

        For example, the i1-c2-m8 preset provides an Intel-based host with a 2-core CPU and 8 GB of RAM.

        You can see the availability of CPU platforms across our Managed Service for Apache Kafka® areas and regions.
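
        If you need to handle preset names programmatically, the naming scheme is easy to parse. The sketch below is illustrative only: the parse_preset helper isn't part of any DoubleCloud SDK, and it treats the digit after the platform letter as a hardware generation, as in the examples above.

        import re

        def parse_preset(preset_id: str) -> dict:
            """Split a preset ID such as 'i1-c2-m8' into its components."""
            platforms = {"g": "ARM Graviton", "i": "Intel (x86)", "s": "AMD (x86)"}
            match = re.fullmatch(r"([gis])(\d+)-c(\d+)-m(\d+)", preset_id)
            if match is None:
                raise ValueError(f"unrecognized preset ID: {preset_id}")
            platform, generation, cores, ram = match.groups()
            return {
                "platform": platforms[platform],
                "generation": int(generation),
                "cpu_cores": int(cores),
                "ram_gb": int(ram),
            }

        parse_preset("i1-c2-m8")
        # {'platform': 'Intel (x86)', 'generation': 1, 'cpu_cores': 2, 'ram_gb': 8}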

      • Specify your SSD Storage capacity.

        When you increase your storage capacity, the additional space is available immediately after you submit the new configuration. However, the full modification of the storage volume may take from minutes to hours, depending on the size of the increase. For example, modifying a 1 TB volume typically takes up to six hours.

        Warning

        After you increase your SSD Storage capacity, wait at least six hours per 1 TB before modifying it again.

      • Select the Number of Zones. The total number of hosts is the number of zones multiplied by the number of brokers per zone: for example, three zones with two brokers each give six hosts.

      • Specify the number of Brokers per zone.

  6. Under Basic settings:

    • Enter the cluster Name, for example, Quickstart-cluster.

    • Select the version of Apache Kafka® for your cluster from the Version drop-down list. For most clusters, we recommend using the latest version.

  7. Under Networking → VPC, select the network where you want to create the cluster.

    If you don’t need to place the cluster in a specific network, leave the preselected default option.

  8. Under Advanced:

    1. Under Maintenance settings, select between arbitrary and scheduled maintenance:

      About maintenance settings

      If you select Arbitrary, DoubleCloud selects the maintenance window automatically. Usually, maintenance takes place at the earliest available time slot.

      Warning

      If your cluster has only one host, arbitrary maintenance can make it unavailable at a random time.

      To perform maintenance on a specific date and time, select By schedule and specify the day and time (UTC) when you want the cluster maintenance to be performed.

    2. Under Autoscaling, select whether you want the cluster resources to scale automatically and specify the maximum limits they can increase to.

      If autoscaling is enabled, DoubleCloud regularly checks resource utilization and automatically adjusts the cluster resources based on usage. Learn more

    3. Under Cluster settings:

      1. Keep Data encryption enabled (it's on by default). We use the LUKS specification and KMS to encrypt data.

      2. Check the Schema registry box to enable it.

      3. Specify or adjust your cluster's settings under kafkaConfig. For more information, see the Settings reference.

  9. Click Submit.

Your cluster will appear with the Creating status on the Clusters page in the console. Setting everything up may take some time. When the cluster is ready, its status changes to Alive.

Note

The DoubleCloud service creates the superuser admin and its password automatically. You can find both the User and the Password in the Overview tab on the cluster information page.

You can find the fully qualified domain names (FQDNs) of the cluster hosts in the Hosts tab.
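
With a host FQDN and the admin credentials, you can connect to the cluster from any Kafka client. Below is a minimal sketch using the kafka-python library; the port, security protocol, and SASL mechanism shown are assumptions, so check the connection details on your cluster's Overview tab:

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    # Replace with a host FQDN from the Hosts tab and the port from the Overview tab
    bootstrap_servers="<host-FQDN>:<port>",
    security_protocol="SASL_SSL",    # assumption: TLS with SASL authentication
    sasl_mechanism="SCRAM-SHA-512",  # assumption: verify in the console
    sasl_plain_username="admin",     # the automatically created superuser
    sasl_plain_password="<password from the Overview tab>",
)
producer.send("first-topic", b"hello from DoubleCloud")  # hypothetical topic name
producer.flush()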

You can create a Managed Apache Kafka® cluster using the DoubleCloud Terraform provider.

Tip

If you haven't used Terraform before, refer to Create DoubleCloud resources with Terraform for more detailed instructions.

Example provider and resource configuration:

# main.tf

terraform {
  required_providers {
    doublecloud = {
      source    = "registry.terraform.io/doublecloud/doublecloud"
    }
  }
}

provider "doublecloud" {
  authorized_key = file("authorized_key.json")
}

data "doublecloud_network" "default" {
  name       = "NETWORK_NAME"              # Replace with the name of the network you want to use
  project_id = "DOUBLECLOUD_PROJECT_ID"    # Replace with your project ID
}

resource "doublecloud_kafka_cluster" "example-kafka" {
  project_id = "DOUBLECLOUD_PROJECT_ID"    # Replace with your project ID
  name       = "example-kafka"
  region_id  = "eu-central-1"
  cloud_type = "aws"
  network_id = data.doublecloud_network.default.id

  resources {
    kafka {
      resource_preset_id = "s2-c2-m4"
      disk_size          = 34359738368
      broker_count       = 1
      zone_count         = 1
    }
  }

  schema_registry {
    enabled = false
  }

  access {
    data_services    = ["transfer"]
    ipv4_cidr_blocks = [
      {
        value       = "10.0.0.0/24"
        description = "Office in Berlin"
      }
    ]
  }
}
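
With this configuration saved as main.tf, run terraform init to initialize the provider, then terraform plan to preview the changes and terraform apply to create the cluster.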

To learn how to get the authorized_key.json file, refer to Create an API key. You can find the DoubleCloud project ID on the project settings page.

Tip

This example contains a minimum set of parameters required to create a functional example cluster. When you create your production cluster, make sure to use a configuration that suits your needs. For a full list of available parameters, refer to the DoubleCloud Kafka cluster resource schema.

To create an Apache Kafka® cluster, use the ClusterService Create method. The following parameters are required to create a functional cluster:

  • project_id - the ID of your project. You can get this value on your project's information page.

  • cloud_type - aws or gcp.

  • region_id - for the list of available regions and their region codes, see Areas and regions for Managed Service for Apache Kafka®.

  • name - your cluster's name. It must be unique within the project.

  • resources - specify the following from the doublecloud.kafka.v1.Cluster model:

    • resource_preset_id - specify the name of the hardware resource preset for your cluster. For the list of available presets for Apache Kafka® clusters, see DoubleCloud hardware instances.

    • disk_size - the storage size for your cluster in bytes. We recommend allocating no less than 34359738368 bytes (32 GB); see the conversion snippet after this list.

    • broker_count - specify the number of brokers.

    • zone_count - specify the number of zones.

  • You can also enable schema registry for your cluster: use the schema_registry_config object within the ClusterService Create method.
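
For reference, a quick way to get the byte value for a given size in gibibytes (plain Python arithmetic, not part of any SDK):

>>> 32 * 1024**3  # 32 GiB expressed in bytes
34359738368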

View this example on GitHub

import json
import logging

from google.protobuf.wrappers_pb2 import Int64Value

import doublecloud

from doublecloud.kafka.v1.cluster_pb2 import ClusterResources
from doublecloud.kafka.v1.cluster_service_pb2 import CreateClusterRequest
from doublecloud.kafka.v1.cluster_service_pb2_grpc import ClusterServiceStub

def create_cluster(sdk, project_id, cloud_type, region_id, name, network_id):
    cluster_service = sdk.client(ClusterServiceStub)
    operation = cluster_service.Create(
        CreateClusterRequest(
            project_id=project_id,
            cloud_type=cloud_type,
            region_id=region_id,
            name=name,
            resources=ClusterResources(
                kafka=ClusterResources.Kafka(
                    resource_preset_id="s2-c2-m4",
                    disk_size=Int64Value(value=<storage_size_in_bytes>),
                    broker_count=Int64Value(value=<number_of_brokers>),
                    zone_count=Int64Value(value=<number_of_zones>),
                )
            ),
            network_id=network_id,
        )
    )
    logging.info("Creating initiated")
    return operation
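
A minimal usage sketch for the function above, assuming an authorized_key.json API key like the one used in the Terraform example. The project and network IDs are placeholders, and the SDK constructor call follows the pattern from the DoubleCloud API Python SDK examples, so verify it against the repository linked above:

def main():
    # Authenticate with the API key file described in Create an API key
    with open("authorized_key.json") as key_file:
        sdk = doublecloud.SDK(service_account_key=json.loads(key_file.read()))

    operation = create_cluster(
        sdk,
        project_id="<your_project_id>",      # placeholder
        cloud_type="aws",
        region_id="eu-central-1",
        name="example-kafka",
        network_id="<your_network_id>",      # placeholder
    )
    logging.info("Operation ID: %s", operation.id)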

The equivalent example in Go:

package main

import (
   "context"
   "flag"
   "fmt"
   "log"

   "github.com/doublecloud/go-genproto/doublecloud/kafka/v1"
   dc "github.com/doublecloud/go-sdk"
   "google.golang.org/protobuf/types/known/wrapperspb"

   "github.com/doublecloud/go-sdk/iamkey"
   "github.com/doublecloud/go-sdk/operation"
)

func createCluster(ctx context.Context, dc *dc.SDK, flags *cmdFlags) (*operation.Operation, error) {
   // See https://double.cloud/docs/en/public-api/api-reference/kafka/ClusterService/all_operations#request2
   x, err := dc.Kafka().Cluster().Create(ctx, &kafka.CreateClusterRequest{
      ProjectId: *flags.projectID,
      CloudType: "aws",
      RegionId:  *flags.region,
      Name:      *flags.name,
      Resources: &kafka.ClusterResources{
         Kafka: &kafka.ClusterResources_Kafka{
            ResourcePresetId: "s2-c2-m4",
            DiskSize:         wrapperspb.Int64(32 << 30), // 32 GiB, in line with the recommended minimum
            BrokerCount:      wrapperspb.Int64(1),
            ZoneCount:        wrapperspb.Int64(1),
         },
      },
      NetworkId: *flags.networkID,
   })
   if err != nil {
      return nil, err
   }
   log.Println("Creating kafka cluster ...")
   log.Println("https://app.double.cloud/kafka/" + x.ResourceId + "/operations")
   op, err := dc.WrapOperation(x, err)
   if err != nil {
      return nil, err
   }
   err = op.Wait(ctx)
   return op, err
}

For more in-depth examples, check out the DoubleCloud API Go SDK repository.

See also