Create an Apache Kafka® cluster
Warning
During the trial period, you can create clusters with up to 8 cores, 32 GB RAM, and 400 GB storage. If you need to raise the quotas, don't hesitate to contact our support.
- Go to the Clusters overview page.
- Click Create cluster in the upper-right corner of the page.
- Select Apache Kafka®.
- Choose a provider and a region.
- Specify the resource settings:
  - Under Resources:
    - Select a preset for CPU and RAM.
      Understand your Apache Kafka® resource preset
      A resource preset has the following structure:

      <CPU platform>-c<number of CPU cores>-m<number of gigabytes of RAM>

      There are three available CPU platforms:
      - g - ARM Graviton
      - i - Intel (x86)
      - s - AMD (x86)

      For example, the i1-c2-m8 preset is an Intel (x86) platform with a 2-core CPU and 8 gigabytes of RAM. You can see the availability of CPU platforms across our Managed Service for Apache Kafka® areas and regions. A short illustrative sketch of this naming scheme follows these steps.
    - Specify your SSD Storage capacity.
      When you increase your storage capacity, it's available for use immediately after you submit the new configuration. However, the whole process of modifying your storage volume may take from minutes to hours, depending on the size of the increase. For example, a 1 TB volume typically takes up to 6 hours to modify.
      Warning
      After you increase your SSD Storage capacity, wait at least six hours per 1 TB before modifying it again.
    - Select the Number of zones. The number of zones multiplied by the number of brokers per zone determines the total number of hosts in your cluster (also shown in the sketch after these steps).
    - Specify the number of Brokers per zone.
  - Under Basic settings:
    - Enter the cluster Name, for example, Quickstart-cluster.
    - Select the version of Apache Kafka® for your cluster from the Version drop-down list. For most clusters, we recommend using the latest version.
  - Under Networking → VPC, specify in which DoubleCloud VPC to locate your cluster. Use the default value in the previously selected region if you don't need to create this cluster in a specific network.
  - Under Advanced:
    - Under Maintenance settings, select the scheduling type:
      - Arbitrary to delegate the choice of the maintenance window to DoubleCloud. Your cluster usually performs maintenance at the earliest available time slot.
        Warning
        We recommend against this scheduling type for single-host clusters, as maintenance can make your cluster unavailable at an unpredictable time.
      - By schedule to set the weekday and time (UTC) when DoubleCloud may perform maintenance on your cluster.
    - Under Cluster settings:
      - Enable Data encryption (it's enabled by default). We use LUKS for disk encryption.
      - Check the Schema registry box to enable it.
      - Specify or adjust your cluster's settings under kafkaConfig. For more information, see the Settings reference.
- Click Submit.

Your cluster will appear with the Creating status on the Clusters page in the console. Setting everything up may take some time. When the cluster is ready, its status changes to Alive.
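As a side note to the resource steps above, the sketch below is plain Python with example values only, not part of the DoubleCloud tooling or API. It decodes a preset name such as i1-c2-m8 and multiplies zones by brokers per zone to get the host count.

# Illustration only: decode a resource preset name and compute the host count.
PLATFORMS = {"g": "ARM Graviton", "i": "Intel (x86)", "s": "AMD (x86)"}

def describe_preset(preset: str) -> str:
    platform, cores, ram = preset.split("-")   # e.g. "i1-c2-m8"
    return f"{PLATFORMS[platform[0]]}: {cores.lstrip('c')} CPU cores, {ram.lstrip('m')} GB RAM"

def host_count(zones: int, brokers_per_zone: int) -> int:
    # Zones multiplied by brokers per zone gives the total number of hosts.
    return zones * brokers_per_zone

print(describe_preset("i1-c2-m8"))               # Intel (x86): 2 CPU cores, 8 GB RAM
print(host_count(zones=3, brokers_per_zone=2))   # 6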
Note
The DoubleCloud service creates the superuser admin and its password automatically. You can find both the User and the Password on the Overview tab of the cluster information page.
You can find the fully qualified domain name (FQDN) on the Hosts tab.
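Once the cluster is Alive, you can verify connectivity with the admin user and a host FQDN. The sketch below uses the confluent-kafka Python client; the broker port, SASL mechanism, and topic name are assumptions rather than confirmed settings, so copy the exact connection parameters from your cluster's Overview tab.

from confluent_kafka import Producer

# Assumptions: port 9091 and SCRAM-SHA-512 over SASL_SSL; verify both in the console.
producer = Producer({
    "bootstrap.servers": "<cluster_FQDN>:9091",  # FQDN from the Hosts tab
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-512",
    "sasl.username": "admin",                    # superuser created with the cluster
    "sasl.password": "<admin_password>",         # from the Overview tab
})

producer.produce("<existing_topic>", value=b"hello from DoubleCloud")
producer.flush()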
To create an Apache Kafka® cluster, use the ClusterService Create method. The required parameters to create a functional cluster are:
- project_id - the ID of your project. You can get this value on your project's information page.
- cloud_type - currently, you can use only the default aws type.
- region_id - for the list of available regions and their region codes, see Areas and regions for Managed Service for Apache Kafka®.
- name - your cluster's name. It must be unique within the project.
- resources - specify the following from the doublecloud.kafka.v1.Cluster model:
  - resource_preset_id - the name of the hardware resource preset for your cluster. For the list of available presets for Apache Kafka® clusters, see DoubleCloud hardware instances.
  - disk_size - the storage size for your cluster in bytes. We recommend allocating no less than 34359738368 bytes (32 GB); see the conversion sketch below.
  - broker_count - the number of brokers per zone.
  - zone_count - the number of zones.
You can also enable schema registry for your cluster: use the schema_registry_config object within the ClusterService Create method.
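The disk_size value is plain bytes. As a quick sketch (ordinary Python, not part of the SDK), multiplying the number of gibibytes by 2**30 gives the value to pass:

# Convert a storage size in GB to the raw byte value expected by disk_size.
def gb_to_bytes(gb: int) -> int:
    return gb * 2**30

print(gb_to_bytes(32))  # 34359738368, the recommended minimum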
import logging

from google.protobuf.wrappers_pb2 import Int64Value

import doublecloud
from doublecloud.kafka.v1.cluster_pb2 import ClusterResources
from doublecloud.kafka.v1.cluster_service_pb2 import CreateClusterRequest
from doublecloud.kafka.v1.cluster_service_pb2_grpc import ClusterServiceStub


def create_cluster(sdk, project_id, region_id, name, network_id):
    # Get a gRPC stub for the Kafka ClusterService from the SDK.
    cluster_service = sdk.client(ClusterServiceStub)
    operation = cluster_service.Create(
        CreateClusterRequest(
            project_id=project_id,
            cloud_type="aws",
            region_id=region_id,
            name=name,
            resources=ClusterResources(
                kafka=ClusterResources.Kafka(
                    resource_preset_id="s1-c2-m4",
                    disk_size=Int64Value(value=34359738368),  # 32 GB
                    broker_count=Int64Value(value=1),
                    zone_count=Int64Value(value=1),
                )
            ),
            network_id=network_id,
        )
    )
    logging.info("Creating initiated")
    return operation
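A minimal usage sketch: it assumes the sdk object has already been initialized with your service account credentials (see the DoubleCloud Python SDK examples for authentication); every other value is a placeholder.

# All values below are placeholders; `sdk` must already be authenticated.
operation = create_cluster(
    sdk,
    project_id="<your_project_id>",
    region_id="<region_id>",          # a region code from Areas and regions
    name="quickstart-cluster",
    network_id="<your_network_id>",
)
print(operation)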
import (
	"context"
	"flag"
	"fmt"
	"log"

	"github.com/doublecloud/go-genproto/doublecloud/kafka/v1"
	dc "github.com/doublecloud/go-sdk"
	"github.com/doublecloud/go-sdk/iamkey"
	"github.com/doublecloud/go-sdk/operation"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func createCluster(ctx context.Context, dc *dc.SDK, flags *cmdFlags) (*operation.Operation, error) {
	// See https://double.cloud/docs/en/public-api/api-reference/kafka/ClusterService/all_operations#request2
	x, err := dc.Kafka().Cluster().Create(ctx, &kafka.CreateClusterRequest{
		ProjectId: *flags.projectID,
		CloudType: "aws",
		RegionId:  *flags.region,
		Name:      *flags.name,
		Resources: &kafka.ClusterResources{
			Kafka: &kafka.ClusterResources_Kafka{
				ResourcePresetId: "s1-c2-m4",
				DiskSize:         wrapperspb.Int64(34359738368), // 32 GB, the recommended minimum
				BrokerCount:      wrapperspb.Int64(1),
				ZoneCount:        wrapperspb.Int64(1),
			},
		},
		NetworkId: *flags.networkID,
	})
	if err != nil {
		return nil, err
	}
	log.Println("Creating kafka cluster ...")
	log.Println("https://app.double.cloud/kafka/" + x.ResourceId + "/operations")
	op, err := dc.WrapOperation(x, err)
	if err != nil {
		panic(err)
	}
	err = op.Wait(ctx)
	return op, err
}
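The call to op.Wait(ctx) blocks until the create operation completes, so createCluster returns only once the cluster has been provisioned or the operation has failed.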
For more in-depth examples, check out the DoubleCloud API Go SDK repository.