S3 Sink Apache Kafka® Connector
This connector periodically reads data from Apache Kafka® topics and uploads it to the specified Amazon S3 storage.
- Go to the Clusters page.
- Select the Apache Kafka® cluster for which you want to create a connector.
- Open the Connectors tab and click Create. The Connector configuration page will open.
- Select S3 Sink.
- Select one or more topics from which to export the data.
- Choose the Compression type.
- Under File max records, specify the number of records after which the Apache Kafka® connector creates a new file at the destination.
- Under Basic parameters:
  - Name your connector.
  - Under Max tasks, specify the maximum number of tasks the connector will run in parallel.
- Under S3 connection:
  - Specify the Endpoint for storage access. Obtain the endpoint information from your storage provider.
  - (Optional) Specify the endpoint's AWS region (us-east-1 by default). See the complete list of available regions in the AWS documentation.
  - Specify the Bucket name to which you want to replicate the messages.
  - Under Access key ID and Secret access key, provide the credentials for your endpoint.
- Click Submit.
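The settings collected by the console form above can be sketched as a single configuration payload. This is an illustrative sketch only: the field names, topic names, and credential values below are assumptions, not the exact API schema.

```python
# Hypothetical S3 Sink connector settings mirroring the console form.
# All field names and example values are illustrative assumptions.
s3_sink_settings = {
    "topics": ["orders", "payments"],       # topics to export (example names)
    "compression_type": "GZIP",             # Compression type
    "file_max_records": 1000,               # records per file at the destination
    "name": "my-s3-sink",                   # connector name (Basic parameters)
    "tasks_max": 1,                         # Max tasks to run in parallel
    "s3_connection": {
        "endpoint": "storage.example.com",  # obtain from your storage provider
        "region": "us-east-1",              # optional; us-east-1 by default
        "bucket_name": "my-bucket",         # bucket to replicate messages to
        "access_key_id": "EXAMPLE_KEY_ID",          # placeholder credential
        "secret_access_key": "EXAMPLE_SECRET_KEY",  # placeholder credential
    },
}
```

Keeping the settings in one structure like this makes it easy to reuse them when creating the connector through the API instead of the console.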
Use the ConnectorService create method and pass the following parameters:

- cluster_id: the ID of the cluster. To find the cluster_id, get a list of clusters in the project.
- connector_spec: the connector parameters.
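A minimal sketch of assembling that call, assuming a generic JSON-style request body; the client object and exact request shape are hypothetical, so consult the API reference for the real SDK types:

```python
def build_create_request(cluster_id: str, connector_spec: dict) -> dict:
    """Assemble the payload for a ConnectorService create call.

    The two keys mirror the parameters named in the docs: cluster_id
    identifies the target cluster, connector_spec holds the connector
    parameters.
    """
    return {"cluster_id": cluster_id, "connector_spec": connector_spec}

request = build_create_request(
    "kafka-cluster-id",                       # found via the list of clusters
    {"name": "my-s3-sink", "tasks_max": 1},   # example connector parameters
)
# client.connector_service.create(**request)  # hypothetical client invocation
```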