S3 Sink Apache Kafka® Connector

This connector periodically fetches data from Apache Kafka® and uploads it to the Amazon S3 storage you specify.

  1. Go to the Clusters page in the console.

  2. Select the Apache Kafka® cluster for which you want to create a connector.

  3. Open the Connectors tab and click Create. The Connector configuration page will open.

  4. Select S3 Sink.

  5. Select one or more topics from which to export the data.

  6. Choose the Compression type:

    • none (default): No compression.
    • gzip: The gzip codec.
    • snappy: The snappy codec.
    • zstd: The zstd codec.

  7. Under File max records, specify the number of records after which the connector creates a new file at the destination (see the verification sketch after these steps).

  8. Under Basic parameters:

    1. Name your connector.

    2. Under Max tasks, specify the maximum number of tasks the connector will run in parallel.

  9. Under S3 connection:

    1. Specify the Endpoint for storage access. You can obtain the endpoint information from your storage provider.

    2. (optional) Specify the endpoint's AWS region (us-east-1 by default). See the complete list of available regions in the AWS documentation.

    3. Specify the Bucket name to which you want to replicate the messages.

    4. Under Access key ID and Secret access key, provide the credentials for your endpoint. You can test these values with the connection-check sketch after these steps.

  10. Click Submit.
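
To confirm that the values you entered under S3 connection actually work, you can run a quick check outside the console. Below is a minimal sketch using boto3; the endpoint, bucket name, and credentials are placeholder assumptions to replace with your own.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder values -- substitute the endpoint, region, bucket,
# and credentials you entered under "S3 connection".
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",   # hypothetical endpoint
    region_name="us-east-1",                      # the default region
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

try:
    # head_bucket succeeds only if the bucket exists and the
    # credentials grant access to it.
    s3.head_bucket(Bucket="your-bucket-name")
    print("Bucket is reachable with these credentials.")
except ClientError as err:
    print(f"S3 connection check failed: {err}")
```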
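
Once the connector is running, you can also verify the effect of Compression type and File max records at the destination: with File max records set to N, exporting M records should produce about ceil(M / N) objects. A minimal verification sketch, assuming gzip compression and a hypothetical "topics/" key prefix (adjust both to what you actually configured):

```python
import gzip
import math

import boto3

# Same placeholder connection values as in the previous sketch.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# With File max records = 1000, exporting 2500 records should yield
# ceil(2500 / 1000) = 3 files at the destination.
print(math.ceil(2500 / 1000))  # 3

# List the uploaded objects; the "topics/" prefix is an assumption.
resp = s3.list_objects_v2(Bucket="your-bucket-name", Prefix="topics/")
objects = resp.get("Contents", [])
for obj in objects:
    print(obj["Key"], obj["Size"])

# If Compression type is gzip, objects can be decompressed on read.
if objects:
    body = s3.get_object(Bucket="your-bucket-name",
                         Key=objects[0]["Key"])["Body"].read()
    print(gzip.decompress(body)[:200])
```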

Alternatively, use the ConnectorService create method and pass the connector parameters described in the steps above.
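
The exact ConnectorService request schema is not reproduced here; the sketch below is a hypothetical REST-style call built with the requests library, mirroring the console fields from the steps above. The URL, authorization header, and every field name are illustrative assumptions, not the real API contract.

```python
import requests

# Hypothetical payload mirroring the console fields; field names are
# assumptions -- consult your provider's API reference for the real schema.
payload = {
    "cluster_id": "your-cluster-id",
    "connector": {
        "name": "my-s3-sink",              # Basic parameters: Name
        "tasks_max": 1,                    # Basic parameters: Max tasks
        "topics": ["topic-1", "topic-2"],  # topics to export
        "compression_type": "gzip",        # none | gzip | snappy | zstd
        "file_max_records": 1000,          # File max records
        "s3_connection": {
            "endpoint": "https://storage.example.com",
            "region": "us-east-1",
            "bucket": "your-bucket-name",
            "access_key_id": "YOUR_ACCESS_KEY_ID",
            "secret_access_key": "YOUR_SECRET_ACCESS_KEY",
        },
    },
}

resp = requests.post(
    "https://api.example.com/kafka/v1/connectors",  # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
resp.raise_for_status()
print(resp.json())
```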
