In Source, select an existing source endpoint or create a new one.
In Target, select an existing target endpoint or create a new one.
Under Basic settings, enter a transfer name and (optionally) a description.
Under Transfer Parameters, select a Transfer type.
Available transfer types depend on the type of source and destination endpoints:
Snapshot
Makes a one-time transfer of the database snapshot.
Under Snapshot settings → Parallel snapshot settings, configure the transfer performance:
Processes count sets the number of parallel container instances running the transfer. Increasing this number speeds up transfer execution. You can specify up to 8 instances.
Threads count sets the number of parallel threads within each container. You can run up to 10 threads.
The average download speed of data transfers is between 1 and 10 MB/s.
Add one or more Incremental tables. With incremental tables, you can transfer only the data that has changed. Each of these tables has the following fields:
Schema corresponds to the database schema (as in PostgreSQL), database name, or dataset.
Table is the table in the source to compare with your target table.
Key column is the name of a column that contains a value (a cursor) indicating whether to increment the table. A common example of a cursor is a column with timestamps. Refer to Airbyte's Incremental Sync - Append documentation to learn more about cursor definitions and usage examples.
Start value (optional) defines a value in the Key column from which to start tracking changes. For example, you can specify a date as the Start value. In this case, the service will transfer only the rows with a date value greater than the start value.
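Conceptually, an incremental table works like the sketch below. The orders table, the updated_at key column, and the SQLite engine are illustrative assumptions, not the service's implementation:

```python
import sqlite3

# Hypothetical source table with an "updated_at" key column (the cursor).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-02-01"), (3, "2024-03-01")])

def incremental_fetch(conn, cursor_value):
    """Fetch only the rows whose key column exceeds the cursor."""
    rows = conn.execute(
        "SELECT id, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (cursor_value,),
    ).fetchall()
    # Advance the cursor to the largest key seen, so the next run
    # picks up only newer rows.
    new_cursor = rows[-1][1] if rows else cursor_value
    return rows, new_cursor

# With a Start value of "2024-01-15", row 1 is skipped:
rows, cursor = incremental_fetch(conn, "2024-01-15")
```

Each subsequent run passes the stored cursor back in, so only new or changed rows travel over the wire.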
Periodic snapshot
Runs snapshots at the specified interval.
Set the Period between transfer runs, from 5 minutes to 24 hours.
Under Snapshot settings → Parallel snapshot settings, configure the transfer performance:
Processes count sets the number of parallel container instances running the transfer. Increasing this number speeds up transfer execution. You can specify up to 8 instances.
Threads count sets the number of parallel threads within each container. You can run up to 10 threads.
The average download speed of data transfers is between 1 and 10 MB/s.
Add one or more Incremental tables. With incremental tables, you can transfer only the data that has changed. Each of these tables has the following fields:
Table is the table in the source to compare with your target table.
Key column is the name of a column that contains a value (a cursor) indicating whether to increment the table. A common example of a cursor is a column with timestamps. Refer to Airbyte's Incremental Sync - Append documentation to learn more about cursor definitions and usage examples.
Initial value (optional) defines a value in the Key column from which to start tracking changes. For example, you can specify a date as the Initial value. In this case, the service will transfer only the rows with a date value greater than the initial value.
Replication
Continuously retrieves changes from the source database and applies them to the target database.
Snapshot and replication
Transfers the data and keeps the target database in sync with the source database.
Under Snapshot settings → Parallel snapshot settings, configure the transfer performance:
Processes count sets the number of parallel container instances running the transfer. Increasing this number speeds up transfer execution. You can specify up to 8 instances.
Threads count sets the number of parallel threads within each container. You can run up to 10 threads.
The average download speed of data transfers is between 1 and 10 MB/s.
Add one or more Incremental tables. With incremental tables, you can transfer only the data that has changed. Each of these tables has the following fields:
Schema corresponds to the database schema (as in PostgreSQL), database name, or dataset.
Table is the table in the source to compare with your target table.
Key column is the name of a column that contains a value (a cursor) indicating whether to increment the table. A common example of a cursor is a column with timestamps. Refer to Airbyte's Incremental Sync - Append documentation to learn more about cursor definitions and usage examples.
Start value (optional) defines a value in the Key column from which to start tracking changes. For example, you can specify a date as the Start value. In this case, the service will transfer only the rows with a date value greater than the start value.
Set up Data transformations if you need to modify a list of tables to transfer.
Public Preview notice
This feature is provided as Public Preview and is free of charge.
After the end of the Public Preview period, the functionality and pricing of this feature will be subject to change.
In Transformer list, click Add transformer to create a set of transformation rules.
In Transformer [number] → Columns filter, configure transformations for tables and columns: Include columns sets the list of columns to transfer, and Exclude columns sets the list of columns that won't be transferred. Specify these column names as regular expressions.
For example, if you don't want to transfer a column that contains passwords, add that column's name to the Exclude columns section.
Each transformer is a separate set of rules, and you can combine different rules within each set.
For example, you can set a table in Include tables and a column in Exclude columns. In this case, the service will omit the specified Exclude columns only for the included table. If you combine Exclude tables and Include columns, only the specified columns will be transferred from all tables except those listed in the Exclude tables field.
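One possible reading of these combination rules can be sketched in Python. The rule semantics below are an assumption based on the description above, not the service's actual implementation:

```python
import re

def column_transferred(table, column, include_tables=(), exclude_tables=(),
                       include_columns=(), exclude_columns=()):
    """Decide whether a column is transferred; all name lists are regexes."""
    def matches(patterns, name):
        return any(re.search(p, name) for p in patterns)

    if exclude_tables and matches(exclude_tables, table):
        return False  # the whole table is excluded
    if include_tables and not matches(include_tables, table):
        return False  # an Include list restricts transfer to listed tables
    if include_columns:
        return matches(include_columns, column)
    if exclude_columns:
        return not matches(exclude_columns, column)
    return True

# Include the hypothetical "users" table but drop its "password" column:
column_transferred("users", "password",
                   include_tables=["^users$"],
                   exclude_columns=["^password$"])  # → False
```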
Click + Transformation to add a new transformation layer. You can apply multiple layers to your data.
From the dropdown menu, select the appropriate transformation type:
Mask secret fields
This transformation allows you to apply a hash function to specified columns in tables to further protect sensitive data during transfer.
Under Tables, specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions:
Collection of regular expression patterns to parse table names:

| Pattern | Description | Example |
| --- | --- | --- |
| abc | An explicit series of characters. | test returns the table names containing test. |
| . | A single-character wildcard. Use it to match an expression with defined character positions. | t..t returns test, tent, tart, etc. |
| \ | An escape character. Use it to match special characters. | \_ returns the table names containing an underscore. |
| ? | Marks a character (or a group of characters) as optional. | c?.n returns can, con, in, on, en, etc. |
| + | Matches a character (or a group of characters) repeated one or more times. | -+ returns the table names containing -, --, ---, etc. |
| {n} | Matches a character (or a group of characters) repeated exactly n times. | -{2} returns the table names containing --. |
| {n,m} | Matches a character (or a group of characters) repeated between n and m times. | _{1,3} returns the table names containing _, __, and ___. |
| \w | An alphanumeric wildcard. Matches any alphanumeric character. The match is case-sensitive. | \w+ returns the table names containing letters and/or digits. |
| \W | A non-alphanumeric wildcard. Matches any non-alphanumeric character. The match is case-sensitive. | \W+ returns the table names containing characters other than letters or digits. |
| \d | A digit wildcard. Matches any digit character. | \d+ returns the table names containing digits. |
| \D | A non-digit wildcard. Matches any non-digit character. | \D+ returns the table names containing characters other than digits. |
| $ | Matches the position after the table name's last character. | metrics$ returns the table names ending with metrics. |
| ^ | Matches the position before the table name's first character. Useful to specify database names. | ^monthly_sales returns all the tables from the monthly_sales database. |
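These patterns follow standard regular-expression syntax. As a quick sanity check, here is how a few of them behave in Python's re module (the service's own regex engine may differ in minor details):

```python
import re

# Explicit characters: "test" matches any name containing "test".
assert re.search(r"test", "test_2024")
# Single-character wildcard: "t..t" matches "tart", "tent", ...
assert re.search(r"t..t", "tart")
# Escaped underscore matches a literal underscore:
assert re.search(r"\_", "monthly_sales")
# Exactly four digits in a row:
assert re.search(r"\d{4}", "sales_2024")
# "$" anchors the match at the end of the name:
assert re.search(r"metrics$", "daily_metrics")
assert not re.search(r"metrics$", "metrics_old")
```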
Under Column list, click + to add a column name. The masking will be applied to the columns listed in this section.
Under Mask function → Hash → User-defined Salt, specify the salt value to apply when hashing your data.
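Salted hashing in general looks like the sketch below. SHA-256 and the sample values are illustrative assumptions; the hash function the service actually applies is not specified here:

```python
import hashlib

def mask(value: str, salt: str) -> str:
    """Mask a sensitive value by hashing it together with a salt."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

masked = mask("4111-1111-1111-1111", salt="my-secret-salt")

# The same input with the same salt always produces the same digest,
# so equality joins on the masked column still work...
assert masked == mask("4111-1111-1111-1111", "my-secret-salt")
# ...while a different salt yields an unrelated digest.
assert masked != mask("4111-1111-1111-1111", "other-salt")
```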
Columns filter
This transformation allows you to apply filtering to the list of columns to transfer from the data source.
Under Tables, click + Tables and specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Columns, specify the following:
Included columns restricts the set of columns to transfer from the tables specified above.
Excluded columns allow transferring all columns except the specified ones.
Set these column names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Rename tables
This transformation lets you map a table name on the source to a new table name on the target without changing the contents of the transferred table.
Under Tables list to rename, click + Table.
Under Table 1 → Source table name:
For PostgreSQL data sources, use the Named schema field to provide the source table schema. Leave this field empty for data sources that don't support schema or database abstractions.
Specify the initial Table name on the source.
Under Target table name:
For PostgreSQL data sources, use the Named schema field to provide the target table schema. Leave this field empty for data sources that don't support schema or database abstractions.
Specify the intended Table name on the target.
Replace primary key
This transformation allows you to reassign the primary key column on the target table.
Under Tables, click + Tables and specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Key columns names, specify the pairs of columns to replace, separated by commas, in the following format:
<column name at the source> <column name at the target>,
Warning
Assigning two or more primary keys per table makes these tables incompatible with ClickHouse®.
Convert values to string
This transformation allows you to convert specified data columns in a table to strings.
Under Tables, click + Tables and specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Columns, specify the following:
Included columns restricts the set of columns to transfer from the tables specified above.
Excluded columns allow transferring all columns except the specified ones.
Set these column names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Key columns names, click + to add the name of a column that is part of the primary key.
Under Non-key column names, click + to add the name of a column that is not part of the primary key.
The conversion will be applied to the columns listed in both sections.
Convert data to raw JSON
This transformation lets you convert specified data columns in a table to raw JSON.
Under Tables, specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Columns, specify the following:
Included columns restricts the set of columns to transfer from the tables specified above.
Excluded columns allow transferring all columns except the specified ones.
Set these column names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Key columns names, click + to add the name of a column that is part of the primary key.
Under Non-key column names, click + to add the name of a column that is not part of the primary key.
The conversion will be applied to the columns listed in both sections.
Sharding
This transformation allows you to distribute the tables between multiple shards on the ClickHouse® data destination.
Under Tables, click + Tables and specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Columns, click + Columns and specify the following:
Included columns restricts the set of columns to transfer from the tables specified above.
Excluded columns allow transferring all columns except the specified ones.
Enter the Count of shards between which you want to distribute the table data.
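A common way such sharding works is deterministic, key-based assignment, sketched below. The hash choice and the key values are assumptions for illustration, not the service's documented algorithm:

```python
import hashlib

SHARD_COUNT = 4  # the "Count of shards" configured above

def shard_for(key: str, shard_count: int = SHARD_COUNT) -> int:
    """Map a row's sharding key to a shard number in [0, shard_count)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % shard_count

# Rows with the same key always land on the same shard:
assert shard_for("user_42") == shard_for("user_42")
# Every computed shard number is a valid shard index:
assert all(0 <= shard_for(f"user_{i}") < SHARD_COUNT for i in range(100))
```

Deterministic assignment matters because it lets repeated transfers and lookups route a given key to the same shard every time.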
Under Tables, click + Tables and specify the following:
Included tables restricts the set of tables to transfer.
Excluded tables allow transferring all data except the specified tables.
Set these table names as regular expressions. For the supported patterns, see the regular expression reference in the Mask secret fields section above.
Under Key columns names, click + to add the name of a column that is part of the primary key.
Under Non-key column names, click + to add the name of a column that is not part of the primary key.
The transformation will be applied to the columns listed in both sections.
SQL
This transformer accepts the ClickHouse® SQL dialect and allows you to perform SQL-like in-memory data transformations. The solution is based on ClickHouse® Local.
The source table inside ClickHouse® Local is named table, and its structure mimics the source table structure.
Since each source change item (row) contains extra metadata, the service must match source and target data. Therefore, each row must have a key defined, and all keys must be unique within every batch. To do this, the service calls a collapse function.
If source keys can't be matched with the transformed data, such rows are marked as containing an error.
When writing an SQL query, you must preserve the original key columns:
SELECT
parseDateTime32BestEffort(JSONExtractString(CloudTrailEvent, 'eventTime')) AS eventTime,
JSONExtractString(CloudTrailEvent, 'http_request.user_agent') AS http_useragent,
JSONExtractString(CloudTrailEvent, 'errorMessage') AS error_message,
JSONExtractString(CloudTrailEvent, 'errorCode') AS error_kind,
JSONExtractString(CloudTrailEvent, 'sourceIPAddress') AS network_client_ip,
JSONExtractString(CloudTrailEvent, 'eventVersion') AS eventVersion,
JSONExtractString(CloudTrailEvent, 'eventSource') AS eventSource,
JSONExtractString(CloudTrailEvent, 'eventName') AS eventName,
JSONExtractString(CloudTrailEvent, 'awsRegion') AS awsRegion,
JSONExtractString(CloudTrailEvent, 'sourceIPAddress') AS sourceIPAddress,
JSONExtractString(CloudTrailEvent, 'userAgent') AS userAgent,
JSONExtractString(CloudTrailEvent, 'requestID') AS requestID,
JSONExtractString(CloudTrailEvent, 'eventID') AS eventID,
JSONExtractBool(CloudTrailEvent, 'readOnly') AS readOnly,
JSONExtractString(CloudTrailEvent, 'eventType') AS eventType,
JSONExtractBool(CloudTrailEvent, 'managementEvent') AS managementEvent,
JSONExtractString(CloudTrailEvent, 'recipientAccountId') AS recipientAccountId,
JSONExtractString(CloudTrailEvent, 'eventCategory') AS eventCategory,
JSONExtractString(CloudTrailEvent, 'aws_account') AS account,
JSONExtractString(CloudTrailEvent, 'userIdentity.type') AS userIdentity_type,
JSONExtractString(CloudTrailEvent, 'userIdentity.principalId') AS userIdentity_principalId,
JSONExtractString(CloudTrailEvent, 'userIdentity.arn') AS userIdentity_arn,
JSONExtractString(CloudTrailEvent, 'userIdentity.accountId') AS userIdentity_accountId,
JSONExtractString(CloudTrailEvent, 'userIdentity.accessKeyId') AS userIdentity_accessKeyId,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.sessionIssuer.type') AS sessionIssuer_type,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.sessionIssuer.principalId') AS sessionIssuer_principalId,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.sessionIssuer.arn') AS sessionIssuer_arn,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.sessionIssuer.accountId') AS sessionIssuer_accountId,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.sessionIssuer.userName') AS sessionIssuer_userName,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.webIdFederationData.federatedProvider') AS federatedProvider,
JSONExtractString(CloudTrailEvent, 'userIdentity.sessionContext.attributes.creationDate') AS attributes_creationDate,
JSONExtractBool(CloudTrailEvent, 'userIdentity.sessionContext.attributes.mfaAuthenticated') AS attributes_mfaAuthenticated,
JSONExtractString(CloudTrailEvent, 'requestParameters.commandId') AS requestParameters_commandId,
JSONExtractString(CloudTrailEvent, 'requestParameters.instanceId') AS requestParameters_instanceId,
JSONExtractString(CloudTrailEvent, 'tlsDetails.tlsVersion') AS tlsDetails_tlsVersion,
JSONExtractString(CloudTrailEvent, 'tlsDetails.cipherSuite') AS tlsDetails_cipherSuite,
JSONExtractString(CloudTrailEvent, 'tlsDetails.clientProvidedHostHeader') AS tlsDetails_clientProvidedHostHeader
FROM table
dbt
Apply your dbt project to the snapshot of the data transferred to ClickHouse®.
Specify the address of the Git repository containing your dbt project. It must start with https://. The root directory of the repository must contain a dbt_project.yml file.
Under Git branch, specify the branch or a tag of the git repository containing your dbt project.
Provide the dbt profile name; the profile will be created automatically using the settings of the destination endpoint. The name must match the profile property in the dbt_project.yml file.
From the dropdown list, select the Operation for your dbt project to perform. For more information, see the official dbt documentation.
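For illustration, a minimal dbt_project.yml might look like the following; the project and profile names are placeholders, and the profile value must match the profile name you provide above:

```yaml
# dbt_project.yml — minimal sketch; names are placeholders
name: my_dbt_project
version: "1.0.0"
profile: my_clickhouse_profile  # must match the profile name entered in Transfer
models:
  my_dbt_project:
    +materialized: table
```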
Deleting a transformation layer
To delete a transformation layer, click the icon to the right of the transformation type dropdown menu → Delete.
Click Submit.
You can create a transfer using the Transfer and Transfer endpoint resources of the DoubleCloud Terraform provider.
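As a rough sketch of what such a configuration might look like — the resource names come from the provider, but the attribute names and the transfer type value below are illustrative assumptions, not a verified schema; check the provider reference for exact arguments:

```hcl
# Illustrative sketch only — attribute names are assumptions,
# not verified against the provider schema.
resource "doublecloud_transfer_endpoint" "source" {
  name       = "pg-source"
  project_id = var.project_id
  # ... source-specific connection settings ...
}

resource "doublecloud_transfer" "snapshot" {
  name       = "pg-to-clickhouse"
  project_id = var.project_id
  source     = doublecloud_transfer_endpoint.source.id
  target     = var.target_endpoint_id
  type       = "SNAPSHOT_ONLY"  # assumed value; see provider docs
}
```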
You can monitor the health status of your transfer using the Status history timeline on the Overview page:
The timeline shows the status snapshots of your transfer as colored vertical bars. Currently, there are two possible status messages:
OK: The transfer is running normally.
ERROR: Something’s wrong with the transfer.
For more information, refer to the transfer logs.
In the top-right corner of the timeline,
you can select the scale: one hour (1h), one day (1d), or one week (1w).
You can see the current time as a vertical red marker on the timeline.
To navigate the timeline:
Use the navigation buttons to move the time scale by the selected interval.
Use your mouse cursor to drag the highlighted interval across the timeline located above the colored bars.
On the right of the status display, you can see the information card with the current transfer statistics:
The number of bytes read.
The number of data rows read. This parameter is used to calculate the transfer's resource consumption.
Transfer data from the AWS networks
Transfer requires access to the resources in AWS that are usually protected by a set of tools. Refer to the following guide to see how to peer networks and configure DoubleCloud and AWS to perform a successful transfer: Transfer data from a PostgreSQL RDS instance to ClickHouse®.