MySQL Sink (JDBC) Connector for Confluent Cloud¶
The Kafka Connect MySQL Sink connector for Confluent Cloud exports data from Kafka topics to a MySQL database.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Features¶
The MySQL Sink connector provides the following features:
- Table and column auto-creation: auto.create and auto.evolve are supported. If tables or columns are missing, they can be created automatically. Table names are created based on Kafka topic names (a hedged illustration follows this list).
- Database authentication: Uses username and password authentication.
- Schemas: The connector supports Avro, JSON Schema, and Protobuf input data formats. Schema Registry must be enabled to use a Schema Registry-based format.
- Modes: This connector inserts and upserts Kafka records into a MySQL database.
- Primary key support: Supported PK modes are kafka, none, and record_value. These modes are used in conjunction with the PK Fields property.
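As a hedged illustration of auto-creation, suppose the connector reads from a topic named orders whose value schema has fields id and amount, and a later schema version adds an optional discount field. The statements below only sketch the idea; the actual DDL and column types are derived from the record schema by the connector.
-- Illustrative only: names and types are assumptions, not connector output.
CREATE TABLE orders (      -- auto.create: table named after the topic
  id INT NOT NULL,
  amount DOUBLE
);
ALTER TABLE orders         -- auto.evolve: missing column added for the new field
  ADD COLUMN discount DOUBLE NULL;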
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Refer to Cloud connector limitations for additional information.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud MySQL sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a MySQL database.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
- Access to a MySQL database (a hedged sketch of the database account privileges the connector typically needs follows this list).
- The Confluent Cloud CLI installed and configured for the cluster. See Install the Confluent Cloud CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
- For networking considerations, see Internet access to resources. To use static egress IPs, see Static Egress IP Addresses.
- Kafka cluster credentials. You can use one of the following ways to get credentials:
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use the Confluent Cloud CLI or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
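The MySQL account the connector uses needs enough privileges to create and write to tables. The following is a hedged sketch only; the database name (test), user name (connect_user), and exact grant list are assumptions to adapt to your environment.
-- Hedged sketch: a database and account the connector could use.
-- CREATE and ALTER are only needed if you enable auto.create and auto.evolve.
CREATE DATABASE test;
CREATE USER 'connect_user'@'%' IDENTIFIED BY '<strong-password>';
GRANT CREATE, ALTER, INSERT, UPDATE, SELECT ON test.* TO 'connect_user'@'%';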
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster.¶
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Step 2: Add a connector.¶
Click Connectors. If you already have connectors in your cluster, click Add connector.
Step 3: Select your connector.¶
Click the MySQL Sink connector icon.
Step 4: Set up the connection.¶
Complete the following and click Continue.
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
- Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.
- Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
- Enter your MySQL database connection details.
- Select one of the following insert modes (a SQL sketch follows this list):
  - INSERT: Uses the standard INSERT row function. An error occurs if the row already exists in the table.
  - UPSERT: This mode is similar to INSERT. However, if the row already exists, UPSERT overwrites column values with the new values provided.
- Enter a Table name format. This is a format string to use for the destination table name. The property may contain ${topic} as a placeholder for the originating topic name. For example, kafka_${topic} for the topic orders maps to the table name kafka_orders.
- Select your Database timezone.
- Select a PK mode. Supported modes are listed below:
  - kafka: Kafka coordinates are used as the primary key. Must be used with the PK Fields property.
  - none: No primary keys are used.
  - record_value: Fields from the Kafka record value are used. The value must be a struct type.
- Enter the PK fields values. This is a list of comma-separated primary key field names. The runtime interpretation of this property depends on the PK mode selected. Options are listed below:
  - kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic,__connect_partition,__connect_offset.
  - none: PK Fields is not used.
  - record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
- Select whether to automatically create a table if none exists.
- Select whether to automatically create columns in the table for a Kafka record if any are missing.
- Enter the maximum rows per batch. The default value is 3000.
- Enter the maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.
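The two insert modes correspond roughly to the following MySQL statements. This is an illustrative sketch only, not the SQL the connector actually generates; the table and column names (pageviews, viewtime, userid, pageid) are assumptions.
-- INSERT mode: a second row with the same primary key fails with a
-- duplicate-key error.
INSERT INTO pageviews (viewtime, userid, pageid)
VALUES (1, 'User_1', 'Page_1');

-- UPSERT mode: roughly equivalent to overwriting the existing row's
-- columns with the new values.
INSERT INTO pageviews (viewtime, userid, pageid)
VALUES (1, 'User_1', 'Page_2')
ON DUPLICATE KEY UPDATE userid = VALUES(userid), pageid = VALUES(pageid);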
Configuration properties that are not shown in the Confluent Cloud UI use the default values. For default values and property definitions, see JDBC Sink Connector Configuration Properties.
Step 5: Launch the connector.¶
Verify the connection details and click Launch.
Step 6: Check the connector status.¶
The status for the connector should go from Provisioning to Running.
Step 7: Check the results in the database.¶
Verify that new records are being added to the MySQL database.
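One way to verify is to query the destination table directly. The table name below (pageviews) is an assumption; substitute your own topic name or the result of your Table name format setting.
-- Hedged example: adjust the table name to match your topic.
SELECT COUNT(*) FROM pageviews;
SELECT * FROM pageviews LIMIT 10;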
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
Using the Confluent Cloud CLI¶
Complete the following steps to set up and run the connector using the Confluent Cloud CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors.¶
Enter the following command to list available connectors:
ccloud connector-catalog list
Step 2: Show the required connector configuration properties.¶
Enter the following command to show the required connector properties:
ccloud connector-catalog describe <connector-catalog-name>
For example:
ccloud connector-catalog describe MySqlSink
Example output:
Following are the required configs:
connector.class: MySqlSink
input.data.format
name
kafka.api.key
kafka.api.secret
connection.host
connection.port
connection.user
connection.password
db.name
tasks.max
topics
Step 3: Create the connector configuration file.¶
Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties:
{
"topics": "pageviews",
"input.data.format": "AVRO",
"connector.class": "MySqlSink",
"name": "MySqlSinkConnector_0",
"kafka.api.key": "****************",
"kafka.api.secret": "****************************************************************",
"connection.host": "dev-testing-temp.dvomdiaryntj.us-west-7.rds.amazonaws.com",
"connection.port": "3306",
"connection.user": "admin",
"connection.password": "**********",
"db.name": "test",
"insert.mode": "INSERT",
"auto.create": "true",
"auto.evolve": "true",
"tasks.max": "1"
}
Note the following property definitions:
"topics"
: Identifies the topic name or a comma-separated list of topic names."input.data.format"
: Sets the input message format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), or PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format."connector.class"
: Identifies the connector plugin name."name"
: Sets a name for your new connector."insert.mode"
: Enter one of the following modes:INSERT
: Use the standardINSERT
row function. An error occurs if the row already exists in the table.UPSERT
: This mode is similar toINSERT
. However, if the row already exists, theUPSERT
function overwrites column values with the new values provided.
"auto.create"
(tables) and"auto-evolve"
(columns): (Optional) Sets whether to automatically create tables or columns if they are missing relative to the input record schema. If not entered in the configuration, both default tofalse
."tasks.max"
: Maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.
Optional:
- "pk.mode": Supported modes are listed below (a sketch of the kafka mode follows this list):
  - kafka: Kafka coordinates are used as the primary key. Must be used with pk.fields.
  - none: No primary keys are used.
  - record_value: Fields from the Kafka record value are used. The value must be a struct type.
- "pk.fields": A list of comma-separated primary key field names. The runtime interpretation of this property depends on the pk.mode selected. Options are listed below:
  - kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic,__connect_partition,__connect_offset.
  - none: pk.fields is not used.
  - record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
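As a hedged illustration of the kafka PK mode with pk.fields left empty, an auto-created table carries the three Kafka coordinate columns as its primary key, roughly along the following lines. The column types and the value fields (viewtime, userid, pageid) are assumptions; the connector derives the actual DDL from the record schema.
-- Illustrative sketch only; the connector generates its own DDL and types.
CREATE TABLE pageviews (
  __connect_topic VARCHAR(256) NOT NULL,
  __connect_partition INT NOT NULL,
  __connect_offset BIGINT NOT NULL,
  viewtime BIGINT,
  userid TEXT,
  pageid TEXT,
  PRIMARY KEY (__connect_topic, __connect_partition, __connect_offset)
);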
Configuration properties that are not shown in the Confluent Cloud UI use the default values. For default values and property definitions, see JDBC Sink Connector Configuration Properties.
Step 4: Load the configuration file and create the connector.¶
Enter the following command to load the configuration and start the connector:
ccloud connector create --config <file-name>.json
For example:
ccloud connector create --config mysql-server-sink-config.json
Example output:
Created connector MySqlSinkConnector_0 lcc-ix4dl
Step 5: Check the connector status.¶
Enter the following command to check the connector status:
ccloud connector list
Example output:
ID | Name | Status | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl | MySqlSinkConnector_0 | RUNNING | sink
Step 6: Check the results in the database.¶
Verify that new records are being added to the MySQL database.
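Using the example configuration from Step 3 (database test, topic pageviews, auto.create enabled), a hedged verification against MySQL could look like the following.
-- Hedged example based on the sample configuration in Step 3.
USE test;
SHOW TABLES;                       -- expect a table named pageviews
SELECT COUNT(*) FROM pageviews;    -- row count should grow as records arrive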
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
Next Steps¶
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.