Limitations¶
Refer to the following for Confluent Cloud connector limitations.
Connector Limitations¶
Supported connector limitations¶
See the following limitations for supported connectors.
Amazon CloudWatch Metrics Sink¶
The Amazon CloudWatch Metrics region must be the same region where your Confluent Cloud cluster is located and where you are running the Amazon CloudWatch Metrics Sink Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Amazon Kinesis Source Connector¶
There are no current limitations for the Amazon Kinesis Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Amazon Redshift Sink Connector¶
The following are limitations for the Amazon Redshift Sink Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- The Confluent Cloud cluster and the target Redshift cluster must be in the same AWS region.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Amazon SQS Source Connector¶
There are no current limitations for the Amazon SQS Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Amazon S3 Sink Connector¶
The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.
The Confluent Cloud cluster and the target S3 bucket must be in the same AWS region.
One task can handle up to 100 partitions.
Partitioning (hourly or daily) is based on Kafka record time.
flush.size defaults to 1000. The default value can be increased if needed. The default value can be lowered if you are running a Dedicated Confluent Cloud cluster. The following scenarios describe a couple of ways records may be flushed to storage:
- You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
- You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first. For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
schema.compatibility is set to NONE.
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
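To make the note above concrete, here is a minimal, hypothetical Amazon S3 Sink configuration fragment in properties form; only the sizing and rotation settings discussed in this section are shown, and the values are illustrative:

    # One task handles up to 100 partitions; a six-partition topic fits in a single task.
    tasks.max=1
    # Flush a file once 1000 records have accumulated for a topic partition...
    flush.size=1000
    # ...or once 10 minutes have passed, whichever condition is met first.
    rotate.schedule.interval.ms=600000

With these values, the 12:01 to 12:20 example above yields two files of 500 records each, because the 10 minute schedule interval trips before flush.size is reached.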
AWS Lambda Sink Connector¶
The following are limitations for the AWS Lambda Sink Connector for Confluent Cloud.
- The Confluent Cloud cluster and your AWS Lambda project should be in the same AWS region.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Blob Storage Sink Connector¶
The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.
The Azure Blob Storage Container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Blob storage in different regions.
One task can handle up to 100 partitions.
Partitioning (hourly or daily) is based on Kafka record time.
flush.size defaults to 1000. The default value can be increased if needed. The default value can be lowered if you are running a Dedicated Confluent Cloud cluster. The following scenarios describe a couple of ways records may be flushed to storage:
- You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
- You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first. For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
schema.compatibility is set to NONE.
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
Azure Cognitive Search Sink Connector¶
The following are limitations for Azure Cognitive Search Sink Connector for Confluent Cloud.
- Batching multiple metrics: The connector tries to batch metrics in a single payload. The maximum payload size is 16 megabytes for each API request. For additional details, refer to Size limits per API call.
- The Azure Cognitive Search service must be in the same region as your Confluent Cloud cluster.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Cosmos Sink Connector¶
The following are limitations for Azure Cosmos DB Sink Connector for Confluent Cloud.
- The Azure Cosmos DB must be in the same region as your Confluent Cloud cluster.
- The Kafka topic must not contain tombstone records. The connector does not handle tombstone or null values.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Data Lake Storage Gen2 Sink Connector¶
The following are limitations for the Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.
Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.
Public inbound traffic access (0.0.0.0/0) must be allowed for this connector. For more information about public Internet access to resources, see Internet access to resources.
Input format JSON to output format AVRO does not work for the preview connector.
One task can handle up to 100 partitions.
Partitioning (hourly or daily) is based on Kafka record time.
flush.size defaults to 1000. The default value can be increased if needed. The default value can be lowered if you are running a Dedicated Confluent Cloud cluster. The following scenarios describe a couple of ways records may be flushed to storage:
- You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
- You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first. For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
schema.compatibility is set to NONE.
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Event Hubs Source Connector¶
The following are limitations for the Azure Event Hubs Source Connector for Confluent Cloud.
max.events: 499 is the maximum number of events allowed. Defaults to 50.
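If the default batch size is too small, a hypothetical configuration fragment (properties form, value chosen for illustration) would raise it within the allowed range:

    # Fetch up to 200 events per batch from Event Hubs (default is 50; 499 is the maximum allowed).
    max.events=200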
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Functions Sink Connector¶
There is one limitation for the Azure Functions Sink Connector for Confluent Cloud.
The target Azure Function should be in the same region as your Confluent Cloud cluster.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Service Bus Source Connector¶
There are no current limitations for the Azure Service Bus Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Azure Synapse Analytics Sink Connector¶
The following are limitations for Azure Synapse Analytics Sink Connector for Confluent Cloud.
- This connector can only insert data into an Azure SQL Data Warehouse database. Azure Synapse Analytics does not support primary keys. Because updates, upserts, and deletes are all performed using primary keys, these operations are not supported by this connector.
- When auto.evolve is enabled, if a new column with a default value is added, that default value is only used for new records. Existing records will have "null" as the value for the new column.
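A minimal, hypothetical fragment (properties form) showing the setting this limitation applies to:

    # Let the connector add columns to the target table as the record schema evolves.
    # Caveat from above: existing rows receive null for a newly added column, even if the
    # column declares a default value; only new records get the default.
    auto.evolve=true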
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Datadog Metrics Sink Connector¶
- Batching multiple metrics: The connector tries to batch metrics in a single payload. The maximum payload size is 3.2 megabytes for each API request. For additional details, refer to Post timeseries points.
- Metrics Rate Limiting: The API endpoints are rate limited. The rate limit for metrics retrieval is 100 per hour, per organization. These limits can be modified by contacting Datadog support.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Datagen Source Connector¶
There are no current limitations for the Datagen Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Elasticsearch Service Sink Connector¶
The following are limitations for the Elasticsearch Service Sink Connector for Confluent Cloud.
- The connector only works with the Elasticsearch Service from Elastic Cloud.
- The connector supports connecting to Elasticsearch version 7.1 (and later). The connector does not support Elasticsearch version 8.x.
- The Confluent Cloud cluster and the target Elasticsearch deployment must be in the same region.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Google BigQuery Sink Connector¶
The following are limitations for the Google Cloud BigQuery Sink Connector for Confluent Cloud.
- Source topic names must comply with BigQuery naming conventions even if sanitizeTopics is set to true in the connector configuration.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Configuration properties that are not shown in the Confluent UI use default values. See Google BigQuery Sink Connector Configuration Properties for all connector properties.
- Topic names are mapped to BigQuery table names. For example, if you have a topic named pageviews, a topic named visitors, and a dataset named website, the result is two tables in BigQuery: one named pageviews and one named visitors under the website dataset.
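The topic-to-table mapping can be read directly from the subscribed topics. A hypothetical fragment (properties form; the dataset named website is assumed to be selected elsewhere in the connector configuration):

    # Each subscribed topic becomes a BigQuery table of the same name in the target dataset:
    # pageviews -> website.pageviews and visitors -> website.visitors.
    topics=pageviews,visitors
    # Sanitize characters BigQuery does not allow in table names; source topic names must
    # still comply with BigQuery naming conventions even with this enabled.
    sanitizeTopics=true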
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Google Functions Sink Connector¶
There is one limitation for the Google Cloud Functions Sink Connector for Confluent Cloud.
The target Google Function should be in the same region as your Confluent Cloud cluster.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Google Pub/Sub Source Connector¶
There are no current limitations for the Google Pub/Sub Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Google Cloud Spanner Sink Connector¶
The following are limitations for the Google Cloud Spanner Sink Connector for Confluent Cloud.
- The Confluent Cloud cluster and the target Google Spanner cluster must be in the same GCP region.
- A valid schema must be available in Confluent Cloud Schema Registry to use Avro, JSON Schema, or Protobuf.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Google Cloud Storage Sink Connector¶
The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.
The Confluent Cloud cluster and the target Google Cloud Storage (GCS) bucket must be in the same Google Cloud Platform region.
One task can handle up to 100 partitions.
Partitioning (hourly or daily) is based on Kafka record time.
flush.size defaults to 1000. The default value can be increased if needed. The default value can be lowered if you are running a Dedicated Confluent Cloud cluster. The following scenarios describe a couple of ways records may be flushed to storage:
- You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
- You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first. For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
schema.compatibility is set to NONE.
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
HTTP Sink Connector¶
There are no current limitations for the HTTP Sink Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Microsoft SQL Server Sink Connector¶
The following are limitations for the Microsoft SQL Server Sink (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) must be allowed for the connector. For more information about public Internet access to resources, see Internet access to resources.
- The database and Kafka cluster should be in the same region. If you use a different region, you may incur additional data transfer charges.
- The connector cannot handle tombstone records.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Microsoft SQL Server CDC Source Connector (Debezium)¶
The following are limitations for the Microsoft SQL Server CDC Source (Debezium) Connector for Confluent Cloud.
- Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Microsoft SQL Server Source Connector¶
The following are limitations for the Microsoft SQL Server Source (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- A timestamp column must not be nullable and should be of type datetime2.
- Bulk and Incrementing are not supported.
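Because bulk and incrementing modes are not supported, change detection has to be timestamp based. A hypothetical fragment (properties form; the property names follow the JDBC Source connector conventions and the column name is an assumption):

    # Timestamp-based polling (bulk and incrementing modes are not supported).
    mode=timestamp
    # A non-nullable datetime2 column the connector polls for new and updated rows.
    timestamp.column.name=last_modified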
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
MongoDB Atlas Sink Connector¶
The following are limitations for the MongoDB Atlas Sink Connector for Confluent Cloud.
This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.
Document post processing configuration properties are not supported. These include:
post.processor.chain
key.projection.type
value.projection.type
field.renamer.mapping
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
The MongoDB database and Kafka cluster should be in the same region.
For networking considerations, see Internet access to resources.
You cannot use a dot in a field name (for example, Client.Email). The error shown below is displayed if a field name includes a dot. You should also not use $ in a field name. For additional information, see Field Names.
Your record has an invalid BSON field name. Please check Mongo documentation for details.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
MongoDB Atlas Source Connector¶
The following are limitations for the MongoDB Atlas Source Connector for Confluent Cloud.
- This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.
- For networking considerations, see Internet access to resources.
- Customers with a VPC-peered Kafka cluster in Confluent Cloud on AWS should consider configuring a PrivateLink Connection between MongoDB Atlas and the AWS VPC.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
MySQL Sink Connector¶
The following are limitations for the MySQL Sink (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- The database and Kafka cluster should be in the same region.
- The connector cannot handle tombstone records.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
MySQL CDC Source Connector (Debezium)¶
The following are limitations for the MySQL CDC Source (Debezium) Connector for Confluent Cloud.
- MariaDB is not currently supported. See the Debezium docs for more information.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
MySQL Source Connector¶
The following are limitations for the MySQL Source (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- A timestamp column must not be nullable.
- Bulk and Incrementing are not supported.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Oracle Database Source Connector¶
The following are limitations for the Oracle Database Source Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- A timestamp column must not be nullable.
- Bulk and Incrementing are not supported.
- Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Connector Source Connector Configuration Properties for property definitions and default values.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
PostgreSQL Sink Connector¶
The following are limitations for the PostgreSQL Sink (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) must be allowed. For more information about public Internet access to resources, see Internet access to resources.
- The database and Kafka cluster should be in the same region. If you use a different region, be aware that you may incur additional data transfer charges.
- The connector cannot handle tombstone records.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
PostgreSQL CDC Source (Debezium) Connector¶
The following are limitations for the PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Clients from Azure Virtual Networks are not allowed to access the server by default. Please make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
- The following are the default partition and replication factor properties (see the configuration sketch after this list):
  topic.creation.default.partitions=1
  topic.creation.default.replication.factor=3
- See the After-state only output limitation if you are planning to use the optional property After-state only.
- Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
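The sketch below (properties form) combines the topic-creation defaults and the task limit listed above; the values shown are the documented defaults, included only for illustration:

    # The connector is limited to a single task.
    tasks.max=1
    # Defaults applied when the connector creates topics: one partition, replication factor 3.
    topic.creation.default.partitions=1
    topic.creation.default.replication.factor=3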
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
PostgreSQL Source Connector¶
The following are limitations for the PostgreSQL Source (JDBC) Connector for Confluent Cloud.
- Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For more information about public Internet access to resources, see Internet access to resources.
- Public access may be required for your database. For more information about public Internet access to resources, see Internet access to resources.
- For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Clients from Azure Virtual Networks are not allowed to access the server by default. Please make sure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.
- A timestamp column must not be nullable.
- Bulk and Incrementing are not supported.
- Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Connector Source Connector Configuration Properties for property definitions and default values.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Salesforce Platform Event Sink Connector¶
The following are limitations for the Salesforce Platform Event Sink Connector for Confluent Cloud.
- The connector is limited to one task only.
- There are Salesforce streaming allocations and limits that apply to this connector. For example, the number of API calls that can occur within a 24-hour period is capped for free developer org accounts.
- There are data and file storage limits that are based on the type of organization you use.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Salesforce Platform Event Source Connector¶
There is one limitation for the Salesforce Platform Event Source Connector for Confluent Cloud.
- Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Salesforce PushTopic Source Connector¶
There is one limitation for the Salesforce PushTopic Source Connector for Confluent Cloud.
Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
Note the following limitations for at least once delivery:
- When the connector operates, it periodically records the replay ID of the last record written to Kafka. When the connector is stopped and then restarted within 24 hours, the connector continues consuming the PushTopic where it stopped, with no missed events. However, if the connector stops for more than 24 hours, some events are discarded in Salesforce before the connector can read them.
- If the connector stops unexpectedly due to a failure, it may not record the replay ID of the last record successfully written to Kafka. When the connector restarts, it resumes from the last recorded replay ID. This means that some events may be duplicated in Kafka.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Salesforce SObject Sink Connector¶
The following are limitations for the Salesforce SObject Sink Connector for Confluent Cloud.
- There are Salesforce streaming allocations and limits that apply to this connector. For example, the number of API calls that can occur within a 24-hour period is capped for free developer org accounts.
- There are data and file storage limits that are based on the type of organization you use.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
ServiceNow Sink Connector¶
There are no current limitations for the ServiceNow Sink Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
ServiceNow Source Connector¶
There are no current limitations for the ServiceNow Source Connector for Confluent Cloud.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Snowflake Sink Connector¶
The following are limitations for the Snowflake Sink Connector for Confluent Cloud.
- The Snowflake database and Kafka cluster must be in the same region.
- The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
- A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
- Each task is limited to a number of topic partitions based on the buffer.size.bytes property value. For example, a 10 MB buffer size is limited to 50 topic partitions, a 20 MB buffer is limited to 25 topic partitions, a 50 MB buffer is limited to 10 topic partitions, and a 100 MB buffer to 5 topic partitions.
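As a hypothetical sizing example (properties form; the byte value assumes decimal megabytes and the task count is derived from the limits above):

    # A 10 MB buffer limits each task to at most 50 topic partitions.
    buffer.size.bytes=10000000
    # A topic with 100 partitions at this buffer size therefore needs at least two tasks.
    tasks.max=2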
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Solace Sink Connector¶
The following are limitations for the Solace Sink Connector for Confluent Cloud.
- The connector can create queues, but not durable topic endpoints.
- A valid schema must be available in Schema Registry to use a Schema Registry-based format, like Avro.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Preview connector limitations¶
See the following limitations for preview connectors.
Caution
Preview connectors are not currently supported and are not recommended for production use.
Google Cloud Dataproc Sink Connector¶
The following are limitations for the Google Cloud Dataproc Sink Connector for Confluent Cloud.
The Confluent Cloud cluster and the target Dataproc cluster must be in a VPC peering configuration.
Note
For a non-VPC peered environment, public inbound traffic access (0.0.0.0/0) must be allowed to the VPC where the Dataproc cluster is located. You must also make configuration changes to allow public access to the Dataproc cluster while retaining the private IP addresses for the Dataproc master and worker nodes (HDFS NameNode and DataNodes). For configuration details, see Configuring a non-VPC peering environment. For more information about public Internet access to resources, see Internet access to resources.
The Dataproc image version must be 1.4 (or later). See Cloud Dataproc Image version list.
One task can handle up to 100 partitions.
Input format JSON to output format AVRO does not work for the preview connector.
Partitioning (hourly or daily) is based on Kafka record time.
flush.size defaults to 1000. The default value can be increased if needed. The default value can be lowered if you are running a Dedicated Confluent Cloud cluster. The following scenarios describe a couple of ways records may be flushed to storage:
- You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
- You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first. For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
schema.compatibility is set to NONE.
A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.
Note
After this connector becomes generally available, Confluent Cloud Enterprise customers should contact their Confluent account executive for more information about using it.