Tools for Confluent Cloud Clusters¶
Overview¶
Confluent provides Terraform scripts that help developers working with Confluent Cloud automatically provision Confluent Platform tools, such as Schema Registry, REST Proxy, Kafka Connect, ksqlDB, and Control Center. Once provisioned, these tools are connected to the Confluent Cloud cluster.
These scripts create the underlying infrastructure, such as VPCs, subnets, firewalls, storage, compute, and load balancers, and bootstrap any processes needed to install and configure the Confluent Platform tools.
Note
These scripts are provided without support and are intended as a guideline. Report any issues you encounter via the ccloud-tools GitHub repository, and they will be addressed on a best-effort basis.
Requirements¶
- Hashicorp Terraform v0.11+
- Access to a Cloud Provider (Amazon Web Services or Google Cloud Platform)
- Access to Confluent Cloud
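A quick way to confirm the tooling prerequisites is to check that the required CLIs are on your PATH. This is a minimal sketch; it only reports missing tools and does not check versions:

```shell
# Report any required command-line tool that is not on the PATH.
for cmd in terraform git; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd" >&2
done
```

Run `terraform version` afterwards to confirm you are on v0.11 or later.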
Introduction¶
Terraform is a tool for developing, changing, and versioning infrastructure safely and efficiently using an Infrastructure-as-Code approach. This enables you to have infrastructure that is immutable and potentially reusable across different cloud vendors. The ccloud-tools repository leverages Terraform to provide a better developer experience for anyone building applications with Confluent Cloud. It lets you stand up the Confluent Platform tools quickly, in your chosen cloud provider, while taking care of the details of infrastructure provisioning and software bootstrapping.
The following diagram shows an example of what the scripts create. Everything related to the infrastructure is managed by the scripts, including the VPC, subnets, firewall rules, storage, compute, and the load balancers that expose the tools. The tools are created in private subnets with no direct internet access. Inbound access is possible only through the public load balancers, which keeps the tools secured within the VPC. The connection to the Confluent Cloud cluster is configured automatically for you. Finally, the scripts also manage scale-out across multiple availability zones.
This means that if you want two or more copies of a given tool, such as Schema Registry, the copies are spread across availability zones automatically, ensuring high availability and fault tolerance within the region.

The scripts can also provision a bastion server, which can be used for local troubleshooting on the compute instances. Because the compute instances are created within private subnets, there is no way to reach them directly via SSH; firewall rules grant SSH access to the compute instances only from the bastion server. By default, the bastion server is not created, so you need to enable it explicitly if you want to use it.
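Enabling the bastion server is done through a Terraform variable. The variable name below is hypothetical — check variables.tf under terraform/<provider> in the repository for the actual name the scripts use:

```hcl
# Hypothetical sketch: opt in to the bastion server (disabled by default).
# Verify the real variable name in the repository's variables.tf.
bastion_server = 1
```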
Step 1: Create Kafka Cluster in Confluent Cloud¶
Important
This step is for Confluent Cloud users only. Confluent Cloud Enterprise users can skip to Step 2: Get Connectivity Details from your Confluent Cloud Cluster.
Sign in to Confluent Cloud at https://confluent.cloud.
Click Add cluster, and on the Create cluster page, click Basic.
Important
This example creates a Basic cluster, which only supports single-zone availability. For information about other cluster types, including Standard and Dedicated, see Confluent Cloud Features and Limits by Cluster Type.
Click Begin configuration. The Region/zones page opens. Choose a cloud provider, region, and availability zone. Click Continue.
Specify a cluster name, review your settings, cost, and usage, and click Launch cluster.
Once the cluster is provisioned, the Cluster Overview page is displayed. Next, you can get started configuring apps and data on your new cluster.
Note
Depending on the chosen cloud provider and other settings, it may take a few minutes to provision your cluster.
Step 2: Get Connectivity Details from your Confluent Cloud Cluster¶
In this step, you retrieve the connectivity details from your Confluent Cloud cluster. The actions shown here are usually performed when you want to install the Confluent Cloud CLI and interact with your cluster. However, you don’t need to perform a full install of the Confluent Cloud CLI; you only need the connectivity details that are created as part of that process.
Save the bootstrap server value by choosing Cluster settings in the sidebar and hovering over the area to the right of the Bootstrap server value. Choose Copy and save the value where you can easily access it; you will need it for the Terraform script.
Create a Kafka API key by selecting Data integration > API keys in the sidebar. Click Create key, and save the key information in a safe place.
Step 3: Clone the Confluent Cloud Tools GitHub Repository¶
Clone the Confluent Cloud tools repository from GitHub.
git clone git@github.com:confluentinc/ccloud-tools.git
Step 4: Specify the Confluent Cloud Cluster Connectivity Details¶
In this step, you configure your Confluent Cloud cluster connectivity details, so that once the tools are provisioned, they can connect to the cluster automatically. The information used here was gathered during Step 2: Get Connectivity Details from your Confluent Cloud Cluster.
Navigate to the /terraform/<provider> directory.
cd terraform/<provider>
Rename the file ccloud.auto.tfvars.example to ccloud.auto.tfvars.
Open ccloud.auto.tfvars in a text editor to edit its contents.
###########################################
############# Confluent Cloud #############
###########################################
ccloud_broker_list = "<CCLOUD_BOOTSTRAP_SERVER>"
ccloud_access_key = "<CCLOUD_ACCESS_KEY>"
ccloud_secret_key = "<CCLOUD_SECRET_KEY>"
Provide the requested information accordingly and save the changes in the file.
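Before running Terraform, you can sanity-check that no placeholders were left unreplaced in the file. A minimal sketch, run from the terraform/<provider> directory after editing:

```shell
# Optional sanity check: warn if any <CCLOUD_...> placeholder remains
# in ccloud.auto.tfvars. Does nothing if the file is not present.
file="ccloud.auto.tfvars"
if [ -f "$file" ] && grep -q '<CCLOUD_' "$file"; then
  echo "warning: unreplaced placeholders remain in $file" >&2
fi
```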
Step 5: Specify the Cloud Provider Credentials Details¶
In this step, you configure your Cloud credentials, so Terraform can connect to your Cloud provider and perform infrastructure provisioning. The steps provided here are high-level steps, because each Cloud provider has its own way to specify credentials.
Navigate to the /terraform/<provider> directory.
cd terraform/<provider>
Rename the file cloud.auto.tfvars.example to cloud.auto.tfvars.
Open cloud.auto.tfvars in a text editor to edit its contents. The example below shows the information that AWS requires for credentials.
###########################################
################## AWS ####################
###########################################
aws_access_key = "<YOUR_AWS_ACCESS_KEY>"
aws_secret_key = "<YOUR_AWS_SECRET_KEY>"
aws_region = "us-east-1"
Provide the requested information accordingly and save the changes in the file.
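Both cloud.auto.tfvars and ccloud.auto.tfvars now contain secrets. A low-tech safeguard, in case the repository's .gitignore doesn't already cover them, is to make sure Git ignores these files; run this from the directory whose .gitignore should apply:

```shell
# Keep credential-bearing tfvars files out of version control.
# Appends the pattern to .gitignore (creating it if needed) unless already present.
grep -qxF '*.auto.tfvars' .gitignore 2>/dev/null || echo '*.auto.tfvars' >> .gitignore
```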
Step 6: Run Terraform¶
In this step, you run Terraform, so it can create the infrastructure and install the Confluent Platform tools on it.
Navigate to the /terraform/<provider> directory.
cd terraform/<provider>
Initialize the folder to download the required plugins. This step is only required once.
terraform init
Perform a dry-run of the install by planning what is going to be created, modified and destroyed.
terraform plan
Apply the plan configuration in the chosen Cloud provider. After running the command below, you are asked to confirm what is about to be performed. Type yes and press Enter.
terraform apply
Once the script finishes, it prints the endpoints of the load balancers that front each tool.
Apply complete! Resources: 56 added, 0 changed, 0 destroyed.

Outputs:

control_center_endpoint = http://control-center-00000000.us-east-1.elb.amazonaws.com
kafka_connect_endpoint = http://kafka-connect-0000000000.us-east-1.elb.amazonaws.com
ksql_server_endpoint = http://ksql-server-00000000000000.us-east-1.elb.amazonaws.com
rest_proxy_endpoint = http://rest-proxy-0000000000000000.us-east-1.elb.amazonaws.com
schema_registry_endpoint = http://schema-registry-000000.us-east-1.elb.amazonaws.com
Optional: Destroy the provisioned infrastructure in your Cloud provider. After running the following command, you are asked to confirm what is about to be performed. Type yes and press Enter.
terraform destroy
Next Steps¶
- Learn more about Confluent Cloud in the documentation.
- Read the blog Securing the Infrastructure of Confluent with HashiCorp Vault.
- Listen to the podcast Using Terraform and Confluent Cloud with Ricardo Ferreira.