This project is based on Confluent's Kafka JDBC connector with additional functionalities, namely:

- Support for TimescaleDB databases.
- Support for multiple `createTable` statements.
- Support for schema creation and setting of the schema name format in the connector config.
- Support for the `TIMESTAMPTZ` data type in PostgreSQL databases.
- A custom JDBC credentials provider to support receiving the database username and password from environment variables.
- Sentry monitoring for the JDBC connector.
This project depends on a transform plugin that transforms the Kafka record before it is written to the database. See RADAR-base/kafka-connect-transform-keyvalue for more information.

If you're using Docker, the transform plugin is included in the Docker image. If you're installing manually, the `kafka-connect-transform-keyvalue` plugin must be installed to your Confluent plugin path.
This repository relies on a recent version of `docker` and `docker-compose`, as well as an installation of Java 8 or later.
Copy `docker/sink-timescale.properties.template` to `docker/sink-timescale.properties` and enter your database connection URL, username, and password.
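A sketch of the resulting file, using the standard Kafka Connect JDBC connection settings (the host, database name, and credential values below are placeholders, not values from this repository):

```properties
# docker/sink-timescale.properties (excerpt) -- placeholder values, replace with your own.
connection.url=jdbc:postgresql://localhost:5432/mydatabase
connection.user=myuser
connection.password=mypassword
```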
To locally deploy a full Kafka stack (for development and testing), you can now run:

```
docker-compose up -d --build
```
To receive database credentials from environment variables, set the `CONNECT_JDBC_CONNECTION_USER` and `CONNECT_JDBC_CONNECTION_PASSWORD` environment variables with values for the database username and password, respectively. In addition, set the `jdbc.credentials.provider.class` property in the `docker/sink-timescale.properties` file to `io.confluent.connect.jdbc.util.EnvVarsJdbcCredentialsProvider`, like so:

```
jdbc.credentials.provider.class = io.confluent.connect.jdbc.util.EnvVarsJdbcCredentialsProvider
```
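As an illustration only, here is a minimal standalone sketch of what an environment-based credentials provider does. The real `EnvVarsJdbcCredentialsProvider` implements the connector's credentials-provider interface; the class and method names below are hypothetical and exist only to show the lookup logic.

```java
import java.util.function.Function;

// Hypothetical standalone sketch (NOT the connector's actual class): resolve database
// credentials from the two environment variables named above.
public class EnvCredentialsSketch {
    public static final String USER_VAR = "CONNECT_JDBC_CONNECTION_USER";
    public static final String PASSWORD_VAR = "CONNECT_JDBC_CONNECTION_PASSWORD";

    // The environment lookup is injected so the logic can be exercised without
    // setting real environment variables.
    public static String[] resolve(Function<String, String> env) {
        String user = env.apply(USER_VAR);
        String password = env.apply(PASSWORD_VAR);
        if (user == null || password == null) {
            throw new IllegalStateException("Database credentials not set in environment");
        }
        return new String[] {user, password};
    }

    public static void main(String[] args) {
        // Only attempt resolution when both variables are actually set.
        if (System.getenv(USER_VAR) != null && System.getenv(PASSWORD_VAR) != null) {
            String[] creds = resolve(System.getenv()::get);
            System.out.println("Resolved credentials for user " + creds[0]);
        }
    }
}
```

Injecting the lookup function rather than calling `System.getenv` directly keeps the resolution logic testable; the actual provider simply reads the process environment.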
Code should be formatted using the Google Java Code Style Guide. If you want to contribute a feature or fix, browse our issues and please make a pull request.
This repository is a fork of Confluent's JDBC connector. The upstream connector is present as a git subtree in the `kafka-jdbc-connector` directory.
To upgrade to a newer version of the JDBC connector, follow these steps:

- Set the subtree pull strategy to rebase:

  ```
  git config pull.rebase true
  ```

- Add the Confluent repository as a remote:

  ```
  git remote add upstream git@github.com:confluentinc/kafka-connect-jdbc.git
  ```

- Pull the latest changes from the Confluent repository. For instance, to pull the latest changes from the `10.8.x` tag:

  ```
  git pull -s subtree upstream 10.8.x
  ```

- Resolve any conflicts that may arise.
To enable Sentry monitoring for the JDBC connector, follow these steps:

- Set a `SENTRY_DSN` environment variable that points to the desired Sentry DSN.
- (Optional) Set the `SENTRY_LOG_LEVEL` environment variable to control the minimum log level of events sent to Sentry. The default log level for Sentry is `ERROR`. Possible values are `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.
For further configuration of Sentry via environment variables, see the Sentry documentation. For instance:

```
SENTRY_LOG_LEVEL: 'ERROR'
SENTRY_DSN: 'https://000000000000.ingest.de.sentry.io/000000000000'
SENTRY_ATTACHSTACKTRACE: true
SENTRY_STACKTRACE_APP_PACKAGES: io.confluent.connect.jdbc
```