This is rather complicated and it's likely that a future update of the connector will simplify matters.
Each message in Kafka Connect is associated with a representation of the message format known as a *schema*. Each Kafka message actually has two parts, key and value, and each part has its own schema. The MQ source connector does not currently make much use of message keys, but some of the configuration options use the word *Value* because they refer to the Kafka message value.
When the MQ source connector reads a message from MQ, it chooses a schema to represent the message format and creates an internal object called a *record* containing the message value. This conversion is performed using a *record builder*. Each record is then processed using a *converter* which creates the message that's published on a Kafka topic.
There are two record builders supplied with the connector, although you can write your own. The basic rule is that if you just want the message to be passed along to Kafka unchanged, the default record builder is probably the best choice. If the incoming data is in JSON format and you want to use a schema based on its structure, use the JSON record builder.
There are three converters built into Apache Kafka. You need to make sure that the incoming message format, the setting of the *mq.message.body.jms* configuration, the record builder and converter are all compatible. By default, everything is just treated as bytes but if you want the connector to understand the message format and apply more sophisticated processing such as single-message transforms, you'll need a more complex configuration. The following table shows the basic options that work.
| Record builder class | Incoming MQ message | mq.message.body.jms | Converter class | Outgoing Kafka message |
| -------------------- | ------------------- | ------------------- | --------------- | ---------------------- |
| com.ibm.eventstreams.connect.mqsource.builders.JsonRecordBuilder | JSON, may have schema | Not used | org.apache.kafka.connect.json.JsonConverter | **JSON, no schema** |
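As an illustration, a connector configuration matching the JSON row of this table might include the following properties. This is only a sketch: `mq.record.builder` and `value.converter.schemas.enable` are assumed property names that are not shown in the excerpt above, so check them against the configuration reference before relying on them.

```properties
# Sketch: JSON messages from MQ, schema inferred by the JSON record builder,
# published to Kafka as schemaless JSON (the "JSON, no schema" outcome above).
# mq.record.builder and value.converter.schemas.enable are assumptions here;
# verify them against the configuration table later in this README.
mq.record.builder=com.ibm.eventstreams.connect.mqsource.builders.JsonRecordBuilder
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```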
There's no single configuration that will always be right, but here are some high-level suggestions.
You must then choose a converter that can handle the value schema and class.
| org.apache.kafka.connect.storage.StringConverter | Works, not useful | **String data** | Works, not useful |
In addition, there is another converter for the Avro format that is part of the Confluent Platform. This has not been tested with the MQ source connector at this time.
### Key support and partitioning
By default, the connector does not use keys for the Kafka messages it publishes. It can be configured to use the JMS message headers to set the key of the Kafka records. You could use this, for example, to use the MQMD correlation identifier as the partitioning key when the messages are published to Kafka. There are three valid values for `mq.record.builder.key.header`, the configuration that controls this behavior.
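For example, to use the MQMD correlation identifier as the partitioning key, a configuration fragment might look like the sketch below. The `JMSCorrelationID` value and the pairing with the string converter are assumptions; check the valid values documented for `mq.record.builder.key.header`.

```properties
# Sketch: take the Kafka record key from the JMS correlation ID header.
# JMSCorrelationID is assumed to be one of the accepted values; verify it
# against the documented valid values for mq.record.builder.key.header.
mq.record.builder.key.header=JMSCorrelationID
key.converter=org.apache.kafka.connect.storage.StringConverter
```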
## Security
The connector supports authentication with user name and password, connections secured with TLS using a server-side certificate, and mutual authentication with client-side certificates.
### Setting up MQ connectivity using TLS with a server-side certificate
To enable use of TLS, set the configuration `mq.ssl.cipher.suite` to the name of the cipher suite which matches the CipherSpec in the SSLCIPH attribute of the MQ server-connection channel. Use the table of supported cipher suites for MQ 9.1 [here](https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.1.0/com.ibm.mq.dev.doc/q113220_.htm) as a reference. Note that the names of the CipherSpecs as used in the MQ configuration are not necessarily the same as the cipher suite names that the connector uses. The connector uses the JMS interface so it follows the Java conventions.
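For example, if the server-connection channel's SSLCIPH corresponds to the Java cipher suite `TLS_RSA_WITH_AES_128_CBC_SHA256`, the setting would look like this (the suite name is purely illustrative; substitute the one that matches your channel):

```properties
# Sketch: the Java cipher suite name must correspond to the CipherSpec
# configured in SSLCIPH on the server-connection channel.
mq.ssl.cipher.suite=TLS_RSA_WITH_AES_128_CBC_SHA256
```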
You will need to put the public part of the queue manager's certificate in the JSSE truststore used by the Kafka Connect worker that you're using to run the connector. If you need to specify extra arguments to the worker's JVM, you can use the EXTRA_ARGS environment variable.
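For example, assuming the worker is started with a script that honours `EXTRA_ARGS`, the truststore could be supplied like this sketch (paths and password are placeholders):

```shell
# Sketch: point the Kafka Connect worker JVM at a truststore containing the
# queue manager's public certificate. Paths and password are placeholders.
export EXTRA_ARGS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
```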
### Setting up MQ connectivity using TLS for mutual authentication
You will need to put the public part of the client's certificate in the queue manager's key repository. You will also need to configure the worker's JVM with the location and password for the keystore containing the client's certificate.
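Continuing the earlier sketch, the keystore settings could be added to the same JVM options (again, paths and passwords are placeholders):

```shell
# Sketch: add the client keystore alongside the truststore settings so the
# connector can present a client certificate for mutual authentication.
export EXTRA_ARGS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit \
-Djavax.net.ssl.keyStore=/path/to/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit"
```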
### Troubleshooting
## Configuration
The configuration options for the Kafka Connect source connector for IBM MQ are as follows:
| Name | Description | Type | Default | Valid values |
| ---- | ----------- | ---- | ------- | ------------ |
| topic | The name of the target Kafka topic | string | | Topic name |
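Putting it together, a minimal source connector configuration might look like the sketch below. Apart from `topic` and `mq.message.body.jms`, the property names, class names and values shown are assumptions for illustration and should be checked against the full configuration table and the connector documentation.

```properties
# Sketch of a minimal configuration; values are placeholders, and most of the
# property and class names here are assumptions to verify against the docs.
name=mq-source
connector.class=com.ibm.eventstreams.connect.mqsource.MQSourceConnector
tasks.max=1
mq.queue.manager=QM1
mq.connection.name.list=localhost(1414)
mq.channel.name=MYSVRCONN
mq.queue=MYQSOURCE
mq.message.body.jms=true
topic=TSOURCE
# ByteArrayConverter is one of the converters shipped with Apache Kafka;
# pair it with the default record builder to pass message bytes through unchanged.
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```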
## Future enhancements
The connector is intentionally basic. The idea is to enhance it over time with additional features to make it more capable. Some possible future enhancements are:
* JMX metrics
* Separate TLS configuration for the connector so that the keystore location and similar settings can be specified as connector configuration options
## Support
A commercially supported version of this connector is available for customers with a support entitlement for [IBM Event Streams](https://developer.ibm.com/messaging/event-streams/).