Build the connector using Maven:

```
mvn clean package
```

Once built, the output is a single JAR called `target/kafka-connect-mq-source-0.2-SNAPSHOT-jar-with-dependencies.jar` which contains all of the required dependencies.

Kafka Connect is very flexible but it's important to understand the way that it processes messages to end up with a reliable system. When the connector encounters a message that it cannot process, it stops rather than throwing the message away. Therefore, you need to make sure that the configuration you use can handle the messages the connector will process.

This is rather complicated and it's likely that a future update of the connector will simplify matters.

Each message in Kafka Connect is associated with a representation of the message format known as a *schema*. Each Kafka message actually has two parts, key and value, and each part has its own schema. The MQ source connector does not currently use message keys, but some of the configuration options use the word *Value* because they refer to the Kafka message value.

When the MQ source connector reads a message from MQ, it chooses a schema to represent the message format and creates an internal object called a *record* containing the message value. This conversion is performed using a *record builder*. Each record is then processed using a *converter* which creates the message that's published on a Kafka topic.

There are two record builders supplied with the connector, although you can write your own. The basic rule is that if you just want the message to be passed along to Kafka unchanged, the default record builder is probably the best choice. If the incoming data is in JSON format and you want to use a schema based on its structure, use the JSON record builder.

There are three converters built into Apache Kafka and another which is part of the Confluent Platform. You need to make sure that the incoming message format, the setting of the *mq.message.body.jms* configuration, the record builder and converter are all compatible. By default, everything is just treated as bytes but if you want the connector to understand the message format and apply more sophisticated processing such as single-message transforms, you'll need a more complex configuration. The following table shows the basic options that work.

| Record builder class | Incoming MQ message | mq.message.body.jms | Converter class | Outgoing Kafka message |
| -------------------- | ------------------- | ------------------- | --------------- | ---------------------- |
| com.ibm.mq.kafkaconnect.builders.JsonRecordBuilder | JSON, may have schema | Not used | org.apache.kafka.connect.json.JsonConverter | **JSON, no schema** |
| com.ibm.mq.kafkaconnect.builders.JsonRecordBuilder | JSON, may have schema | Not used | io.confluent.connect.avro.AvroConverter | **Binary-encoded Avro** |
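
As an example, the first of these rows corresponds to connector settings along the following lines. This is a sketch; `value.converter.schemas.enable` is the standard setting of the built-in JsonConverter that controls whether a schema envelope is included in the output, and setting it to `false` gives the "JSON, no schema" result shown in the table.

```
mq.record.builder=com.ibm.mq.kafkaconnect.builders.JsonRecordBuilder
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```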

There's no single configuration that will always be right, but here are some high-level suggestions.

* Pass unchanged binary (or string) data as the Kafka message value
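
For this case, the relevant settings might look like the following sketch. The `DefaultRecordBuilder` class name is an assumption based on the package naming used elsewhere in this document; `ByteArrayConverter` is built into Apache Kafka.

```
mq.message.body.jms=false
mq.record.builder=com.ibm.mq.kafkaconnect.builders.DefaultRecordBuilder
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```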

The MQ source connector has a configuration option *mq.message.body.jms* that controls whether it interprets the MQ messages as JMS messages or regular MQ messages. By default, *mq.message.body.jms=false*. This means that all messages are treated as arrays of bytes, and the converter must be able to handle arrays of bytes.

When you set *mq.message.body.jms=true*, the MQ messages are interpreted as JMS messages. This is appropriate if the applications sending the messages are themselves using JMS.
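
If the sending applications produce JMS text messages that should arrive in Kafka as strings, a sketch of the relevant settings looks like this (the string converter is built into Apache Kafka):

```
mq.message.body.jms=true
value.converter=org.apache.kafka.connect.storage.StringConverter
```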

### The gory detail

The messages received from MQ are processed by a record builder which builds a Kafka Connect record to represent the message. There are two record builders supplied with the MQ source connector. The connector has a configuration option *mq.message.body.jms* that controls whether it interprets the MQ messages as JMS messages or regular MQ messages.

| Record builder class | mq.message.body.jms | Incoming message format | Value schema | Value class |
| -------------------- | ------------------- | ----------------------- | ------------ | ----------- |
| com.ibm.mq.kafkaconnect.builders.JsonRecordBuilder | Not used | JSON | Depends on message | Depends on message |

You must then choose a converter that can handle the value schema and class. There are three basic converters built into Apache Kafka, with the likely useful combinations in **bold**.

In addition, there is another converter for the Avro format that is part of the Confluent Platform. This has not been tested with the MQ source connector at this time.
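
For anyone who wants to try it, the converter side of such a configuration would look something like this sketch (the schema registry URL is a placeholder for your own deployment):

```
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```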

The configuration options for the MQ Source Connector are as follows:

| Name | Description | Type | Default | Valid values |
| ---- | ----------- | ---- | ------- | ------------ |
| mq.queue | The name of the source MQ queue | string || MQ queue name |
| mq.user.name | The user name for authenticating with the queue manager | string || User name |
| mq.password | The password for authenticating with the queue manager | string || Password |
| mq.record.builder | The class used to build the Kafka Connect record | string || Class implementing RecordBuilder |
| mq.message.body.jms | Whether to interpret the message body as a JMS message type | boolean | false ||
| mq.ssl.cipher.suite | The name of the cipher suite for TLS (SSL) connection | string || Blank or valid cipher suite |
| mq.ssl.peer.name | The distinguished name pattern of the TLS (SSL) peer | string || Blank or DN pattern |
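
Putting these options together, a standalone configuration file might look like the following sketch. The `connector.class` value, `topic`, and the queue manager name are illustrative assumptions; only the `mq.*` options listed in the table above are taken from this document.

```
name=mq-source
connector.class=com.ibm.mq.kafkaconnect.MQSourceConnector
tasks.max=1
topic=mytopic
mq.queue.manager=QM1
mq.queue=MYQSOURCE
mq.user.name=alice
mq.password=passw0rd
mq.message.body.jms=false
```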

## Future enhancements

The connector is intentionally basic. The idea is to enhance it over time with additional features to make it more capable. Some possible future enhancements are:

* Simplification of handling message formats
* Message key support
* Configurable schema for MQ messages
* JMX metrics
* Testing with the Confluent Platform Avro converter and Schema Registry
* Separate TLS configuration for the connector so that keystore location and so on can be specified as configurations