
Commit a8ad063

Merge pull request #63 from Emrehzl94/release-5.1.5
Release 5.1.5
2 parents 479f510 + 5514521 commit a8ad063

File tree

6 files changed: +220, -1 lines changed

antora.yml

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ asciidoc:
     page-product: Neo4j Connector for Kafka
     kafka-connect-version: 3.0
     connector-version: '5.1'
-    exact-connector-version: '5.1.4'
+    exact-connector-version: '5.1.5'
     page-pagination: true
     product-name: Neo4j Connector for Kafka
     url-common-license-page: https://neo4j.com/docs/license/

modules/ROOT/content-nav.adoc

Lines changed: 1 addition & 0 deletions
@@ -15,6 +15,7 @@
 ** xref:source/query.adoc[Query]
 * xref:source/schema-registry.adoc[Schema Registry]
 * xref:source/configuration.adoc[Settings]
+* xref:source/payload-mode.adoc[Payload Mode]

 * *Sink connector*
 * xref::sink.adoc[Configuration]

modules/ROOT/pages/changelog.adoc

Lines changed: 23 additions & 0 deletions
@@ -2,6 +2,29 @@

 This page lists changes to the {product-name}.

+== Version 5.1.5
+
+=== New and updated features
+
+[cols="1,2", options="header"]
+|===
+| Feature | Details
+
+a|
+label:functionality[]
+label:new[]
+
+Added `neo4j.payload-mode` configuration property for the source connector.
+| Introduced the `neo4j.payload-mode` option to define the structure of change messages. Available values are `COMPACT` and `EXTENDED`. `COMPACT` produces simpler messages but runs into schema compatibility issues when property types change, while `EXTENDED` includes type information to avoid such issues. The default is `EXTENDED`.
+
+a|
+label:bug[]
+label:fixed[]
+
+Prevented an exception caused by duplicate fields in schema generation for CDC source events.
+| Resolved an issue in the ChangeEvent schema generation process where duplicate fields caused an exception (`org.apache.kafka.connect.errors.SchemaBuilderException: Cannot create field because of field name duplication id`). When building the schema for key array elements, if different maps contained the same field name, the field was added multiple times, leading to this exception. Duplicate fields are now handled appropriately to avoid this issue.
+
 == Version 5.1.4

 === New and updated features

modules/ROOT/pages/source/configuration.adoc

Lines changed: 4 additions & 0 deletions
@@ -35,6 +35,10 @@ Default: `false`

 Default: `1000`

+| neo4j.payload-mode
+| Defines the structure of change messages. One of `COMPACT`, `EXTENDED`. `COMPACT` provides simpler messages but faces schema compatibility issues if property types change. `EXTENDED` includes type information to avoid such issues.
+
+Default: `EXTENDED`
 |===

 == CDC Strategy Settings

modules/ROOT/pages/source/payload-mode.adoc

Lines changed: 185 additions & 0 deletions
@@ -0,0 +1,185 @@
= Kafka Source Connector: Payload Mode Configuration

The Kafka Source Connector for Neo4j supports two payload modes that control the format of data serialized and published to Kafka topics: `EXTENDED` and `COMPACT`. The mode is configurable through the `neo4j.payload-mode` property, allowing users to select the serialization format that best fits their data requirements.

== Payload Modes

The `neo4j.payload-mode` configuration offers the following options:

* **`EXTENDED` (Default)**: Provides a detailed structure for each property, supporting schema compatibility and consistency. This format is especially useful when schema changes (such as property type changes) or temporal types are present, ensuring data consistency across changes.

* **`COMPACT`**: Produces a simpler format that includes only the essential fields. This format is lighter and may be preferable when schema compatibility or complex data types are not required.

[WARNING]
====
*Limitations of `COMPACT` Mode*

* **Property Type Changes**: `COMPACT` mode does not support changes in property types. If a property type changes in Neo4j (e.g., from integer to string), it can break the schema, as illustrated below.
* **Protobuf Compatibility**: `COMPACT` mode is not supported with Protobuf, as it does not support serialization of temporal types (e.g., `LocalDate`, `LocalDateTime`).
====
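
To make the first limitation concrete, consider a property whose type changes between two change events. The two hypothetical `COMPACT` messages below imply conflicting schemas for the same field, which a schema-registry-backed converter will typically reject:

[source,json]
----
// first event: 'age' stored as an integer in Neo4j
{ "name": "mary", "age": 42 }

// later event: 'age' rewritten as a string in Neo4j
{ "name": "mary", "age": "forty-two" }
----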

== Configuration

The payload mode can be configured in the source connector's settings as follows:

[source,json]
----
"neo4j.payload-mode": "EXTENDED" // Or "COMPACT" based on requirements
----
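
The property is supplied together with the rest of the source connector configuration submitted to Kafka Connect. The snippet below is a minimal, hypothetical sketch of where the option sits: apart from `neo4j.payload-mode`, the names and values (connector name, URI, credentials) are placeholders for a typical 5.x source connector setup and should be checked against the Settings page for your version.

[source,json]
----
{
  "name": "my-neo4j-source",
  "config": {
    "connector.class": "org.neo4j.connectors.kafka.source.Neo4jConnector",
    "neo4j.uri": "neo4j://localhost:7687",
    "neo4j.authentication.type": "BASIC",
    "neo4j.authentication.basic.username": "neo4j",
    "neo4j.authentication.basic.password": "password",
    "neo4j.payload-mode": "COMPACT" // Defaults to EXTENDED when omitted
  }
}
----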

== Example Data Formats

The following examples show how data is published in each payload mode.

=== `COMPACT` Mode Example

The `COMPACT` mode produces a minimal payload with only the essential fields:

[source,json]
----
{
  "name": "mary",
  "surname": "doe",
  "timestamp": 1729779296311
}
----

This mode is useful when performance and simplicity are priorities, and it is suitable for scenarios where schema evolution and temporal consistency are not a primary concern.

=== `EXTENDED` Mode Example

The `EXTENDED` mode includes additional structure and metadata to support complex types and schema consistency, preventing issues when property types change over time:

[source,json]
----
{
  "name": {
    "type": "S",
    "B": null,
    "I64": null,
    "F64": null,
    "S": "mary",
    "BA": null,
    "TLD": null,
    "TLDT": null,
    "TLT": null,
    "TZDT": null,
    "TOT": null,
    "TD": null,
    "SP": null,
    "LB": null,
    "LI64": null,
    "LF64": null,
    "LS": null,
    "LTLD": null,
    "LTLDT": null,
    "LTLT": null,
    "LZDT": null,
    "LTOT": null,
    "LTD": null,
    "LSP": null
  },
  "surname": {
    "type": "S",
    "B": null,
    "I64": null,
    "F64": null,
    "S": "doe",
    "BA": null,
    "TLD": null,
    "TLDT": null,
    "TLT": null,
    "TZDT": null,
    "TOT": null,
    "TD": null,
    "SP": null,
    "LB": null,
    "LI64": null,
    "LF64": null,
    "LS": null,
    "LTLD": null,
    "LTLDT": null,
    "LTLT": null,
    "LZDT": null,
    "LTOT": null,
    "LTD": null,
    "LSP": null
  },
  "timestamp": {
    "type": "I64",
    "B": null,
    "I64": 1729779365447,
    "F64": null,
    "S": null,
    "BA": null,
    "TLD": null,
    "TLDT": null,
    "TLT": null,
    "TZDT": null,
    "TOT": null,
    "TD": null,
    "SP": null,
    "LB": null,
    "LI64": null,
    "LF64": null,
    "LS": null,
    "LTLD": null,
    "LTLDT": null,
    "LTLT": null,
    "LZDT": null,
    "LTOT": null,
    "LTD": null,
    "LSP": null
  }
}
----

This mode is especially beneficial for data with complex schema requirements, as it ensures compatibility even if property types change on the Neo4j side.

== Understanding the `EXTENDED` Payload Structure

In `EXTENDED` mode, each property includes fields for every supported Neo4j type. Only the field corresponding to the actual property type contains a non-null value, while all others are set to null. This structure ensures that a change in the type of a property does not cause schema enforcement errors at either the source or the sink connector.

[cols="1,2"]
|===
| Field | Description

| type | Indicates the type of the property. Possible values include: `B`, `I64`, `F64`, `S`, `BA`, `TLD`, `TLDT`, `TLT`, `TZDT`, `TOT`, `TD`, `SP`, or their list equivalents (e.g., `LB`, `LI64`, `LF64`, `LS`, `LTLD`, etc.).
| B | Boolean (true or false)
| I64 | 64-bit integer
| F64 | 64-bit floating point
| S | String
| BA | Byte array
| TLD | Temporal Local Date
| TLDT | Temporal Local DateTime
| TLT | Temporal Local Time
| TZDT | Temporal Zoned DateTime
| TOT | Temporal Offset Time
| TD | Temporal Duration
| SP | Spatial Point
| LB, LI64, LF64, LS, LTLD, etc. | Lists of each corresponding type
|===

For example, a string field is represented as:

[source,json]
----
{
  "type": "S",
  "B": null,
  "I64": null,
  "F64": null,
  "S": "actual_value",
  ...
}
----
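
Consumers that need the plain value back from an `EXTENDED` property can dispatch on the `type` discriminator and read the identically named field. The following Python sketch is illustrative only (it is not part of the connector) and assumes the record value has already been deserialized into a dictionary:

[source,python]
----
def extract_value(prop):
    """Return the non-null value of an EXTENDED-mode property.

    'type' names the discriminator (e.g. 'S', 'I64', 'LS'); the field
    with that same name carries the actual value, all others are null.
    """
    return prop[prop["type"]]


# Hypothetical usage with the 'surname' property shown earlier:
surname_prop = {"type": "S", "S": "doe", "I64": None, "B": None}
print(extract_value(surname_prop))  # prints: doe
----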

== Configuration Recommendations

`COMPACT` mode is simpler to work with when the generated messages are consumed by other connectors or applications and you can relax the schema compatibility mode on the target topics. If your environment requires schema compatibility or temporal data types, or you have strong type-safety requirements with different converters (`AVRO`, `JSON Schema`, `PROTOBUF`, or `JSON Embedded`), `EXTENDED` mode should be preferred.
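
Relaxing schema compatibility is done on the schema registry side, not in the connector. As a hypothetical illustration with Confluent Schema Registry, setting the compatibility level of the target topic's value subject to `NONE` disables compatibility checks for that subject; the registry URL and subject name below are placeholders:

[source,json]
----
// PUT http://localhost:8081/config/my-topic-value
{ "compatibility": "NONE" }
----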

== Compatibility with Sink Connectors

The `EXTENDED` format was introduced in connector version 5.1.0 to ensure that all data published to Kafka topics adheres to a consistent schema. This prevents issues when a property changes type on the Neo4j side (e.g., a name property changes from integer to string), enabling smooth data processing across connectors and Kafka consumers. When a Neo4j sink connector is fed by a Neo4j source connector, it is recommended to use `EXTENDED` mode, as the Neo4j sink connector can seamlessly handle the `EXTENDED` data type.

modules/ROOT/pages/whats-new.adoc

Lines changed: 6 additions & 0 deletions
@@ -25,6 +25,12 @@ It is no longer possible to turn off this behavior in the connector itself, and

 * It is now possible to ignore stored offsets by setting `neo4j.ignore-stored-offset` to `true` if required.

+* The new `neo4j.payload-mode` configuration provides options to control the payload structure:
+
+** **`EXTENDED`**: Provides detailed data and type information, ensuring compatibility even if property types change.
+
+** **`COMPACT`**: Provides a simpler, lightweight format with only essential fields, best used when schema compatibility or complex types aren't needed.
+
 == Sink

 * Changes are now applied in the order they are received from Kafka Connect, grouped by their topics.
