
Commit 1e7461d

add backticks for COMPACT and EXTENDED texts
1 parent 92a966f commit 1e7461d

File tree

1 file changed (+8 −8 lines)

modules/ROOT/pages/source/payload-mode.adoc

Lines changed: 8 additions & 8 deletions
@@ -32,9 +32,9 @@ The payload mode can be configured in the source connector's settings as follows
 
 The following examples show how data will be published in each payload mode.
 
-=== COMPACT Mode Example
+=== `COMPACT` Mode Example
 
-The COMPACT mode produces a minimalistic payload with only the essential fields:
+The `COMPACT` mode produces a minimalistic payload with only the essential fields:
 
 [source,json]
 ----
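
For orientation, a `COMPACT` node-change payload might look roughly like the following sketch. The field names are illustrative guesses, not the connector's exact schema; the real example sits in the unchanged lines elided from this diff.

[source,json]
----
{
  // illustrative sketch only; see the full example in payload-mode.adoc
  "id": "0",
  "labels": ["Person"],
  "properties": {
    "name": "John",
    "age": 42
  }
}
----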
@@ -47,9 +47,9 @@ The COMPACT mode produces a minimalistic payload with only the essential fields:
 
 This mode is useful when performance and simplicity are priorities, and it is suitable for scenarios where schema evolution and temporal consistency are not a primary concern.
 
-=== EXTENDED Mode Example
+=== `EXTENDED` Mode Example
 
-The EXTENDED mode includes additional structure and metadata to support complex types and schema consistency, preventing issues when property types change over time:
+The `EXTENDED` mode includes additional structure and metadata to support complex types and schema consistency, preventing issues when property types change over time:
 
 [source,json]
 ----
@@ -138,9 +138,9 @@ The EXTENDED mode includes additional structure and metadata to support complex
 
 This mode is especially beneficial for data with complex schema requirements, as it ensures compatibility even if property types change on the Neo4j side.
 
-== Understanding the EXTENDED Payload Structure
+== Understanding the `EXTENDED` Payload Structure
 
-In EXTENDED mode, each property includes fields for every supported Neo4j type. Only the field corresponding to the actual property type will contain a non-null value, while all others are set to null. This structure ensures that any change in the type of a property does not cause schema enforcement errors at either the source or sink connector.
+In `EXTENDED` mode, each property includes fields for every supported Neo4j type. Only the field corresponding to the actual property type will contain a non-null value, while all others are set to null. This structure ensures that any change in the type of a property does not cause schema enforcement errors at either the source or sink connector.
 
 [cols="1,2"]
 |===
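
To make the wrapping concrete: under this scheme a string-valued `name` property would arrive as a struct whose type-specific fields are all null except the string one, roughly as sketched below. The field names here are invented for illustration; the actual names are defined in the type table that follows in the file.

[source,json]
----
{
  "name": {
    // invented field names; the real ones are listed in the type table
    "string": "John",
    "long": null,
    "double": null,
    "boolean": null,
    "localDateTime": null
  }
}
----

A later type change (say, `name` becoming an integer) then only moves the non-null value from one field to another, so the Kafka schema itself never changes.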
@@ -178,8 +178,8 @@ For example, a string field will be represented as:
 
 == Configuration Recommendations
 
-For production environments where performance and payload simplicity are important, COMPACT mode is recommended. If your environment involves schema evolution, temporal data types, or other complex data requirements, EXTENDED mode provides the necessary structure for schema compatibility.
+For production environments where performance and payload simplicity are important, `COMPACT` mode is recommended. If your environment involves schema evolution, temporal data types, or other complex data requirements, `EXTENDED` mode provides the necessary structure for schema compatibility.
 
 == Compatibility with Sink Connectors
 
-The EXTENDED format was introduced in connector version 5.1.0 to ensure that all data published to Kafka topics adheres to a consistent schema. This prevents issues when a property changes type on the Neo4j side (e.g., a name property changes from integer to string), enabling smooth data processing across connectors and Kafka consumers.
+The `EXTENDED` format was introduced in connector version 5.1.0 to ensure that all data published to Kafka topics adheres to a consistent schema. This prevents issues when a property changes type on the Neo4j side (e.g., a name property changes from integer to string), enabling smooth data processing across connectors and Kafka consumers.
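
As a usage note, selecting the mode in a Kafka Connect source configuration could look like the sketch below. It assumes the setting is named `neo4j.payload-mode` and that `EXTENDED` is the 5.1.0+ default; verify both against the settings section referenced at the top of this diff for your connector version.

[source,json]
----
{
  "name": "my-neo4j-source",
  "config": {
    // "neo4j.payload-mode" is an assumed key; check your version's settings page
    "neo4j.payload-mode": "COMPACT"
  }
}
----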
