@@ -144,6 +144,9 @@ processors:
# Time (in ms) after which data will be read from stream even if
# read_batch_size was not reached.
# duration: 100
+ # Data type to use in the target Redis database: `hash` for Redis Hash,
+ # `json` for JSON (which requires the RedisJSON module).
+ # target_data_type: hash
# The batch size for writing data to the target Redis database. Should be
# less than or equal to the read_batch_size.
# write_batch_size: 200
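For context, here is a minimal sketch of what the `processors` block might look like with these options uncommented and set. The values are taken from the defaults shown in the comments above (with `target_data_type` switched to `json` to illustrate the new option); any surrounding keys are omitted:

```yaml
processors:
  # Read up to 2000 records per batch, or whatever has arrived after 100 ms.
  read_batch_size: 2000
  duration: 100
  # Write target records as JSON documents (requires the RedisJSON module);
  # omit this, or set it to `hash`, for the default Redis Hash type.
  target_data_type: json
  # Must be less than or equal to read_batch_size.
  write_batch_size: 200
```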
@@ -247,6 +250,13 @@ configuration above contains the following properties:
- `read_batch_size`: Maximum number of records to read from the source database. RDI will
  wait for the batch to fill up to `read_batch_size` or for `duration` to elapse,
  whichever happens first. The default is 2000.
+ - `target_data_type`: Data type to use in the target Redis database. The options are `hash`
+   for Redis Hash (the default), or `json` for RedisJSON, which is available only if you have
+   added the RedisJSON module to the target database. Note that this setting is mainly useful
+   when you don't provide any custom jobs. When you do provide jobs, you can specify the
+   target data type in each job individually and choose from a wider range of data types
+   (see the job file sketch after this list).
+   See [Job files]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples" >}})
+   for more information.
- `duration`: Time (in ms) after which data will be read from the stream even if
  `read_batch_size` was not reached. The default is 100 ms.
- `write_batch_size`: The batch size for writing data to the target Redis database. This should be
  less than or equal to the `read_batch_size`. The default is 200.
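As a point of comparison for the note on `target_data_type` above, a per-job override might look like the following sketch. This assumes the job file layout shown in the Job files examples linked earlier; the `employee` table name, the key expression, and the omitted fields (such as the target connection name) are illustrative rather than a definitive schema:

```yaml
source:
  # Capture changes from one source table (illustrative name).
  table: employee
output:
  - uses: redis.write
    with:
      # Overrides the pipeline-level target_data_type for this job only.
      data_type: json
      key:
        # Build the Redis key from a column value (JMESPath expression).
        expression: "concat(['emp:', emp_no])"
        language: jmespath
```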