After you deploy a pipeline, it goes through the following phases:
Then, the [operator]({{< relref "/integrate/redis-data-integration/architecture#how-rdi-is-deployed">}}) creates and configures the collector and stream processor that will run the pipeline.
1. *Snapshot* - The collector starts the pipeline by creating a snapshot of the full
   dataset. This involves reading all the relevant source data, transforming it and then
   writing it into the Redis target. This phase typically takes minutes to
   hours if you have a lot of data.
1. *CDC* - Once the snapshot is complete, the collector starts listening for updates to
   the source data. Whenever a change is committed to the source, the collector captures
   it and adds it to the target through the pipeline. This phase continues indefinitely
The main configuration for the pipeline is in the `config.yaml` file.
This specifies the connection details for the source database (such
as host, username, and password) and also the queries that RDI will use
to extract the required data. You should place job files in the `Jobs`
folder if you want to specify your own data transformations.
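As an illustration, a `config.yaml` might look roughly like the sketch below. The host names, credentials, and section layout here are assumptions for illustration only; consult the RDI configuration reference for the authoritative schema:

```yaml
# Hypothetical config.yaml sketch (field names and values are assumptions).
sources:
  mysql:
    type: cdc
    connection:
      type: mysql
      host: source-db.example.com   # assumed host name
      port: 3306
      user: rdi_user
      password: ${SOURCE_DB_PASSWORD}  # typically injected from a secret
targets:
  target:
    connection:
      type: redis
      host: redis-target.example.com  # assumed host name
      port: 12000
```

Keeping secrets such as the password out of the file (for example, via environment variable substitution as above) is generally preferable to hard-coding them.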
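A job file in the `Jobs` folder then describes one transformation. The sketch below is hypothetical - the table name, key expression, and block names (`add_field`, `redis.write`) are assumptions based on the general shape of RDI job definitions, not a verbatim example:

```yaml
# Hypothetical job file, e.g. jobs/customers.yaml (names are assumptions).
source:
  table: customers          # assumed source table
transform:
  - uses: add_field         # add a computed field to each record
    with:
      field: full_name
      language: jmespath
      expression: concat([first_name, ' ', last_name])
output:
  - uses: redis.write       # write each record to the Redis target
    with:
      data_type: hash
      key:
        language: jmespath
        expression: concat(['customer:', id])
```

Each job pairs a source query with optional transformations and an output clause that controls how the data lands in Redis.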