| Option | Description |
| --- | --- |
| spark.marklogic.client.host | Required; the host name to connect to; this can be the name of a host in your MarkLogic cluster or the host name of a load balancer. |
| spark.marklogic.client.port | Required; the port of the app server in MarkLogic to connect to. |
| spark.marklogic.client.basePath | Base path to prefix on each request to MarkLogic. |
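As a sketch, these client options can be combined into a single options map - for example, when connecting through a load balancer that exposes the app server under a base path. All values below are hypothetical placeholders:

```python
# Hypothetical connection details - adjust for your own environment.
client_options = {
    "spark.marklogic.client.host": "lb.example.com",  # load balancer host name
    "spark.marklogic.client.port": "8020",            # app server port behind the balancer
    "spark.marklogic.client.basePath": "/ml",         # prefix added to each request to MarkLogic
}
```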
## Read options

These options control how the connector reads data from MarkLogic. See [the guide on reading](reading.md) for more information.
| Option | Description |
| --- | --- |
| spark.marklogic.read.batchSize | Approximate number of rows to retrieve in each call to MarkLogic; defaults to 100000. |
| spark.marklogic.read.numPartitions | The number of Spark partitions to create; defaults to `spark.default.parallelism`. |
| spark.marklogic.read.opticQuery | Required; the Optic DSL query to run for retrieving rows; must use `op.fromView` as the accessor. |
| spark.marklogic.read.pushDownAggregates | Whether to push down aggregate operations to MarkLogic; defaults to `true`. Set to `false` to prevent aggregates from being pushed down to MarkLogic. |
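Putting the read options together, a read might be configured as in the sketch below. The format name `com.marklogic.spark` and the `spark.marklogic.client.username`/`password` options are assumptions not shown in the table above, and all connection values are placeholders:

```python
# Hypothetical read configuration; authentication options are assumed to
# follow the spark.marklogic.client.* naming pattern.
read_options = {
    "spark.marklogic.client.host": "localhost",
    "spark.marklogic.client.port": "8020",
    "spark.marklogic.client.username": "spark-example-user",
    "spark.marklogic.client.password": "password",
    # The query must use op.fromView as its accessor.
    "spark.marklogic.read.opticQuery": "op.fromView('example', 'employee')",
    "spark.marklogic.read.batchSize": "50000",
    "spark.marklogic.read.numPartitions": "8",
}

# With a live SparkSession named `spark`, the read itself would be:
# df = spark.read.format("com.marklogic.spark").options(**read_options).load()
```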
## Write options
These options control how the connector writes data to MarkLogic. See [the guide on writing](writing.md) for more information.
| Option | Description |
| --- | --- |
| spark.marklogic.write.abortOnFailure | Whether the Spark job should abort if a batch fails to be written; defaults to `true`. |
| spark.marklogic.write.batchSize | The number of documents written in a call to MarkLogic; defaults to 100. |
| spark.marklogic.write.collections | Comma-delimited string of collection names to add to each document. |
| spark.marklogic.write.permissions | Comma-delimited string of role names and capabilities to add to each document - e.g. `role1,read,role2,update,role3,execute`. |
| spark.marklogic.write.temporalCollection | Name of a temporal collection to assign each document to. |
| spark.marklogic.write.threadCount | The number of threads used within each partition to send documents to MarkLogic; defaults to 4. |
| spark.marklogic.write.transform | Name of a REST transform to apply to each document. |
| spark.marklogic.write.transformParams | Comma-delimited string of transform parameter names and values - e.g. `param1,value1,param2,value2`. |
| spark.marklogic.write.transformParamsDelimiter | Delimiter to use instead of a comma for the `transformParams` option. |
| spark.marklogic.write.uriPrefix | String to prepend to each document URI, where the URI defaults to a UUID. |
| spark.marklogic.write.uriSuffix | String to append to each document URI, where the URI defaults to a UUID. |
| spark.marklogic.write.uriTemplate | String defining a template for constructing each document URI. See [Writing data](writing.md) for more information. |
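A write can be configured similarly. In this sketch the format name `com.marklogic.spark` is an assumption, and the collection names, permissions, and URI parts are illustrative only:

```python
# Hypothetical write configuration using option names from the table above.
write_options = {
    "spark.marklogic.client.host": "localhost",
    "spark.marklogic.client.port": "8020",
    # Each document is added to both collections.
    "spark.marklogic.write.collections": "employee,imported",
    # Pairs of role,capability - here read for rest-reader and update for rest-writer.
    "spark.marklogic.write.permissions": "rest-reader,read,rest-writer,update",
    "spark.marklogic.write.uriPrefix": "/employee/",
    "spark.marklogic.write.uriSuffix": ".json",
    "spark.marklogic.write.batchSize": "200",
    "spark.marklogic.write.threadCount": "8",
}

# With a live DataFrame named `df`, the write itself would be:
# df.write.format("com.marklogic.spark").options(**write_options).mode("append").save()
```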
---

*The remainder of this document is from `docs/getting-started/setup.md`.*
## Deploy an example application
The connector allows a user to specify an [Optic query](https://docs.marklogic.com/guide/app-dev/OpticAPI#id_46710) to select rows to retrieve from MarkLogic. The query depends on a [MarkLogic view](https://docs.marklogic.com/guide/app-dev/OpticAPI#id_68685) that projects data from documents in MarkLogic into rows.
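For the example application below, which exposes a view with schema name `example` and view name `employee`, the query string passed to the connector's `spark.marklogic.read.opticQuery` option would be a sketch like:

```python
# The Optic DSL query string for the example view; the connector requires
# op.fromView as the accessor.
optic_query = "op.fromView('example', 'employee')"
```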
After the deployment finishes, your MarkLogic server will now have the following:

- An app server named `spark-example` listening on port 8020 (or the port you chose if you modified the `mlPort` property).
- A database named `spark-example-content` that contains 1000 JSON documents in the `employee` collection.
- A TDE with a schema name of `example` and a view name of `employee`.
- A user named `spark-example-user` that can be used with the Spark connector and [MarkLogic's qconsole tool](https://docs.marklogic.com/guide/qconsole/intro).
To verify that your application was deployed correctly, access your MarkLogic server's qconsole tool - for example, if your MarkLogic server is deployed locally, you will go to <http://localhost:8000/qconsole>. You can authenticate as the `spark-example-user` user that was created above, as it's generally preferable to test as a non-admin user. After authenticating, perform the following steps: