@@ -48,11 +48,21 @@ query expansion via [a thesaurus](https://docs.marklogic.com/guide/search-dev/th

## Optic query requirements

- As of the 2.0 release of the connector, the Optic query must use the
- [op.fromView](https://docs.marklogic.com/op.fromView) accessor function. The query must also adhere to the
- restrictions that the
- [RowBatcher in the Data Movement SDK](https://github.com/marklogic/java-client-api/wiki/Row-Batcher#building-a-plan-for-exporting-the-view)
- adheres to as well.
+ As of the 2.0.0 release of the connector, the Optic query must use the
+ [op.fromView](https://docs.marklogic.com/op.fromView) accessor function. Future releases of both the connector and
+ MarkLogic will strive to relax this requirement.
+
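+ For example, here is a minimal PySpark sketch of reading rows with such a query. The data source name
+ (`marklogic`) and the option keys (`spark.marklogic.client.uri`, `spark.marklogic.read.opticQuery`), along with the
+ `Medical` schema and `Authors` view, are assumptions for illustration; consult the connector documentation for the
+ exact names:
+
+ ```python
+ from pyspark.sql import SparkSession
+
+ spark = SparkSession.builder.getOrCreate()
+
+ # The Optic query is passed to the connector as a string and must begin
+ # with op.fromView. The schema and view names here are hypothetical.
+ query = "op.fromView('Medical', 'Authors')"
+
+ df = (
+     spark.read.format("marklogic")  # assumed data source name
+     # Assumed option keys; check the connector docs for the exact names.
+     .option("spark.marklogic.client.uri", "user:password@localhost:8000")
+     .option("spark.marklogic.read.opticQuery", query)
+     .load()
+ )
+ df.show()
+ ```
+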
+ In addition, calls to `groupBy`, `orderBy`, `limit`, and `offset` should be performed via Spark instead of within
+ the initial Optic query. A key benefit of Spark and the MarkLogic connector is the ability to execute the query in
+ parallel via multiple Spark partitions. The aforementioned calls, if made in the Optic query, may not produce the
+ expected results if more than one Spark partition is used or if more than one request is made to MarkLogic. The
+ equivalent Spark operations should be called instead, or the connector should be configured to make a single request
+ to MarkLogic. See the "Pushing down operations" and "Tuning performance" sections below for more information.
+
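+ As a sketch of this guidance, reusing the hypothetical `df` from the example above, ordering and limiting are
+ expressed through the Spark DataFrame API so that they apply to the combined results from all partitions:
+
+ ```python
+ # Sorting and limiting applied via Spark, after rows are retrieved from
+ # all partitions, rather than inside the Optic query itself.
+ top_authors = df.orderBy("LastName").limit(10)  # "LastName" is a hypothetical column
+ top_authors.show()
+ ```
+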
+ Finally, the query must adhere to the handful of limitations imposed by the
+ [Optic Query DSL](https://docs.marklogic.com/guide/app-dev/OpticAPI#id_46710). A good practice for validating a
+ query is to run it in your [MarkLogic server's qconsole tool](https://docs.marklogic.com/guide/qconsole) in a buffer
+ with a query type of "Optic DSL".

## Schema inference