client/src/commonMain/kotlin/com/algolia/client/api/IngestionClient.kt
34 additions & 2 deletions
@@ -922,14 +922,46 @@ public class IngestionClient(
   }

   /**
-   * Push a `batch` request payload through the Pipeline. You can check the status of task pushes with the observability endpoints.
+   * Pushes records through the Pipeline, directly to an index. You can make the call synchronous by providing the `watch` parameter. For asynchronous calls, you can use the observability endpoints and/or the debugger dashboard to see the status of your task. If you want to leverage the [pre-indexing data transformation](https://www.algolia.com/doc/guides/sending-and-managing-data/send-and-update-your-data/how-to/transform-your-data/), this is the recommended way of ingesting your records. This method is similar to `pushTask`, but requires an `indexName` instead of a `taskID`. If zero or many tasks are found, an error will be returned.
+   *
+   * Required API Key ACLs:
+   * - addObject
+   * - deleteIndex
+   * - editSettings
+   * @param indexName Name of the index on which to perform the operation.
+   * @param pushTaskPayload
+   * @param watch When provided, the push operation will be synchronous and the API will wait for the ingestion to be finished before responding.
+   */
+    require(indexName.isNotBlank()) { "Parameter `indexName` is required when calling `push`." }
+    val requestConfig = RequestConfig(
+      method = RequestMethod.POST,
+      path = listOf("1", "push", "$indexName"),
+      query = buildMap {
+        watch?.let { put("watch", it) }
+      },
+      body = pushTaskPayload,
+    )
+    return requester.execute(
+      requestConfig = requestConfig,
+      requestOptions = RequestOptions(
+        readTimeout = 180000.milliseconds,
+        writeTimeout = 180000.milliseconds,
+        connectTimeout = 180000.milliseconds,
+      ) + requestOptions,
+    )
+  }
+
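The added `push` body above assembles the endpoint path from segments and adds `watch` to the query string only when it is provided. A minimal, self-contained sketch of that request-assembly logic, for illustration (the `PushRequest` type and `buildPushRequest` helper are hypothetical, not part of the actual client):

```kotlin
// Hypothetical illustration of the request assembly inside `push`;
// these names are not part of the IngestionClient API.
data class PushRequest(val path: String, val query: Map<String, String>)

fun buildPushRequest(indexName: String, watch: Boolean? = null): PushRequest {
    // Mirrors the validation performed by the client method.
    require(indexName.isNotBlank()) { "Parameter `indexName` is required when calling `push`." }
    // Path segments "1", "push", indexName joined into "/1/push/<indexName>".
    val path = listOf("1", "push", indexName).joinToString(separator = "/", prefix = "/")
    // `watch` is only added to the query string when the caller provides it.
    val query = buildMap {
        watch?.let { put("watch", it.toString()) }
    }
    return PushRequest(path, query)
}

fun main() {
    println(buildPushRequest("movies", watch = true))
    // PushRequest(path=/1/push/movies, query={watch=true})
}
```

Omitting `watch` yields an empty query map, i.e. an asynchronous push.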
+  /**
+   * Pushes records through the Pipeline, directly to an index. You can make the call synchronous by providing the `watch` parameter. For asynchronous calls, you can use the observability endpoints and/or the debugger dashboard to see the status of your task. If you want to leverage the [pre-indexing data transformation](https://www.algolia.com/doc/guides/sending-and-managing-data/send-and-update-your-data/how-to/transform-your-data/), this is the recommended way of ingesting your records. This method is similar to `push`, but requires a `taskID` instead of an `indexName`, which is useful when many `destinations` target the same `indexName`.
    *
    * Required API Key ACLs:
    * - addObject
    * - deleteIndex
    * - editSettings
    * @param taskID Unique identifier of a task.
-   * @param pushTaskPayload Request body of a Search API `batch` request that will be pushed in the Connectors pipeline.
+   * @param pushTaskPayload
    * @param watch When provided, the push operation will be synchronous and the API will wait for the ingestion to be finished before responding.
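The removed `@param` description said that `pushTaskPayload` carries the request body of a Search API `batch` request, i.e. a list of (action, record) operations. A hedged sketch of what such a payload could look like in plain Kotlin (all types below are illustrative assumptions; the client ships its own generated `PushTaskPayload` model):

```kotlin
// Illustrative stand-ins for the generated payload model; not the real types.
enum class Action { ADD_OBJECT, UPDATE_OBJECT, DELETE_OBJECT }

data class PushTaskRecord(val action: Action, val body: Map<String, Any?>)

data class PushTaskPayload(val records: List<PushTaskRecord>)

// Hypothetical convenience helper: wraps each record body in an addObject operation.
fun payloadOf(vararg bodies: Map<String, Any?>): PushTaskPayload =
    PushTaskPayload(bodies.map { PushTaskRecord(Action.ADD_OBJECT, it) })

fun main() {
    val payload = payloadOf(
        mapOf("objectID" to "1", "name" to "record one"),
        mapOf("objectID" to "2", "name" to "record two"),
    )
    println(payload.records.size)
    // 2
}
```

The same payload type serves both methods: `push` routes it via an `indexName`, while `pushTask` routes it via a `taskID`.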