**File:** `_analyzers/supported-analyzers/standard.md` (57 additions, 34 deletions)
# Standard analyzer

The `standard` analyzer is the built-in default analyzer used for general-purpose full-text search in OpenSearch. It is designed to provide consistent, language-agnostic text processing by efficiently breaking down text into searchable terms.

The `standard` analyzer performs the following operations:

- **Tokenization**: Uses the [`standard`]({{site.url}}{{site.baseurl}}/analyzers/tokenizers/standard/) tokenizer, which splits text into words based on Unicode text segmentation rules, handling spaces, punctuation, and common delimiters.
- **Lowercasing**: Applies the [`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) token filter to convert all tokens to lowercase, ensuring consistent matching regardless of input case.

This combination makes the `standard` analyzer ideal for indexing a wide range of natural language content without needing language-specific customizations.
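As a quick illustration of both steps, you can run the analyzer directly through the `_analyze` API. The following is a minimal sketch; the sample text is illustrative:

```json
GET /_analyze
{
  "analyzer": "standard",
  "text": "The QUICK Brown-Foxes jumped!"
}
```
{% include copy-curl.html %}

For this input, the analyzer would produce the lowercase tokens `the`, `quick`, `brown`, `foxes`, and `jumped`, with the hyphen and exclamation mark discarded during tokenization.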

## Example: Creating an index with the standard analyzer

You can assign the `standard` analyzer to a text field when creating an index:

```json
PUT /my_standard_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "standard"
      }
    }
  }
}
```
{% include copy-curl.html %}
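Because `standard` is the default analyzer, omitting the `analyzer` setting here would produce the same behavior; specifying it explicitly simply makes the mapping self-documenting.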

## Parameters

The `standard` analyzer supports the following optional parameters.

| Parameter | Data type | Default | Description |
| :--- | :--- | :--- | :--- |
| `max_token_length` | Integer | `255` | The maximum length that a token can be before it is split. |
| `stopwords` | String or list of strings | None | A list of stopwords or a [predefined stopword set for a language]({{site.url}}{{site.baseurl}}/analyzers/token-filters/stop/#predefined-stopword-sets-by-language) to remove during analysis. For example, `_english_`. |
| `stopwords_path` | String | None | The path to a file containing stopwords to be used during analysis. |

Use only one of the parameters `stopwords` or `stopwords_path`. If both are used, no error is returned, but only the `stopwords` parameter is applied.
{: .note}

## Example: Analyzer with parameters

The following example creates an `animals` index and configures the `max_token_length` and `stopwords` parameters:

```json
PUT /animals
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_manual_stopwords_analyzer": {
          "type": "standard",
          "max_token_length": 10,
          "stopwords": [
            "the", "is", "and", "but", "an", "a", "it"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

Use the following `_analyze` API request to see how the `my_manual_stopwords_analyzer` processes text:
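The following is a minimal sketch of such a request; the sample sentence is an illustrative assumption:

```json
GET /animals/_analyze
{
  "analyzer": "my_manual_stopwords_analyzer",
  "text": "The extraordinarily patient lion is big but it is a friendly animal"
}
```
{% include copy-curl.html %}

With this configuration, the listed stopwords are removed after lowercasing, and any token longer than 10 characters, such as `extraordinarily`, is split at the 10-character boundary (yielding `extraordin` and `arily`).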

**File:** `_analyzers/token-filters/stop.md` (42 additions, 2 deletions)

The `stop` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`stopwords` | Optional | String | Specifies either a custom array of stopwords or a [predefined stopword set for a language](#predefined-stopword-sets-by-language). Default is `_english_`.
`stopwords_path` | Optional | String | Specifies the file path (absolute or relative to the config directory) of the file containing custom stopwords.
`ignore_case` | Optional | Boolean | If `true`, stopwords will be matched regardless of their case. Default is `false`.
`remove_trailing` | Optional | Boolean | If `true`, trailing stopwords will be removed during analysis. Default is `true`.
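A minimal sketch of how these parameters fit together in a custom analyzer follows; the index, filter, and analyzer names are illustrative:

```json
PUT /my_stop_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stop_filter": {
          "type": "stop",
          "stopwords": "_english_",
          "ignore_case": true
        }
      },
      "analyzer": {
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_stop_filter"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}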

## Predefined stopword sets by language

The following is a list of all available predefined stopword sets by language:

- [`_arabic_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt)
- [`_armenian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hy/stopwords.txt)
- [`_basque_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/eu/stopwords.txt)
- [`_bengali_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt)
- [`_brazilian_` (Brazilian Portuguese)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/br/stopwords.txt)
- [`_bulgarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt)
- [`_catalan_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ca/stopwords.txt)
- [`_cjk_` (Chinese, Japanese, and Korean)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cjk/stopwords.txt)
- [`_czech_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cz/stopwords.txt)
- [`_danish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/danish_stop.txt)
- [`_dutch_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/dutch_stop.txt)
- [`_english_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/java/org/apache/lucene/analysis/en/EnglishAnalyzer.java#L48) (Default)
- [`_estonian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/et/stopwords.txt)
- [`_finnish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/finnish_stop.txt)
- [`_french_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/french_stop.txt)
- [`_galician_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/gl/stopwords.txt)
- [`_german_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/german_stop.txt)
- [`_greek_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/el/stopwords.txt)
- [`_hindi_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt)
- [`_hungarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/hungarian_stop.txt)
- [`_indonesian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/id/stopwords.txt)
- [`_irish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ga/stopwords.txt)
- [`_italian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/italian_stop.txt)
- [`_latvian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lv/stopwords.txt)
- [`_lithuanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lt/stopwords.txt)
- [`_norwegian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/norwegian_stop.txt)
- [`_persian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt)
- [`_portuguese_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/portuguese_stop.txt)
- [`_romanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt)
- [`_russian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/russian_stop.txt)
- [`_sorani_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ckb/stopwords.txt)
- [`_spanish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/spanish_stop.txt)
- [`_swedish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/swedish_stop.txt)
- [`_thai_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/th/stopwords.txt)
- [`_turkish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/tr/stopwords.txt)

**File:** `_api-reference/index-apis/alias.md` (2 additions, 2 deletions)

All parameters are optional.

Parameter | Data type | Description
:--- | :--- | :---
cluster_manager_timeout | Time | The amount of time to wait for a response from the cluster manager node. Default is `30s`.
timeout | Time | The amount of time to wait for a response from the cluster. Default is `30s`.

In your request body, you need to specify what action to take, the alias name, and the index you want to associate with the alias. Other fields are optional.

Field | Data type | Description | Required
:--- | :--- | :--- | :---
actions | Array | Set of actions you want to perform on the index. Valid options are: `add`, `remove`, and `remove_index`. You must have at least one action in the array. | Yes
add | N/A | Adds an alias to the specified index. | No
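A minimal sketch of a request that uses these fields follows; the index and alias names are illustrative:

```json
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "logs-2024-01",
        "alias": "logs-current"
      }
    }
  ]
}
```
{% include copy-curl.html %}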

**File:** `_im-plugin/index-transforms/transforms-apis.md` (3 additions, 3 deletions)

### Path parameters

Parameter | Data type | Description
:--- | :--- | :---
transform_id | String | The transform ID.

### Request body fields

You can specify the following options in the HTTP request body:

Option | Data type | Description | Required
:--- | :--- | :--- | :---
enabled | Boolean | If `true`, the transform job is enabled at creation. | No
continuous | Boolean | Specifies whether the transform job should be continuous. Continuous jobs execute every time they are scheduled according to the `schedule` field and run based on newly transformed buckets as well as any new data added to source indexes. Non-continuous jobs execute only once. Default is `false`. | No
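For context, the following is a hedged sketch of a create transform request; the index names, group, and aggregation definitions are illustrative assumptions, not values from this change:

```json
PUT _plugins/_transform/sample_transform
{
  "transform": {
    "enabled": true,
    "continuous": false,
    "schedule": {
      "interval": {
        "period": 1,
        "unit": "Minutes",
        "start_time": 1602100553
      }
    },
    "description": "Sample transform job",
    "source_index": "sample_index",
    "target_index": "sample_target",
    "page_size": 1,
    "groups": [
      {
        "terms": {
          "source_field": "customer_gender",
          "target_field": "gender"
        }
      }
    ],
    "aggregations": {
      "quantity": {
        "sum": { "field": "total_quantity" }
      }
    }
  }
}
```
{% include copy-curl.html %}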