
Commit 1eeb901

AntonEliatra, kolchfa-aws, and natebower authored
updating standard analyzer docs (#9747)
* updating standard analyzer docs

* Update _analyzers/supported-analyzers/standard.md

* addressing the PR comments

* replacing add Data Type with Data type

* Update standard.md

* Apply suggestions from code review

* addressing the PR comments

* Apply suggestions from code review

* Apply suggestions from code review

* Update standard.md

---------

Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>
Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Signed-off-by: AntonEliatra <anton.rubin@eliatra.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Nathan Bower <nbower@amazon.com>
1 parent 5f2acf3 commit 1eeb901

7 files changed: +109 -46 lines changed

_analyzers/supported-analyzers/standard.md

Lines changed: 57 additions & 34 deletions
@@ -7,17 +7,19 @@ nav_order: 50
 
 # Standard analyzer
 
-The `standard` analyzer is the default analyzer used when no other analyzer is specified. It is designed to provide a basic and efficient approach to generic text processing.
+The `standard` analyzer is the built-in default analyzer used for general-purpose full-text search in OpenSearch. It is designed to provide consistent, language-agnostic text processing by efficiently breaking down text into searchable terms.
 
-This analyzer consists of the following tokenizers and token filters:
+The `standard` analyzer performs the following operations:
 
-- `standard` tokenizer: Removes most punctuation and splits text on spaces and other common delimiters.
-- `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
-- `stop` token filter: Removes common stopwords, such as "the", "is", and "and", from the tokenized output.
+- **Tokenization**: Uses the [`standard`]({{site.url}}{{site.baseurl}}/analyzers/tokenizers/standard/) tokenizer, which splits text into words based on Unicode text segmentation rules, handling spaces, punctuation, and common delimiters.
+- **Lowercasing**: Applies the [`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) token filter to convert all tokens to lowercase, ensuring consistent matching regardless of input case.
 
-## Example
+This combination makes the `standard` analyzer ideal for indexing a wide range of natural language content without needing language-specific customizations.
 
-Use the following command to create an index named `my_standard_index` with a `standard` analyzer:
+
+## Example: Creating an index with the standard analyzer
+
+You can assign the `standard` analyzer to a text field when creating an index:
 
 ```json
 PUT /my_standard_index
@@ -26,41 +28,43 @@ PUT /my_standard_index
     "properties": {
       "my_field": {
         "type": "text",
-      "analyzer": "standard"
+        "analyzer": "standard"
       }
     }
   }
 }
 ```
 {% include copy-curl.html %}
 
+
 ## Parameters
 
-You can configure a `standard` analyzer with the following parameters.
+The `standard` analyzer supports the following optional parameters.
 
-Parameter | Required/Optional | Data type | Description
-:--- | :--- | :--- | :---
-`max_token_length` | Optional | Integer | Sets the maximum length of the produced token. If this length is exceeded, the token is split into multiple tokens at the length configured in `max_token_length`. Default is `255`.
-`stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
-`stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to the file containing a list of stop words.
+| Parameter | Data type | Default | Description |
+|:----------|:-----|:--------|:------------|
+| `max_token_length` | Integer | `255` | The maximum length that a token can be before it is split. |
+| `stopwords` | String or list of strings | None | A list of stopwords or a [predefined stopword set for a language]({{site.url}}{{site.baseurl}}/analyzers/token-filters/stop/#predefined-stopword-sets-by-language) to remove during analysis. For example, `_english_`. |
+| `stopwords_path` | String | None | The path to a file containing stopwords to be used during analysis. |
 
+Use only one of the parameters `stopwords` or `stopwords_path`. If both are used, no error is returned, but only the `stopwords` parameter is applied.
+{: .note}
 
-## Configuring a custom analyzer
+## Example: Analyzer with parameters
 
-Use the following command to configure an index with a custom analyzer that is equivalent to the `standard` analyzer:
+The following example creates an `animals` index and configures the `max_token_length` and `stopwords` parameters:
 
 ```json
-PUT /my_custom_index
+PUT /animals
 {
   "settings": {
     "analysis": {
       "analyzer": {
-        "my_custom_analyzer": {
-          "type": "custom",
-          "tokenizer": "standard",
-          "filter": [
-            "lowercase",
-            "stop"
+        "my_manual_stopwords_analyzer": {
+          "type": "standard",
+          "max_token_length": 10,
+          "stopwords": [
+            "the", "is", "and", "but", "an", "a", "it"
           ]
         }
       }
@@ -70,28 +74,47 @@ PUT /my_custom_index
 ```
 {% include copy-curl.html %}
 
-## Generated tokens
-
-Use the following request to examine the tokens generated using the analyzer:
+Use the following `_analyze` API request to see how the `my_manual_stopwords_analyzer` processes text:
 
 ```json
-POST /my_custom_index/_analyze
+POST /animals/_analyze
 {
-  "analyzer": "my_custom_analyzer",
-  "text": "The slow turtle swims away"
+  "analyzer": "my_manual_stopwords_analyzer",
+  "text": "The Turtle is Large but it is Slow"
 }
 ```
 {% include copy-curl.html %}
 
-The response contains the generated tokens:
+The returned tokens:
+
+- Have been split on spaces.
+- Have been lowercased.
+- Have had stopwords removed.
 
 ```json
 {
   "tokens": [
-    {"token": "slow","start_offset": 4,"end_offset": 8,"type": "<ALPHANUM>","position": 1},
-    {"token": "turtle","start_offset": 9,"end_offset": 15,"type": "<ALPHANUM>","position": 2},
-    {"token": "swims","start_offset": 16,"end_offset": 21,"type": "<ALPHANUM>","position": 3},
-    {"token": "away","start_offset": 22,"end_offset": 26,"type": "<ALPHANUM>","position": 4}
+    {
+      "token": "turtle",
+      "start_offset": 4,
+      "end_offset": 10,
+      "type": "<ALPHANUM>",
+      "position": 1
+    },
+    {
+      "token": "large",
+      "start_offset": 14,
+      "end_offset": 19,
+      "type": "<ALPHANUM>",
+      "position": 3
+    },
+    {
+      "token": "slow",
+      "start_offset": 30,
+      "end_offset": 34,
+      "type": "<ALPHANUM>",
+      "position": 7
+    }
   ]
 }
 ```
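The new example relies on the `stopwords` parameter to filter terms. For contrast, the `standard` analyzer with no parameters removes nothing, because its `stopwords` default is none. A quick sketch using the bare `_analyze` endpoint, which requires no index:

```json
POST /_analyze
{
  "analyzer": "standard",
  "text": "The Turtle is Large but it is Slow"
}
```

The expected tokens are `the`, `turtle`, `is`, `large`, `but`, `it`, `is`, and `slow`: lowercased and split on whitespace, with no positions skipped.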

_analyzers/token-filters/stop.md

Lines changed: 42 additions & 2 deletions
@@ -17,7 +17,7 @@ The `stop` token filter can be configured with the following parameters.
 
 Parameter | Required/Optional | Data type | Description
 :--- | :--- | :--- | :---
-`stopwords` | Optional | String | Specifies either a custom array of stopwords or a language for which to fetch the predefined Lucene stopword list:<br><br>- [`_arabic_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt)<br>- [`_armenian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hy/stopwords.txt)<br>- [`_basque_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/eu/stopwords.txt)<br>- [`_bengali_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt)<br>- [`_brazilian_` (Brazilian Portuguese)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/br/stopwords.txt)<br>- [`_bulgarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt)<br>- [`_catalan_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ca/stopwords.txt)<br>- [`_cjk_` (Chinese, Japanese, and Korean)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cjk/stopwords.txt)<br>- [`_czech_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cz/stopwords.txt)<br>- [`_danish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/danish_stop.txt)<br>- [`_dutch_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/dutch_stop.txt)<br>- [`_english_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/java/org/apache/lucene/analysis/en/EnglishAnalyzer.java#L48) (Default)<br>- [`_estonian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/et/stopwords.txt)<br>- [`_finnish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/finnish_stop.txt)<br>- [`_french_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/french_stop.txt)<br>- [`_galician_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/gl/stopwords.txt)<br>- [`_german_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/german_stop.txt)<br>- [`_greek_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/el/stopwords.txt)<br>- [`_hindi_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt)<br>- [`_hungarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/hungarian_stop.txt)<br>- [`_indonesian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/id/stopwords.txt)<br>- [`_irish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ga/stopwords.txt)<br>- [`_italian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/italian_stop.txt)<br>- [`_latvian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lv/stopwords.txt)<br>- [`_lithuanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lt/stopwords.txt)<br>- [`_norwegian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/norwegian_stop.txt)<br>- [`_persian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt)<br>- [`_portuguese_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/portuguese_stop.txt)<br>- [`_romanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt)<br>- [`_russian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/russian_stop.txt)<br>- [`_sorani_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/sr/stopwords.txt)<br>- [`_spanish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ckb/stopwords.txt)<br>- [`_swedish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/swedish_stop.txt)<br>- [`_thai_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/th/stopwords.txt)<br>- [`_turkish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/tr/stopwords.txt)
+`stopwords` | Optional | String | Specifies either a custom array of stopwords or a [predefined stopword set for a language](#predefined-stopword-sets-by-language). Default is `_english_`.
 `stopwords_path` | Optional | String | Specifies the file path (absolute or relative to the config directory) of the file containing custom stopwords.
 `ignore_case` | Optional | Boolean | If `true`, stopwords will be matched regardless of their case. Default is `false`.
 `remove_trailing` | Optional | Boolean | If `true`, trailing stopwords will be removed during analysis. Default is `true`.
@@ -108,4 +108,44 @@ The response contains the generated tokens:
     }
   ]
 }
-```
+```
+
+## Predefined stopword sets by language
+
+The following is a list of all available predefined stopword sets by language:
+
+- [`_arabic_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt)
+- [`_armenian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hy/stopwords.txt)
+- [`_basque_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/eu/stopwords.txt)
+- [`_bengali_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt)
+- [`_brazilian_` (Brazilian Portuguese)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/br/stopwords.txt)
+- [`_bulgarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt)
+- [`_catalan_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ca/stopwords.txt)
+- [`_cjk_` (Chinese, Japanese, and Korean)](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cjk/stopwords.txt)
+- [`_czech_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/cz/stopwords.txt)
+- [`_danish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/danish_stop.txt)
+- [`_dutch_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/dutch_stop.txt)
+- [`_english_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/java/org/apache/lucene/analysis/en/EnglishAnalyzer.java#L48)
+- [`_estonian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/et/stopwords.txt)
+- [`_finnish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/finnish_stop.txt)
+- [`_french_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/french_stop.txt)
+- [`_galician_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/gl/stopwords.txt)
+- [`_german_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/german_stop.txt)
+- [`_greek_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/el/stopwords.txt)
+- [`_hindi_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt)
+- [`_hungarian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/hungarian_stop.txt)
+- [`_indonesian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/id/stopwords.txt)
+- [`_irish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ga/stopwords.txt)
+- [`_italian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/italian_stop.txt)
+- [`_latvian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lv/stopwords.txt)
+- [`_lithuanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/lt/stopwords.txt)
+- [`_norwegian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/norwegian_stop.txt)
+- [`_persian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt)
+- [`_portuguese_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/portuguese_stop.txt)
+- [`_romanian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt)
+- [`_russian_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/russian_stop.txt)
+- [`_sorani_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/ckb/stopwords.txt)
+- [`_spanish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/spanish_stop.txt)
+- [`_swedish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/snowball/swedish_stop.txt)
+- [`_thai_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/th/stopwords.txt)
+- [`_turkish_`](https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/resources/org/apache/lucene/analysis/tr/stopwords.txt)
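The parameters in the table above combine in the obvious way. As a sketch (the index, filter, and analyzer names here are hypothetical), a custom analyzer can match stopwords case-insensitively by pairing a predefined set with `ignore_case`:

```json
PUT /my-stop-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stop_filter": {
          "type": "stop",
          "stopwords": "_english_",
          "ignore_case": true
        }
      },
      "analyzer": {
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["my_stop_filter", "lowercase"]
        }
      }
    }
  }
}
```

With `ignore_case` set to `true`, tokens like `The` are treated as stopwords even though the filter runs before `lowercase` in the chain.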

_api-reference/index-apis/alias.md

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@ POST _aliases
 
 All parameters are optional.
 
-Parameter | Data Type | Description
+Parameter | Data type | Description
 :--- | :--- | :---
 cluster_manager_timeout | Time | The amount of time to wait for a response from the cluster manager node. Default is `30s`.
 timeout | Time | The amount of time to wait for a response from the cluster. Default is `30s`.
@@ -34,7 +34,7 @@ timeout | Time | The amount of time to wait for a response from the cluster. Def
 
 In your request body, you need to specify what action to take, the alias name, and the index you want to associate with the alias. Other fields are optional.
 
-Field | Data Type | Description | Required
+Field | Data type | Description | Required
 :--- | :--- | :--- | :---
 actions | Array | Set of actions you want to perform on the index. Valid options are: `add`, `remove`, and `remove_index`. You must have at least one action in the array. | Yes
 add | N/A | Adds an alias to the specified index. | No
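For reference, the request body these fields describe takes the following shape. This is a minimal sketch; the index and alias names are hypothetical:

```json
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index",
        "alias": "my-alias"
      }
    }
  ]
}
```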

_im-plugin/index-transforms/transforms-apis.md

Lines changed: 3 additions & 3 deletions
@@ -28,15 +28,15 @@ PUT _plugins/_transform/<transform_id>
 
 ### Path parameters
 
-Parameter | Data Type | Description
+Parameter | Data type | Description
 :--- | :--- | :---
 transform_id | String | Transform ID |
 
 ### Request body fields
 
 You can specify the following options in the HTTP request body:
 
-Option | Data Type | Description | Required
+Option | Data type | Description | Required
 :--- | :--- | :--- | :---
 enabled | Boolean | If true, the transform job is enabled at creation. | No
 continuous | Boolean | Specifies whether the transform job should be continuous. Continuous jobs execute every time they are scheduled according to the `schedule` field and run based off of newly transformed buckets as well as any new data added to source indexes. Non-continuous jobs execute only once. Default is `false`. | No
@@ -184,7 +184,7 @@ Parameter | Description | Required
 
 You can update the following fields.
 
-Option | Data Type | Description
+Option | Data type | Description
 :--- | :--- | :---
 schedule | Object | The schedule for the transform job. Contains the fields `interval.start_time`, `interval.period`, and `interval.unit`.
 start_time | Integer | The Unix epoch start time of the transform job.
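As a sketch of the `schedule` object described in the last table, using the nested fields the table names (`interval.start_time`, `interval.period`, and `interval.unit`); the values shown are hypothetical:

```json
{
  "schedule": {
    "interval": {
      "start_time": 1701000000000,
      "period": 1,
      "unit": "Minutes"
    }
  }
}
```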

_search-plugins/sql/datatypes.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 ---
 layout: default
-title: Data Types
+title: Data types
 parent: SQL and PPL
 nav_order: 7
 ---
