Commit be46f9c: Update Vale (#208)

1 parent: 136a3e8

55 files changed: +198 -242 lines changed

.github/CONTRIBUTING.md (1 addition, 1 deletion)

@@ -19,7 +19,7 @@ If you want to contribute but don’t know where to start, browse the open issue
 ## Guidelines for authors
 
 - In your contributions, comply with the [Google style guide](https://developers.google.com/style). Use Vale to check your contribution for stylistic consistency.
-- Before starting work on an issue, search the repo for open or closed pull requests (PRs) that relate to your submission to avoid duplicate effort.
+- Before starting work on an issue, search the repo for open or closed pull requests (PRs) that relate to your submission to avoid duplicate effort.
 - Associate each PR with a specific Issue. If an issue doesn’t exist, create it first.
 - Only create a PR if you intend to merge it soon. If your work isn’t ready for review, keep it as a branch.
 - When you create a PR, add a descriptive title that starts with an action verb (add, update, fix, etc.). Reference all supporting material in the description to make the reviewer’s task easier.

.vale.ini (7 additions, 4 deletions)

@@ -6,23 +6,26 @@ Vocab = docs
 
 Packages = Google
 
-IgnoredScopes = code, tt, img, url, a
+IgnoredScopes = code, tt, img, url, a, link, blockquote
 
 SkippedScopes = script, style, pre, figure, code
 
 # Treat MDX as Markdown
 [formats]
 mdx = md
 
-[*.{md, mdx}]
+[*.{md,mdx}]
 
-BasedOnStyles = Vale, Google, docs
+BasedOnStyles = Google, docs
+CommentDelimiters = {/*, */}
 
 # For now, ignore rules because they give too many false positives
 Google.Passive = NO
 Google.Acronyms = NO
 Google.Headings = NO
 Google.Parens = NO
+Google.Colons = NO
 
 # Ignore code surrounded by backticks or plus sign, parameters defaults, URLs, and angle brackets.
-# TokenIgnores = (<\/?[A-Z].+>), (\x60[^\n\x60]+\x60), ([^\n]+=[^\n]*), (\+[^\n]+\+), (http[^\n]+\[)
+TokenIgnores = (<\/?[A-Z].+>), (\x60[^\n\x60]+\x60), ([^\n]+=[^\n]*), (\+[^\n]+\+), (http[^\n]+\[)
+BlockIgnores = (```[a-z]*[\s\S]*?\n```), (---[\s\S]*?\n---), (keywords: [\s\S]*?\n), (sidebarTitle: [\s\S]*?\n)
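The new settings work together: `BlockIgnores` skips fenced code blocks and `---`-delimited frontmatter via the listed regexes, and `CommentDelimiters` tells Vale that comments in these files use MDX syntax. A hypothetical `.mdx` snippet showing what the updated config would affect (the rule name is illustrative, and the `vale Rule = NO` control-comment syntax is an assumption based on Vale's standard comment controls):

````mdx
---
title: Example page
sidebarTitle: Example
---

{/* vale Google.Headings = NO */}

## A Heading That Would Otherwise Trip The Rule

```js
// fenced code is skipped via the first BlockIgnores pattern
const x = 1
```
````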

apl/aggregation-function/dcount.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: dcount
 description: 'This page explains how to use the dcount aggregation function in APL.'
 ---
 
-The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files.
+The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files.
 
 Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in application logs, or unique trace IDs in OpenTelemetry traces.
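As a quick sketch of the usage this file describes, a distinct-user count might look like this (the `sample-http-logs` dataset and `id` field are assumptions borrowed from other pages in these docs):

```kusto
['sample-http-logs']
| summarize distinct_users = dcount(id)
```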

apl/aggregation-function/dcountif.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: dcountif
 description: 'This page explains how to use the dcountif aggregation function in APL.'
 ---
 
-The `dcountif` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column that meet a specific condition. This is useful when you want to filter records and count only the unique occurrences that satisfy a given criterion.
+The `dcountif` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column that meet a specific condition. This is useful when you want to filter records and count only the unique occurrences that satisfy a given criterion.
 
 Use `dcountif` in scenarios where you need a distinct count but only for a subset of the data, such as counting unique users from a specific region, unique error codes for specific HTTP statuses, or distinct traces that match a particular service type.
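A conditional distinct count like the scenarios above could be sketched as follows (dataset and field names are illustrative assumptions):

```kusto
['sample-http-logs']
| summarize server_error_users = dcountif(id, status == '500')
```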

apl/aggregation-function/make-list.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: make_list
 description: 'This page explains how to use the make_list aggregation function in APL.'
 ---
 
-The `make_list` aggregation function in Axiom Processing Language (APL) collects all values from a specified column into a dynamic array for each group of rows in a dataset. This aggregation is particularly useful when you want to consolidate multiple values from distinct rows into a single grouped result.
+The `make_list` aggregation function in Axiom Processing Language (APL) collects all values from a specified column into a dynamic array for each group of rows in a dataset. This aggregation is particularly useful when you want to consolidate multiple values from distinct rows into a single grouped result.
 
 For example, if you have multiple log entries for a particular user, you can use `make_list` to gather all request URIs accessed by that user into a single list. You can also apply `make_list` to various contexts, such as trace aggregation, log analysis, or security monitoring, where collating related events into a compact form is needed.
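The per-user URI example in the paragraph above could be sketched as (field names are assumptions based on the sample dataset):

```kusto
['sample-http-logs']
| summarize uris = make_list(uri) by id
```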

apl/aggregation-function/make-set.mdx (1 addition, 1 deletion)

@@ -5,7 +5,7 @@ description: 'This page explains how to use the make_set aggregation function in
 
 The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It is useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis.
 
-You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data.
+You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data.
 
 ## For users of other query languages
 
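The unique-HTTP-methods case mentioned above might be written as (field names are illustrative assumptions):

```kusto
['sample-http-logs']
| summarize methods = make_set(method) by id
```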

apl/aggregation-function/percentiles-arrayif.mdx (0 additions, 1 deletion)

@@ -92,7 +92,6 @@ You can use `percentiles_arrayif` to analyze request durations in HTTP logs whil
 | 1.981 ms |
 | 2.612 ms |
 
-
 This query filters records to those with a status of 200 and returns the percentile values for the request durations.
 
 </Tab>

apl/aggregation-function/topk.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: topk
 description: 'This page explains how to use the topk aggregation function in APL.'
 ---
 
-The `topk` aggregation in Axiom Processing Language (APL) allows you to identify the top `k` results based on a specified field. This is especially useful when you want to quickly analyze large datasets and extract the most significant values, such as the top-performing queries, most frequent errors, or highest latency requests.
+The `topk` aggregation in Axiom Processing Language (APL) allows you to identify the top `k` results based on a specified field. This is especially useful when you want to quickly analyze large datasets and extract the most significant values, such as the top-performing queries, most frequent errors, or highest latency requests.
 
 Use `topk` to find the most common or relevant entries in datasets, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you focus on the most important data points, filtering out the noise.
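A minimal sketch of the described usage, finding the five most frequent values of a field (the dataset and `status` field are assumptions):

```kusto
['sample-http-logs']
| summarize topk(status, 5)
```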

apl/introduction.mdx (5 additions, 5 deletions)

@@ -1,14 +1,14 @@
 ---
-title: 'Axiom Processing Language (APL)'
-description: 'This section explains how to use the Axiom Processing Language to get deeper insights from your data.'
+title: "Axiom Processing Language (APL)"
+description: "This section explains how to use the Axiom Processing Language to get deeper insights from your data."
 sidebarTitle: Introduction
 icon: door-open
 keywords: ['axiom documentation', 'documentation', 'axiom', 'APL', 'axiom processing language', 'data explorer', 'getiing started guide', 'summarize', 'filter']
 ---
 
 import Prerequisites from "/snippets/minimal-prerequisites.mdx"
 
-The Axiom Processing Language (APL) is a query language that is perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
+The Axiom Processing Language (APL) is a query language that’s perfect for getting deeper insights from your data. Whether logs, events, analytics, or similar, APL provides the flexibility to filter, manipulate, and summarize your data exactly the way you need it.
 
 <Prerequisites />
 

@@ -17,7 +17,7 @@ The Axiom Processing Language (APL) is a query language that is perfect for gett
 APL queries consist of the following:
 
 - **Data source:** The most common data source is one of your Axiom datasets.
-- **Operators:** Operators filter, manipulate, and summarize your data.
+- **Operators:** Operators filter, manipulate, and summarize your data.
 
 Delimit operators with the pipe character (`|`).
 

@@ -49,7 +49,7 @@ Apart from Axiom datasets, you can use other data sources:
 
 [Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issue-comment-event'%5D%20%7C%20extend%20isBot%20%3D%20actor%20contains%20'-bot'%20or%20actor%20contains%20'%5Bbot%5D'%20%7C%20where%20isBot%20%3D%3D%20true%20%7C%20summarize%20count()%20by%20bin_auto(_time)%2C%20actor%22%7D)
 
-The query above uses a dataset called `github-issue-comment-event` as its data source. It uses the follwing operators:
+The query above uses a dataset called `github-issue-comment-event` as its data source. It uses the following operators:
 
 - [extend](/apl/tabular-operators/extend-operator) adds a new field `isBot` to the query results. It sets the values of the new field to true if the values of the `actor` field in the original dataset contain `-bot` or `[bot]`.
 - [where](/apl/tabular-operators/where-operator) filters for the values of the `isBot` field. It only returns rows where the value is true.
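For reference, the query encoded in the playground link in this hunk decodes to:

```kusto
['github-issue-comment-event']
| extend isBot = actor contains '-bot' or actor contains '[bot]'
| where isBot == true
| summarize count() by bin_auto(_time), actor
```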

apl/scalar-functions/array-functions/array-iff.mdx (1 addition, 1 deletion)

@@ -99,7 +99,7 @@ With OpenTelemetry trace data, you can use `array_iff` to filter spans based on
 | order by _time desc
 | limit 1000
 | summarize is_server = make_list(kind == 'server'), duration_list = make_list(duration)
-| project server_durations = array_iff(is_server, duration_list, 0)
+| project server_durations = array_iff(is_server, duration_list, 0)
 ```
 
 [Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20order%20by%20_time%20desc%20%7C%20limit%201000%20%7C%20summarize%20is_server%20%3D%20make_list(kind%20%3D%3D%20'server')%2C%20duration_list%20%3D%20make_list(duration)%20%7C%20project%20%20server_durations%20%3D%20array_iff(is_server%2C%20duration_list%2C%200)%22%7D)

apl/scalar-functions/array-functions/array-index-of.mdx (1 addition, 2 deletions)

@@ -60,7 +60,6 @@ array_index_of(array, lookup_value, [start], [length], [occurrence])
 | length | number | No | Number of values to examine. A value of `-1` means unlimited length. |
 | occurrence | number | No | The number of the occurrence. By default `1`. |
 
-
 ### Returns
 
 `array_index_of` returns the zero-based index of the first occurrence of the specified `lookup_value` in `array`. If `lookup_value` doesn’t exist in the array, it returns `-1`.

@@ -101,7 +100,7 @@ In OpenTelemetry traces, you can find the position of a specific `service.name`
 ```kusto
 ['otel-demo-traces']
 | take 50
-| summarize service_array = make_list(['service.name'])
+| summarize service_array = make_list(['service.name'])
 | extend frontend_index = array_index_of(service_array, 'frontend')
 ```

apl/scalar-functions/array-functions/array-length.mdx (0 additions, 1 deletion)

@@ -86,7 +86,6 @@ This query finds spans associated with at least three events.
 
 ## List of related functions
 
-
 - [array_slice](/apl/scalar-functions/array-functions/array-slice): Extracts a subset of elements from an array.
 - [array_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.
 - [array_shift_left](/apl/scalar-functions/array-functions/array-shift-left): Shifts array elements one position to the left, moving the first element to the last position.

apl/scalar-functions/array-functions/strcat-array.mdx (0 additions, 1 deletion)

@@ -91,7 +91,6 @@ This query summarizes unique HTTP method and URL combinations into a single, rea
 
 ## List of related functions
 
-
 - [array_length](/apl/scalar-functions/array-functions/array-length): Returns the number of elements in an array.
 - [array_index_of](/apl/scalar-functions/array-functions/array-index-of): Finds the index of an element in an array.
 - [array_concat](/apl/scalar-functions/array-functions/array-concat): Combines multiple arrays.

apl/scalar-functions/conversion-functions.mdx (0 additions, 5 deletions)

@@ -70,7 +70,6 @@ In this example, the value of `newstatus` is the value of `status` because the `
 
 In this example, the query is prepared for a field named `upcoming_field` that is expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available.
 
-
 ```kusto
 ['sample-http-logs']
 | extend new_field = ensure_field("upcoming_field", typeof(int))

@@ -374,13 +373,10 @@ isbool("pow") == false
 }
 ```
 
----
-
 ## toint()
 
 Converts the input to an integer value (signed 64-bit) number representation.
 
-
 ### Arguments
 
 - Value: The value to convert to an [integer](/apl/data-types/scalar-data-types#the-int-data-type).

@@ -389,7 +385,6 @@ Converts the input to an integer value (signed 64-bit) number representation.
 
 If the conversion is successful, the result will be an integer. Otherwise, the result will be `null`.
 
-
 ### Examples
 
 ```kusto

apl/scalar-functions/ip-functions/format-ipv4.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: format_ipv4
 description: 'This page explains how to use the format_ipv4 function in APL.'
 ---
 
-The `format_ipv4` function in APL converts a numeric representation of an IPv4 address into its standard dotted-decimal format. This function is particularly useful when working with logs or datasets where IP addresses are stored as integers, making them hard to interpret directly.
+The `format_ipv4` function in APL converts a numeric representation of an IPv4 address into its standard dotted-decimal format. This function is particularly useful when working with logs or datasets where IP addresses are stored as integers, making them hard to interpret directly.
 
 You can use `format_ipv4` to enhance log readability, enrich security logs, or convert raw telemetry data for analysis.
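A minimal sketch of the conversion this file describes (the dataset is an assumption; the literal 3232235777 is the integer form of 192.168.1.1):

```kusto
['sample-http-logs']
| extend ip_text = format_ipv4(3232235777)
```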

apl/scalar-functions/ip-functions/ipv4-is-in-any-range.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: ipv4_is_in_any_range
 description: 'This page explains how to use the ipv4_is_in_any_range function in APL.'
 ---
 
-The `ipv4_is_in_any_range` function checks whether a given IPv4 address belongs to any range of IPv4 subnets. You can use it to evaluate whether an IP address falls within a set of CIDR blocks or IP ranges, which is useful for filtering, monitoring, or analyzing network traffic in your datasets.
+The `ipv4_is_in_any_range` function checks whether a given IPv4 address belongs to any range of IPv4 subnets. You can use it to evaluate whether an IP address falls within a set of CIDR blocks or IP ranges, which is useful for filtering, monitoring, or analyzing network traffic in your datasets.
 
 This function is particularly helpful for security monitoring, analyzing log data for specific geolocated traffic, or validating access based on allowed IP ranges.
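A sketch of the filtering use case above, assuming a hypothetical `client_ip` field and illustrative private CIDR ranges:

```kusto
['sample-http-logs']
| where ipv4_is_in_any_range(client_ip, '10.0.0.0/8', '192.168.0.0/16')
```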

apl/scalar-functions/ip-functions/ipv4-is-match.mdx (1 addition, 1 deletion)

@@ -64,7 +64,7 @@ ipv4_is_match(ipaddress1, ipaddress2, prefix)
 
 ## Use case example
 
-The `ipv4_is_match` function allows you to identify traffic based on IP addresses, enabling faster identification of traffic patterns and potential issues.
+The `ipv4_is_match` function allows you to identify traffic based on IP addresses, enabling faster identification of traffic patterns and potential issues.
 
 **Query**
 

apl/scalar-functions/metadata-functions/ingestion_time.mdx (2 additions, 2 deletions)

@@ -3,7 +3,7 @@ title: ingestion_time
 description: 'This page explains how to use the ingestion_time function in APL.'
 ---
 
-Use the `ingestion_time` function to retrieve the timestamp of when each record was ingested into Axiom. This function helps you distinguish between the original event time (as captured in the `_time` field) and the time the data was actually received by Axiom.
+Use the `ingestion_time` function to retrieve the timestamp of when each record was ingested into Axiom. This function helps you distinguish between the original event time (as captured in the `_time` field) and the time the data was actually received by Axiom.
 
 You can use `ingestion_time` to:
 

@@ -28,7 +28,7 @@ Splunk provides the `_indextime` field, which represents when an event was index
 ````
 
 ```kusto APL equivalent
-...
+...
 | extend ingest_time = ingestion_time()
 ```
 
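The event-time versus ingestion-time distinction described above can be sketched as follows (the dataset is an assumption):

```kusto
['sample-http-logs']
| extend ingest_delay = ingestion_time() - _time
```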

apl/scalar-functions/type-functions/isstring.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: isstring
 description: 'This page explains how to use the isstring function in APL.'
 ---
 
-Use the `isstring` function to determine whether a value is of type string. This function is especially helpful when working with heterogeneous datasets where field types are not guaranteed, or when ingesting data from sources with loosely structured or mixed schemas.
+Use the `isstring` function to determine whether a value is of type string. This function is especially helpful when working with heterogeneous datasets where field types are not guaranteed, or when ingesting data from sources with loosely structured or mixed schemas.
 
 You can use `isstring` to:
 - Filter rows based on whether a field is a string.
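The string-type filter described above could be sketched as (dataset and field are assumptions):

```kusto
['sample-http-logs']
| where isstring(status)
| summarize count() by status
```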

apl/tabular-operators/project-keep-operator.mdx (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ title: project-keep
 description: 'This page explains how to use the project-keep operator function in APL.'
 ---
 
-The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator's parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
+The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator's parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
 
 You can use `project-keep` when you need to focus on particular data points, such as in log analysis, security event monitoring, or extracting key fields from traces.
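A minimal sketch of the field selection this file describes (field names are assumptions based on the sample dataset):

```kusto
['sample-http-logs']
| project-keep status, method, uri
```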

apl/tabular-operators/where-operator.mdx (1 addition, 1 deletion)

@@ -135,7 +135,7 @@ This query helps filter logs to investigate activity originating from a specific
 
 The `* has` pattern in APL is a dynamic and powerful tool within the `where` operator. It offers you the flexibility to search for specific substrings across all fields in a dataset without the need to specify each field name individually. This becomes especially advantageous when dealing with datasets that have numerous or dynamically named fields.
 
-`where * has` is an expensive operation because it searches all fields. For a more efficient query, explicitly list the fields in which you want to search. For example: `where firstName has "miguel" or lastName has "miguel"`.
+`where * has` is an expensive operation because it searches all fields. For a more efficient query, explicitly list the fields in which you want to search. For example: `where firstName has "miguel" or lastName has "miguel"`.
 
 ### Basic where * has usage
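The trade-off described in this hunk can be sketched side by side; the first query scans every field, while the second restricts the search to named fields (the field names come from the page's own example, the dataset is an assumption):

```kusto
// expensive: searches every field for the substring
['sample-http-logs']
| where * has "miguel"

// cheaper: search only the fields you care about
['sample-http-logs']
| where firstName has "miguel" or lastName has "miguel"
```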

apl/tutorial.mdx (1 addition, 3 deletions)

@@ -70,7 +70,7 @@ The following query filters the data by `method` and `content_type`:
 
 ### project
 
-[project](/apl/tabular-operators/project-operator) selects a subset of columns.
+[project](/apl/tabular-operators/project-operator) selects a subset of fields.
 
 ```kusto
 ['sample-http-logs']

@@ -555,7 +555,6 @@ example
 
 [Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B%27sample-http-logs%27%5D%20%7C%20project%20status%20%3D%20case(isnotnull(status)%20and%20status%20!%3D%20%5C%22%5C%22%2C%20content_type%2C%20%5C%22info%5C%22)%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2230d%22%7D%7D)
 
-
 **Extract nested payment amount from custom attributes map field**
 
 ```kusto

@@ -577,7 +576,6 @@ example
 
 [Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'github-issues-event'%5D%20%7C%20extend%20data%20%3D%20tostring(labels)%20%7C%20where%20labels%20contains%20'd73a4a'%22%2C%22queryOptions%22%3A%7B%22quickRange%22%3A%2290d%22%7D%7D)
 
-
 **Aggregate trace counts by HTTP method attribute in custom map**
 
 ```kusto
