Commit eb13cf7

Merge branch 'main' into ISSUE-7630
2 parents dbab3f4 + aad04ab commit eb13cf7

File tree

490 files changed: +4989, -3026 lines changed


.github/actions/fuse_compat/action.yml

Lines changed: 14 additions & 5 deletions
```diff
@@ -12,9 +12,6 @@ inputs:
 runs:
   using: "composite"
   steps:
-    - name: Setup Build Tool
-      uses: ./.github/actions/setup_build_tool
-
     - name: Download artifact
       uses: ./.github/actions/artifact_download
       with:
@@ -26,8 +23,20 @@ runs:
     - name: Test compatibility
       shell: bash
       run: |
-        build-tool bash ./tests/fuse-compat/test-fuse-compat.sh 0.7.150
-        build-tool bash ./tests/fuse-compat/test-fuse-compat.sh 0.7.151
+        docker run --rm --tty --net=host \
+          --user $(id -u):$(id -g) \
+          --env BUILD_PROFILE \
+          --volume "${PWD}:/workspace" \
+          --workdir "/workspace" \
+          datafuselabs/build-tool:sqllogic \
+          bash ./tests/fuse-compat/test-fuse-compat.sh 0.7.150
+        docker run --rm --tty --net=host \
+          --user $(id -u):$(id -g) \
+          --env BUILD_PROFILE \
+          --volume "${PWD}:/workspace" \
+          --workdir "/workspace" \
+          datafuselabs/build-tool:sqllogic \
+          bash ./tests/fuse-compat/test-fuse-compat.sh 0.7.151
 
 
     - name: Upload failure
       if: failure()
```

Cargo.lock

Lines changed: 27 additions & 26 deletions
Some generated files are not rendered by default.

Cargo.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -34,7 +34,7 @@ members = [
     "src/query/pipeline/sinks",
     "src/query/pipeline/sources",
     "src/query/pipeline/transforms",
-    "src/query/planners",
+    "src/query/legacy-planners",
     "src/query/settings",
     "src/query/storages/fuse",
     "src/query/storages/fuse-meta",
```

docs/doc/30-reference/20-functions/40-string-functions/substring.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -24,8 +24,8 @@ SUBSTRING(str FROM pos FOR len)
 | Arguments | Description |
 | ----------- | ----------- |
 | str | The main string from where the character to be extracted |
-| pos | The one-indexed position expression to start at. If negative, counts from the end |
-| len | The number expression of characters to extract |
+| pos | The position (starting from 1) the substring to start at. If negative, counts from the end |
+| len | The maximun length of the substring to extract |
 
 ## Return Type
 
```

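The revised table describes MySQL-style `SUBSTRING` semantics: a 1-based `pos`, negative `pos` counting from the end, and `len` as a maximum rather than an exact count. A small std-only Rust sketch of those semantics (a hypothetical illustration, not Databend's implementation):

```rust
// Illustrative sketch of SUBSTRING(str FROM pos FOR len) semantics as
// described in the docs change: pos is 1-based, a negative pos counts
// from the end, and len is the maximum number of characters returned.
// Hypothetical helper, not Databend's actual code.
fn substring(s: &str, pos: i64, len: usize) -> String {
    let chars: Vec<char> = s.chars().collect();
    let n = chars.len() as i64;
    // Resolve the 1-based (possibly negative) position to a 0-based index.
    let start = if pos > 0 {
        pos - 1
    } else if pos < 0 {
        n + pos
    } else {
        return String::new(); // pos = 0 yields an empty string
    };
    if start < 0 || start >= n {
        return String::new();
    }
    chars[start as usize..].iter().take(len).collect()
}

fn main() {
    assert_eq!(substring("Databend", 1, 4), "Data");
    // Negative pos counts from the end of the string.
    assert_eq!(substring("Databend", -4, 4), "bend");
    // len is a maximum length, not an exact count.
    assert_eq!(substring("Databend", 5, 100), "bend");
    println!("ok");
}
```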
Lines changed: 1 addition & 5 deletions
```diff
@@ -1,7 +1,3 @@
 {
-  "label": "Cluster Key",
-  "link": {
-    "type": "generated-index",
-    "slug": "/reference/sql/ddl/clusterkey"
-  }
+  "label": "Cluster Key"
 }
```
Lines changed: 15 additions & 0 deletions
```diff
@@ -0,0 +1,15 @@
+---
+title: What is a Cluster Key?
+sidebar_position: 1
+slug: ./
+---
+
+The cluster key is a data object for tables in Databend. It explicitly tells Databend how to divide and group rows of a table into the storage partitions rather than using the data ingestion order.
+
+A table's cluster key is usually one or more columns or expressions. If you define a cluster key for a table, Databend reorganizes your data based on the cluster key and stores similar rows into the same or adjacent storage partitions.
+
+The benefit of defining a cluster key is optimizing the query performance. The cluster key acts as a link between the metadata in the Databend's Meta Service Layer and the storage partitions. After the cluster key is defined for a table, the table's metadata implements a key-value-like list that shows the correspondences between the column or expression values and their storage partitions. When a query comes, Databend can quickly locate the storage partition by the metadata and fetch the results. To make this work, the cluster key you set must match the way how you filter the data in queries. For example, if you're most likely to query a table that holds all the employees' profile information by their first names, set the cluster key to the first name column.
+
+In Databend, you [SET CLUSTER KEY](dml-set-cluster-key.md) when you create a table, and you can [ALTER CLUSTER KEY](https://databend.rs/doc/reference/sql/ddl/clusterkey/dml-alter-cluster-key) if necessary. A fully-clustered table might become chaotic if it continues to have ingestion or Data Manipulation Language operations (such as INSERT, UPDATE, DELETE), you will need to [RECLUSTER TABLE](./dml-recluster-table.md) to fix the chaos.
+
+It's important to note that, most of the time you do not need to set the cluster key. Clustering or re-clustering a table consumes time and your credits if you're in Databend Cloud. Databend recommends setting cluster keys for large tables with slow query issues.
```

docs/doc/60-contributing/02-roadmap.md

Lines changed: 31 additions & 21 deletions
```diff
@@ -12,41 +12,51 @@ This is Databend Roadmap 2022 :rocket:, sync from the [#3706](https://github.com
 
 # Main tasks
 
-### 1. Query
+Roadmap 2021: https://github.com/datafuselabs/databend/issues/746
+
+# Main tasks
+
+### 1. Query
 
 
 | Task | Status | Release Target | Comments |
 | ----------------------------------------------- | --------- | -------------- | --------------- |
-| [Query Cluster Track #747](https://github.com/datafuselabs/databend/issues/747) | PROGRESS | | |
-| [RBAC Privileges #2793](https://github.com/datafuselabs/databend/issues/2793) | PROGRESS | | |
-| [ New Planner Framework #1217](https://github.com/datafuselabs/databend/issues/1218)| PROGRESS | | [RFC](https://databend.rs/doc/contributing/rfcs/new-sql-planner-framework)|
-| [ Database Sharing #3430](https://github.com/datafuselabs/databend/issues/3430)| PROGRESS | | |
-| [ STAGE Command #2976](https://github.com/datafuselabs/databend/issues/2976)| PROGRESS | | |
-| [ COPY Command #4104](https://github.com/datafuselabs/databend/issues/4104)| PROGRESS | | |
+| [Query Cluster Track #747](https://github.com/datafuselabs/databend/issues/747) | DONE | | |
+| [RBAC Privileges #2793](https://github.com/datafuselabs/databend/issues/2793) | DONE | | |
+| [ New Planner Framework #1217](https://github.com/datafuselabs/databend/issues/1218)| DONE | | [RFC](https://databend.rs/doc/contributing/rfcs/new-sql-planner-framework)|
+| [ Database Sharing #3430](https://github.com/datafuselabs/databend/issues/3430)| DONE | | |
+| [ STAGE Command #2976](https://github.com/datafuselabs/databend/issues/2976)| DONE | | |
+| [ COPY Command #4104](https://github.com/datafuselabs/databend/issues/4104)| DONE | | |
 | [Index Design #3711](https://github.com/datafuselabs/databend/issues/3711) | PROGRESS | | |
-| [Push-Based + Pull-Based processor](https://github.com/datafuselabs/databend/issues/3379)| PROGRESS | | |
-| [Semi-structured Data Types #3916](https://github.com/datafuselabs/databend/issues/3916) | PROGRESS | | |
+| [Push-Based + Pull-Based processor](https://github.com/datafuselabs/databend/issues/3379)| DONE | | |
+| [Semi-structured Data Types #3916](https://github.com/datafuselabs/databend/issues/3916) | DONE | | |
+| [Table Cluster Key #4268](https://github.com/datafuselabs/databend/issues/4268) | DONE | | |
+| Transactions | DONE | | |
 | [Support Fulltext Index #3915](https://github.com/datafuselabs/databend/issues/3915) | PLANNING | | |
-| [Table Cluster Key #4268](https://github.com/datafuselabs/databend/issues/4268) | PLANNING | | |
-| Tansactions | PLANNING | | |
-| Window Functions | PLANNING | | |
-| Lambda Functions | PLANNING | | |
-| Array Functions | PLANNING | | |
+| [Hive External Data Source #4826](https://github.com/datafuselabs/databend/issues/4826) | DONE | | |
+| [Window Functions](https://github.com/datafuselabs/databend/issues/4653) | DONE | | |
+| Lambda Functions | DONE | | |
+| Array Functions | DONE | | |
 | Compile Aggregate Functions(JIT) | PLANNING | | |
-| Common Table Expressions | PLANNING | | [MySQL CTE](https://dev.mysql.com/doc/refman/8.0/en/with.html#common-table-expressions) |
-| External Cache | PLANNING | | |
+| [Common Table Expressions #6246](https://github.com/datafuselabs/databend/issues/6246) | DONE | | [MySQL CTE](https://dev.mysql.com/doc/refman/8.0/en/with.html#common-table-expressions) |
+| [External Cache](https://github.com/datafuselabs/databend/issues/6786) #6786 | PROGRESS | | |
 | External Table | PLANNING | | [Snowflake ET](https://docs.snowflake.com/en/sql-reference/sql/create-external-table.html)|
-| Update&Delete | PLANNING | | |
+| Delete | DONE | | |
+| Update | PROGRESS | | |
 | Streaming Ingestion | PLANNING | | |
-| Streaming Analytics | PLANNING | | |
+| [Resource Quota](https://github.com/datafuselabs/databend/issues/6935) | PROGRESS | | |
+| [LakeHouse](https://github.com/datafuselabs/databend/issues/7592) | PROGRESS | | v0.9|
 
 
 ### 2. Testing
 
 | Task | Status | Release Target | Comments |
 | ----------------------------------------------- | --------- | -------------- | --------------- |
-| [ Continuous Benchmarking #3084](https://github.com/datafuselabs/databend/issues/3084) | PROGRESS | | |
+| [ Continuous Benchmarking #3084](https://github.com/datafuselabs/databend/issues/3084) | DONE | | https://perf.databend.rs |
+
 
 # Releases
-- [x] #2525
-- [x] #2257
+- [x] [Release proposal: Nightly v0.8 #4591](https://github.com/datafuselabs/databend/issues/4591)
+- [x] [Release proposal: Nightly v0.7 #3428](https://github.com/datafuselabs/databend/issues/3428)
+- [x] [Release proposal: Nightly v0.6 #2525](https://github.com/datafuselabs/databend/issues/2525)
+- [x] [Release proposal: Nightly v0.5 #2257](https://github.com/datafuselabs/databend/issues/2257)
```

docs/doc/60-contributing/03-rfcs/20220809-share.md

Lines changed: 29 additions & 0 deletions
````diff
@@ -76,6 +76,8 @@ To support creating and managing shares, Databend provides the following set of
 - Describe Share
 - Show Grants
 - Create Database, Table FROM Share
+- Select data from the Share DB tables
+- Show shared database tables
 
 ### Create Share
 
@@ -210,6 +212,33 @@ Syntax:
 CREATE DATABASE <name> FROM SHARE <provider_tenant>.<share_name>
 ```
 
+
+
+### Select data from the Share DB tables
+
+After tenants have created a database from a shared database, tenants can select data from the shared table like a normal table, before that the providers MUST have permitted access permission to the shared database and table.
+
+Syntax:
+
+```sql
+Select * from <share_db_name>.<table_name>
+```
+
+
+
+### Show shared database tables
+
+After tenants have created a database from a shared database, tenants can show tables from the shared database, it only outputs the tables which have been permitted to access.
+
+Syntax:
+
+```sql
+use <share_db_name>;
+show tables;
+```
+
+
+
 ## Example of Using Share with SQL
 
 To create a share using SQL:
````

src/common/tracing/src/logging.rs

Lines changed: 7 additions & 1 deletion
```diff
@@ -105,7 +105,13 @@ pub fn init_logging(name: &str, cfg: &Config) -> Vec<WorkerGuard> {
             .install_batch(opentelemetry::runtime::Tokio)
             .expect("install");
 
-        jaeger_layer = Some(tracing_opentelemetry::layer().with_tracer(tracer));
+        // Load filter from `RUST_LOG`. Default to `ERROR`.
+        let env_filter = EnvFilter::from_default_env();
+        jaeger_layer = Some(
+            tracing_opentelemetry::layer()
+                .with_tracer(tracer)
+                .with_filter(env_filter),
+        );
     }
     let subscriber = subscriber.with(jaeger_layer);
```

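The `EnvFilter::from_default_env()` call above reads filter directives from the `RUST_LOG` environment variable. A minimal std-only sketch of that idea (the real parsing and per-target directive support live in the `tracing-subscriber` crate, and the default shown here is an assumption for illustration):

```rust
use std::env;

// Std-only sketch of the idea behind `EnvFilter::from_default_env()`:
// take a log level from the RUST_LOG environment variable, falling back
// to a default ("error") when it is unset. The real EnvFilter supports
// much richer directives such as "my_crate=debug,hyper=warn".
fn level_from_env() -> String {
    env::var("RUST_LOG").unwrap_or_else(|_| "error".to_string())
}

fn main() {
    env::remove_var("RUST_LOG");
    assert_eq!(level_from_env(), "error"); // default when unset

    env::set_var("RUST_LOG", "debug");
    assert_eq!(level_from_env(), "debug"); // explicit setting wins

    println!("ok");
}
```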
src/meta/api/src/kv_api_key.rs

Lines changed: 3 additions & 0 deletions
```diff
@@ -30,6 +30,9 @@ pub enum KVApiKeyError {
     #[error("Expect {expect} segments, but: '{got}'")]
     WrongNumberOfSegments { expect: usize, got: String },
 
+    #[error("Expect at least {expect} segments, but {actual} segments found")]
+    AtleastSegments { expect: usize, actual: usize },
+
     #[error("Invalid id string: '{s}': {reason}")]
     InvalidId { s: String, reason: String },
 }
```
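The new `AtleastSegments` variant gets its message from thiserror's `#[error(...)]` attribute, which expands to a `Display` impl. A std-only sketch of the equivalent behavior, together with a hypothetical `check_segments` helper showing where such an error might be raised (both are illustrations, not the crate's actual code):

```rust
use std::fmt;

// Std-only sketch of the new `AtleastSegments` variant: thiserror's
// `#[error("Expect at least {expect} segments, but {actual} segments found")]`
// attribute expands to a `Display` impl roughly like the one below.
#[derive(Debug)]
enum KVApiKeyError {
    AtleastSegments { expect: usize, actual: usize },
}

impl fmt::Display for KVApiKeyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            KVApiKeyError::AtleastSegments { expect, actual } => write!(
                f,
                "Expect at least {expect} segments, but {actual} segments found"
            ),
        }
    }
}

// Hypothetical helper: a meta key like "a/b" split on '/' yields only
// 2 segments, so expecting at least 3 produces the error.
fn check_segments(key: &str, expect: usize) -> Result<(), KVApiKeyError> {
    let actual = key.split('/').count();
    if actual < expect {
        return Err(KVApiKeyError::AtleastSegments { expect, actual });
    }
    Ok(())
}

fn main() {
    let err = check_segments("a/b", 3).unwrap_err();
    assert_eq!(
        err.to_string(),
        "Expect at least 3 segments, but 2 segments found"
    );
    println!("ok");
}
```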

0 commit comments