
Commit 01bc328

Use the CRUD module in the 'Creating a sharded cluster' tutorial (#4338)
1 parent 7916533 commit 01bc328

File tree: 6 files changed, +253 -260 lines changed

doc/book/admin/instance_config.rst

Lines changed: 11 additions & 14 deletions
@@ -17,7 +17,7 @@ The main steps of creating and preparing the application for deployment are:
 
 3. :ref:`admin-instance_config-package-app`.
 
-In this section, a `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application is used as an example.
+In this section, a `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application is used as an example.
 This cluster includes 5 instances: one router and 4 storages, which constitute two replica sets.
 
 .. image:: /book/admin/admin_instances_dev.png
@@ -82,27 +82,27 @@ In this example, the application's layout is prepared manually and looks as follows:
 ├── distfiles
 ├── include
 ├── instances.enabled
-│   └── sharded_cluster
+│   └── sharded_cluster_crud
 │       ├── config.yaml
 │       ├── instances.yaml
 │       ├── router.lua
-│       ├── sharded_cluster-scm-1.rockspec
+│       ├── sharded_cluster_crud-scm-1.rockspec
 │       └── storage.lua
 ├── modules
 ├── templates
 └── tt.yaml
 
-The ``sharded_cluster`` directory contains the following files:
+The ``sharded_cluster_crud`` directory contains the following files:
 
 - ``config.yaml``: contains the :ref:`configuration <configuration>` of the cluster. This file might include the entire cluster topology or provide connection settings to a centralized configuration storage.
 - ``instances.yml``: specifies instances to run in the current environment. For example, on the developer’s machine, this file might include all the instances defined in the cluster configuration. In the production environment, this file includes :ref:`instances to run on the specific machine <admin-instances_to_run>`.
 - ``router.lua``: includes code specific for a :ref:`router <vshard-architecture-router>`.
-- ``sharded_cluster-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard``).
+- ``sharded_cluster_crud-scm-1.rockspec``: specifies the required external dependencies (for example, ``vshard`` and ``crud``).
 - ``storage.lua``: includes code specific for :ref:`storages <vshard-architecture-storage>`.
 
 You can find the full example here:
-`sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_.
+`sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_.
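As a quick sanity check, the layout from the tree above can be reproduced with plain shell commands. This is a minimal sketch with empty placeholder files; the real file contents come from the linked `sharded_cluster_crud` example.

```shell
# Minimal sketch: recreate the sharded_cluster_crud layout from the tree above
# with empty placeholder files (real files come from the example repository).
app=instances.enabled/sharded_cluster_crud
mkdir -p "$app"
touch "$app/config.yaml" "$app/instances.yaml" \
      "$app/router.lua" "$app/storage.lua" \
      "$app/sharded_cluster_crud-scm-1.rockspec"
ls -1 "$app"
```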
@@ -116,7 +116,7 @@ Packaging the application
 To package the ready application, use the :ref:`tt pack <tt-pack>` command.
 This command can create an installable DEB/RPM package or generate ``.tgz`` archive.
 
-The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application:
+The structure below reflects the content of the packed ``.tgz`` archive for the `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application:
 
 .. code-block:: console
@@ -125,18 +125,15 @@ The structure below reflects the content of the packed ``.tgz`` archive for the
 ├── bin
 │   ├── tarantool
 │   └── tt
-├── include
 ├── instances.enabled
-│   └── sharded_cluster -> ../sharded_cluster
-├── modules
-├── sharded_cluster
+│   └── sharded_cluster_crud -> ../sharded_cluster_crud
+├── sharded_cluster_crud
 │   ├── .rocks
 │   │   └── share
 │   │       └── ...
 │   ├── config.yaml
 │   ├── instances.yaml
 │   ├── router.lua
-│   ├── sharded_cluster-scm-1.rockspec
 │   └── storage.lua
 └── tt.yaml
@@ -147,7 +144,7 @@ The application's layout looks similar to the one defined when :ref:`developing
 
 - ``instances.enabled``: contains a symlink to the packed ``sharded_cluster`` application.
 
-- ``sharded_cluster``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard``).
+- ``sharded_cluster_crud``: a packed application. In addition to files created during the application development, includes the ``.rocks`` directory containing application dependencies (for example, ``vshard`` and ``crud``).
 
 - ``tt.yaml``: a ``tt`` configuration file.
@@ -178,7 +175,7 @@ define instances to run on each machine by changing the content of the ``instances.yaml`` file
 
 ``instances.yaml``:
 
-.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster/instances.yaml
+.. literalinclude:: /code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud/instances.yaml
    :language: yaml
    :dedent:

doc/book/admin/start_stop_instance.rst

Lines changed: 43 additions & 43 deletions
@@ -17,7 +17,7 @@ To get more context on how the application's environment might look, refer to :r
 
 .. NOTE::
 
-    In this section, a `sharded_cluster <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster>`_ application is used to demonstrate how to start, stop, and manage instances in a cluster.
+    In this section, a `sharded_cluster_crud <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/sharding/instances.enabled/sharded_cluster_crud>`_ application is used to demonstrate how to start, stop, and manage instances in a cluster.
 
 
 .. _configuration_run_instance:
@@ -30,20 +30,20 @@ To start Tarantool instances use the :ref:`tt start <tt-start>` command:
 
 .. code-block:: console
 
-    $ tt start sharded_cluster
-       • Starting an instance [sharded_cluster:storage-a-001]...
-       • Starting an instance [sharded_cluster:storage-a-002]...
-       • Starting an instance [sharded_cluster:storage-b-001]...
-       • Starting an instance [sharded_cluster:storage-b-002]...
-       • Starting an instance [sharded_cluster:router-a-001]...
+    $ tt start sharded_cluster_crud
+       • Starting an instance [sharded_cluster_crud:storage-a-001]...
+       • Starting an instance [sharded_cluster_crud:storage-a-002]...
+       • Starting an instance [sharded_cluster_crud:storage-b-001]...
+       • Starting an instance [sharded_cluster_crud:storage-b-002]...
+       • Starting an instance [sharded_cluster_crud:router-a-001]...
 
 After the cluster has started and worked for some time, you can find its artifacts
 in the directories specified in the ``tt`` configuration. These are the default
 locations in the local :ref:`launch mode <tt-config_modes>`:
 
-*   ``sharded_cluster/var/log/<instance_name>/`` -- instance :ref:`logs <admin-logs>`.
-*   ``sharded_cluster/var/lib/<instance_name>/`` -- :ref:`snapshots and write-ahead logs <concepts-data_model-persistence>`.
-*   ``sharded_cluster/var/run/<instance_name>/`` -- control sockets and PID files.
+*   ``sharded_cluster_crud/var/log/<instance_name>/`` -- instance :ref:`logs <admin-logs>`.
+*   ``sharded_cluster_crud/var/lib/<instance_name>/`` -- :ref:`snapshots and write-ahead logs <concepts-data_model-persistence>`.
+*   ``sharded_cluster_crud/var/run/<instance_name>/`` -- control sockets and PID files.
 
 In the system launch mode, artifacts are created in these locations:
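The three artifact locations above follow one pattern: `<app>/var/<kind>/<instance_name>/`. A small shell sketch composing the paths for one instance (the app and instance names are examples taken from the listing above):

```shell
# Sketch: compose the local-mode artifact paths for one instance.
app=sharded_cluster_crud
inst=storage-a-001
for kind in log lib run; do
  echo "$app/var/$kind/$inst/"
done
```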

@@ -72,21 +72,21 @@ To check the status of instances, execute :ref:`tt status <tt-status>`:
 
 .. code-block:: console
 
-    $ tt status sharded_cluster
+    $ tt status sharded_cluster_crud
     INSTANCE                             STATUS      PID     MODE
-    sharded_cluster:storage-a-001        RUNNING     2023    RW
-    sharded_cluster:storage-a-002        RUNNING     2026    RO
-    sharded_cluster:storage-b-001        RUNNING     2020    RW
-    sharded_cluster:storage-b-002        RUNNING     2021    RO
-    sharded_cluster:router-a-001         RUNNING     2022    RW
+    sharded_cluster_crud:storage-a-001   RUNNING     2023    RW
+    sharded_cluster_crud:storage-a-002   RUNNING     2026    RO
+    sharded_cluster_crud:storage-b-001   RUNNING     2020    RW
+    sharded_cluster_crud:storage-b-002   RUNNING     2021    RO
+    sharded_cluster_crud:router-a-001    RUNNING     2022    RW
 
 To check the status of a specific instance, you need to specify its name:
 
 .. code-block:: console
 
-    $ tt status sharded_cluster:storage-a-001
+    $ tt status sharded_cluster_crud:storage-a-001
     INSTANCE                             STATUS      PID     MODE
-    sharded_cluster:storage-a-001        RUNNING     2023    RW
+    sharded_cluster_crud:storage-a-001   RUNNING     2023    RW
 
 
 .. _admin-start_stop_instance_connect:
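The `tt status` listing above is plain whitespace-separated text, so a captured copy can be post-processed with standard tools. A sketch that counts running instances and read-write instances (rows copied from the output above):

```shell
# Sketch: filter a captured `tt status` listing with awk.
status='sharded_cluster_crud:storage-a-001   RUNNING     2023    RW
sharded_cluster_crud:storage-a-002   RUNNING     2026    RO
sharded_cluster_crud:storage-b-001   RUNNING     2020    RW
sharded_cluster_crud:storage-b-002   RUNNING     2021    RO
sharded_cluster_crud:router-a-001    RUNNING     2022    RW'
echo "$status" | awk '$2 == "RUNNING"' | wc -l   # instances up
echo "$status" | awk '$4 == "RW"' | wc -l        # writable instances
```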
@@ -98,18 +98,18 @@ To connect to the instance, use the :ref:`tt connect <tt-connect>` command:
 
 .. code-block:: console
 
-    $ tt connect sharded_cluster:storage-a-001
+    $ tt connect sharded_cluster_crud:storage-a-001
        • Connecting to the instance...
-       • Connected to sharded_cluster:storage-a-001
+       • Connected to sharded_cluster_crud:storage-a-001
 
-    sharded_cluster:storage-a-001>
+    sharded_cluster_crud:storage-a-001>
 
 In the instance's console, you can execute commands provided by the :ref:`box <box-module>` module.
 For example, :ref:`box.info <box_introspection-box_info>` can be used to get various information about a running instance:
 
-.. code-block:: console
+.. code-block:: tarantoolsession
 
-    sharded_cluster:storage-a-001> box.info.ro
+    sharded_cluster_crud:storage-a-001> box.info.ro
     ---
     - false
     ...
@@ -125,15 +125,15 @@ To restart an instance, use :ref:`tt restart <tt-restart>`:
 
 .. code-block:: console
 
-    $ tt restart sharded_cluster:storage-a-002
+    $ tt restart sharded_cluster_crud:storage-a-002
 
 After executing ``tt restart``, you need to confirm this operation:
 
 .. code-block:: console
 
-    Confirm restart of 'sharded_cluster:storage-a-002' [y/n]: y
-       • The Instance sharded_cluster:storage-a-002 (PID = 2026) has been terminated.
-       • Starting an instance [sharded_cluster:storage-a-002]...
+    Confirm restart of 'sharded_cluster_crud:storage-a-002' [y/n]: y
+       • The Instance sharded_cluster_crud:storage-a-002 (PID = 2026) has been terminated.
+       • Starting an instance [sharded_cluster_crud:storage-a-002]...
 
 
 .. _admin-start_stop_instance_stop:
@@ -145,18 +145,18 @@ To stop the specific instance, use :ref:`tt stop <tt-stop>` as follows:
 
 .. code-block:: console
 
-    $ tt stop sharded_cluster:storage-a-002
+    $ tt stop sharded_cluster_crud:storage-a-002
 
 You can also stop all the instances at once as follows:
 
 .. code-block:: console
 
-    $ tt stop sharded_cluster
-       • The Instance sharded_cluster:storage-b-001 (PID = 2020) has been terminated.
-       • The Instance sharded_cluster:storage-b-002 (PID = 2021) has been terminated.
-       • The Instance sharded_cluster:router-a-001 (PID = 2022) has been terminated.
-       • The Instance sharded_cluster:storage-a-001 (PID = 2023) has been terminated.
-       • can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster/var/run/storage-a-002/tt.pid: no such file or directory"
+    $ tt stop sharded_cluster_crud
+       • The Instance sharded_cluster_crud:storage-b-001 (PID = 2020) has been terminated.
+       • The Instance sharded_cluster_crud:storage-b-002 (PID = 2021) has been terminated.
+       • The Instance sharded_cluster_crud:router-a-001 (PID = 2022) has been terminated.
+       • The Instance sharded_cluster_crud:storage-a-001 (PID = 2023) has been terminated.
+       • can't "stat" the PID file. Error: "stat /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/run/storage-a-002/tt.pid: no such file or directory"
 
 .. note::

@@ -172,12 +172,12 @@ The :ref:`tt clean <tt-clean>` command removes instance artifacts (such as logs
 
 .. code-block:: console
 
-    $ tt clean sharded_cluster
+    $ tt clean sharded_cluster_crud
        • List of files to delete:
 
-       • /home/testuser/myapp/instances.enabled/sharded_cluster/var/log/storage-a-001/tt.log
-       • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.snap
-       • /home/testuser/myapp/instances.enabled/sharded_cluster/var/lib/storage-a-001/00000000000000001062.xlog
+       • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/log/storage-a-001/tt.log
+       • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.snap
+       • /home/testuser/myapp/instances.enabled/sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.xlog
        • ...
 
 Confirm [y/n]:
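The files offered for deletion all live under the application's `var/` tree. A sketch that builds a miniature `var/` tree (file names modeled on the listing above) and enumerates the artifact files in it, the way a cleanup pass would:

```shell
# Sketch: build a miniature var/ tree and list the artifact files in it.
mkdir -p sharded_cluster_crud/var/log/storage-a-001 \
         sharded_cluster_crud/var/lib/storage-a-001
touch sharded_cluster_crud/var/log/storage-a-001/tt.log \
      sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.snap \
      sharded_cluster_crud/var/lib/storage-a-001/00000000000000001062.xlog
find sharded_cluster_crud/var -type f | sort
```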
@@ -201,20 +201,20 @@ Tarantool supports loading and running chunks of Lua code before starting instances
 To load or run Lua code immediately upon Tarantool startup, specify the ``TT_PRELOAD``
 environment variable. Its value can be either a path to a Lua script or a Lua module name:
 
-*   To run the Lua script ``preload_script.lua`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows:
+*   To run the Lua script ``preload_script.lua`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows:
 
     .. code-block:: console
 
-        $ TT_PRELOAD=preload_script.lua tt start sharded_cluster
+        $ TT_PRELOAD=preload_script.lua tt start sharded_cluster_crud
 
     Tarantool runs the ``preload_script.lua`` code, waits for it to complete, and
     then starts instances.
 
-*   To load the ``preload_module`` from the ``sharded_cluster`` directory, set ``TT_PRELOAD`` as follows:
+*   To load the ``preload_module`` from the ``sharded_cluster_crud`` directory, set ``TT_PRELOAD`` as follows:
 
     .. code-block:: console
 
-        $ TT_PRELOAD=preload_module tt start sharded_cluster
+        $ TT_PRELOAD=preload_module tt start sharded_cluster_crud
 
 .. note::

@@ -226,7 +226,7 @@ by semicolons:
 
 .. code-block:: console
 
-    $ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster
+    $ TT_PRELOAD="preload_script.lua;preload_module" tt start sharded_cluster_crud
 
 If an error happens during the execution of the preload script or module, Tarantool
 reports the problem and exits.
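The semicolon-separated ``TT_PRELOAD`` value decomposes into independent preload entries. The splitting is done by Tarantool itself at startup, but the decomposition can be illustrated in bash:

```shell
# Illustration only: how a semicolon-separated TT_PRELOAD value splits
# into individual preload entries (a script path and a module name).
TT_PRELOAD="preload_script.lua;preload_module"
IFS=';' read -ra entries <<< "$TT_PRELOAD"
for e in "${entries[@]}"; do
  echo "preload entry: $e"
done
```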
Lines changed: 61 additions & 7 deletions
@@ -1,16 +1,70 @@
 # Sharded cluster
 
-A sample application created in the [Creating a sharded cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/) tutorial.
+A sample application demonstrating how to configure a [sharded](https://www.tarantool.io/en/doc/latest/concepts/sharding/) cluster.
 
 ## Running
 
-To learn how to run the cluster, see the [Working with the cluster](https://www.tarantool.io/en/doc/latest/how-to/vshard_quick/#working-with-the-cluster) section.
+To run the cluster, go to the `sharding` directory in the terminal and perform the following steps:
+
+1. Install dependencies defined in the `*.rockspec` file:
 
-## Packaging
+   ```console
+   $ tt build sharded_cluster
+   ```
+
+2. Run the cluster:
 
-To package an application into a `.tgz` archive, use the `tt pack` command:
+   ```console
+   $ tt start sharded_cluster
+   ```
 
-```console
-$ tt pack tgz --app-list sharded_cluster
-```
+3. Connect to the router:
+
+   ```console
+   $ tt connect sharded_cluster:router-a-001
+   ```
+
+4. Call `vshard.router.bootstrap()` to perform the initial cluster bootstrap:
+
+   ```console
+   sharded_cluster:router-a-001> vshard.router.bootstrap()
+   ---
+   - true
+   ...
+   ```
+
+5. Insert test data:
+
+   ```console
+   sharded_cluster:router-a-001> insert_data()
+   ---
+   ...
+   ```
+
+6. Connect to storages in different replica sets to see how data is distributed across nodes:
+
+   a. `storage-a-001`:
+
+      ```console
+      sharded_cluster:storage-a-001> box.space.bands:select()
+      ---
+      - - [1, 614, 'Roxette', 1986]
+        - [2, 986, 'Scorpions', 1965]
+        - [5, 755, 'Pink Floyd', 1965]
+        - [7, 998, 'The Doors', 1965]
+        - [8, 762, 'Nirvana', 1987]
+      ...
+      ```
+
+   b. `storage-b-001`:
+
+      ```console
+      sharded_cluster:storage-b-001> box.space.bands:select()
+      ---
+      - - [3, 11, 'Ace of Base', 1987]
+        - [4, 42, 'The Beatles', 1960]
+        - [6, 55, 'The Rolling Stones', 1962]
+        - [9, 299, 'Led Zeppelin', 1968]
+        - [10, 167, 'Queen', 1970]
+      ...
+      ```
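The two `select()` outputs in step 6 are disjoint: each storage holds only the rows whose buckets its replica set owns, and together the two replica sets cover all ten bands. A shell sketch checking that property (ids copied from the outputs above):

```shell
# Sketch: the band ids from storage-a-001 and storage-b-001 form two
# disjoint sets whose union is 1..10 (ids copied from the outputs above).
printf '%s\n' 1 2 5 7 8  > ids_a.txt   # storage-a-001
printf '%s\n' 3 4 6 9 10 > ids_b.txt   # storage-b-001
sort -n ids_a.txt ids_b.txt | tr '\n' ' '
```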
