
Commit fd6d222

doc: add bits about IPC message service
This adds documentation for the newly introduced IPC message service APIs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
1 parent d1b2b1d commit fd6d222

File tree

2 files changed: +258 −0 lines

doc/services/ipc/index.rst

Lines changed: 1 addition & 0 deletions

@@ -7,3 +7,4 @@ Interprocessor Communication (IPC)
    :maxdepth: 1

    ipc_service/ipc_service.rst
+   ipc_msg_service/ipc_msg_service.rst
doc/services/ipc/ipc_msg_service/ipc_msg_service.rst

Lines changed: 257 additions & 0 deletions

@@ -0,0 +1,257 @@
.. _ipc_msg_service:

IPC Message Service
###################

.. contents::
   :local:
   :depth: 2

The IPC message service API provides an interface to facilitate the following
functions between two domains or CPUs:

* Exchanging strongly typed messages,
* Querying the status of associated domains or CPUs, and
* Receiving events pertaining to associated domains or CPUs.

Overview
========

A communication channel consists of one instance and one or several endpoints
associated with the instance.

An instance is the external representation of a physical communication channel
between two domains or CPUs. The actual implementation and internal
representation of the instance are specific to each backend.

An individual instance is not used to send messages between domains/CPUs.
To send and receive messages, the user must create (register) an endpoint in
the instance. This allows for the connection of the two domains of interest.

It is possible to have zero or multiple endpoints for one single instance,
possibly with different priorities, and to use each to exchange messages.
Endpoint prioritization and multi-instance ability depend heavily on the
backend used and its implementation.

The endpoint is an entity the user must use to send and receive messages
between two domains (connected by the instance). An endpoint is always
associated with an instance.

The creation of the instances is left to the backend, usually at init time.
The registration of the endpoints is left to the user, usually at run time.

The API does not mandate a way for the backend to create instances, but it is
strongly recommended to use the devicetree to retrieve the configuration
parameters for an instance. Currently, each backend defines its own
DT-compatible configuration that is used to configure the interface at boot
time.
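As an illustration only, an instance is typically described by a devicetree
node consumed by the backend's binding. The node below is a hypothetical
sketch: the ``vendor,ipc-msg-backend`` compatible string and the property set
are placeholders, not an actual binding; consult the chosen backend's binding
for the real properties.

.. code-block:: devicetree

   ipc0: ipc {
           /* Hypothetical compatible; each backend defines its own. */
           compatible = "vendor,ipc-msg-backend";
           mboxes = <&mbox 0>, <&mbox 1>;
           mbox-names = "tx", "rx";
           status = "okay";
   };

The node label (``ipc0``) is what the application later resolves with
``DEVICE_DT_GET(DT_NODELABEL(ipc0))``, as in the examples below.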
Exchanging messages
===================

To send messages between domains or CPUs, an endpoint must be registered onto
an instance.

See the following example:

.. note::

   Before registering an endpoint, the instance must be opened using the
   :c:func:`ipc_msg_service_open_instance` function.

.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }
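The ``/* Wait for endpoint bound */`` step above is commonly implemented with
a kernel semaphore given from the bound callback. A minimal sketch, assuming a
semaphore of our own naming (``bound_sem`` is example-only and not part of the
IPC message service API):

.. code-block:: c

   #include <zephyr/kernel.h>

   /* Example-only semaphore used to signal the binding; not part of the
    * IPC message service API.
    */
   K_SEM_DEFINE(bound_sem, 0, 1);

   static void bound_cb(void *priv)
   {
           /* Endpoint is bound; unblock the thread waiting to send. */
           k_sem_give(&bound_sem);
   }

   /* In main(), after ipc_msg_service_register_endpoint():
    *
    *     k_sem_take(&bound_sem, K_FOREVER);
    */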
Querying status
===============

The API provides a way to query the backend about the status of an instance
or endpoint.

See the following example for querying whether the endpoint is ready for
exchanging messages:

.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }
Events
======

The backend can also invoke a callback when certain events come in through
the instance or endpoint.

See the following example on adding an event callback:

.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static int evt_cb(uint16_t evt_type, const void *evt_data, void *priv)
   {
           /* Event received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .event = evt_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }
Backends
========

The requirements for implementing backends give flexibility to the IPC
message service. They allow for the addition of dedicated backends having only
a subset of features for specific use cases.

The backend must support at least the following:

* The init-time creation of instances.
* The run-time registration of an endpoint in an instance.

Additionally, the backend can also support the following:

* The run-time deregistration of an endpoint from the instance.
* The run-time closing of an instance.
* The run-time querying of an endpoint or instance status.
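On backends that implement the optional operations, teardown could look like
the sketch below. The ``ipc_msg_service_deregister_endpoint`` and
``ipc_msg_service_close_instance`` names are assumptions by analogy with the
IPC service API, not confirmed here; check the actual header for the exact
names and signatures.

.. code-block:: c

   /* Hypothetical teardown sequence; only valid on backends that
    * implement the optional deregistration/closing operations. Function
    * names are assumed by analogy with the IPC service API.
    */
   ret = ipc_msg_service_deregister_endpoint(&ept0);
   if (ret == 0) {
           ret = ipc_msg_service_close_instance(inst0);
   }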
Each backend can have its own limitations and features that make the backend
unique and dedicated to a specific use case. The IPC message service API can be
used with multiple backends simultaneously, combining the pros and cons of each
backend.

API Reference
=============

IPC Message Service API
***********************

.. doxygengroup:: ipc_msg_service_api

IPC Message Service Backend API
*******************************

.. doxygengroup:: ipc_msg_service_backend
