Commit 190f98c

doc: add bits about IPC message service

This adds documentation for the newly introduced IPC message service APIs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>

1 parent ff353ae commit 190f98c

2 files changed: +254 -0 lines changed
doc/services/ipc/index.rst

Lines changed: 1 addition & 0 deletions

@@ -7,3 +7,4 @@ Interprocessor Communication (IPC)
    :maxdepth: 1

    ipc_service/ipc_service.rst
+   ipc_msg_service/ipc_msg_service.rst
doc/services/ipc/ipc_msg_service/ipc_msg_service.rst

Lines changed: 253 additions & 0 deletions

@@ -0,0 +1,253 @@
.. _ipc_msg_service:

IPC Message Service
###################

.. contents::
   :local:
   :depth: 2
The IPC message service API provides an interface to exchange data between two
domains or CPUs.

Overview
========

A communication channel consists of one instance and one or several endpoints
associated with that instance.

An instance is the external representation of a physical communication channel
between two domains or CPUs. The actual implementation and internal
representation of the instance is specific to each backend.

An instance is not used directly to send data between domains/CPUs. To send
and receive data, the user must create (register) an endpoint in the
instance. This establishes the connection between the two domains of
interest.

It is possible to have zero or multiple endpoints for one single instance,
possibly with different priorities, and to use each endpoint to exchange
data. Endpoint prioritization and multi-instance ability depend heavily on
the backend used and its implementation.

The endpoint is the entity the user must use to send and receive data between
two domains (connected by the instance). An endpoint is always associated
with an instance.

The creation of the instances is left to the backend, usually at init time.
The registration of the endpoints is left to the user, usually at run time.

The API does not mandate a way for the backend to create instances, but it is
strongly recommended to use the devicetree to retrieve the configuration
parameters for an instance. Currently, each backend defines its own
devicetree-compatible configuration that is used to configure the interface
at boot time.
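As an illustration, a backend instance could be described by a devicetree node
along these lines. The node label matches the ``ipc0`` label used in the
examples below, but the compatible string and properties are hypothetical and
depend entirely on the backend in use:

.. code-block:: devicetree

   /* Hypothetical sketch; the compatible string and properties
    * are placeholders, not a real binding.
    */
   ipc0: ipc {
           compatible = "vendor,ipc-msg-backend";
           mboxes = <&mbox 0>, <&mbox 1>;
           mbox-names = "tx", "rx";
           status = "okay";
   };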
Simple data exchange
====================

To send data between domains or CPUs, an endpoint must be registered on an
instance.

See the following example:

.. note::

   Before registering an endpoint, the instance must be opened using the
   :c:func:`ipc_msg_service_open_instance` function.

.. code-block:: c
   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);
   }
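The "wait for endpoint bound" placeholder above can be realized with a
kernel semaphore given from the bound callback. This is only a sketch of one
common pattern (the ``bound_sem`` name is an illustrative choice, not part of
the IPC message service API):

.. code-block:: c

   #include <zephyr/kernel.h>

   /* Semaphore signaled once the endpoint is bound */
   static K_SEM_DEFINE(bound_sem, 0, 1);

   static void bound_cb(void *priv)
   {
           /* Endpoint is now bound; unblock the waiting thread */
           k_sem_give(&bound_sem);
   }

   /* In main(), after registering the endpoint:
    *
    *     k_sem_take(&bound_sem, K_FOREVER);
    */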
Querying status
===============

The API provides a way to query the backend about the status of an instance
or an endpoint.

See the following example, which queries whether the endpoint is ready for
data exchange:
.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);
   }
Events
======

The backend can also invoke a callback when certain events arrive through the
instance or endpoint.

See the following example on adding an event callback:
.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static int evt_cb(uint16_t evt_type, const void *evt_data, void *priv)
   {
           /* Event received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .event = evt_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);
   }
Backends
========

The requirements for implementing backends give flexibility to the IPC
message service. They allow for the addition of dedicated backends having
only a subset of features for specific use cases.

The backend must support at least the following:

* The init-time creation of instances.
* The run-time registration of an endpoint in an instance.

Additionally, the backend can also support the following:

* The run-time deregistration of an endpoint from the instance.
* The run-time closing of an instance.
* The run-time querying of an endpoint or instance status.

Each backend can have its own limitations and features that make it unique
and dedicated to a specific use case. The IPC message service API can be used
with multiple backends simultaneously, combining the pros and cons of each
backend.
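To make the required/optional split concrete, a backend typically exposes a
table of operations where unsupported optional operations are left ``NULL``.
The structure and member names below are illustrative assumptions only, not
the actual backend API (the authoritative definition is in the backend API
reference):

.. code-block:: c

   /* Illustrative sketch; all names here are hypothetical. */
   struct hypothetical_backend_ops {
           /* Mandatory: run-time endpoint registration */
           int (*register_endpoint)(const struct device *instance,
                                    void **token,
                                    const struct ipc_msg_ept_cfg *cfg);
           int (*send)(const struct device *instance, void *token,
                       uint16_t msg_type, const void *msg_data);

           /* Optional: may be NULL if the backend does not support them */
           int (*deregister_endpoint)(const struct device *instance,
                                      void *token);
           int (*close_instance)(const struct device *instance);
           int (*query)(const struct device *instance, void *token,
                        uint16_t query_type, const void *query_data,
                        void *query_response);
   };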
API Reference
=============

IPC Message Service API
***********************

.. doxygengroup:: ipc_msg_service_api

IPC Message Service Backend API
*******************************

.. doxygengroup:: ipc_msg_service_backend
