IPC: introduce new IPC message service and rework the Intel Audio DSP host IPC driver #91606

Draft · wants to merge 13 commits into base: main
1 change: 1 addition & 0 deletions doc/services/ipc/index.rst
@@ -7,3 +7,4 @@ Interprocessor Communication (IPC)
   :maxdepth: 1

   ipc_service/ipc_service.rst
   ipc_msg_service/ipc_msg_service.rst
257 changes: 257 additions & 0 deletions doc/services/ipc/ipc_msg_service/ipc_msg_service.rst
@@ -0,0 +1,257 @@
.. _ipc_msg_service:

IPC Message Service
###################

.. contents::
   :local:
   :depth: 2

The IPC message service API provides an interface for the following
operations between two domains or CPUs:

* Exchanging strongly typed messages,
* Querying the status of associated domains or CPUs, and
* Receiving events pertaining to associated domains or CPUs.

Overview
========

A communication channel consists of one instance and one or several endpoints
associated with the instance.

An instance is the external representation of a physical communication channel
between two domains or CPUs. The actual implementation and internal
representation of the instance is specific to each backend.

An individual instance is not used to send messages between domains/CPUs.
To send and receive messages, the user must create (register) an endpoint in
the instance. This allows for the connection of the two domains of interest.

It is possible to have zero or multiple endpoints for one single instance,
possibly with different priorities, and to use each to exchange messages.
Endpoint prioritization and multi-instance ability highly depend on the backend
used and its implementation.

The endpoint is an entity the user must use to send and receive messages
between two domains (connected by the instance). An endpoint is always
associated with an instance.

The creation of the instances is left to the backend, usually at init time.
The registration of the endpoints is left to the user, usually at run time.

The API does not mandate a way for the backend to create instances, but it
is strongly recommended to use the devicetree to retrieve the configuration
parameters for an instance. Currently, each backend defines its own
DT-compatible configuration that is used to configure the interface at boot
time.
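
For illustration only, an instance could be described by a devicetree node
along the following lines. The ``compatible`` string and properties shown
here are hypothetical placeholders, as each backend defines its own binding:

.. code-block:: devicetree

   /* Hypothetical instance node; consult the binding of the backend in use. */
   ipc0: ipc {
           compatible = "vendor,ipc-msg-backend";
           mboxes = <&mbox 0>, <&mbox 1>;
           mbox-names = "tx", "rx";
           status = "okay";
   };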

Exchanging messages
===================

To send messages between domains or CPUs, an endpoint must be registered onto
an instance.

See the following example:

.. note::

   Before registering an endpoint, the instance must be opened using the
   :c:func:`ipc_msg_service_open_instance` function.


.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }

Querying status
===============

The API provides a way to query the backend about the status of an instance
or endpoint.

The following example queries whether the endpoint is ready to exchange
messages:

.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }

Events
======

The backend can also invoke a callback when certain events arrive through
the instance or endpoint.

The following example adds an event callback:

.. code-block:: c

   #include <zephyr/ipc/ipc_msg_service.h>

   static void bound_cb(void *priv)
   {
           /* Endpoint bound */
   }

   static int recv_cb(uint16_t msg_type, const void *msg_data, void *priv)
   {
           /* Data received */

           return 0;
   }

   static int evt_cb(uint16_t evt_type, const void *evt_data, void *priv)
   {
           /* Event received */

           return 0;
   }

   static struct ipc_msg_ept_cfg ept0_cfg = {
           .name = "ept0",
           .cb = {
                   .bound = bound_cb,
                   .event = evt_cb,
                   .received = recv_cb,
           },
   };

   int main(void)
   {
           const struct device *inst0;
           struct ipc_ept ept0;
           struct ipc_msg_type_cmd message;
           int ret;

           inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
           ret = ipc_msg_service_open_instance(inst0);
           ret = ipc_msg_service_register_endpoint(inst0, &ept0, &ept0_cfg);

           /* Wait for endpoint bound (bound_cb called) */

           /* Check if endpoint is ready. */
           ret = ipc_msg_service_query(&ept0, IPC_MSG_QUERY_IS_READY, NULL, NULL);
           if (ret != 0) {
                   /* Endpoint is not ready */
           }

           message.cmd = 0x01;
           ret = ipc_msg_service_send(&ept0, IPC_MSG_TYPE_CMD, &message);

           return 0;
   }

Backends
========

The requirements for implementing backends give flexibility to the IPC
message service. They allow for the addition of dedicated backends having
only a subset of features for specific use cases.

The backend must support at least the following:

* The init-time creation of instances.
* The run-time registration of an endpoint in an instance.

Additionally, the backend can also support the following:

* The run-time deregistration of an endpoint from the instance.
* The run-time closing of an instance.
* The run-time querying of an endpoint or instance status.

Each backend can have its own limitations and features that make the backend
unique and dedicated to a specific use case. The IPC message service API can be
used with multiple backends simultaneously, combining the pros and cons of each
backend.

API Reference
=============

IPC Message Service API
***********************

.. doxygengroup:: ipc_msg_service_api

IPC Message Service Backend API
*******************************

.. doxygengroup:: ipc_msg_service_backend
1 change: 0 additions & 1 deletion drivers/ipm/CMakeLists.txt
@@ -10,7 +10,6 @@ zephyr_library_sources_ifdef(CONFIG_IPM_MHU ipm_mhu.c)
zephyr_library_sources_ifdef(CONFIG_IPM_STM32_IPCC ipm_stm32_ipcc.c)
zephyr_library_sources_ifdef(CONFIG_IPM_NRFX ipm_nrfx_ipc.c)
zephyr_library_sources_ifdef(CONFIG_IPM_STM32_HSEM ipm_stm32_hsem.c)
zephyr_library_sources_ifdef(CONFIG_IPM_CAVS_HOST ipm_cavs_host.c)
zephyr_library_sources_ifdef(CONFIG_IPM_SEDI ipm_sedi.c)
zephyr_library_sources_ifdef(CONFIG_IPM_IVSHMEM ipm_ivshmem.c)
zephyr_library_sources_ifdef(CONFIG_ESP32_SOFT_IPM ipm_esp32.c)
1 change: 0 additions & 1 deletion drivers/ipm/Kconfig
@@ -74,7 +74,6 @@ config IPM_MBOX
source "drivers/ipm/Kconfig.nrfx"
source "drivers/ipm/Kconfig.imx"
source "drivers/ipm/Kconfig.stm32"
source "drivers/ipm/Kconfig.intel_adsp"
source "drivers/ipm/Kconfig.ivshmem"
source "drivers/ipm/Kconfig.sedi"

51 changes: 0 additions & 51 deletions drivers/ipm/Kconfig.intel_adsp

This file was deleted.

8 changes: 8 additions & 0 deletions drivers/ipm/Kconfig.sedi
@@ -9,3 +9,11 @@ config IPM_SEDI
	  This option enables the Intel SEDI IPM(IPC) driver.
	  This driver is simply a shim driver built upon the SEDI
	  bare metal IPC driver in the hal-intel module

config IPM_CALLBACK_ASYNC
	bool "Deliver callbacks asynchronously"
	help
	  When selected, the driver supports "asynchronous" command
	  delivery. Commands will stay active after the ISR returns,
	  until the application expressly "completes" the command
	  later.