👋 The TrustyAI Service is intended to be a hub for all kinds of Responsible AI workflows, such as explainability, drift, and Large Language Model (LLM) evaluation. Designed as a REST server wrapping a core Python library, the TrustyAI Service can operate in a local environment, a Jupyter Notebook, or on Kubernetes.
The service provides the following drift metrics:

- Fourier Maximum Mean Discrepancy (FourierMMD)
- Jensen-Shannon
- Approximate Kolmogorov–Smirnov Test
- Kolmogorov–Smirnov Test (KS-Test)
- Meanshift
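For intuition, the sketch below runs a two-sample Kolmogorov–Smirnov test with scipy, the statistic underlying the KS-Test metric above. It is an illustration only, not the service's implementation; the data and threshold are made up.

```python
# Minimal two-sample KS drift check using scipy: illustrative only,
# NOT the TrustyAI service's implementation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1_000)   # training-time feature values
production = rng.normal(loc=0.5, scale=1.0, size=1_000)  # live feature values (shifted)

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # hypothetical significance threshold
    print(f"Drift detected: KS statistic={statistic:.3f}, p={p_value:.3g}")
```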
It also provides the following fairness metrics:

- Statistical Parity Difference
- Disparate Impact Ratio
- Average Odds Ratio (WIP)
- Average Predictive Value Difference (WIP)
- Individual Consistency (WIP)
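As a further illustration, here is a plain-numpy sketch of Statistical Parity Difference, the difference in favorable-outcome rates between the unprivileged and privileged groups. Again, this is for intuition only and is not the service's implementation; the sample data and group labels are invented.

```python
# Statistical Parity Difference (SPD) sketch:
#   SPD = P(y_hat = favorable | unprivileged) - P(y_hat = favorable | privileged)
# Plain-numpy illustration; NOT the TrustyAI service's implementation.
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, privileged: np.ndarray) -> float:
    """y_pred: binary predictions (1 = favorable); privileged: boolean group mask."""
    rate_privileged = y_pred[privileged].mean()
    rate_unprivileged = y_pred[~privileged].mean()
    return rate_unprivileged - rate_privileged

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
privileged = np.array([True, True, True, True, False, False, False, False])
print(statistical_parity_difference(y_pred, privileged))  # -0.5; 0.0 would mean parity
```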
To install locally:

```bash
uv pip install ".[$EXTRAS]"
```

Or to build the container image:

```bash
podman build -t $IMAGE_NAME --build-arg EXTRAS="$EXTRAS" .
```

Pass these extras as a comma-separated list, e.g., `"mariadb,protobuf"`. The available extras are:
- `protobuf`: To process model inference data from ModelMesh models, you can install with `protobuf` support. Otherwise, only KServe models will be supported.
- `eval`: To enable the Language Model Evaluation servers, install with `eval` support.
- `mariadb`: To enable MariaDB support. (If installing locally, install the MariaDB Connector/C first.)
For example, to install with all extras:

```bash
uv pip install ".[mariadb,protobuf,eval]"
podman build -t $IMAGE_NAME --build-arg EXTRAS="mariadb,protobuf,eval" .
```
To run the service locally:

```bash
uv run uvicorn src.main:app --host 0.0.0.0 --port 8080
```

Or to run the container:

```bash
podman run -p 8080:8080 $IMAGE_NAME
```
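Once the server is up, a quick smoke test is to request the OpenAPI docs page (the endpoint path comes from the documentation section at the end of this README):

```bash
# Should print 200 once the service is ready.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/docs
```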
To run all tests in the project:

```bash
python -m pytest
```

Or with more verbose output:

```bash
python -m pytest -v
```

To run tests with coverage reporting:

```bash
python -m pytest --cov=src
```
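If you prefer a browsable report, pytest-cov (the plugin behind `--cov`) can also emit HTML; a small variant of the command above:

```bash
# Writes an HTML coverage report to htmlcov/index.html.
python -m pytest --cov=src --cov-report=html
```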
To process model inference data from ModelMesh models, install the service with `protobuf` support (see the extras above). Otherwise, only KServe models will be supported.
After installing dependencies, generate Python code from the protobuf definitions:
```bash
# From the project root
bash scripts/generate_protos.sh
```
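For a rough idea of what that step does, protobuf code generation with grpcio-tools typically looks like the sketch below; the `proto/` and output directories here are assumptions, so consult `scripts/generate_protos.sh` for the actual layout.

```bash
# Approximate manual equivalent using grpcio-tools.
# Directory names (proto/, src/generated/) are assumptions, not the script's actual paths.
python -m grpc_tools.protoc \
    -I proto \
    --python_out=src/generated \
    proto/*.proto
```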
Run the tests for the protobuf implementation:
```bash
# From the project root
python -m pytest tests/service/data/test_modelmesh_parser.py -v
```
When the service is running, visit [localhost:8080/docs](http://localhost:8080/docs) to see the OpenAPI documentation!
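If the service follows the usual FastAPI layout (which the `/docs` route suggests, though this is an assumption), the raw OpenAPI schema is also served as JSON:

```bash
# Fetch the raw OpenAPI schema (FastAPI's default path; assumed here).
curl http://localhost:8080/openapi.json
```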