👋 The TrustyAI Service is intended to be a hub for all kinds of Responsible AI workflows, such as explainability, drift detection, and Large Language Model (LLM) evaluation. Designed as a REST server wrapping a core Python library, it can run in a local environment, in a Jupyter Notebook, or on Kubernetes.
Drift metrics (a small conceptual sketch follows this list):

- Fourier Maximum Mean Discrepancy (FourierMMD)
- Jensen-Shannon
- Approximate Kolmogorov–Smirnov Test
- Kolmogorov–Smirnov Test (KS-Test)
- Meanshift
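To make the intent of these metrics concrete, here is a minimal standalone sketch. It does not use the TrustyAI service or its API at all; it simply calls `numpy` and `scipy` directly (an assumption on my part) to show what comparisons like the KS test and Jensen-Shannon measure on a reference sample versus a shifted production sample:

```python
# Standalone illustration of drift detection on a single numeric feature,
# using numpy/scipy directly rather than the TrustyAI service itself.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time data
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted live data

# Kolmogorov–Smirnov test: a small p-value suggests the distributions differ.
res = ks_2samp(reference, production)
print(f"KS statistic={res.statistic:.3f}, p-value={res.pvalue:.3g}")

# Jensen-Shannon distance between the binned (histogram) distributions.
bins = np.histogram_bin_edges(np.concatenate([reference, production]), bins=30)
p, _ = np.histogram(reference, bins=bins, density=True)
q, _ = np.histogram(production, bins=bins, density=True)
print(f"Jensen-Shannon distance={jensenshannon(p, q):.3f}")
```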
Fairness metrics (a small conceptual sketch follows this list):

- Statistical Parity Difference
- Disparate Impact Ratio
- Average Odds Ratio (WIP)
- Average Predictive Value Difference (WIP)
- Individual Consistency (WIP)
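As an illustration of what the first two fairness metrics measure, here is a hand-rolled sketch on toy data using plain `numpy` (again, not the service's own implementation): statistical parity difference is the difference in favorable-outcome rates between the unprivileged and privileged groups, and disparate impact ratio is their ratio.

```python
# Conceptual illustration of group fairness metrics on binary predictions.
import numpy as np

# 1 = favorable model outcome; group: 1 = privileged, 0 = unprivileged (toy data)
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

p_priv = y_pred[group == 1].mean()    # favorable rate, privileged group
p_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged group

spd = p_unpriv - p_priv   # Statistical Parity Difference (0 is ideal)
dir_ = p_unpriv / p_priv  # Disparate Impact Ratio (1 is ideal)
print(f"SPD={spd:.2f}, DIR={dir_:.2f}")
```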
To install locally:

```bash
uv pip install ".[$EXTRAS]"
```

To build the container image:

```bash
podman build -t $IMAGE_NAME --build-arg EXTRAS="$EXTRAS" .
```

The following extras are available; pass them as a comma-separated list, e.g., "mariadb,protobuf":
- `protobuf`: to process model inference data from ModelMesh models, install with `protobuf` support. Otherwise, only KServe models will be supported.
- `eval`: to enable the Language Model Evaluation servers, install with `eval` support.
- `mariadb`: to enable MariaDB support. (If installing locally, install the MariaDB Connector/C first.)
uv pip install ".[mariadb,protobuf,eval]"
podman build -t $IMAGE_NAME --build-arg EXTRAS="mariadb,protobuf,eval" .uv run uvicorn src.main --host 0.0.0.0 --port 8080podman run -t $IMAGE_NAME -p 8080:8080 .The service supports TLS encryption and automatically detects certificates at startup:
- With TLS certificates: Runs on port 4443 (HTTPS)
- Without TLS certificates: Runs on port 8080 (HTTP)
Certificate locations (configurable via environment variables):
- Certificate: `/etc/tls/internal/tls.crt` (or `TLS_CERT_FILE`)
- Private key: `/etc/tls/internal/tls.key` (or `TLS_KEY_FILE`)
Environment variables:
- `TLS_CERT_FILE`: path to the TLS certificate file
- `TLS_KEY_FILE`: path to the TLS private key file
- `SSL_PORT`: HTTPS port (default: 4443)
- `HTTP_PORT`: HTTP port (default: 8080)
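As a quick way to check which of the two modes the service came up in, here is a small probe sketch. It assumes the default ports and certificate path above, a self-signed certificate valid for localhost, and the `requests` library, none of which are mandated by the service itself:

```python
# Probe the HTTPS port first, then fall back to plain HTTP (default ports above).
import requests

try:
    # Assumes a self-signed certificate, so the cert file itself is used as the CA bundle.
    resp = requests.get("https://localhost:4443/docs",
                        verify="/etc/tls/internal/tls.crt", timeout=5)
    print("running with TLS:", resp.status_code)
except requests.exceptions.ConnectionError:
    resp = requests.get("http://localhost:8080/docs", timeout=5)
    print("running without TLS:", resp.status_code)
```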
The TLS implementation is fully compatible with the TrustyAI operator for seamless Kubernetes deployment.
To run all tests in the project:
```bash
python -m pytest
```

Or with more verbose output:

```bash
python -m pytest -v
```

To run tests with coverage reporting:

```bash
python -m pytest --cov=src
```

To process model inference data from ModelMesh models, you can install `protobuf` support. Otherwise, only KServe models will be supported.
After installing dependencies, generate Python code from the protobuf definitions:
```bash
# From the project root
bash scripts/generate_protos.sh
```

Run the tests for the protobuf implementation:

```bash
# From the project root
python -m pytest tests/service/data/test_modelmesh_parser.py -v
```

When the service is running, visit localhost:8080/docs to see the OpenAPI documentation!