XAIport is designed to deliver interpretable AI model predictions through a microservice architecture, allowing users to better understand the underlying decision-making processes of AI models. The architecture includes a User Interface, a Coordination Center, Core Microservices (Data Processing, AI Model, XAI Method, and Evaluation Services), and a Data Persistence layer.
- Python 3.8 or later
- FastAPI
- httpx
- uvicorn
- Dependencies as listed in `requirements.txt`
- Environment Setup: Ensure Python is installed on your system. It's recommended to use a virtual environment for Python projects:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install Dependencies: Install the necessary Python libraries with pip:

  ```shell
  pip install -r requirements.txt
  ```
- Clone the Repository: Clone the repository to get the latest codebase:

  ```shell
  git clone https://github.com/ZeruiW/XAIport.git
  cd XAIport
  ```
Before running the system, configure all necessary details such as API endpoints, database connections, and other service-related settings in JSON format. Adjust the `config.json` file as needed.

Example `config.json`:
```json
{
  "upload_config": {
    "server_url": "http://localhost:8000",
    "datasets": {
      "dataset1": {
        "local_zip_path": "/path/to/dataset1.zip"
      }
    }
  },
  "model_config": {
    "base_url": "http://model-service-url",
    "models": {
      "model1": {
        "model_name": "ResNet50",
        "perturbation_type": "noise",
        "severity": 2
      }
    }
  }
}
```
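A small loader can read this file and fail fast if a required section is missing. The key names below match the example above; the validation rules are an illustrative assumption, not project code.

```python
# Sketch of loading the example config.json; the required-section
# check is an assumption for illustration.
import json

def load_config(path: str) -> dict:
    """Load the JSON configuration and verify the top-level sections."""
    with open(path) as f:
        config = json.load(f)
    for section in ("upload_config", "model_config"):
        if section not in config:
            raise KeyError(f"Missing required config section: {section}")
    return config
```

Usage: `config = load_config("config.json")`, then e.g. `config["model_config"]["base_url"]` to reach the model service address.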
Run the FastAPI application using Uvicorn as an ASGI server with the following command:
```shell
uvicorn main:app --host 0.0.0.0 --port 8000
```
The system provides several RESTful APIs to support operations such as data upload, model prediction, XAI method execution, and evaluation tasks. Here are some examples of how to use these APIs:
- Upload Dataset:

  ```shell
  curl -X POST "http://localhost:8000/upload-dataset/dataset1" -F "file=@/path/to/dataset.zip"
  ```
- Execute XAI Task:

  ```shell
  curl -X POST "http://localhost:8000/cam_xai" -H "Content-Type: application/json" -d '{"dataset_id": "dataset1", "algorithms": ["GradCAM", "SmoothGrad"]}'
  ```
Configure appropriate logging policies to record key operations and errors within the system. This can be achieved by setting up Python's logging module to handle different log levels and outputs.
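One possible setup with the standard `logging` module: console output for interactive use plus a rotating file for persistent records, configured once at service start-up. File name, size limits, and format are example values.

```python
# Example logging setup for the service; paths and levels are
# illustrative choices, not project defaults.
import logging
from logging.handlers import RotatingFileHandler

def setup_logging(level: int = logging.INFO) -> logging.Logger:
    logger = logging.getLogger("xaiport")
    logger.setLevel(level)
    fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")

    # Console handler for interactive runs.
    console = logging.StreamHandler()
    console.setFormatter(fmt)
    logger.addHandler(console)

    # Rotating file handler: up to 5 MB per file, three backups kept.
    file_handler = RotatingFileHandler("xaiport.log",
                                       maxBytes=5_000_000, backupCount=3)
    file_handler.setFormatter(fmt)
    logger.addHandler(file_handler)
    return logger
```

Call `setup_logging()` once at start-up, then use `logging.getLogger("xaiport")` (or child loggers such as `xaiport.upload`) throughout the services.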
It is recommended to use monitoring tools like Prometheus and Grafana to track system performance and health indicators.
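For the Prometheus side, a service can expose metrics with the `prometheus_client` package (not among the dependencies listed above; install it separately). The metric name, label, and port below are assumptions for illustration.

```python
# Illustrative Prometheus instrumentation using prometheus_client.
from prometheus_client import Counter, start_http_server

# Count handled XAI tasks, labelled by outcome; the metric name is
# an assumption, not one the project defines.
TASKS_TOTAL = Counter("xaiport_tasks_total", "XAI tasks handled", ["status"])

def record_task(status: str) -> None:
    TASKS_TOTAL.labels(status=status).inc()

if __name__ == "__main__":
    # Expose metrics on :9100/metrics for Prometheus to scrape;
    # Grafana can then chart them from the Prometheus data source.
    start_http_server(9100)
    record_task("success")
```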
- Check that the target server is reachable and that the file paths in the configuration file are correctly specified.
- To change API endpoints, modify them directly in the JSON configuration file and restart the service to apply the changes.