AI File Processor is a web-based tool that uses large language models (LLMs) to generate Python code for processing uploaded files. It features a secure, containerized execution environment and supports multi-file workflows.
- **LLM-Powered Code Generation**: Processes files using Python code generated by LLMs. A lightweight Retrieval-Augmented Generation (RAG) approach provides context to the model by including filenames and file previews (e.g., the first few lines of text documents).
- **Secure, Isolated Execution**: Each request is executed in a separate Podman container to sandbox generated code and enhance security.
- **Multi-File Input & Output**: Accepts multiple files in a single prompt and returns results either as individual files or as a .zip archive.
- **Retry Mechanism**: Automatically retries code generation if the model fails to produce valid Python (default: 2 retries).
- **Modern Tech Stack**: Built with a React frontend and a Python/Django backend.
- **Flexible LLM Backend**: Uses Groq by default for LLM requests, but can easily be configured to use OpenAI or other providers.
- **Optional Dependency Detection**: Includes support (disabled by default) for inferring required Python libraries from the generated code to improve automation.
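The file-preview (RAG) approach described above can be sketched as follows. This is a minimal illustration, not the project's actual code; the function name, preview length, and prompt layout are assumptions:

```python
from itertools import islice
from pathlib import Path

PREVIEW_LINES = 5  # hypothetical preview length; the real tool may differ


def build_context(paths):
    """Assemble an LLM prompt context from filenames plus short text previews."""
    parts = []
    for path in paths:
        p = Path(path)
        entry = f"File: {p.name}"
        try:
            # Preview only text-decodable files; others are listed by name alone.
            with p.open("r", encoding="utf-8") as f:
                preview = [line.rstrip("\n") for line in islice(f, PREVIEW_LINES)]
            if preview:
                entry += "\nPreview:\n" + "\n".join(preview)
        except (UnicodeDecodeError, OSError):
            pass
        parts.append(entry)
    return "\n\n".join(parts)
```

Including only names and a few lines keeps the prompt small while still giving the model enough structure (headers, delimiters, column names) to generate sensible processing code.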
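The retry mechanism described above amounts to a validate-and-retry loop. A minimal sketch, assuming the validity check is a Python syntax check (the function name and check are illustrative, not the project's actual implementation):

```python
import ast


def generate_with_retries(generate, prompt, retries=2):
    """Call an LLM code generator; retry while the output is not valid Python.

    `generate` is any callable returning a code string; `retries=2` mirrors
    the default mentioned in the feature list.
    """
    last_error = None
    for _ in range(retries + 1):
        code = generate(prompt)
        try:
            ast.parse(code)  # syntax check only; the code is not executed here
            return code
        except SyntaxError as err:
            last_error = err
    raise ValueError(f"no valid Python after {retries} retries") from last_error
```

Parsing with `ast.parse` rejects syntactically broken output before it ever reaches the sandbox, so a retry costs only one extra LLM call rather than a failed container run.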
The installation instructions below were tested on Ubuntu 24 with Python 3.12; the exact steps may differ on other systems.
Navigate to the backend directory, then:

- Generate a secret key for Django and put it into `backend/adp/settings.py`:

  ```sh
  python -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())'
  ```

- Put your Groq API key in `backend/adp/settings.py`
- Install Podman:

  ```sh
  sudo apt install podman
  ```

- Create a virtual environment:

  ```sh
  python3 -m venv venv
  ```

- Activate the virtual environment:

  ```sh
  source venv/bin/activate
  ```

- Install the requirements:

  ```sh
  pip install -r requirements.txt
  ```

- Run the server:

  ```sh
  python manage.py runserver
  ```
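The two `settings.py` steps above add plain module-level variables. A minimal sketch of the relevant excerpt; the Groq variable name is an assumption, so check the project's `backend/adp/settings.py` for the actual names:

```python
# backend/adp/settings.py (excerpt; GROQ_API_KEY is an assumed name)
SECRET_KEY = "paste-the-generated-secret-key-here"
GROQ_API_KEY = "paste-your-groq-api-key-here"
```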
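Podman, installed above, provides the per-request sandbox from the feature list. The isolation can be sketched as a single `podman run` invocation; the image choice, resource limits, and mount path are illustrative assumptions, not the project's actual configuration:

```sh
# One throwaway container per request: no network, capped memory/CPU,
# auto-removed on exit, with only the job directory mounted inside.
podman run --rm --network=none --memory=512m --cpus=1 \
  -v ./job:/job:Z \
  docker.io/library/python:3.12-slim \
  python /job/generated_script.py
```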
Navigate to the frontend directory, then:

- Install dependencies:

  ```sh
  npm install
  ```

- Run the frontend:

  ```sh
  npm start
  ```