An Advanced, Dynamic AI Conversational Interface for Enterprise Data Platforms.
The Trusted Data Agent represents a paradigm shift in how developers, analysts, and architects interact with complex data ecosystems. It is a sophisticated web application designed not only to showcase AI-powered interaction with a Teradata database but also to serve as a powerful, fully transparent "study buddy" for mastering the integration of Large Language Models (LLMs) with enterprise data.
This solution provides unparalleled, real-time insight into the complete conversational flow between the user, the AI agent, the Teradata Model Context Protocol (MCP) server, and the underlying database, establishing a new standard for clarity and control in AI-driven data analytics.
- Overview: A Superior Approach
- How It Works: Architecture
- Key Features
- Installation and Setup Guide
- Running the Application
- User Guide
- Troubleshooting
- Author & Contributions
The Trusted Data Agent transcends typical data chat applications by placing ultimate control and understanding in the hands of the user. It provides a seamless natural language interface to your Teradata system, empowering you to ask complex questions and receive synthesized, accurate answers without writing a single line of SQL.
Its core superiority lies in its unmatched transparency and dynamic configurability:
- Deep Insight: The Live Status panel is more than a log; it's a real-time window into the AI's mind, revealing its reasoning, tool selection, and the raw data it receives. This makes it an indispensable tool for debugging, learning, and building trust in AI systems.
- Unprecedented Flexibility: Unlike static applications, the Trusted Data Agent allows you to dynamically configure your LLM provider, select specific models, and even edit the core System Prompt that dictates the agent's behavior—all from within the UI.
This combination of power and transparency makes it the definitive tool for anyone serious about developing or deploying enterprise-grade AI data agents.
The application operates on a sophisticated client-server model, ensuring a clean separation of concerns and robust performance.
+-----------+ +-------------------------+ +------------------+ +----------------------+ +------------------+
| | | | | | | | | |
| End User | <--> | Frontend (index.html) | <--> | Backend (Python) | <--> | Large Language Model | <--> | Teradata MCP |
| | | (HTML, JS, CSS) | | (Quart Server) | | (Reasoning Engine) | | Server (Tools) |
| | | | | | | | | |
+-----------+ +-------------------------+ +------------------+ +----------------------+ +------------------+
- Frontend (`index.html`): A sleek, single-page application built with HTML, Tailwind CSS, and vanilla JavaScript. It captures user input and uses Server-Sent Events (SSE) to render real-time updates from the backend.
- Backend (`mcp_web_client.py`): A high-performance asynchronous web server built with Quart. It serves the frontend, manages user sessions, and orchestrates the entire AI workflow.
- Large Language Model (LLM): The reasoning engine. The backend dynamically initializes the connection to the selected LLM provider (e.g., Google) based on user-provided credentials and sends structured prompts to the model's API.
- Teradata MCP Server: The Model Context Protocol (MCP) server acts as the secure, powerful bridge to the database, exposing functionalities as a well-defined API of "tools" for the AI agent.
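The real-time updates flowing from the backend to the Live Status panel travel over Server-Sent Events. The sketch below shows roughly how such a stream could be produced; the function names and event fields are illustrative assumptions, not the application's actual internals.

```python
# Sketch of producing a Server-Sent Events stream for the Live Status panel.
# Function names and event fields here are illustrative assumptions.
import asyncio
import json
from typing import Optional


def format_sse(data: dict, event: Optional[str] = None) -> str:
    """Encode one payload as a Server-Sent Events message."""
    msg = f"data: {json.dumps(data)}\n\n"
    if event:
        msg = f"event: {event}\n{msg}"
    return msg


async def agent_status_stream():
    """Yield the kind of status events the Live Status panel renders.
    In the real application these would come from the LLM workflow."""
    for step in ("thinking", "tool_call", "final_answer"):
        yield format_sse({"status": step}, event="status")
        await asyncio.sleep(0)
```

In a Quart backend, a generator like `agent_status_stream` would typically be wrapped in a `Response` with `content_type="text/event-stream"`, which the browser's `EventSource` API then consumes on the frontend.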
- Dynamic LLM Configuration: Configure your LLM provider, API key, and select from a list of available models directly within the application's UI.
- Live Model Refresh: Fetch an up-to-date list of supported models from your provider with the click of a button.
- System Prompt Editor: Take full control of the agent's behavior. Edit, save, and reset the core system prompt for each model, with changes persisting across sessions.
- Intuitive Conversational UI: Ask questions in plain English to query and analyze your database.
- Complete Transparency: The Live Status panel provides a real-time stream of the agent's thought process, actions, and tool outputs.
- Dynamic Capability Loading: Automatically discovers and displays all available Tools, Prompts, and Resources from the connected MCP Server.
- Rich Data Rendering: Intelligently formats and displays various data types, including query results in interactive tables and SQL DDL in highlighted code blocks.
- Optional Charting Engine: Enable data visualization capabilities to render charts based on query results via the `--charting` runtime flag.
- Persistent Session History: Keeps a record of your conversations, allowing you to switch between different lines of inquiry.
- Python 3.8+ and `pip`.
- Access to a running Teradata MCP Server.
- An API Key from a supported LLM provider. The initial validated provider is Google. You can obtain a Gemini API key from the Google AI Studio.
git clone https://github.com/rgeissen/teradata-trusted-data-agent.git
cd teradata-trusted-data-agent
It is highly recommended to use a Python virtual environment.
- Create and activate a virtual environment:

# For macOS/Linux
python3 -m venv venv
source venv/bin/activate

# For Windows
python -m venv venv
.\venv\Scripts\activate
- Install the required packages from `requirements.txt`:

pip install -r requirements.txt
You can either enter your API key in the UI at runtime or, for convenience during development, create a `.env` file in the project root:
GEMINI_API_KEY="YOUR_GEMINI_API_KEY_HERE"
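A `.env` file in this simple `KEY="VALUE"` form can be read with a few lines of standard-library Python; the loader below is a minimal sketch (the real application may use a dedicated library such as python-dotenv instead).

```python
# Minimal sketch of loading KEY="VALUE" pairs from a .env file into os.environ.
# The real application may use a library such as python-dotenv instead.
import os


def load_env(path: str = ".env") -> dict:
    """Parse a simple .env file, update os.environ, and return the values."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without a key=value shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(values)
    return values
```

Calling `load_env()` early in startup would make `GEMINI_API_KEY` available via `os.environ`, matching how the key is entered in the UI at runtime.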
For standard operation with the certified model (`gemini-1.5-flash-8b-latest`):
python mcp_web_client.py
To enable all discovered models for testing and development purposes, start the server with the `--all-models` flag. This bypasses the certification check and allows you to experiment with a wider range of LLMs.
python mcp_web_client.py --all-models
To enable the data visualization capabilities, start the server with the `--charting` flag. This activates the charting engine configuration in the UI and allows the agent to generate charts from query results.
python mcp_web_client.py --charting
You can also combine flags for a full development environment:
python mcp_web_client.py --all-models --charting
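Flags like these are conventionally handled with `argparse`; a minimal sketch follows, with the flag names taken from the commands above and the parser details assumed rather than copied from the application.

```python
# Sketch of parsing the startup flags shown above with argparse.
# Flag names match the commands above; parser details are assumptions.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Trusted Data Agent web client")
    parser.add_argument(
        "--all-models",
        action="store_true",
        help="Bypass the certification check and list every discovered model",
    )
    parser.add_argument(
        "--charting",
        action="store_true",
        help="Enable the charting engine configuration in the UI",
    )
    return parser
```

With `action="store_true"` both flags default to `False`, so plain `python mcp_web_client.py` runs with only the certified model and no charting, while combining the flags enables the full development environment.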
The first time you launch, a configuration modal will appear.
- MCP Server: Enter the Host, Port, and Path for your running MCP Server.
- LLM Provider: Select your desired provider (currently Google is enabled, with more to be validated based on market demand).
- API Key: Enter the corresponding API Key for the selected provider.
- Model: Click the "Refresh" button to fetch available models. The certified model will be selectable by default.
- Connect and Load: Click the button to validate both connections and load all available capabilities.
- Charting Engine (Optional): If you started the application with the `--charting` flag, the configuration panel for the Charting Engine will be enabled. Enter the connection details for your Chart MCP server to activate data visualization.
- System Prompt (Menu Bar): Once configured, this button becomes active. Click it to open the System Prompt Editor for the currently selected model.
- Capabilities Panel (Top): Browse available Tools and Prompts discovered from the MCP server.
- Chat Window (Center): Your primary conversational area.
- Live Status Panel (Right): Your window into the agent's mind.
- History Panel (Left): Manage and switch between chat sessions.
This powerful feature allows you to fine-tune the agent's core instructions.
- Editing: The text area contains the prompt that will be sent to the LLM at the start of every new session. You can modify it to change the agent's persona, rules, or focus.
- Saving: Click "Save" to store your custom prompt in your browser's local storage. It will be automatically loaded the next time you configure the application with this model.
- Resetting: Click "Reset to Default" to fetch and restore the original, hardcoded system prompt for that model.
Type your question into the input box. The agent will now follow the instructions defined in your active system prompt.
- Stale UI on Startup: If the configuration dialog doesn't appear, check the browser's developer console for JavaScript errors. Ensure your `index.html` file is complete and up-to-date.
- Connection Errors: Double-check all host, port, path, and API key information. Ensure no firewalls are blocking the connection.
- "Failed to fetch models": This usually indicates an invalid API key or a network issue preventing connection to the provider's API.
This project is licensed under the GNU Affero General Public License v3.0. The full license text is available in the `LICENSE` file in the root of this repository.
Under the AGPLv3, you are free to use, modify, and distribute this software. However, if you run a modified version of this software on a network server and allow other users to interact with it, you must also make the source code of your modified version available to those users.
- Author/Initiator: Rainer Geissendoerfer, World Wide Data Architecture, Teradata.
- Source Code & Contributions: The Trusted Data Agent is licensed under the GNU Affero General Public License v3.0. Contributions are highly welcome. Please visit the main Git repository to report issues or submit pull requests.
- Git Repository: https://github.com/rgeissen/teradata-trusted-data-agent.git