This demo showcases AI-powered introspection of network logs. Context for the LLM is built by executing MongoDB aggregation pipelines, which are themselves defined by the same LLM in the first stage of processing each question in the chat.

To run the demo, set the following environment variables:
export OPENAI_API_KEY='<key>'
export MONGODB_TELCO_CHAT='<connection string>'
export MONGODB_TELCO_CHAT_DATABASE='<database name>'
export MONGODB_TELCO_CHAT_COLLECTION='<collection name>'
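These variables are read by the app at startup. Here is a minimal sketch of how they might be consumed, assuming the openai and pymongo client libraries; the variable names match the exports above, but the rest is illustrative rather than the demo's actual wiring:

import os

from openai import OpenAI
from pymongo import MongoClient

# The OpenAI client picks up OPENAI_API_KEY from the environment.
openai_client = OpenAI()

# Connect to the collection holding the web server logs.
logs = MongoClient(os.environ["MONGODB_TELCO_CHAT"])[
    os.environ["MONGODB_TELCO_CHAT_DATABASE"]
][os.environ["MONGODB_TELCO_CHAT_COLLECTION"]]

# Quick sanity check that the connection works.
print(logs.estimated_document_count())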
The network (web server) logs I am using have the following format; it is currently hardcoded into the demo:
{
  _id: ObjectId('668551649101ed2d266dd505'),
  timestamp: ISODate('2024-07-03T13:25:56.391Z'),
  path: '/backstage',
  ip: '104.30.134.186',
  city: 'Copenhagen',
  country: 'DK'
}
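To give a sense of what the LLM produces in that first stage, here is an illustrative pipeline (not taken from the demo) for a question like "which countries sent the most traffic in the last 24 hours?", run against documents of the shape above:

import os
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

logs = MongoClient(os.environ["MONGODB_TELCO_CHAT"])[
    os.environ["MONGODB_TELCO_CHAT_DATABASE"]
][os.environ["MONGODB_TELCO_CHAT_COLLECTION"]]

pipeline = [
    # keep only the last 24 hours of traffic
    {"$match": {"timestamp": {"$gte": datetime.now(timezone.utc) - timedelta(hours=24)}}},
    # count requests per country
    {"$group": {"_id": "$country", "requests": {"$sum": 1}}},
    # top five countries by request count
    {"$sort": {"requests": -1}},
    {"$limit": 5},
]

for row in logs.aggregate(pipeline):
    print(row["_id"], row["requests"])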
To install the dependencies and run the app (the Homebrew step assumes macOS):

brew install python@3.11
export PATH="$(brew --prefix)/opt/python@3.11/libexec/bin:$PATH"
python3 -m venv <dir>
cd <dir>; source ./bin/activate
pip install -r telco-ai-ops/requirements.txt
cd telco-ai-ops
python app.py
If all goes well, you can access the app in your browser at http://localhost:9494.
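Under the hood, each chat question goes through the two-stage flow described at the top: the LLM first writes an aggregation pipeline, the app executes it, and the results come back to the LLM as context for the final answer. The following is a minimal sketch of that flow, assuming the OpenAI chat completions API; the prompts, model name, and function names are illustrative, not the demo's actual code:

import json
import os

from openai import OpenAI
from pymongo import MongoClient

llm = OpenAI()
logs = MongoClient(os.environ["MONGODB_TELCO_CHAT"])[
    os.environ["MONGODB_TELCO_CHAT_DATABASE"]
][os.environ["MONGODB_TELCO_CHAT_COLLECTION"]]

def ask(question: str) -> str:
    # Stage 1: have the LLM translate the question into an aggregation pipeline.
    stage1 = llm.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": (
                "Documents have fields: timestamp, path, ip, city, country. "
                "Return only a JSON array of MongoDB aggregation stages that "
                "collects the data needed to answer the user's question."
            )},
            {"role": "user", "content": question},
        ],
    )
    pipeline = json.loads(stage1.choices[0].message.content)

    # Execute the pipeline; its output becomes the context for the second call.
    context = list(logs.aggregate(pipeline))

    # Stage 2: answer the question using the pipeline results as context.
    stage2 = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using these log query results: "
                + json.dumps(context, default=str)},
            {"role": "user", "content": question},
        ],
    )
    return stage2.choices[0].message.content

print(ask("Which country generated the most requests in the last 24 hours?"))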
I want to thank Genevieve Broadhead and Boris Bialek for giving me the opportunity to build this demo - it is so much fun!