Create a shared network (for use between different docker compose instances)
docker network create shared-network
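To confirm the network exists and see which containers have joined it, the standard Docker CLI commands below can help; other Compose projects can then attach to it by declaring shared-network as an external network in their own compose files.

```bash
# Confirm the shared network was created
docker network ls --filter name=shared-network

# Show details, including which containers are currently attached
docker network inspect shared-network
```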
The Self-hosted AI Starter Kit is an open-source Docker Compose template designed to quickly initialize a comprehensive local AI and low-code development environment. It is curated by n8n (https://github.com/n8n-io).
> [!TIP]
> Read the announcement
✅ Self-hosted n8n - Low-code platform with over 400 integrations and advanced AI components
✅ Qdrant - Open-source, high-performance vector store with a comprehensive API
✅ PostgreSQL - Workhorse of the Data Engineering world, handles large amounts of data safely.
⭐️ AI Agents for scheduling appointments
⭐️ Summarize Company PDFs securely without data leaks
⭐️ Smarter Slack Bots for enhanced company communications and IT operations
⭐️ Private Financial Document Analysis at minimal cost
The core of the Self-hosted AI Starter Kit is a Docker Compose file, pre-configured with network and storage settings, minimizing the need for additional installations. After completing the installation steps above, simply follow the steps below to get started.
- Open http://localhost:5678/ in your browser to set up n8n. You’ll only have to do this once.
- Open the included workflow: http://localhost:5678/workflow/srOnR8PAY3u4RSwb
- Click the Chat button at the bottom of the canvas to start running the workflow.
- If this is the first time you’re running the workflow, you may need to wait until Ollama finishes downloading Llama3.2. You can inspect the docker console logs to check on the progress.
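One way to follow that download is to tail the container logs. This is a minimal sketch; narrowing to a single service assumes it is named ollama in the Compose file, so adjust to the actual service name if it differs.

```bash
# Stream logs for all services and watch for the model download to complete
docker compose logs -f

# Or narrow the output to the Ollama service (name assumed to be "ollama")
docker compose logs -f ollama
```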
To open n8n at any time, visit http://localhost:5678/ in your browser.
With your n8n instance, you’ll have access to over 400 integrations and a suite of basic and advanced AI nodes such as AI Agent, Text classifier, and Information Extractor nodes. To keep everything local, just remember to use the Ollama node for your language model and Qdrant as your vector store.
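To sanity-check that the local services are reachable, you can hit their default HTTP ports from the host. This is a rough sketch and assumes the Compose file publishes Ollama on 11434 and Qdrant on 6333 (the upstream defaults).

```bash
# List the models Ollama has downloaded locally (default port 11434)
curl http://localhost:11434/api/tags

# List the collections stored in Qdrant (default REST port 6333)
curl http://localhost:6333/collections
```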
> [!NOTE]
> This starter kit is designed to help you get started with self-hosted AI workflows. While it’s not fully optimized for production environments, it combines robust components that work well together for proof-of-concept projects. You can customize it to meet your specific needs.
To update all containers to their latest versions, run:
docker compose pull
docker compose create && docker compose up -d
n8n is full of useful content for getting started quickly with its AI concepts and nodes. If you run into an issue, go to the n8n support page.
- AI agents for developers: from theory to practice with n8n
- Tutorial: Build an AI workflow in n8n
- Langchain Concepts in n8n
- Demonstration of key differences between agents and chains
- What are vector databases?
For more AI workflow ideas, visit the official n8n AI template gallery. From each workflow, select the Use workflow button to automatically import the workflow into your local n8n instance.
- AI Agent Chat
- AI chat with any data source (using the n8n workflow tool)
- Chat with OpenAI Assistant (by adding a memory)
- Use an open-source LLM (via Hugging Face)
- Chat with PDF docs using AI (quoting sources)
- AI agent that can scrape webpages
- Tax Code Assistant
- Breakdown Documents into Study Notes with MistralAI and Qdrant
- Financial Documents Assistant using Qdrant and Mistral.ai
- Recipe Recommendations with Qdrant and Mistral
The self-hosted AI starter kit will create a shared folder (by default, located in the same directory) which is mounted to the n8n container and allows n8n to access files on disk. Within the n8n container, this folder is located at /data/shared; this is the path you’ll need to use in nodes that interact with the local filesystem.
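As a quick illustration of the mapping (a sketch assuming the default host folder is named shared and sits next to the compose file; adjust the path if you changed the mount):

```bash
# Drop a file into the shared folder on the host...
echo "hello from the host" > ./shared/test.txt

# ...then reference it inside n8n nodes via the container-side path:
#   /data/shared/test.txt
```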
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Join the conversation in the n8n Forum.