Let your local AI be your creative muse.
Idea Weaver is a multi-agentic application designed to be your creative partner. It leverages a team of specialized AI agents, powered by Large Language Models (LLMs) via Ollama or the Google Gemini API (selectable through an environment variable), to transform a simple story premise into a well-structured narrative concept. This multi-agent architecture keeps the pipeline modular: each agent brings specialized expertise to a specific stage of story development, which improves robustness and scalability. The entire creative process is orchestrated by CrewAI and is fully observable through LangSmith, giving you a transparent look into the AI's reasoning.
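As a rough illustration, the provider switch might be implemented in a loader along these lines (a minimal sketch assuming CrewAI's `LLM` wrapper; the actual contents of `backend/utils/llm_loader.py` may differ):

```python
# Illustrative sketch of an LLM loader, not the project's actual implementation.
import os

from crewai import LLM


def load_llm() -> LLM:
    """Return an LLM client based on the LLM_PROVIDER environment variable."""
    provider = os.getenv("LLM_PROVIDER", "OLLAMA").upper()

    if provider == "OLLAMA":
        # "ollama/<model>" is routed to the local Ollama server.
        return LLM(
            model=f"ollama/{os.getenv('OLLAMA_MODEL', 'llama3')}",
            base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        )

    if provider == "GEMINI":
        # "gemini/<model>" targets the Google Gemini API.
        return LLM(
            model=f"gemini/{os.getenv('GEMINI_MODEL', 'gemini-1.5-flash-latest')}",
            api_key=os.getenv("GEMINI_API_KEY"),
        )

    raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
```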
The application employs a sophisticated multi-agent pipeline. The Idea Weaver Master Agent first engages the user to collect and validate all necessary inputs. Once collected, these inputs are then passed to a sequential pipeline where specialized agents build upon each other's work, creating a comprehensive story scaffold.
```mermaid
graph TD
    A[User Input] --> B(Idea Weaver Master Agent);
    B --> C{Collected & Validated Inputs};
    C --> D{Story Generation Trigger};
    D --> E(World Builder Agent);
    D --> F(Character Name Generator Agent);
    D --> G(Title Generator Agent);
    E --> H{World Description};
    F --> I{Generated Character Names};
    G --> J{Generated Title};
    H & I --> K(Character Creator Agent);
    K --> L{Character Profiles};
    H & L --> M(Narrative Nudger Agent);
    M --> N{Plot Twist};
    H & L & N --> O(Summary Writer Agent);
    O --> P{Final Story Summary};
    P & J --> Q[Output: Markdown File];
```
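In CrewAI terms, the downstream portion of this pipeline could be wired up roughly as follows (a simplified sketch with two of the agents and abbreviated prompts; the roles, goals, and task descriptions shown here are illustrative, not the project's actual prompt files):

```python
from crewai import Agent, Crew, Process, Task

from backend.utils.llm_loader import load_llm  # hypothetical helper, see the sketch above

llm = load_llm()

world_builder = Agent(
    role="World Builder",
    goal="Develop a rich setting for the story premise.",
    backstory="An expert in world-building for fiction.",
    llm=llm,
)
character_creator = Agent(
    role="Character Creator",
    goal="Create character profiles grounded in the world description.",
    backstory="A character designer with a knack for memorable quirks.",
    llm=llm,
)

build_world = Task(
    description="Describe the world for this premise: {premise}",
    expected_output="A concise world description.",
    agent=world_builder,
)
create_characters = Task(
    description="Create profiles for these characters: {character_names}",
    expected_output="One short profile per character.",
    agent=character_creator,
    context=[build_world],  # receives the world description as context
)

crew = Crew(
    agents=[world_builder, character_creator],
    tasks=[build_world, create_characters],
    process=Process.sequential,
)
result = crew.kickoff(inputs={"premise": "...", "character_names": "..."})
```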
This application does not write a complete, finished story. Instead, it acts as a creative partner to rapidly develop a detailed story concept or scaffold from a simple idea.
Think of it as an automated, high-speed brainstorming session with a team of AI specialists (agents). Its main goal is to take you from a single spark of an idea (e.g., "a hobbit and a wizard on a journey") to a structured, well-defined plan that you, the writer, can then use to write the actual story.
- Initial Interaction with the Master Agent (Human-in-the-Loop): The process begins with you interacting with the Idea Weaver Master Agent through an intuitive chat interface. This agent intelligently guides you through providing all necessary details: a basic premise, target audience, title choice (generate or provide your own), the number of characters, and their names (which can be provided sequentially or generated by AI).
- Master Agent Orchestration & Input Validation: The Master Agent is responsible for understanding your intent and guiding the conversation. It uses a stateful, tool-based approach to ask clarifying questions, validate your inputs (e.g., ensuring character counts are correct), and confirm all required information is collected in a logical order. This keeps the interaction both natural and error-resistant (see the sketch after this list).
- Core Story Generation: Once all inputs are collected and validated by the Master Agent, the application assembles a team of specialized AI agents (World Builder, Character Creator, Narrative Nudger, Summary Writer). These agents then collaboratively brainstorm and develop the story concept based on your provided details.
- The Final Output: The application compiles all the generated outputs into a single Markdown file, which serves as your story blueprint, and also displays a clean, formatted version in the UI.
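As a rough illustration of the state the Master Agent tracks during that stateful, tool-based flow, the collected inputs could be modeled as a validated data class (a minimal sketch using Pydantic; the real `master_agent_tools.py` and state handling may be structured differently):

```python
from typing import List, Optional

from pydantic import BaseModel, Field, model_validator


class StoryInputs(BaseModel):
    """State the Master Agent fills in over the course of the conversation."""

    premise: Optional[str] = None
    target_audience: Optional[str] = None
    generate_title: Optional[bool] = None   # True -> let the Title Generator Agent pick one
    title: Optional[str] = None             # used when the user supplies their own title
    num_characters: Optional[int] = None
    character_names: List[str] = Field(default_factory=list)

    @model_validator(mode="after")
    def check_character_count(self) -> "StoryInputs":
        # Only validate once both pieces of information have been collected.
        if self.num_characters is not None and len(self.character_names) > self.num_characters:
            raise ValueError("More character names were given than the declared character count.")
        return self

    def is_complete(self) -> bool:
        """True once every required field has been gathered."""
        names_ready = (
            self.num_characters is not None
            and len(self.character_names) == self.num_characters
        )
        title_ready = self.generate_title is True or bool(self.title)
        return bool(self.premise and self.target_audience and title_ready and names_ready)
```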
Idea Weaver features a user-friendly, interactive chat interface built with Streamlit. This UI provides a conversational experience, allowing you to easily input your story details and receive the generated concepts in a clear, readable format.
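At its core, the chat loop can be as small as the sketch below (it assumes a hypothetical `/chat` endpoint on the FastAPI backend; the actual `frontend/app.py`, `ui.py`, and `api_client.py` are more elaborate):

```python
# Illustrative Streamlit chat loop, not the project's actual frontend code.
import requests
import streamlit as st

BACKEND_URL = "http://localhost:8000/chat"  # hypothetical endpoint

st.title("Idea Weaver")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Describe your story idea..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Send the full history to the backend and show the Master Agent's reply.
    response = requests.post(BACKEND_URL, json={"messages": st.session_state.messages}, timeout=300)
    reply = response.json().get("reply", "Sorry, something went wrong.")
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)
```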
- Intelligent Input Handling: Smartly handles user input for titles and character names, including sequential input for multiple characters, orchestrated by the Master Agent.
- Robust Error Handling: Gracefully manages backend errors, displaying user-friendly messages without exposing code details.
- Clean Output Formatting: Provides separate, clean text formatting for the UI output, distinct from the Markdown file.
- Master Agent Orchestration: A dedicated agent manages the entire input collection and validation process, making the interaction more natural and robust.
- Stateful Conversational Management: The Master Agent now uses explicit state tracking to guide the conversation, ensuring a more robust and logical flow of questions and validations.
- World Builder → builds out rich world details
- Character Creator → generates character archetypes and quirks
- AI-Generated Title Option → provides an option to have the AI generate a story title
- Summary Writer → writes a short, engaging summary of the story
- LangSmith Tracing → logs LLM interactions for observability
- Local file output → saves final result using story title (see the sketch after this list)
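That file-output step might boil down to a small helper like this (an illustrative sketch; the real `save_to_markdown.py` may differ in section layout and filename handling):

```python
import re
from pathlib import Path


def save_story_markdown(title: str, sections: dict[str, str], output_dir: str = "outputs") -> Path:
    """Write the generated story scaffold to outputs/<slugified-title>.md."""
    # Turn the title into a safe filename, e.g. "A Hobbit's Journey" -> "a-hobbit-s-journey.md".
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") or "story"
    path = Path(output_dir) / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)

    lines = [f"# {title}", ""]
    for heading, body in sections.items():
        lines += [f"## {heading}", "", body.strip(), ""]

    path.write_text("\n".join(lines), encoding="utf-8")
    return path
```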
```text
idea-weaver/
├── backend/
│   ├── agents/
│   │   ├── character_creator.py
│   │   ├── character_name_generator.py
│   │   ├── idea_weaver_master.py
│   │   ├── narrative_nudger.py
│   │   ├── summary_writer.py
│   │   ├── title_generator.py
│   │   └── world_builder.py
│   ├── api.py
│   ├── main.py
│   ├── prompts/
│   │   ├── character_creator_prompt.py
│   │   ├── character_name_generator_prompt.py
│   │   ├── master_agent_follow_up_prompt.py
│   │   ├── master_agent_initial_prompt.py
│   │   ├── narrative_nudger_prompt.py
│   │   ├── story_summary_prompt.py
│   │   ├── title_generator_prompt.py
│   │   └── world_builder_prompt.py
│   └── utils/
│       ├── llm_loader.py
│       ├── markdown_builder.py
│       ├── master_agent_tools.py
│       ├── save_to_markdown.py
│       └── startup_checker.py
├── frontend/
│   ├── api_client.py
│   ├── app.py
│   └── ui.py
├── outputs/
│   └── *.md
```
Create a `.env` file in the root directory and add your LangSmith and LLM provider details:
```env
# LangSmith Configuration
LANGSMITH_TRACING_V2=<true_or_false>
LANGSMITH_ENDPOINT=<YOUR_LANGSMITH_ENDPOINT>
LANGSMITH_API_KEY=<YOUR_LANGSMITH_API_KEY>
LANGSMITH_PROJECT=<YOUR_LANGSMITH_PROJECT_NAME>

# LLM Provider Configuration
# Set LLM_PROVIDER to either "OLLAMA" or "GEMINI"
LLM_PROVIDER="OLLAMA" # or "GEMINI"

# --- Ollama Configuration (if LLM_PROVIDER="OLLAMA") ---
OLLAMA_BASE_URL=<YOUR_OLLAMA_BASE_URL>
OLLAMA_MODEL=<YOUR_OLLAMA_MODEL_NAME>

# --- Gemini API Configuration (if LLM_PROVIDER="GEMINI") ---
GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
GEMINI_MODEL=<YOUR_GEMINI_MODEL_NAME> # e.g., "gemini-pro", "gemini-1.5-pro-latest", "gemini-1.5-flash-latest"
```
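A startup check along these lines can fail fast when the configuration is incomplete (an illustrative sketch only; the real `startup_checker.py` may verify more than this):

```python
import os


def check_required_env() -> None:
    """Raise early if the variables required by the selected provider are missing."""
    provider = os.getenv("LLM_PROVIDER", "").upper()
    required = {
        "OLLAMA": ["OLLAMA_BASE_URL", "OLLAMA_MODEL"],
        "GEMINI": ["GEMINI_API_KEY", "GEMINI_MODEL"],
    }.get(provider)

    if required is None:
        raise RuntimeError('LLM_PROVIDER must be set to "OLLAMA" or "GEMINI".')

    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```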
- Create a virtual environment using `uv`: `uv venv`
- Activate the virtual environment: `source .venv/bin/activate`
- Install the project in editable mode: `uv pip install -e .`
In one terminal, run the following command to start the backend server:

```bash
uvicorn backend.main:app --reload
```

In a second terminal, run the following command to start the frontend application:

```bash
streamlit run frontend/app.py
```