Making the AI write with you, not for you.
An AI-powered writing assistant that transforms how you create structured documents. This application combines intelligent content generation with intuitive document management, enabling writers to craft professional documents with contextually aware AI assistance that understands your style, audience, and objectives.
Built on the TalkPipe framework, this tool helps you:
- Break writer's block: Generate initial drafts and ideas for any section
- Maintain consistency: AI understands your document's context, style, and tone across all sections
- Iterate quickly: Multiple generation modes (rewrite, improve, proofread, ideas) let you refine content efficiently
- Stay organized: Structure documents into sections with main points and supporting text
- Work offline: Use local LLMs via Ollama or cloud-based models via OpenAI, Anthropic, and more
- Multi-User Support: JWT-based authentication with per-user document isolation
- Structured Document Creation: Organize your writing into sections with main points and user text
- AI-Powered Generation: Generate contextually aware paragraph content using advanced language models
- Multiple Generation Modes:
  - Rewrite: Complete rewrite with new ideas and improved clarity
  - Improve: Polish existing text while maintaining structure
  - Proofread: Fix grammar and spelling errors only
  - Ideas: Get specific suggestions for enhancement
- Real-time Editing: Dynamic web interface for seamless writing and editing
- Document Management: Save, load, and manage multiple documents with automatic snapshots
- User Preferences: Per-user AI settings, writing style, and environment variables
- Customizable Metadata: Configure writing style, tone, audience, and generation parameters
- Flexible AI Backend: Support for OpenAI (GPT-4, GPT-4o), Anthropic (Claude 3.5 Sonnet, Claude 3 Opus), and Ollama (llama3, mistral, etc.)
- Database Storage: SQLite database with configurable location for easy backup and deployment
- Async Processing: Efficient queuing system for AI generation requests
- Python 3.11 or higher
- An AI backend: OpenAI, Anthropic, or Ollama (local)
```bash
pip install talkpipe-writing-assistant
```

After installation, you can start the application immediately:

```bash
writing-assistant
```

Then navigate to http://localhost:8001 in your browser. See the Quick Start section below for next steps.
```bash
git clone https://github.com/sandialabs/talkpipe-writing-assistant.git
cd talkpipe-writing-assistant
pip install -e .
```

For a development install with the dev extras:

```bash
git clone https://github.com/sandialabs/talkpipe-writing-assistant.git
cd talkpipe-writing-assistant
pip install -e .[dev]
```

To run with Docker:

```bash
# Production deployment
docker-compose up talkpipe-writing-assistant

# Development with live reload
docker-compose --profile dev up talkpipe-writing-assistant-dev
```

TL;DR: After `pip install talkpipe-writing-assistant`, just run `writing-assistant` and open http://localhost:8001 in your browser!
After installing with pip, follow these steps to get started:
```bash
writing-assistant
```

The server will start on http://localhost:8001 and display:

```
Writing Assistant Server - Multi-User Edition
Access your writing assistant at: http://localhost:8001/
Register a new account at: http://localhost:8001/register
Login at: http://localhost:8001/login
API documentation: http://localhost:8001/docs
Database: /home/user/.writing_assistant/writing_assistant.db
```
- Open your browser and navigate to http://localhost:8001/register
- Enter your email address and password
- Click "Register" to create your account
You need to configure one of the supported AI backends:
Option A: OpenAI (Cloud)
- Get an API key from OpenAI Platform
- Set your API key: `export OPENAI_API_KEY="sk-your-api-key-here"`
- In the web interface: Settings → AI Settings → set Source to `openai` and Model to your model of choice.
Option B: Anthropic (Cloud)
- Get an API key from Anthropic Console
- Set your API key: `export ANTHROPIC_API_KEY="sk-ant-your-api-key-here"`
- In the web interface: Settings → AI Settings → set Source to `anthropic` and Model to your model of choice.
Option C: Ollama (Local, Free)
- Install Ollama from ollama.com
- Pull a model: `ollama pull [model name]`
- Start Ollama: `ollama serve`
- In the web interface: Settings → AI Settings → set Source to `ollama` and Model to [model name]
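For instance, using the `llama3.1:8b` model mentioned in the feature list above (any model from the Ollama library works the same way):

```bash
ollama pull llama3.1:8b
ollama serve
```

Then set Source to `ollama` and Model to `llama3.1:8b` in AI Settings.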
- Click "Create New Document"
- Add a title and sections, leaving a blank line between sections (see the example below)
- Click "Generate" on any section to create AI-assisted content
- Save your work with the "Save Document" button
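For example, a new document body might look like this, where the first line is the title and each blank-line-separated block becomes its own section (an illustrative sketch with made-up content, not a required format):

```
Quarterly Status Report

Summary of progress this quarter and the key results to highlight.

Risks and open issues that need attention from the team.

Planned work for the next quarter.
```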
That's it! You're ready to use the AI writing assistant.
```bash
# Default: http://localhost:8001
writing-assistant

# Custom port
writing-assistant --port 8080

# Custom host and port
writing-assistant --host 0.0.0.0 --port 8080

# Enable auto-reload for development
writing-assistant --reload

# Custom database location
writing-assistant --db-path /path/to/database.db

# Disable custom environment variables from UI (security)
writing-assistant --disable-custom-env-vars

# Initialize database without starting server
writing-assistant --init-db

# You can also use environment variables
WRITING_ASSISTANT_PORT=8080 writing-assistant
WRITING_ASSISTANT_RELOAD=true writing-assistant
WRITING_ASSISTANT_DB_PATH=/path/to/database.db writing-assistant
```

When the server starts, it will display:
- The URL to access the application
- Registration and login URLs
- API documentation URL
- Database location
Authentication: The application uses JWT-based multi-user authentication with FastAPI Users. Each user has their own account with secure password storage. New users can register through the web interface at /register, and existing users log in at /login.
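If you want to script against the REST API, you can obtain a token and call endpoints directly. The snippet below is a sketch only: it assumes the FastAPI Users default JWT login route (`/auth/jwt/login`); check http://localhost:8001/docs for the actual paths and payloads exposed by your installation.

```bash
# Hypothetical route: FastAPI Users' default JWT login endpoint; verify against /docs
curl -X POST http://localhost:8001/auth/jwt/login \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=you@example.com&password=your-password"
# On success, the JSON response contains an access_token; send it on later
# requests as an "Authorization: Bearer <token>" header.
```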
Configure the application with these environment variables:
| Variable | Description | Default |
|---|---|---|
| `WRITING_ASSISTANT_HOST` | Server host address | `localhost` |
| `WRITING_ASSISTANT_PORT` | Server port number | `8001` |
| `WRITING_ASSISTANT_RELOAD` | Enable auto-reload (development) | `false` |
| `WRITING_ASSISTANT_DB_PATH` | Database file location | `~/.writing_assistant/writing_assistant.db` |
| `WRITING_ASSISTANT_SECRET` | JWT secret key for authentication | Auto-generated (change in production) |
| `TALKPIPE_OLLAMA_BASE_URL` | Ollama server URL for local models | `http://localhost:11434` |
Security Options:
- `--disable-custom-env-vars`: Prevents users from configuring environment variables through the browser interface
- Use this for shared deployments or when you want centralized credential management
- Environment variables must be set at the server level (via shell environment)
- The Environment Variables section will be hidden in the UI
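For example, a locked-down shared deployment might set credentials and the JWT secret in the server's shell environment and disable per-user environment variables in the UI (values shown are placeholders):

```bash
# Placeholders only; use your real API key and a long random secret in production
export OPENAI_API_KEY="sk-your-api-key-here"
export WRITING_ASSISTANT_SECRET="change-me-to-a-long-random-secret"
writing-assistant --host 0.0.0.0 --port 8080 --disable-custom-env-vars
```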
Configure document metadata:
- AI Source: `openai`, `anthropic`, or `ollama`
- Model: e.g., `gpt-4`, `claude-3-5-sonnet-20241022`, or `llama3.1:8b`
- Writing style: formal, casual, technical, etc.
- Target audience: general public, experts, students, etc.
- Tone: neutral, persuasive, informative, etc.
- Word limit: approximate words per paragraph
Documents are stored in an SQLite database with multi-user isolation:
Default Location: ~/.writing_assistant/writing_assistant.db
Custom Location: Use --db-path or WRITING_ASSISTANT_DB_PATH to specify an alternative location
Features:
- Per-user document isolation (users only see their own documents)
- Automatic snapshot management (keeps 10 most recent versions)
- User-specific preferences (AI settings, writing style, etc.)
- Cascade deletion (removing a user deletes all their documents)
Backup: Simply copy the database file to create a backup. The database can be moved to a different location using the --db-path option.
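If the `sqlite3` command-line tool is available, its `.backup` command is a convenient way to take a consistent copy even while the server is running (paths below are examples):

```bash
# Copies the live database to a timestamped backup file
sqlite3 ~/.writing_assistant/writing_assistant.db \
  ".backup '/path/to/backups/writing_assistant-$(date +%F).db'"
```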
```
src/writing_assistant/
├── __init__.py          # Package initialization and version
├── core/                # Core business logic
│   ├── __init__.py
│   ├── callbacks.py     # AI text generation functionality
│   ├── definitions.py   # Data models (Metadata)
│   └── segments.py      # TalkPipe segment registration
└── app/                 # Web application
    ├── __init__.py
    ├── main.py          # FastAPI application and API endpoints
    ├── server.py        # Application entry point
    ├── static/          # CSS and JavaScript assets
    └── templates/       # Jinja2 HTML templates
```
- Metadata: Configuration for writing style, audience, tone, and AI settings
- Section: Individual document sections with async text generation and queuing
- Document: Complete document with sections, metadata, and snapshot management
- Callbacks: AI text generation using TalkPipe with context-aware prompting
"Port already in use"
- Change the port: `writing-assistant --port 8080`
- Or kill the process using the port (see the snippet below)
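On Linux or macOS, one way to find and stop the conflicting process (assuming `lsof` is installed):

```bash
lsof -i :8001      # note the PID of the process holding the port
kill <PID>         # stop it, then restart writing-assistant
```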
"Cannot save document" or "Database error"
- Check write permissions to the database directory (default: `~/.writing_assistant/`)
- Ensure the directory exists: `mkdir -p ~/.writing_assistant`
- Try a different database location: `writing-assistant --db-path /tmp/test.db`
- Initialize the database manually: `writing-assistant --init-db`
"Authentication failed" or "Invalid credentials"
- Double-check your email and password
- Register a new account if you haven't already
- The database may have been reset - check the database location
"Cannot connect to database"
- Verify the database file exists and is not corrupted
- Check file permissions on the database file
- Try initializing a new database: `writing-assistant --db-path /tmp/new.db --init-db`
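If the `sqlite3` command-line tool is installed, a quick integrity check can confirm whether the file is a healthy SQLite database:

```bash
sqlite3 ~/.writing_assistant/writing_assistant.db "PRAGMA integrity_check;"
# A healthy database prints: ok
```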
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Built with TalkPipe, a flexible framework for AI pipeline construction developed at Sandia National Laboratories.

