An intelligent agent for time series analysis and forecasting that combines LLM capabilities with statistical forecasting tools.

## Features

- Interactive command-line interface
- Automatic data analysis and visualization
- Time series forecasting with multiple models
- Dynamic code generation and execution
- Extensible tools system
- Conversation memory and context awareness
- Automatic dependency management

## Requirements

- Python 3.12+
- Ollama (for LLM support)
- Git
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/codeloop/forecasting-agent.git
  cd forecasting-agent
  ```

- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install pandas numpy prophet darts tabulate langchain-experimental
  ```

- Install Ollama and start the service:

  ```bash
  # Follow the Ollama installation instructions at https://ollama.ai/
  ollama serve
  ```

## Quick Start

- Start the agent:

  ```bash
  python main.py
  ```
- Select an LLM model when prompted

- Available commands:
  - `analyze <csv_path> <target_column> <series_id_column>` - Load and analyze a dataset
  - `help` - Show available commands
  - `/bye` - Exit the program

- Natural language queries:
  - "Build a forecasting model for next 10 timesteps"
  - "Show me the trend analysis"
  - "Generate visualizations for each series"
  - "Export forecasts to CSV"

- Fix command - if execution produces empty or incorrect results, use `fix <instructions>` (e.g., "fix write results to csv")
## Commands

### Built-in commands

```
analyze <csv_path> <target_column> <series_id_column>  # Load and analyze a dataset
help                                                   # Show available commands
/bye                                                   # Exit the program
```

### Natural language queries

- "Build a forecasting model for next 10 timesteps"
- "Show me the trend analysis"
- "Generate visualizations for each series"
- "Export forecasts to CSV"
- "Compare performance between series"
- "Show statistical summary"

### The fix command

When code execution produces empty or incorrect results:

```
fix <instructions>
```

Examples:

```
fix write results to csv
fix add error handling
fix handle missing values
fix add data validation
```
### Code execution prompts

When code is generated, you can:

- Type `yes` to execute the code
- Type `no` to skip execution
- Type `quit` to cancel the operation
## Examples

### Loading data

```
# Load and analyze a dataset
analyze /path/to/data.csv target_column series_id_column

# Example
analyze sales_data.csv sales_amount store_id
```
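The `analyze` command expects a long-format CSV: one row per timestamp per series, with a target column and a series-ID column. A minimal sketch of producing such a file with the standard library (the column names `date`, `store_id`, and `sales_amount` are illustrative, not required by the agent):

```python
import csv

# One row per (store, date) pair - long format, as the analyze command expects.
rows = [
    {"date": "2024-01-01", "store_id": "A", "sales_amount": 100},
    {"date": "2024-01-02", "store_id": "A", "sales_amount": 110},
    {"date": "2024-01-01", "store_id": "B", "sales_amount": 90},
    {"date": "2024-01-02", "store_id": "B", "sales_amount": 95},
]

with open("sales_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "store_id", "sales_amount"])
    writer.writeheader()
    writer.writerows(rows)

# The resulting file can then be loaded with:
#   analyze sales_data.csv sales_amount store_id
```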
### Forecasting

Natural language examples:

- "Forecast next 10 timesteps for all series"
- "Generate hourly forecast for next week"
- "Predict monthly values with confidence intervals"
- "Create forecast with seasonal decomposition"

### Visualization

Natural language examples:

- "Plot time series for each store"
- "Show trend comparison between series"
- "Generate forecast plots with confidence intervals"
- "Create seasonal decomposition plots"

### Exporting results

Natural language examples:

- "Export forecasts to CSV"
- "Save analysis results to file"
- "Export visualizations as PNG"
- "Generate PDF report"
## Architecture

### Components

- ForecastingAgent (`src/agent.py`) - Main agent orchestrating all components
  - Handles user interactions
  - Manages conversation context
  - Coordinates code generation and execution
- MemoryManager (`src/memory_manager.py`) - Manages conversation history
  - Stores analysis results
  - Maintains execution context
  - Persists sessions to disk
- ToolsManager (`src/tools_manager.py`) - Handles code execution
  - Manages dependencies
  - Provides analysis tools
  - Formats output
- OllamaManager (`src/ollama_manager.py`) - Manages LLM connection
  - Handles model selection
  - Provides retry logic
### Data flow

1. User Input → Agent
2. Agent → LLM (for query understanding)
3. LLM → Code Generation
4. Code → Tools Manager
5. Results → Memory Manager
6. Formatted Output → User
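The flow above can be sketched as a single orchestration turn (the class and method names here are illustrative, not the actual `src/agent.py` API):

```python
def run_turn(user_input, llm, tools, memory):
    """One pass through the pipeline: input -> LLM -> code -> execution -> memory."""
    context = memory.recent_context()              # prior turns inform the prompt
    code = llm.generate_code(user_input, context)  # LLM turns the query into Python
    result = tools.execute(code)                   # ToolsManager runs the generated code
    memory.record(user_input, code, result)        # persist the turn for later context
    return result                                  # formatted output back to the user
```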
### Code generation

The system uses the LLM to generate Python code based on user queries, with:

- Automatic import handling
- Dynamic dependency installation
- Error recovery and fixes
- Context-aware code generation

### Forecasting models

The agent supports multiple forecasting approaches:

- Prophet
- ARIMA
- Exponential Smoothing
- Custom models
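As an illustration of the simplest of these approaches, simple exponential smoothing fits in a few lines of pure Python (a sketch of the technique itself, not the agent's Prophet/darts-backed implementation):

```python
def ses_forecast(values, alpha=0.3, steps=10):
    """Simple exponential smoothing: the level is a weighted blend of each new
    observation and the previous level; the forecast repeats the final level."""
    level = values[0]
    for v in values[1:]:
        level = alpha * v + (1 - alpha) * level
    return [level] * steps

print(ses_forecast([10, 12, 11, 13, 12], alpha=0.5, steps=3))  # → [12.0, 12.0, 12.0]
```

Higher `alpha` weights recent observations more heavily; the flat forecast is why this is usually only a baseline before moving to seasonal models.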
### Error handling

- Automatic dependency detection
- Package installation prompts
- Execution error recovery
- Code fix suggestions

### Memory

- Conversation history tracking
- Context preservation
- Session persistence
- Analysis caching
## Extending the Agent

You can extend the ToolsManager with custom tools:

```python
class ToolsManager:
    def add_custom_tool(self, name, func):
        setattr(self, name, func)
```
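With that hook, registering and invoking a custom tool might look like this (the `rolling_mean` tool is a hypothetical example, and the minimal `ToolsManager` is repeated here so the snippet runs on its own):

```python
class ToolsManager:
    def add_custom_tool(self, name, func):
        # Attach the function as an attribute so it is callable as a method.
        setattr(self, name, func)

def rolling_mean(values, window=3):
    """Hypothetical custom tool: trailing moving average over a window."""
    out = []
    for i in range(len(values)):
        w = values[max(0, i - window + 1): i + 1]
        out.append(sum(w) / len(w))
    return out

tools = ToolsManager()
tools.add_custom_tool("rolling_mean", rolling_mean)
print(tools.rolling_mean([1, 2, 3, 4], window=2))  # → [1.0, 1.5, 2.5, 3.5]
```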
## Session Management

Sessions are automatically saved:

- Session files location:

  ```
  sessions/
  ├── YYYYMMDD_HHMMSS/
  │   └── memory.json
  ```

- Session data contains:
  - Conversation history
  - Analysis results
  - Generated code
  - Execution results
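Since sessions are plain JSON, they can be inspected offline. A sketch of loading the most recent one (the helper is illustrative; the key names inside `memory.json` are an assumption):

```python
import json
from pathlib import Path

def latest_session(sessions_dir="sessions"):
    """Return the parsed memory.json from the most recent session folder.

    Folder names like YYYYMMDD_HHMMSS sort chronologically as strings,
    so the last entry of a sorted listing is the newest session.
    """
    folders = sorted(p for p in Path(sessions_dir).iterdir() if p.is_dir())
    if not folders:
        return None
    return json.loads((folders[-1] / "memory.json").read_text())
```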
## Configuration

```bash
# Ollama configuration
export OLLAMA_HOST=localhost       # Default
export OLLAMA_PORT=11434           # Default

# Memory management
export FC_AGENT_MEMORY_SIZE=1000   # Number of interactions to keep
export FC_AGENT_SESSION_DIR=/path/to/sessions
```
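Inside the agent, variables like these are typically read with stdlib defaults; a sketch (the variable names come from the block above, but the helper itself is illustrative):

```python
import os

def load_config():
    """Read agent settings from the environment, falling back to the defaults."""
    return {
        "ollama_host": os.environ.get("OLLAMA_HOST", "localhost"),
        "ollama_port": int(os.environ.get("OLLAMA_PORT", "11434")),
        "memory_size": int(os.environ.get("FC_AGENT_MEMORY_SIZE", "1000")),
        "session_dir": os.environ.get("FC_AGENT_SESSION_DIR", "sessions"),
    }
```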
## Best Practices

- Data preparation
  - Ensure a consistent date format
  - Handle missing values before analysis
  - Check for duplicate timestamps
  - Verify data types
- Forecasting
  - Start with simple models
  - Validate forecasts against held-out test data
  - Consider seasonality
  - Check for outliers
- Code generation
  - Review generated code before execution
  - Use the fix command for refinements
  - Save successful code for reuse
  - Add error handling
- Memory management
  - Save sessions regularly
  - Clear out old sessions
  - Export important results
  - Document modifications
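The advice to validate forecasts against test data amounts to a holdout split: hide the last few points, forecast them, and score the error. A minimal pure-Python sketch using mean absolute error (the naive last-value forecast here is only a baseline for illustration):

```python
def mean_absolute_error(actual, predicted):
    """Average absolute gap between observed and forecast values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def holdout_validate(series, horizon=3):
    """Hold out the last `horizon` points and score a naive last-value forecast."""
    train, test = series[:-horizon], series[-horizon:]
    forecast = [train[-1]] * horizon  # naive baseline: repeat the last observation
    return mean_absolute_error(test, forecast)

print(holdout_validate([10, 11, 12, 13, 12, 14], horizon=3))  # → 1.0
```

Any real model should beat this baseline on the same holdout before you trust its forecasts.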
## Troubleshooting

- Code execution issues
  - Check package installations
  - Verify the data format
  - Review error messages
  - Use the fix command with specific instructions
- Memory issues
  - Clear the Python cache
  - Restart the agent
  - Export results frequently
  - Use smaller data chunks
- LLM connection
  - Check that the Ollama service is running
  - Verify model availability
  - Check the network connection
  - Review API responses
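Checking the Ollama service can be done with a plain TCP probe of the host and port the agent uses (a diagnostic sketch; `ollama serve` listens on port 11434 by default):

```python
import socket

def service_reachable(host="localhost", port=11434, timeout=2.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not service_reachable():
    print("Ollama is not reachable - try running `ollama serve`.")
```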
## Publishing to PyPI

- Build the package:

  ```bash
  # Install build tools
  pip install --upgrade build twine

  # Build the package
  python -m build
  ```

- Test on TestPyPI first:

  ```bash
  # Upload to TestPyPI
  python -m twine upload --repository testpypi dist/*

  # Test the installation
  pip install --index-url https://test.pypi.org/simple/ fc-agent
  ```

- Publish to PyPI:

  ```bash
  python -m twine upload dist/*
  ```

- Install after publishing:

  ```bash
  pip install fc-agent
  ```

- Version updates:
  - Update the version in `setup.py` and `pyproject.toml`
  - Create a new build
  - Upload to PyPI

## Development Installation

```bash
pip install -r requirements.txt
pip install -e ".[dev]"
python -m build
```