AgenticDepthSearch

AgenticDepthSearch is a Python console application that performs in-depth research on a given topic. Inspired by deep-research systems such as those from OpenAI and Google Gemini, it uses local or remote Large Language Models (LLMs), the Tavily Search API, and the LangChain library to generate sub-queries, analyze results, synthesize information, and present a final summary to the user.

Key Features

  • In-depth Research: Generates and executes multiple sub-queries to cover different aspects of the main topic.
  • Narrative Analysis: Uses an LLM to generate a qualitative and narrative analysis of search results, focusing on relevance, source landscape, key themes, and potential biases.
  • Final Synthesis: Generates a cohesive summary in natural language based on the performed analysis.
  • Flexible LLM Integration: Supports different types of LLMs (OpenAI, Anthropic, LM Studio, Ollama, OpenRouter) through a configurable factory.
  • Tavily Search Integration: Utilizes the Tavily API for efficient web searches.
  • Rich Console Interface: Uses the rich library for an interactive and visually appealing user interface in the terminal.
  • Research Memory: Maintains a history of searches (raw results) for future queries (basic implementation using ChromaDB).
  • Configuration via .env: Easy configuration of API keys, models, and research parameters.
  • Modular Prompt Templates: Uses specific and organized prompts for different tasks (strategy generation, result evaluation, final summary, etc.).
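Taken together, the features above describe a loop: generate sub-queries, search each one, then feed the results to analysis and synthesis. A rough sketch of that flow follows; the function names and the stubbed LLM/search calls are illustrative, not AgenticDepthSearch's actual API:

```python
# Rough sketch of the research loop described above. Function names and the
# stubbed LLM/Tavily calls are illustrative, NOT the project's real API.

def generate_sub_queries(topic: str, breadth: int) -> list[str]:
    # In the real app an LLM proposes sub-queries; stubbed deterministically here.
    return [f"{topic}: aspect {i + 1}" for i in range(breadth)]

def tavily_search(query: str) -> list[dict]:
    # Stand-in for a Tavily API call returning scored results.
    return [{"title": query, "url": "https://example.com", "score": 0.9}]

def research(topic: str, breadth: int = 3) -> list[dict]:
    results = []
    for sub_query in generate_sub_queries(topic, breadth):
        results.extend(tavily_search(sub_query))
    # In the real app, these results feed the narrative-analysis and
    # final-synthesis LLM prompts.
    return results
```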

Getting Started

Prerequisites

  • Python 3.12+ (Recommended)
  • pip (Python package manager)
  • Access to an LLM API (OpenAI, Anthropic, OpenRouter) or a local LLM server (LM Studio, Ollama, etc.)
  • Tavily Search API Key

Installation

  1. Clone the repository:
    git clone https://github.com/ZeroWH9/AgenticDepthSearch.git
    cd AgenticDepthSearch
  2. Install dependencies:
    pip install -r requirements.txt

Configuration

  1. Create the configuration file: Copy (or rename) .env.example to .env.
  2. Edit the .env file: Open the .env file in a text editor and fill in the necessary variables:
    • LLM: Configure the type (LLM_MODEL_TYPE), name (LLM_MODEL_NAME), API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, OPENROUTER_API_KEY, LOCAL_API_KEY), and other parameters according to the model you will use (OpenAI, Anthropic, OpenRouter, LM Studio, Ollama, etc.).
    • Embedding: Define the type (EMBEDDING_TYPE), model name (EMBEDDING_MODEL_NAME), and dimensions (EMBEDDING_DIMENSIONS). Can be openai or local. If local, configure EMBEDDING_BASE_URL and EMBEDDING_API_KEY (these point to an OpenAI-compatible API endpoint for embeddings).
    • Tavily: Insert your Tavily API key into TAVILY_API_KEY.
    • Other Settings: Review the Agent (AGENT_*), Research (RESEARCH_*), Memory (MEMORY_*), Logging (LOG_LEVEL), etc., settings and adjust as needed.
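For illustration, a minimal .env fragment assuming an Ollama-style local setup. The variable names come from the list above; every value shown is a placeholder, not a project default:

```ini
# LLM (placeholder values for a local Ollama setup)
LLM_MODEL_TYPE=ollama
LLM_MODEL_NAME=llama3
LOCAL_API_KEY=not-needed-for-ollama

# Embeddings (OpenAI-compatible local endpoint)
EMBEDDING_TYPE=local
EMBEDDING_MODEL_NAME=nomic-embed-text
EMBEDDING_DIMENSIONS=768
EMBEDDING_BASE_URL=http://localhost:11434/v1
EMBEDDING_API_KEY=not-needed-for-ollama

# Tavily
TAVILY_API_KEY=tvly-your-key-here
```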

How to Use

  1. Navigate to the project's root directory in your terminal.
  2. Run the main script:
    python src/main.py
  3. Follow the instructions in the console:
    • Enter your search query.
    • Define the Depth (detail level, 1-5) and Breadth (number of sub-queries, 1-3) parameters.
    • Wait for the search to complete (a spinner will indicate progress).
    • The results will be displayed:
      • Table of generated Sub-queries.
      • Table of search Results (Title, URL, Score).
      • Analysis panel containing the narrative analysis generated by the LLM.
      • Final Answer panel with the final summary.
    • Answer if you want to perform another search (y/n).
    • Type sair, exit, or quit to end the program.
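The Depth (1-5) and Breadth (1-3) prompts above imply range-checked numeric input. A generic sketch of that kind of validation (a hypothetical helper, not the project's code):

```python
def clamp_choice(raw: str, lo: int, hi: int, default: int) -> int:
    """Parse user input for a bounded integer parameter.

    Hypothetical helper: empty or non-numeric input falls back to the
    default, and out-of-range values are clamped into [lo, hi].
    """
    raw = raw.strip()
    if not raw:
        return default
    try:
        value = int(raw)
    except ValueError:
        return default
    return max(lo, min(hi, value))

depth = clamp_choice("4", 1, 5, 3)    # valid input passes through
breadth = clamp_choice("9", 1, 3, 2)  # out-of-range input is clamped
```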

Detailed Configuration (.env)

The .env file controls the application's behavior. The main sections are:

  • # LLM Settings: Type, name, temperature, max tokens of the main model.
  • # OpenAI Settings, # Anthropic Settings, # OpenRouter Settings, # Local Model Settings: Specific settings for each type of LLM provider (API Key, Base URL).
  • # Embedding Settings: Configuration for the embedding model used for memory search (can be OpenAI or local/custom OpenAI-compatible endpoint).
  • # Agent Settings: Parameters for the Langchain agent (max iterations, verbosity via AGENT_VERBOSE).
  • # Research Settings: Parameters for the research process (max sources, etc.).
  • # Tavily Search Configuration: Tavily API key.
  • # Vector Store Configuration: Settings for the vector store (used by memory, chroma type by default).
  • # Memory Settings: Enabling and size of short-term and long-term memory.
  • # Logging Configuration: Log level (LOG_LEVEL) to control console output verbosity (DEBUG, INFO, WARNING, ERROR, CRITICAL).
  • # Feedback System, # Performance: Additional settings (currently less relevant).
