
Webscout is the all-in-one search and AI toolkit you need. Discover insights with Yep.com, DuckDuckGo, and Phind; access cutting-edge AI models; transcribe YouTube videos; generate temporary emails and phone numbers; perform text-to-speech conversions; and much more!


Webscout

Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More

Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.





Important

Webscout supports three types of compatibility:

  • Native Compatibility: Webscout's own native API for maximum flexibility
  • OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
  • Local LLM Compatibility: Run local models with Inferno, an OpenAI-compatible server (now a standalone package)

Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
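
The sketch below, drawn from the examples later in this README, shows what each approach looks like in practice; the server URL assumes the default port 8000 used throughout the API server section.

# 1) Native API: instantiate a provider class and call it directly
from webscout import Meta
print(Meta().chat("What is the capital of France?"))

# 2) OpenAI compatibility: point any OpenAI client at the Webscout server
#    (start it first with `webscout-server`)
from openai import OpenAI
client = OpenAI(api_key="not-needed-in-no-auth-mode", base_url="http://localhost:8000/v1")

# 3) Local LLMs: install the standalone Inferno package and serve a model
#    pip install inferno-llm && inferno serve <model>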

Note

Webscout supports over 90 AI providers including: LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.



🚀 Features

Search & AI

  • Comprehensive Search: Leverage Google, DuckDuckGo, and Yep for diverse search results
  • AI Powerhouse: Access and interact with various AI models through three compatibility options:
    • Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
    • OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
    • Local LLMs with Inferno: Run local models with an OpenAI-compatible server (now available as a standalone package)
  • AI Search: AI-powered search engines with advanced capabilities

Media & Content Tools

  • YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
  • Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
  • Text-to-Image: Generate high-quality images using a wide range of AI art providers
  • Weather Tools: Retrieve detailed weather information for any location

Developer Tools

  • GitAPI: Powerful GitHub data extraction toolkit without authentication requirements for public data
  • SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
  • LitPrinter: Styled console output with rich formatting and colors
  • LitLogger: Simplified logging with customizable formats and color schemes
  • LitAgent: Modern user agent generator that keeps your requests undetectable
  • Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
  • Inferno: Run local LLMs with an OpenAI-compatible API and interactive CLI (now a standalone package: pip install inferno-llm)
  • GGUF Conversion: Convert and quantize Hugging Face models to GGUF format

Privacy & Utilities

  • Tempmail & Temp Number: Generate temporary email addresses and phone numbers
  • Awesome Prompts: Curated collection of system prompts for specialized AI personas


⚙️ Installation

Webscout supports multiple installation methods to fit your workflow:

📦 Standard Installation

# Install from PyPI
pip install -U webscout

# Install with API server dependencies
pip install -U "webscout[api]"

# Install with development dependencies
pip install -U "webscout[dev]"

⚡ UV Package Manager (Recommended)

UV is a fast Python package manager. Webscout has full UV support:

# Install UV first (if not already installed)
pip install uv

# Install Webscout with UV
uv add webscout

# Install with API dependencies
uv add "webscout[api]"

# Run Webscout directly with UV (no installation needed)
uv run webscout --help

# Run with API dependencies
uv run --extra api webscout-server

# Install as a UV tool for global access
uv tool install webscout

# Use UV tool commands
webscout --help
webscout-server

🔧 Development Installation

# Clone the repository
git clone https://github.com/OEvortex/Webscout.git
cd Webscout

# Install in development mode with UV
uv sync --extra dev --extra api

# Or with pip
pip install -e ".[dev,api]"

🐳 Docker Installation

# Pull and run the Docker image
docker pull oevortex/webscout:latest
docker run -it oevortex/webscout:latest

📱 Quick Start Commands

After installation, you can immediately start using Webscout:

# Check version
webscout version

# Search the web
webscout text -k "python programming"

# Start API server
webscout-server

# Get help
webscout --help

🖥️ Command Line Interface

Webscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:

🚀 Direct Commands (Recommended)

After installing with uv tool install webscout or pip install webscout:

# Get help
webscout --help

# Start API server
webscout-server

🔧 UV Run Commands (No Installation Required)

# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run --extra api webscout-server

📦 Python Module Commands

# Traditional Python module execution
python -m webscout --help
python -m webscout.auth.server

🔍 Web Search Commands

| Command | Description | Example |
|---------|-------------|---------|
| webscout text -k "query" | Perform a text search | webscout text -k "python programming" |
| webscout answers -k "query" | Get instant answers | webscout answers -k "what is AI" |
| webscout images -k "query" | Search for images | webscout images -k "nature photography" |
| webscout videos -k "query" | Search for videos | webscout videos -k "python tutorials" |
| webscout news -k "query" | Search for news articles | webscout news -k "technology trends" |
| webscout maps -k "query" | Perform a maps search | webscout maps -k "restaurants near me" |
| webscout translate -k "text" | Translate text | webscout translate -k "hello world" |
| webscout suggestions -k "query" | Get search suggestions | webscout suggestions -k "how to" |
| webscout weather -l "location" | Get weather information | webscout weather -l "New York" |
| webscout version | Display the current version | webscout version |

Google Search Commands:

| Command | Description | Example |
|---------|-------------|---------|
| webscout google_text -k "query" | Google text search | webscout google_text -k "machine learning" |
| webscout google_news -k "query" | Google news search | webscout google_news -k "AI breakthrough" |
| webscout google_suggestions -q "query" | Google suggestions | webscout google_suggestions -q "python" |

Yep Search Commands:

| Command | Description | Example |
|---------|-------------|---------|
| webscout yep_text -k "query" | Yep text search | webscout yep_text -k "web development" |
| webscout yep_images -k "query" | Yep image search | webscout yep_images -k "landscapes" |
| webscout yep_suggestions -q "query" | Yep suggestions | webscout yep_suggestions -q "javascript" |

Inferno LLM Commands

Inferno is now a standalone package. Install it separately with:

pip install inferno-llm

After installation, you can use its CLI for managing and using local LLMs:

inferno --help

| Command | Description |
|---------|-------------|
| inferno pull <model> | Download a model from Hugging Face |
| inferno list | List downloaded models |
| inferno serve <model> | Start a model server with an OpenAI-compatible API |
| inferno run <model> | Chat with a model interactively |
| inferno remove <model> | Remove a downloaded model |
| inferno version | Show version information |

For more information, visit the Inferno GitHub repository or PyPI package page.
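
Because inferno serve exposes an OpenAI-compatible API, any OpenAI client can talk to a locally served model. A minimal sketch, assuming the server is reachable at http://localhost:8000/v1 (use whatever address inferno serve reports on startup) and that the model has already been pulled:

from openai import OpenAI

# Local server; no real API key is required
client = OpenAI(api_key="not-needed", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="<model>",  # name of the model you pulled with `inferno pull`
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)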

Note

Hardware requirements for running models with Inferno:

  • Around 2 GB of RAM for 1B models
  • Around 4 GB of RAM for 3B models
  • At least 8 GB of RAM for 7B models
  • 16 GB of RAM for 13B models
  • 32 GB of RAM for 33B models
  • GPU acceleration is recommended for better performance

🔄 OpenAI-Compatible API Server

Webscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.

Starting the API Server

From Command Line (Recommended)

# Start with default settings (port 8000)
webscout-server

# Start with custom port
webscout-server --port 8080

# Start with API key authentication
webscout-server --api-key "your-secret-key"

# Start in no-auth mode using command line flag (no API key required)
webscout-server --no-auth

# Start in no-auth mode using environment variable
$env:WEBSCOUT_NO_AUTH='true'; webscout-server

# Specify a default provider
webscout-server --default-provider "Claude"

# Run in debug mode
webscout-server --debug

# Get help for all options (includes authentication options)
webscout-server --help

Alternative Methods

# Using UV (no installation required)
uv run --extra api webscout-server

# Using Python module
python -m webscout.auth.server

Environment Variables

Webscout server supports configuration through environment variables:

# Start server in no-auth mode (no API key required)
$env:WEBSCOUT_NO_AUTH='true'; webscout-server

# Disable rate limiting
$env:WEBSCOUT_NO_RATE_LIMIT='true'; webscout-server

# Start with custom port using environment variable
$env:WEBSCOUT_PORT='7860'; webscout-server
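
# The examples above use PowerShell syntax; on Linux/macOS shells, set the same variables inline
WEBSCOUT_NO_AUTH=true webscout-server
WEBSCOUT_PORT=7860 webscout-server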

For a complete list of supported environment variables and Docker deployment options, see DOCKER.md.

From Python Code

Recommended:
Use start_server from webscout.client for the simplest programmatic startup.
For advanced control (custom host, debug, etc.), use run_api.

# Method 1: Using the helper function (recommended)
from webscout.client import start_server

# Start with default settings
start_server()

# Start with custom settings
start_server(port=8080, api_key="your-secret-key", default_provider="Claude")

# Start in no-auth mode (no API key required)
start_server(no_auth=True)

# Method 2: Advanced usage with run_api
from webscout.client import run_api

run_api(
    host="0.0.0.0",
    debug=True
)

Using the API

Once the server is running, you can use it with any OpenAI client library or tool:

# Using the OpenAI Python client
from openai import OpenAI

client = OpenAI(
    api_key="your-secret-key",  # Only needed if you set an API key
    base_url="http://localhost:8000/v1"  # Point to your local server
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",  # This can be any model name registered with Webscout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)

Using with cURL

# Basic chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-key"

Available Endpoints

  • GET /v1/models - List all available models
  • GET /v1/models/{model_name} - Get information about a specific model
  • POST /v1/chat/completions - Create a chat completion
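
The model endpoints can also be queried through the OpenAI Python client; a short sketch, assuming the local server and the "gpt-4" model name from the examples above:

from openai import OpenAI

client = OpenAI(api_key="your-secret-key", base_url="http://localhost:8000/v1")

# GET /v1/models
for m in client.models.list():
    print(m.id)

# GET /v1/models/{model_name}
print(client.models.retrieve("gpt-4"))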


🔍 Search Engines

Webscout provides multiple search engine interfaces for diverse search capabilities.

YepSearch - Yep.com Interface

from webscout import YepSearch

# Initialize YepSearch
yep = YepSearch(
    timeout=20,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)

# Get search suggestions
suggestions = yep.suggestions("hist")

GoogleSearch - Google Interface

from webscout import GoogleSearch

# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)

# Get search suggestions
suggestions = google.suggestions("how to")

# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)

🦆 DuckDuckGo Search with WEBS and AsyncWEBS

Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the WEBS and AsyncWEBS classes.

Synchronous Usage with WEBS

from webscout import WEBS

# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")

Asynchronous Usage with AsyncWEBS

import asyncio
from webscout import AsyncWEBS

async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]
        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)

    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())

Tip

Always use these classes with a context manager (with statement) to ensure proper resource management and cleanup.


💻 WEBS API Reference

The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.

Available Search Methods

| Method | Description | Example |
|--------|-------------|---------|
| text() | General web search | webs.text('python programming') |
| answers() | Instant answers | webs.answers('population of france') |
| images() | Image search | webs.images('nature photography') |
| videos() | Video search | webs.videos('documentary') |
| news() | News articles | webs.news('technology') |
| maps() | Location search | webs.maps('restaurants', place='new york') |
| translate() | Text translation | webs.translate('hello', to='es') |
| suggestions() | Search suggestions | webs.suggestions('how to') |
| weather() | Weather information | webs.weather('london') |

Example: Text Search

from webscout import WEBS

with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',        # Optional: Region for results
        safesearch='off',      # Optional: 'on', 'moderate', 'off'
        timelimit='y',         # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10         # Optional: Limit number of results
    )

    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")

Example: News Search with Formatting

from webscout import WEBS
import datetime

def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )

        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')

            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"

            formatted_news.append(entry)

        return formatted_news

# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))

Example: Weather Information

from webscout import WEBS

with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")

    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")


🤖 AI Models and Voices

Webscout provides easy access to a wide range of AI models and voice options.

LLM Models

Access and manage Large Language Models with Webscout's model utilities.

from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")

TTS Voices

Access and manage Text-to-Speech voices across multiple providers.

from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")


💬 AI Chat Providers

Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.

Popular AI Providers

| Provider | Description | Key Features |
|----------|-------------|--------------|
| OPENAI | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| GEMINI | Google's Gemini models | Web search capabilities |
| Meta | Meta's AI assistant | Image generation, web search |
| GROQ | Fast inference platform | High-speed inference, tool calling |
| LLAMA | Meta's Llama models | Open weights models |
| DeepInfra | Various open models | Multiple model options |
| Cohere | Cohere's language models | Command models |
| PerplexityLabs | Perplexity AI | Web search integration |
| YEPCHAT | Yep.com's AI | Streaming responses |
| ChatGPTClone | ChatGPT-like interface | Multiple model options |
| TypeGPT | TypeChat models | Multiple model options |

Example: Using Meta AI

from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])

Example: GROQ with Tool Calling

from webscout import GROQ, WEBS
import json

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
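        # Note: eval() runs arbitrary Python; only use it with trusted input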
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        # Use WEBS as a context manager for proper cleanup
        with WEBS() as webs:
            results = webs.text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)

GGUF Model Conversion

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.

from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                  # Quantization method
)

# Run the conversion
converter.convert()

Available Quantization Methods

| Method | Description |
|--------|-------------|
| fp16 | 16-bit floating point - maximum accuracy, largest size |
| q2_k | 2-bit quantization - smallest size, lowest accuracy |
| q3_k_l | 3-bit quantization (large) - balanced size/accuracy |
| q3_k_m | 3-bit quantization (medium) - good balance for most use cases |
| q3_k_s | 3-bit quantization (small) - optimized for speed |
| q4_0 | 4-bit quantization (version 0) - standard 4-bit compression |
| q4_1 | 4-bit quantization (version 1) - improved accuracy over q4_0 |
| q4_k_m | 4-bit quantization (medium) - balanced choice for most models |
| q4_k_s | 4-bit quantization (small) - optimized for speed |
| q5_0 | 5-bit quantization (version 0) - high accuracy, larger size |
| q5_1 | 5-bit quantization (version 1) - improved accuracy over q5_0 |
| q5_k_m | 5-bit quantization (medium) - best balance of quality and size |
| q5_k_s | 5-bit quantization (small) - optimized for speed |
| q6_k | 6-bit quantization - very high accuracy, larger size |
| q8_0 | 8-bit quantization - near-fp16 accuracy, largest quantized size |

Command Line Usage

python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"


🤝 Contributing

Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:

  1. Fork the repository
  2. Create a new branch for your feature or bug fix
  3. Make your changes and commit them with descriptive messages
  4. Push your branch to your forked repository
  5. Submit a pull request to the main repository

🙏 Acknowledgments

  • All the amazing developers who have contributed to the project
  • The open-source community for their support and inspiration

Made with ❤️ by the Webscout team
