gpt-httpd

A Tech Demo: HTTP Server Simulated by AI

Overview

GPT-HTTPd is an experimental tech demo in which an AI model simulates an HTTP/1.1 server. Unlike traditional HTTP servers with predefined response logic, gpt-httpd uses an AI model to generate each HTTP response dynamically, in real time.

How It Works

  1. A basic shell script uses netcat (ncat) to listen for incoming HTTP/1.1 requests
  2. The entire raw HTTP request is forwarded to a local Ollama LLM (Large Language Model)
  3. The AI model interprets the HTTP request and generates a complete, protocol-compliant HTTP/1.1 response
  4. This response is sent back to the client exactly as generated by the AI
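The four steps above can be sketched end to end. This is a minimal illustration, not the project's actual scripts: a pipe stands in for the ncat socket, and `query_model` is a stub marking where the Ollama API call would go, so only the request/response flow is shown.

```shell
#!/usr/bin/env bash
# Sketch of the request/response flow. In the real project, ncat feeds the
# raw request to the handler; here a pipe stands in for the socket.

query_model() {
  # The real handler would POST the request text to Ollama and return the
  # model-generated HTTP response. Stubbed here with a canned response.
  local body="Hello from the model"
  printf 'HTTP/1.1 200 OK\r\n'
  printf 'Content-Type: text/plain\r\n'
  printf 'Content-Length: %d\r\n\r\n' "${#body}"
  printf '%s' "$body"
}

handle_request() {
  local request
  request=$(cat)              # step 2: capture the entire raw request
  query_model "$request"      # steps 3-4: model generates the full response
}

# Simulated client: a raw HTTP/1.1 request piped into the handler.
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | handle_request
```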

This project showcases a novel application of AI: rather than implementing HTTP server logic in code, the server's behavior is entirely determined by the language model's understanding of HTTP protocols.

Technical Concept

This project explores a unique paradigm where:

  • Instead of writing code to handle specific HTTP requests, the server delegates all protocol handling to an AI model
  • The AI model must understand HTTP specifications, request methods, headers, and generate appropriate responses
  • The entire interaction is handled through simple shell scripts, without any traditional web server frameworks

Requirements

  • Bash shell environment
  • netcat (preferably ncat from the Nmap project for persistent connections)
  • curl for API requests
  • jq for JSON processing
  • Ollama installed locally (https://ollama.com)
  • A compatible LLM model (default: llama3)

Setup

  1. Make the scripts executable:

    chmod +x gpt-httpd.sh gpt-httpd-handler.sh
    
  2. Install and set up Ollama:

    • Install Ollama from https://ollama.com
    • Start the Ollama service: ollama serve
    • The script will automatically check if the required model is available and pull it if needed
  3. Configuration options (edit in gpt-httpd.sh):

    • PORT=8080 - Change the listening port
    • OLLAMA_MODEL="llama3" - Select a different Ollama model
    • LOG_FILE="gpt-httpd.log" - Change log location
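The automatic model check in step 2 might look like the pattern below. This is a sketch, not the project's actual code, and `ollama` is stubbed with a shell function here so the pattern runs without Ollama installed; the real script would call the actual CLI.

```shell
#!/usr/bin/env bash
# Stub standing in for the Ollama CLI (illustration only).
ollama() {
  case "$1" in
    list) echo "mistral:latest  abc123  4.1 GB  2 days ago" ;;
    pull) echo "pulling $2" ;;
  esac
}

# Pull the configured model only if it is not already available locally.
OLLAMA_MODEL="llama3"
if ! ollama list | grep -q "^${OLLAMA_MODEL}"; then
  echo "Model ${OLLAMA_MODEL} not found locally, pulling it..."
  ollama pull "$OLLAMA_MODEL"
fi
```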

Usage

  1. Start the server:

    ./gpt-httpd.sh
    
  2. Access the server with any HTTP client:

    curl http://localhost:8080
    
  3. Try different HTTP methods and paths:

    # GET request with path
    curl http://localhost:8080/about
    
    # POST request
    curl -X POST -d "name=test" http://localhost:8080
    
  4. The server logs all requests and responses to gpt-httpd.log.

Screenshots

Server Handler in Action: handler processing an HTTP request (screenshot).

Browser Interaction: browser accessing the AI HTTP server (screenshot).

How It Differs From Traditional Servers

Unlike traditional HTTP servers that:

  • Have predefined routing logic
  • Use hardcoded responses for specific endpoints
  • Implement HTTP protocol handling explicitly

This AI-based server:

  • Has no predefined routes or endpoints
  • Generates responses dynamically based on the AI model's understanding of HTTP
  • Can adapt to various request types without explicit programming
  • May produce unexpected or creative responses based on the LLM's training

Architecture

The project consists of several key components:

  1. Main Server Script (gpt-httpd.sh)

    • Sets up the TCP listener on the specified port
    • Uses ncat (preferred) or basic nc for handling connections
    • Manages server lifecycle and connection handling
  2. AI Handler Script (gpt-httpd-handler.sh)

    • Receives raw HTTP requests from the listener
    • Formats and sends requests to the Ollama API
    • Processes AI-generated responses
    • Ensures proper HTTP response formatting with correct headers and line endings
  3. Basic Handler (handler.sh)

    • A simple non-AI handler that returns a static "Hello, World!" response
    • Useful for testing and as a fallback
  4. Testing Script (test-gpt-httpd.sh)

    • Helps verify that the server is functioning correctly
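The basic handler's idea can be sketched as follows. This is not the project's actual handler.sh, just a minimal static response in the same spirit: HTTP/1.1 requires CRLF (`\r\n`) line endings and a Content-Length measured in bytes, both common pitfalls when hand-writing responses in shell.

```shell
#!/usr/bin/env bash
# Sketch of a static fallback handler: ignore the request and return a
# fixed, protocol-compliant HTTP/1.1 response.

static_response() {
  local body="Hello, World!"
  local length
  length=$(printf '%s' "$body" | wc -c)   # byte count, not character count
  printf 'HTTP/1.1 200 OK\r\n'
  printf 'Content-Type: text/plain\r\n'
  printf 'Content-Length: %d\r\n' "$length"
  printf 'Connection: close\r\n'
  printf '\r\n'                           # blank line separates headers/body
  printf '%s' "$body"
}

static_response
```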

Technical Challenges

This project addresses several interesting technical challenges:

  • Raw HTTP Parsing: Working directly with the HTTP/1.1 protocol at the socket level
  • AI Prompt Engineering: Crafting prompts that instruct the AI model to generate valid HTTP responses
  • Connection Management: Ensuring proper handling of persistent connections and response formatting
  • JSON Escaping: Properly escaping request data for inclusion in JSON payloads to the AI model
  • Content-Length Handling: Ensuring responses have accurate Content-Length headers
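The JSON-escaping challenge can be illustrated with jq: `jq -Rs` reads stdin as one raw string and emits it as a properly escaped JSON value, so quotes and CRLFs in the request cannot break the payload. The prompt wording below is illustrative, not the project's actual prompt; the `model`, `prompt`, and `stream` fields match Ollama's `/api/generate` API.

```shell
#!/usr/bin/env bash
# Embed a raw HTTP request (which may contain quotes and newlines)
# safely in a JSON payload for the Ollama API.

request=$'GET /about HTTP/1.1\r\nHost: localhost\r\nUser-Agent: curl "test"\r\n\r\n'

# -R: raw input, -s: slurp all of stdin into one string (available as ".")
payload=$(printf '%s' "$request" | jq -Rs \
  --arg model "llama3" \
  '{model: $model, prompt: ("Respond to this HTTP request:\n" + .), stream: false}')

echo "$payload"

# The handler would then POST it, e.g.:
#   curl -s http://localhost:11434/api/generate -d "$payload" | jq -r '.response'
```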

Future Possibilities

This tech demo could be extended in several interesting ways:

  • AI-Driven API Simulation: Simulate complex APIs without implementing them
  • Protocol Learning: Explore how well different AI models understand networking protocols
  • Testing Tool: Use as a mock server for testing client applications
  • Educational Tool: Demonstrate HTTP protocol details in an interactive way

Limitations

  • Tech Demo Only: This is an experimental concept not suitable for production use
  • Performance: Response generation is limited by AI inference speed (typically several seconds)
  • Reliability: The quality of responses depends entirely on the AI model's understanding of HTTP
  • Security: No built-in security protections or rate limiting
  • Connections: The netcat implementation handles one request at a time
  • No HTTPS: No SSL/TLS support for secure connections

Conclusion

GPT-HTTPd demonstrates a novel approach to server development where the server's behavior is defined by an AI model's understanding of protocols rather than explicit code. While not practical for production use, it illustrates the potential for AI to interpret and implement technical specifications directly.
