

hello-genai

A simple chatbot web application built in Go, Python, Node.js and Rust that connects to a local LLM service (llama.cpp) to provide AI-powered responses.

Environment Variables

The application uses the following environment variables defined in the .env file:

  • LLM_BASE_URL: The base URL of the LLM API
  • LLM_MODEL_NAME: The model name to use

To change these settings, simply edit the .env file in the root directory of the project.
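For reference, a minimal .env might look like the sketch below. The values are placeholders, not defaults shipped with the project; substitute whatever base URL and model name your local LLM server actually exposes:

```
LLM_BASE_URL=http://localhost:12434/engines/llama.cpp/v1
LLM_MODEL_NAME=ai/llama3.2
```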

Quick Start

  1. Clone the repository:

    git clone https://github.com/docker/hello-genai
    cd hello-genai
  2. Run the application using the script:

    ./run.sh
  3. Open your browser and visit the following links:

    http://localhost:8080 for the GenAI Application in Go

    http://localhost:8081 for the GenAI Application in Python

    http://localhost:8082 for the GenAI Application in Node

    http://localhost:8083 for the GenAI Application in Rust
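Under the hood, each service forwards the user's message to the LLM API configured in .env. As a rough sketch of that request, the snippet below builds an OpenAI-compatible chat-completion call (the interface llama.cpp serves); the endpoint path, the assumption that LLM_BASE_URL already ends with the API version prefix (e.g. .../v1), and the fallback URL and model name are all illustrative, not taken from the project code:

```python
import json
import os

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-compatible
    /chat/completions call against the configured LLM server."""
    # Assumes base_url already includes the API version prefix (e.g. .../v1).
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

if __name__ == "__main__":
    # Placeholder fallbacks; normally these come from the .env file.
    url, body = build_chat_request(
        os.environ.get("LLM_BASE_URL", "http://localhost:12434/engines/llama.cpp/v1"),
        os.environ.get("LLM_MODEL_NAME", "ai/llama3.2"),
        "Hello!",
    )
    print(url)
    print(body)
```

Posting that body (with Content-Type: application/json) to the printed URL should return a JSON response whose `choices[0].message.content` field holds the model's reply.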

Requirements

  • macOS (recent version)
  • Either:
    • Docker and Docker Compose (preferred)
    • Go 1.21 or later
  • Local LLM server

If you're using a different LLM server configuration, you may need to modify the .env file.

About

Very simple GenAI application to try the Docker Model Runner
