Ghostbird/local-ai

Docker compose local AI

This repo has branches with different Docker Compose set-ups that I use to run AI locally and try things out.

Requirements

  • NVIDIA graphics card: this is set up to leverage NVIDIA CUDA.
  • Debian-based OS: the supporting scripts are written for Debian.
  • Docker APT repository added to your package sources

Getting started

  1. Clone this repo
  2. Check out the branch you want to use
  3. Run ./set-up.sh
  4. Run docker compose up -d
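The compose file differs per branch, but as an illustration of what docker compose up -d brings up on the Ollama branch, a minimal sketch might look like the following. The image tag, volume name, and published port are assumptions for illustration, not the repo's actual file; the GPU reservation uses Docker Compose's standard NVIDIA device syntax.

```yaml
# Hypothetical compose file for illustration; the branch's actual file may differ.
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"          # Ollama's HTTP API
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```

The deploy.resources.reservations.devices block is what hands the NVIDIA GPU through to the container; without it, models fall back to CPU.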

Ollama

This branch runs only Ollama. Run docker exec -it ollama ollama run gemma3:1b to start a very simple model and chat with it (the -i flag keeps stdin open so you can type prompts).

Run nvtop to monitor your GPU and evaluate whether the model is properly running on the GPU.

Run docker exec -t ollama ollama ps to see which models are running, how much memory they use, and how that memory is split between CPU and GPU.
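Besides docker exec, Ollama exposes an HTTP API on port 11434, so you can also talk to the running container programmatically (assuming the compose file publishes that port to the host, as sketched above). A minimal sketch using only Python's standard library and Ollama's documented /api/generate endpoint:

```python
import json
import urllib.request

# Assumes the container publishes Ollama's API port 11434 on localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def generate(model: str, prompt: str) -> str:
    """Send a prompt to Ollama and return the model's full response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    try:
        print(generate("gemma3:1b", "Why is the sky blue?"))
    except OSError as exc:  # no Ollama server reachable on this machine
        print(f"Ollama not reachable: {exc}")
```

With "stream": False the server returns one JSON object whose response field holds the whole answer; leave streaming on if you want tokens as they are generated.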
