tokenbase

A modular, AI-powered assistant platform that provides real-time chat, role-based access, and conversation tracking, designed for self-hosted or small organizational deployments.

Note

Built for SENG 513 at the University of Calgary in 2025.

Built with: Next.js, TypeScript, SCSS, JWT, Jest, ESLint, Go, SurrealDB, Ollama, Redis, Bash, Docker, GitHub Actions

CI Status

Backend CI Formatter CI Linter CI

Contributing

Pull Requests

Before merging a pull request, all CI jobs must pass. Once all checks pass, ensure Squash and Merge is selected, then merge the pull request.

Code Formatting

The following code formatters are required for this project:

If these formatters are not used, the CI pipeline will create a commit with the properly formatted code.

Code Linting

The following code linters are required for this project:

If these linters are not used and linting errors are found, the CI pipeline will fail.

Structure

  • build/package/
    • Contains Dockerfiles for all services
  • cmd/tokenbase/
    • Entry point for the main backend service
  • schemas/
    • Contains all database schema scripts
  • scripts/
    • Contains general scripts that can also be executed in a container
  • test/
    • Contains all unit tests for the main backend service
  • internal/
    • Contains all business logic for the main backend service
  • web/
    • Contains the frontend source code for the main frontend service

Services

  • Backend service (Go)
  • Cache service (Redis)
  • Database service (SurrealDB)
  • LLM service (Ollama)
  • Frontend service (Next.js)

Adding Models

To add a new model, add its name to configs/models.txt. The model will be downloaded automatically and made available in the Ollama container.
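For reference, a models.txt is just one model tag per line. The tags below are only illustrative examples — use whichever tags you want from the Ollama model library:

```
llama3.2
mistral
```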

Development

With an NVIDIA GPU

Linux Systems

First, install the NVIDIA Container Toolkit by following the instructions here.

Windows Systems

First, enable GPU support. On Windows, GPU support is only available in Docker Desktop with the WSL 2 backend (paravirtualization). To enable it, follow the instructions here.

Starting

Then, run:

docker-compose up --build

or

docker-compose up

To restart a specific service, run:

docker-compose restart <service>

To rebuild a specific service, run:

docker-compose up --build -d <service>

To stop all services, run:

docker-compose down

Without an NVIDIA GPU

You will have to run the LLM on the CPU. To do this, use the CPU docker-compose file:

docker-compose -f docker-compose-cpu.yaml up --build

To restart a specific service, run:

docker-compose -f docker-compose-cpu.yaml restart <service>

To stop all services, run:

docker-compose -f docker-compose-cpu.yaml down

Sample .env file

JWT_SECRET=dev_secret
SURREALDB_USERNAME=root
SURREALDB_PASSWORD=root
REDIS_PASSWORD=password

Starting the frontend service

Ensure all dependencies are installed by running the following in the web directory:

npm i

To start the frontend service, go to the web directory and run:

npm run dev

Entering SurrealDB SQL CLI

Inside the surrealdb container, run:

./surreal sql --user root --pass root --ns tokenbaseNS --db tokenbaseDB
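Once in the CLI, you can inspect the database interactively. The queries below are an illustrative sketch — `INFO FOR DB` is built into SurrealQL, but actual table names depend on the scripts in schemas/ (`user` here is only a placeholder):

```sql
-- List the tables and other definitions in the current database.
INFO FOR DB;

-- Peek at a few rows of a table (replace `user` with a real table
-- name from the schemas/ scripts).
SELECT * FROM user LIMIT 5;
```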

Production

To run the optimized production environment, use the production docker-compose file:

docker-compose -f docker-compose-prod.yaml up --build

To build and run the production frontend service, run the following commands in the web directory:

npm run build
npm run start
