sinapsis-chatbots

A comprehensive monorepo for building and deploying AI-driven chatbots with support for multiple LLMs

🐍 Installation • 📦 Packages • 📚 Usage example • 🌐 Webapps • 📙 Documentation • 🔍 License

The sinapsis-chatbots module is a powerful toolkit designed to simplify the development of AI-driven chatbots and Retrieval-Augmented Generation (RAG) systems. It provides ready-to-use templates and utilities for configuring and running LLM applications, enabling developers to integrate a wide range of LLMs with ease for natural, intelligent interactions.

Important

We now include support for Llama 4 models!

To use them, install the extra dependency (not needed if you have already installed sinapsis-llama-cpp[all]):

  uv pip install sinapsis-llama-cpp[llama-four] --extra-index-url https://pypi.sinapsis.tech

You need a Hugging Face token. See the official instructions and set it using

  export HF_TOKEN=<token-provided-by-hf>

Then test it through the CLI or the webapp by changing the AGENT_CONFIG_PATH.
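
Before launching, it can help to fail fast if the token never reached your process. The following is a generic Python sketch, not part of the sinapsis API:

```python
import os


def require_hf_token() -> str:
    """Fail fast if the Hugging Face token is missing from the environment."""
    token = os.environ.get("HF_TOKEN", "").strip()
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Run: export HF_TOKEN=<token-provided-by-hf>"
        )
    return token
```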

Note

Llama 4 requires large GPUs to run the models. Running on smaller consumer-grade GPUs is still possible, although a single inference may take hours.

🐍 Installation

This monorepo provides support for LLM chatbots through the following packages:

  • sinapsis-chatbots-base
  • sinapsis-llama-cpp
  • sinapsis-llama-index

Install using your package manager of choice. We encourage the use of uv.

Example with uv:

  uv pip install sinapsis-llama-cpp --extra-index-url https://pypi.sinapsis.tech

or with raw pip:

  pip install sinapsis-llama-cpp --extra-index-url https://pypi.sinapsis.tech

Note

Change the name of the package according to the one you want to install.

Important

Templates in each package may require extra dependencies. For development, we recommend installing the package with all the optional dependencies:

with uv:

  uv pip install sinapsis-llama-cpp[all] --extra-index-url https://pypi.sinapsis.tech

or with raw pip:

  pip install sinapsis-llama-cpp[all] --extra-index-url https://pypi.sinapsis.tech

Note

Change the name of the package according to the one you want to install.

Tip

You can also install all the packages within this project:

  uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech

📦 Packages

  • Sinapsis Llama CPP

    Package with support for various llama-index modules for text completion, including making calls to LLMs, processing and generating embeddings and Nodes, etc.
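
As a rough illustration of what generating Nodes involves in a RAG pipeline: documents are split into overlapping chunks that are later embedded and indexed. A minimal, library-agnostic sketch (illustrative only; the sinapsis templates handle this internally):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, the raw material for RAG 'Nodes'."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # the last window already covered the end of the text
    return chunks
```

Each chunk would then be embedded and stored in a vector index, so that semantically relevant chunks can be retrieved as context for the LLM.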

Tip

Use the CLI command sinapsis info --all-template-names to show a list of all the Template names available in your installation.

Tip

Use the CLI command sinapsis info --example-template-config TEMPLATE_NAME to produce an example Agent config for the Template specified in TEMPLATE_NAME.

For example, for LLaMATextCompletion use sinapsis info --example-template-config LLaMATextCompletion to produce the following example config:

agent:
  name: my_first_chatbot
  description: Agent with a template to pass a text through a LLM and return a response
templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}
- template_name: LLaMATextCompletion
  class_name: LLaMATextCompletion
  template_input: InputTemplate
  attributes:
    llm_model_name: 'bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF'
    llm_model_file: 'DeepSeek-R1-Distill-Qwen-7B-Q5_K_S.gguf'
    n_ctx: 9000
    max_tokens: 10000
    role: assistant
    system_prompt: 'You are an AI expert'
    chat_format: chatml
    context_max_len: 6
    pattern: null
    keep_before: true
    temperature: 0.5
    n_threads: 4
    n_gpu_layers: 8
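
A quick way to catch wiring mistakes in configs like the one above is to check that every template_input refers to a template declared earlier in the list. A minimal sketch over the parsed YAML (e.g. the dict produced by yaml.safe_load); the validation itself is illustrative, not part of sinapsis:

```python
def validate_template_chain(config: dict) -> None:
    """Check that every template_input refers to a previously declared template.

    `config` is the parsed agent config, shaped like the YAML example above.
    Raises ValueError on a dangling reference.
    """
    seen = set()
    for template in config.get("templates", []):
        parent = template.get("template_input")
        if parent is not None and parent not in seen:
            raise ValueError(
                f"{template['template_name']} references unknown template_input {parent!r}"
            )
        seen.add(template["template_name"])
```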

📚 Usage example

The following agent passes a text message through a TextPacket and retrieves a response from an LLM.
Config
agent:
  name: chat_completion
  description: Agent with a chatbot that makes a call to the LLM model using a context uploaded from a file

templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: { }

- template_name: TextInput
  class_name: TextInput
  template_input: InputTemplate
  attributes:
    text: what is AI?
- template_name: LLaMATextCompletion
  class_name: LLaMATextCompletion
  template_input: TextInput
  attributes:
    llm_model_name: bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF
    llm_model_file: DeepSeek-R1-Distill-Qwen-7B-Q5_K_S.gguf
    n_ctx: 9000
    max_tokens: 10000
    temperature: 0.7
    n_threads: 8
    n_gpu_layers: 29
    chat_format: chatml
    system_prompt: "You are a Python and AI agents expert and you provide reasoning behind every answer you give."
    keep_before: true

🌐 Webapps

This module includes webapps to interact with the models.

Important

To run the app you first need to clone this repository:

git clone git@github.com:Sinapsis-ai/sinapsis-chatbots.git
cd sinapsis-chatbots

Note

If you'd like to enable external app sharing in Gradio, export GRADIO_SHARE_APP=True.

Important

If you run into an Out of Memory (OOM) error, you can change the model name or reduce the number of gpu_layers used by the model.
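
As a back-of-envelope way to pick a layer count, you can estimate how many roughly equal-sized layers fit in your free VRAM. All numbers here are rough assumptions (quantized GGUF layer sizes vary, and the KV cache grows with n_ctx), so treat this purely as a starting point:

```python
def max_gpu_layers(model_size_gb: float, total_layers: int,
                   vram_gb: float, reserve_gb: float = 1.5) -> int:
    """Rough estimate of how many model layers fit in VRAM.

    Assumes layers are roughly equal in size; reserve_gb leaves headroom
    for the KV cache and CUDA overhead. A heuristic, not a guarantee.
    """
    per_layer_gb = model_size_gb / total_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return max(0, min(total_layers, int(usable_gb / per_layer_gb)))
```

For example, a ~5.5 GB quantized model with 32 layers fits entirely on an 8 GB GPU under these assumptions, but only about 14 layers fit on a 4 GB GPU; the remaining layers would run on CPU.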

🐳 Docker

IMPORTANT This Docker image depends on the sinapsis-nvidia:base image. Please refer to the official sinapsis instructions to Build with Docker.

  1. Build the sinapsis-chatbots image:
docker compose -f docker/compose.yaml build
  2. Start the container:
docker compose -f docker/compose_apps.yaml up sinapsis-simple-chatbot -d
  3. Check the status:
docker logs -f sinapsis-simple-chatbot

NOTE: You can also deploy the service for the RAG chatbot using

docker compose -f docker/compose_apps.yaml up sinapsis-rag-chatbot -d

  4. The logs will display the URL to access the webapp, e.g.:
Running on local URL:  http://127.0.0.1:7860

NOTE: The URL may be different; check the logs.

  5. To stop the app:

docker compose -f docker/compose_apps.yaml down

To use a different chatbot configuration (e.g. OpenAI-based chat), update the AGENT_CONFIG_PATH environment variable to point to the desired YAML file.

For example, to use OpenAI chat:

environment:
 AGENT_CONFIG_PATH: webapps/configs/openai_simple_chat.yaml
 OPENAI_API_KEY: your_api_key
💻 UV
  1. Export the environment variables to install the Python bindings for llama-cpp:
export CMAKE_ARGS="-DGGML_CUDA=on"
export FORCE_CMAKE="1"
  2. Export CUDACXX:
export CUDACXX=$(command -v nvcc)
  3. Create the virtual environment and sync the dependencies:
uv sync --frozen
  4. Install the wheel:
uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech
  5. Run the webapp:
uv run webapps/llama_cpp_simple_chatbot.py

NOTE: To use OpenAI for the simple chatbot, set your API key and specify the correct configuration file:

export AGENT_CONFIG_PATH=webapps/configs/openai_simple_chat.yaml
export OPENAI_API_KEY=your_api_key

and run step 5 again.

NOTE: You can also deploy the service for the RAG chatbot using

uv run webapps/llama_index_rag_chatbot.py
  6. The terminal will display the URL to access the webapp, e.g.:

Running on local URL:  http://127.0.0.1:7860

NOTE: The URL can be different; check the output of the terminal.

📙 Documentation

Documentation for this and other sinapsis packages is available on the sinapsis website.

Tutorials for different projects within sinapsis are available on the sinapsis tutorials page.

πŸ” License

This project is licensed under the AGPLv3 license, which encourages open collaboration and sharing. For more details, please refer to the LICENSE file.

For commercial use, please refer to our official Sinapsis website for information on obtaining a commercial license.
