ElixirMentor/prompt_vault


PromptVault


A toolkit for managing and processing prompts with context, templates, and token counting for LLM applications.

PromptVault provides an immutable, token-aware context management system that helps you build robust LLM prompt pipelines with features like:

  • Context Management: Immutable context with message history
  • Token Counting: Built-in token counting with pluggable tokenizers
  • Template Support: EEx and Liquid template engines
  • Message Types: Support for text, tool calls, and media messages
  • Compaction Strategies: Automatic context compaction when approaching token limits
  • Type Safety: Full Elixir typespecs and documentation

Installation

Add prompt_vault to your list of dependencies in mix.exs:

def deps do
  [
    {:prompt_vault, "~> 0.1.0"},
    # Optional: For accurate OpenAI token counting
    {:tiktoken, "~> 0.4.1"}
  ]
end

The tiktoken dependency is optional and only needed if you want to use the TiktokenTokenizer for precise OpenAI token counting.

Quick Start

# Create a new context
context = PromptVault.new(
  model: "gpt-4",
  temperature: 0.7,
  token_counter: PromptVault.TokenCounter.TiktokenTokenizer
)

# Add messages
{:ok, context} = PromptVault.add_message(context, :system, "You are a helpful assistant")
{:ok, context} = PromptVault.add_message(context, :user, "Hello!")

# Count tokens
{:ok, token_count} = PromptVault.token_count(context)

# Render to final prompt
rendered = PromptVault.render(context)

Features

Message Types

Text Messages

{:ok, context} = PromptVault.add_message(context, :user, "What's the weather like?")

Tool Calls

{:ok, context} = PromptVault.add_tool_call(
  context, 
  :get_weather, 
  %{city: "New York"}, 
  %{type: "object", properties: %{temperature: %{type: "number"}}}
)

Media Messages

{:ok, context} = PromptVault.add_media(
  context, 
  "image/jpeg", 
  "https://example.com/image.jpg"
)

Templates

Use templates with assigns for dynamic content:

{:ok, context} = PromptVault.add_message(
  context, 
  :user, 
  "Hello <%= @name %>!", 
  template: true, 
  assigns: %{name: "World"}
)
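Since the README lists EEx as a supported template engine, the substitution above presumably behaves like a plain EEx evaluation. The snippet below shows the equivalent standalone call using only Elixir's built-in EEx module (an illustration, not PromptVault code):

```elixir
# Standalone EEx evaluation equivalent to the template example above.
# Uses only Elixir's built-in EEx module, not PromptVault itself.
EEx.eval_string("Hello <%= @name %>!", assigns: [name: "World"])
# => "Hello World!"
```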

Token Counting

PromptVault supports pluggable tokenizers, so you can choose between fast estimation and precise counting:

PretendTokenizer (default, estimation-based):

context = PromptVault.new(
  token_counter: PromptVault.TokenCounter.PretendTokenizer
)
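The README does not specify what formula PretendTokenizer uses. A common rule of thumb for English text is roughly four characters per token; a minimal, self-contained sketch of that heuristic (not the library's actual implementation) looks like this:

```elixir
# Illustration only: a rough "4 characters per token" estimate, a common
# heuristic for English text. PromptVault's PretendTokenizer may use a
# different formula; this module is not part of the library.
defmodule RoughTokenEstimator do
  @chars_per_token 4

  @doc "Estimate the token count of a string (ceiling of length / 4)."
  def estimate(text) when is_binary(text) do
    text
    |> String.length()
    |> Kernel./(@chars_per_token)
    |> Float.ceil()
    |> trunc()
  end
end

RoughTokenEstimator.estimate("You are a helpful assistant")
# 27 characters -> 7 estimated tokens
```

Estimation like this is cheap and dependency-free, at the cost of drifting from the counts OpenAI actually bills for.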

TiktokenTokenizer (accurate, using OpenAI's tiktoken):

context = PromptVault.new(
  model: "gpt-4",
  token_counter: PromptVault.TokenCounter.TiktokenTokenizer
)

The TiktokenTokenizer supports all major OpenAI models including GPT-4, GPT-3.5-turbo, and text-davinci models. It provides precise token counts that match OpenAI's billing.

Context Compaction

Automatically compact context when approaching token limits:

context = PromptVault.new(
  compaction_strategy: PromptVault.Compaction.SummarizeHistory,
  token_counter: PromptVault.TokenCounter.TiktokenTokenizer
)

{:ok, compacted_context} = PromptVault.compact(context)

Configuration

Configure your context with various options:

context = PromptVault.new(
  model: "gpt-4",                  # LLM model
  temperature: 0.7,                # Model temperature
  max_tokens: 4000,                # Token limit
  token_counter: MyTokenCounter,   # Custom token counter
  compaction_strategy: MyStrategy  # Custom compaction strategy
)
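The `token_counter` option accepts your own module. The exact callbacks PromptVault expects are not shown in this README, so the sketch below assumes a single `count_tokens/2`-style function taking the text and model name; consult the `PromptVault.TokenCounter` docs for the real behaviour before relying on it:

```elixir
# Hypothetical sketch of a custom token counter. The callback name and
# signature are assumptions, not confirmed by this README.
defmodule MyTokenCounter do
  # Assumed callback: return {:ok, count} for a given string and model.
  def count_tokens(text, _model) when is_binary(text) do
    # Whitespace word count as a crude stand-in for real tokenization.
    {:ok, text |> String.split() |> length()}
  end
end
```

You would then pass the module as `token_counter: MyTokenCounter`, as in the configuration example above.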

Documentation

Full documentation is available at https://hexdocs.pm/prompt_vault.

License

MIT License. See LICENSE.md for details.
