
Sapphire Alpha v0.13.3 - Unleashing GPT-2-mini into emergence.

Pre-release

@oldwalls oldwalls released this 17 Jul 11:35
· 28 commits to main since this release
73cfa0f


📦 Sapphire Alpha v0.13.3 – Release Notes

Release Date: 2025-07-19
Author: Remy Szyndler
Branch: main


Codeline upgraded; bug fixes are documented in MAJOR_BUG_UPDATE.md.

MANUAL.md is now included.


Core Module: sapphire_core.py


✳️ Added

  • Prompt Constructor System
    Introduced the prompt_constr parameter, which defines configurable prompt layouts built from the components prompt, tail, and memory.
    Example layout string:

    memory;prompt;memory;tail;prompt;memory;prompt;
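A layout string like the one above could be expanded roughly as follows. This is a minimal sketch; build_prompt is a hypothetical helper for illustration, not the actual sapphire_core.py implementation:

```python
def build_prompt(layout: str, components: dict) -> str:
    """Assemble a prompt by expanding a semicolon-separated layout string.

    Each token in the layout ("prompt", "tail", "memory") is replaced by
    the corresponding component text; empty segments (e.g. from a trailing
    semicolon) are skipped.
    """
    parts = []
    for token in layout.split(";"):
        token = token.strip()
        if not token:
            continue  # trailing semicolon leaves an empty segment
        if token not in components:
            raise ValueError(f"unknown layout component: {token}")
        parts.append(components[token])
    return "\n".join(parts)

# Layout from the release notes, with placeholder component texts
layout = "memory;prompt;memory;tail;prompt;memory;prompt;"
components = {"prompt": "<prompt>", "tail": "<tail>", "memory": "<memory>"}
assembled = build_prompt(layout, components)
```

Repeating a component (here, memory and prompt appear multiple times) simply interleaves it into the final prompt in layout order.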
    
  • Chrono-Semantic Memory Ranking
    Integrated CSCSR memory engine (Chronologically Sorted + Context Similarity Ranked) using:

    • SBERT + lexical hybrid similarity

    • Time decay (tau, sigma)

    • Salience bias scaling (weight, lam)

  • Soft-Logit Injection Engine
    Soft attention biasing applied at logit level to reinforce memory relevance during generation.
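The idea of logit-level biasing can be sketched as a small additive bump on memory-relevant token ids (pure-Python stand-in for illustration; the real engine operates on model logit tensors):

```python
def inject_memory_bias(logits, memory_token_ids, weight=1.5):
    """Add a soft bias to the logits of tokens present in active memory.

    An additive bias nudges sampling toward memory-relevant tokens
    without hard-constraining the output distribution.
    """
    biased = list(logits)  # leave the original logits untouched
    for tid in memory_token_ids:
        biased[tid] += weight
    return biased

vocab_logits = [0.1, 2.0, -1.0, 0.5]
biased = inject_memory_bias(vocab_logits, memory_token_ids={2, 3})
# tokens 2 and 3 are now more likely; the others are unchanged
```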

  • Sampling & Rerank Pipeline
    Introduced n_sieve multi-candidate inference with sieve_rank_mem rerank depth (modes 0–2):

    • 0 = prompt only

    • 1 = prompt + memory

    • 2 = prompt + memory + candidate completions
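The three modes above can be sketched as follows; generate and score_candidate are hypothetical stand-ins for the model call and the scoring function, not the release's actual API:

```python
def sieve_rerank(prompt, memory, generate, score_candidate,
                 n_sieve=4, sieve_rank_mem=1):
    """Sample n_sieve completions, then keep the best under the chosen mode.

    sieve_rank_mem selects the rerank context:
      0 = score against the prompt only
      1 = score against prompt + memory
      2 = score against prompt + memory + the candidate itself
    """
    candidates = [generate(prompt) for _ in range(n_sieve)]
    best, best_score = None, float("-inf")
    for cand in candidates:
        context = prompt
        if sieve_rank_mem >= 1:
            context += memory
        if sieve_rank_mem >= 2:
            context += cand
        score = score_candidate(context, cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```

With stub functions, e.g. a generator cycling through fixed strings and a length-based scorer, the pipeline deterministically returns the highest-scoring candidate.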


🛠️ Changed

  • Memory Format
    Internal memory references (tail, memory) now truncated to 1024 tokens for GPT-2 compatibility.
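Truncation to the GPT-2 context window can be as simple as keeping the most recent tokens (a sketch assuming token-id lists; the constant matches GPT-2's 1024-token context):

```python
MAX_CTX = 1024  # GPT-2 context window size

def truncate_memory(token_ids, limit=MAX_CTX):
    """Keep only the most recent `limit` tokens of a memory reference."""
    return token_ids[-limit:]

trimmed = truncate_memory(list(range(3000)))  # drops the 1976 oldest tokens
```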

  • Inference Stability
    Aligned input_ids and labels in the scoring loop, preventing CUDA assert errors during cross_entropy() loss computation.

  • Loss Evaluation
    Improved handling of pad tokens and the ignore index (-100) during evaluation scoring.
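Ignore-index handling typically means replacing pad positions in the labels with -100 so they contribute no loss. A minimal pure-Python sketch (in the actual pipeline, torch's cross_entropy with ignore_index=-100 performs the skipping):

```python
IGNORE_INDEX = -100

def mask_pad_labels(input_ids, pad_token_id):
    """Build labels aligned with input_ids, ignoring pad positions.

    Positions holding pad_token_id are set to IGNORE_INDEX so a loss
    function configured with ignore_index=-100 skips them entirely.
    """
    return [tid if tid != pad_token_id else IGNORE_INDEX
            for tid in input_ids]

ids = [15, 27, 50256, 50256]   # 50256 = GPT-2 eos token, often reused as pad
labels = mask_pad_labels(ids, pad_token_id=50256)
# labels == [15, 27, -100, -100]
```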


⚙️ Configurable Parameters

| Key | Description |
| --- | --- |
| prompt_constr | Prompt layout definition string |
| temp | Sampling temperature |
| top_k / top_p | Sampling filters |
| tau, sigma | Time-decay factors for memory ranking |
| lam / weight | Logit bias strength scaling |
| top_n, top_t | Memory stack & tail length |
| n_sieve | Number of completions before rerank |
| sieve_rank_mem | Reranking depth control (0–2) |
| max_forward_tokens | Max output tokens per inference |

Use the config CLI command to view or update parameters live.


🧪 Known Stable Environment

  • Base model: DialoGPT-small (124M)

  • Local GPU: 4GB VRAM minimum

  • OS: Windows 10 / Linux-compatible

  • Python 3.10+ with transformers, torch, sentence-transformers


🧰 CLI Enhancements

config set key value        # Modify param live
config load preset_name     # Load preset
config saveas name          # Save current config
tail                        # Show current dialog tail
umb                         # Display active memory
clean [param1] [param2]     # Reset UMB
cloud                       # Render wordcloud of memory stack

✅ Stability Status

  • All known CUDA assert errors resolved

  • Cross-entropy scoring stabilized

  • Prompt overflow truncated properly

  • Config + memory systems pass integrity checks


🔜 Upcoming (v0.14.x)

  • Ontology mode injection (symbol / axiom / closure)

  • Multi-core rerank optimization

  • Episodic vs declarative memory separation

  • Latency reduction under multi-sieve inference


Full Changelog: https://github.com/oldwalls/sapphire/commits/v0.13.3