Releases: oldwalls/sapphire
Sapphire Alpha v0.13.3 - Unleashing GPT-2-mini into emergence.

📦 Sapphire Alpha v0.13.3 – Release Notes
Release Date: 2025-07-19
Author: Remy Szyndler
Branch: main
Codeline upgraded; fixes issued for the bugs documented in MAJOR_BUG_UPDATE.md.
MANUAL.md is now included.
Core Module: sapphire_core.py
✳️ Added

- **Prompt Constructor System**
  Introduced the `prompt_constr` parameter, allowing configurable prompt layouts built from the components `prompt`, `tail`, and `memory`.
  Example: `memory;prompt;memory;tail;prompt;memory;prompt;`
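A minimal sketch of how such a layout string could be expanded into a prompt. The `build_prompt` helper and the component values are illustrative, not Sapphire's actual API.

```python
# Hypothetical helper: expand a prompt_constr layout string into a prompt.
# The function name and the component dict are illustrative only.

def build_prompt(layout: str, components: dict[str, str]) -> str:
    """Concatenate components in the order named by the layout string."""
    parts = []
    for token in layout.split(";"):
        token = token.strip()
        if token:  # a trailing semicolon yields an empty token
            parts.append(components[token])
    return "\n".join(parts)

layout = "memory;prompt;memory;tail;prompt;memory;prompt;"
components = {
    "prompt": "User: what did we decide about the ranking engine?",
    "tail": "Assistant: we agreed to keep the hybrid scorer.",
    "memory": "[recall] earlier discussion of chrono-semantic ranking",
}
print(build_prompt(layout, components))
```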
- **Chrono-Semantic Memory Ranking**
  Integrated the CSCSR memory engine (Chronologically Sorted + Context Similarity Ranked) using:
  - SBERT + lexical hybrid similarity
  - Time decay (`tau`, `sigma`)
  - Salience bias scaling (`weight`, `lam`)
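A hedged sketch of the chrono-semantic ranking idea: blend SBERT cosine similarity with a simple lexical overlap, then scale by an exponential time decay and a salience weight. The decay formula, the mixing ratio, and the memory record shape are assumptions, not the code in `sapphire_core.py`.

```python
# Sketch only: hybrid SBERT + lexical similarity with time decay and salience
# scaling. The exact formulas (and how tau/sigma/lam interact) are assumptions.
import math
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")

def lexical_sim(a: str, b: str) -> float:
    """Jaccard word overlap as a stand-in for the lexical half of the score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def rank_memories(query, memories, tau=50.0, weight=1.0, mix=0.5):
    """memories: list of {'text': str, 'age': float}. Returns best-first."""
    q_emb = sbert.encode(query, convert_to_tensor=True)
    scored = []
    for mem in memories:
        m_emb = sbert.encode(mem["text"], convert_to_tensor=True)
        semantic = float(util.cos_sim(q_emb, m_emb))      # SBERT half
        hybrid = mix * semantic + (1 - mix) * lexical_sim(query, mem["text"])
        decay = math.exp(-mem["age"] / tau)               # older memories fade
        scored.append((weight * hybrid * decay, mem))
    return [m for _, m in sorted(scored, key=lambda s: s[0], reverse=True)]
```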
- **Soft-Logit Injection Engine**
  Soft attention biasing applied at the logit level to reinforce memory relevance during generation.
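A sketch of logit-level biasing: before sampling, add a small additive bonus (scaled by `lam`) to the logits of vocabulary ids that occur in the retrieved memory. The per-id additive bias is an assumed mechanism, not necessarily the one Sapphire uses.

```python
# Sketch only: additive logit bias toward tokens present in retrieved memory.
import torch

def bias_logits(logits: torch.Tensor, memory_ids: list[int], lam: float = 1.5) -> torch.Tensor:
    """logits: (batch, vocab) next-token logits; returns a biased copy."""
    biased = logits.clone()
    idx = torch.tensor(sorted(set(memory_ids)), dtype=torch.long)
    biased[:, idx] += lam                    # nudge memory-relevant tokens up
    return biased

# Typical use on the last position of a causal-LM forward pass:
# next_logits = model(input_ids).logits[:, -1, :]
# probs = torch.softmax(bias_logits(next_logits, memory_ids), dim=-1)
# next_token = torch.multinomial(probs, 1)
```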
- **Sampling & Rerank Pipeline**
  Introduced `n_sieve` inference plus `sieve_rank_mem` rerank depth (modes 0–2):
  - 0 = prompt only
  - 1 = prompt + memory
  - 2 = prompt + memory + candidate completions
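A sketch of the sieve-and-rerank flow: draw `n_sieve` sampled completions, then pick one by scoring each candidate against a context whose breadth follows `sieve_rank_mem`. It uses the DialoGPT-small checkpoint named in the environment section below; the overlap-based scorer is a placeholder for whatever scoring Sapphire actually applies.

```python
# Sketch only: generate n_sieve candidates, rerank by mode-dependent context.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

def sieve_rerank(prompt: str, memory: str, n_sieve: int = 4, sieve_rank_mem: int = 1) -> str:
    enc = tok(prompt, return_tensors="pt")
    outs = model.generate(**enc, do_sample=True, temperature=0.8, top_p=0.9,
                          max_new_tokens=40, num_return_sequences=n_sieve,
                          pad_token_id=tok.eos_token_id)
    candidates = [tok.decode(o[enc["input_ids"].shape[1]:], skip_special_tokens=True)
                  for o in outs]

    def score(cand: str) -> float:
        context = prompt                       # mode 0: prompt only
        if sieve_rank_mem >= 1:
            context += " " + memory            # mode 1: + memory
        if sieve_rank_mem >= 2:
            context += " " + cand              # mode 2: + candidate itself
        return len(set(cand.lower().split()) & set(context.lower().split()))

    return max(candidates, key=score)
```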
🛠️ Changed

- **Memory Format**
  Internal memory references (`tail`, `memory`) are now truncated to 1024 tokens for GPT-2 compatibility.
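A minimal sketch of the 1024-token cap; the keep-the-most-recent-tokens policy is an assumption about how Sapphire truncates.

```python
# Sketch only: clamp a memory/tail reference to GPT-2's 1024-token context,
# keeping the most recent tokens (the truncation policy is assumed).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")

def truncate_reference(text: str, max_tokens: int = 1024) -> str:
    ids = tok.encode(text)
    if len(ids) <= max_tokens:
        return text
    return tok.decode(ids[-max_tokens:])     # drop the oldest tokens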
- **Inference Stability**
  Aligned `input_ids` and `labels` in the scoring loop, preventing CUDA errors during `cross_entropy()` loss computation.
- **Loss Evaluation**
  Improved handling of pad tokens and the ignore index (`-100`) during evaluation scoring.
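The two items above describe the same scoring path. A sketch of the stabilized version, assuming the usual causal-LM shift and `-100` masking of pad positions; the exact loop in `sapphire_core.py` may differ.

```python
# Sketch only: labels are a shifted copy of input_ids with pad positions set
# to the ignore index (-100), so cross_entropy skips them and shapes stay aligned.
import torch
import torch.nn.functional as F

def sequence_loss(logits: torch.Tensor, input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """logits: (B, T, V); input_ids: (B, T). Mean token loss ignoring pads."""
    labels = input_ids.clone()
    labels[labels == pad_token_id] = -100                 # ignored by the loss
    shift_logits = logits[:, :-1, :].contiguous()         # predict t+1 from t
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                           shift_labels.view(-1), ignore_index=-100)
```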
⚙️ Configurable Parameters

| Key | Description |
|---|---|
| `prompt_constr` | Prompt layout definition string |
| `temp` | Sampling temperature |
| `top_k` / `top_p` | Sampling filters |
| `tau`, `sigma` | Time decay factors for memory ranking |
| `lam` / `weight` | Logit bias strength scaling |
| `top_n`, `top_t` | Memory stack & tail length |
| `n_sieve` | Number of completions before rerank |
| `sieve_rank_mem` | Reranking depth control (0–2) |
| `max_forward_tokens` | Max output tokens per inference |
Use the `config` CLI command to view or update live parameters.
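For orientation, a hypothetical preset that pulls the keys above together; the values are illustrative, not shipped defaults.

```python
# Illustrative values only; not the defaults shipped with Sapphire.
preset = {
    "prompt_constr": "memory;prompt;memory;tail;prompt;memory;prompt;",
    "temp": 0.8,
    "top_k": 50,
    "top_p": 0.9,
    "tau": 50.0,            # time-decay scale
    "sigma": 10.0,          # time-decay spread
    "lam": 1.5,             # logit bias strength
    "weight": 1.0,          # salience scaling
    "top_n": 8,             # memory stack depth
    "top_t": 4,             # tail length
    "n_sieve": 4,           # candidates before rerank
    "sieve_rank_mem": 1,    # rerank depth (0-2)
    "max_forward_tokens": 64,
}
```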
🧪 Known Stable Environment

- Base model: `DialoGPT-small` (124M)
- Local GPU: 4 GB VRAM minimum
- OS: Windows 10 / Linux-compatible
- Python 3.10+ with `transformers`, `torch`, `sentence-transformers`
🧰 CLI Enhancements

- `config set key value` – modify a parameter live
- `config load preset_name` – load a preset
- `config saveas name` – save the current config
- `tail` – show the current dialog tail
- `umb` – display active memory
- `clean [param1] [param2]` – reset the UMB
- `cloud` – render a wordcloud of the memory stack
✅ Stability Status

- All known CUDA assert errors resolved
- Cross-entropy scoring stabilized
- Prompt overflow truncated properly
- Config + memory systems pass integrity checks
🔜 Upcoming (v0.14.x)

- Ontology mode injection (`symbol` / `axiom` / `closure`)
- Multi-core rerank optimization
- Episodic vs. declarative memory separation
- Latency reduction under multi-sieve inference
Full Changelog: https://github.com/oldwalls/sapphire/commits/v0.13.3