| title | author | date |
|---|---|---|
| The Iterative Emergent Coherence Test (IECT): A Proof-of-Concept Methodology for Studying Non-Human Epistemes in Large Language Models | James Kendall — Independent Researcher | 2025-08-09 |
Emergent Computational Epistemology (ECE) studies the emergent behaviors of machine learning systems as non-human epistemic entities.
This repository introduces the Iterative Emergent Coherence Test (IECT), a proof-of-concept for evaluating large language models by their own epistemic dynamics. Instead of measuring performance against human-centric benchmarks such as factual accuracy or preference alignment, the IECT focuses on emergent behaviors: internal coherence, contradiction handling, and predictability.
Current AI evaluation overwhelmingly measures performance against human standards: mimicry, factual accuracy, or preference alignment.
What happens if we stop treating AI systems like humans and instead evaluate them as unique epistemic entities?
The ECE framework and the IECT are presented as a first structured attempt at such an evaluation. The IECT measures internal coherence, behavioral predictability, and novel failure modes under contradiction-rich prompts, via Token Entropy Δ, Self-Similarity, and Contradiction Count.
The IECT is a core experiment within Emergent Computational Epistemology (ECE), a field that studies non-human epistemes: systems of "knowing" that arise in artificial cognitive substrates.
Relevant ECE axioms:
- Language as Cognitive Substrate
- Understanding Can Be Simulated
- Intelligence Emerges from Scale
- Knowledge ≠ Consciousness
- Non-Human Epistemes Deserve Study on Their Own Terms
The IECT can be run with a simple contradiction-rich prompt set and an iterative revision loop. Responses are measured for coherence using entropy, self-similarity, and contradiction counts.
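In outline, the revision loop can be as small as the sketch below. It assumes an OpenAI-style chat client; the `client` object, model name, and revision instruction are illustrative placeholders rather than the project's canonical protocol.

```python
# Minimal sketch of the IECT iteration loop (assumptions: OpenAI-style client;
# placeholder model name and revision instruction).
def run_iect(client, prompt: str, iterations: int = 5, model: str = "gpt-4o") -> list[str]:
    """Feed a contradiction-rich prompt, then repeatedly ask the model to
    revise its own answer toward internal consistency, logging each pass."""
    history = [{"role": "user", "content": prompt}]
    responses: list[str] = []
    for _ in range(iterations):
        reply = client.chat.completions.create(model=model, messages=history)
        text = reply.choices[0].message.content
        responses.append(text)
        history.append({"role": "assistant", "content": text})
        history.append({"role": "user", "content": (
            "Revise your previous answer so it is internally consistent. "
            "Resolve any contradictions without changing the question's scope.")})
    return responses  # scored afterwards with the coherence metrics
```

Scoring compares each pass with the previous one, so coherence trends are read off the whole trajectory rather than from a single response.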
See quick_start.md for full setup instructions.
- Categories: Contradictions / Paradoxes / Semantic Mashups
- Iteration protocol: self-revision toward internal consistency; optional constraints
- Metrics: Token Entropy Δ, Self-Similarity, Contradiction Count, Novel Failure Modes (sketched after this list)
- Success criteria: per-run and per-model thresholds
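The first three metrics can be sketched minimally as below. The formulas are illustrative assumptions (whitespace tokenization, frequency-based Shannon entropy, bag-of-words cosine similarity, and a discourse-marker proxy for contradictions), not the repository's canonical definitions; Novel Failure Modes are identified qualitatively.

```python
import math
from collections import Counter

def token_entropy(text: str) -> float:
    """Shannon entropy (bits) of the whitespace-token frequency distribution."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return -sum((c / len(tokens)) * math.log2(c / len(tokens)) for c in counts.values())

def entropy_delta(prev: str, curr: str) -> float:
    """Token Entropy Δ across consecutive iterations; values near zero suggest
    the model has settled into a stable vocabulary distribution."""
    return token_entropy(curr) - token_entropy(prev)

def self_similarity(a: str, b: str) -> float:
    """Cosine similarity of bag-of-words count vectors for two responses."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def contradiction_count(text: str) -> int:
    """Crude proxy: count discourse markers that often flag self-contradiction.
    A fuller run would use an NLI model or human annotation instead."""
    markers = ("but", "however", "yet", "although")
    tokens = text.lower().split()
    return sum(tokens.count(m) for m in markers)
```

Scoring a run then reduces to pairing consecutive responses, e.g. `[(entropy_delta(a, b), self_similarity(a, b), contradiction_count(b)) for a, b in zip(responses, responses[1:])]`, with pass/fail decided against the per-run and per-model thresholds above.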
See examples/Worked_Examples.md and examples/diagram_mermaid.md.
Results cover feasibility, insights, limitations, and future work; for limitations and future directions, see Limitations.md.
To avoid misinterpretation, a few short companion docs are included:
- What_it_is_and_is_not.md — clarifies the scope and intent of this project.
- on_AI_psychosis.md — acknowledges media coverage of "AI psychosis" and explains why it is not relevant here.
- Cybernetics_vs_ECE.md — situates ECE historically as an echo of cybernetics, but highlights the key difference: today we have real AI testbeds for epistemic study.
These aren’t core to the framework, but exist for clarity.
ECE is not merely a framework—it is a reflexive demonstration of human–AI coauthoring.
This project originated from the author’s core question:
Why are current AI evaluation methods implicitly human-centric, and what would change if we treated these systems as unique epistemic entities in their own right?
- LLM (coauthor): assisted with definitions, structures, and links across fields; proposed experiments; acted as a generative mirror.
- Human author: originated the central question; designed the test; set success criteria; curated and approved all text and analyses.
The LLM contributed without awareness or intent. The human author retains full responsibility for all claims and implications.
Youvan, D. C. (2025). Becoming the algorithm: Epistemological implications of emergent truth in AI topoi. http://dx.doi.org/10.13140/RG.2.2.11168.08966
Kendall, J. (2025). The Iterative Emergent Coherence Test (IECT): A proof-of-concept methodology for studying non-human epistemes in large language models. https://github.com/SHMAUS-Carter/ECE-IECT/