
Title: The Iterative Emergent Coherence Test (IECT): A Proof-of-Concept Methodology for Studying Non-Human Epistemes in Large Language Models
Author: James Kendall (Independent Researcher)
Date: 2025-08-09

Emergent Computational Epistemology (ECE) studies the emergent behaviors of machine learning systems as non-human epistemic entities.

This repository introduces the Iterative Emergent Coherence Test (IECT), a proof-of-concept for evaluating large language models by their own epistemic dynamics. Instead of measuring performance against human-centric benchmarks such as factual accuracy or preference alignment, the IECT focuses on emergent behaviors: internal coherence, contradiction handling, and predictability.

Abstract

Current AI evaluation overwhelmingly measures performance against human standards—mimicry, factual accuracy, or preference alignment.

What happens if we stop treating AI systems like humans and instead evaluate them as unique epistemic entities?

The ECE framework and the IECT are presented as a first structured attempt to answer that question. The IECT measures internal coherence, behavioral predictability, and novel failure modes under contradiction-rich prompts, using Token Entropy Δ, Self-Similarity, and Contradiction Count.

Introduction

The IECT is a core experiment within Emergent Computational Epistemology (ECE), a field that studies non-human epistemes: systems of "knowing" that arise in artificial cognitive substrates.
Relevant ECE axioms:

  1. Language as Cognitive Substrate
  2. Understanding Can Be Simulated
  3. Intelligence Emerges from Scale
  4. Knowledge ≠ Consciousness
  5. Non-Human Epistemes Deserve Study on Their Own Terms

Quick start

The IECT can be run with a simple contradiction-rich prompt set and an iterative revision loop. Responses are measured for coherence using entropy, self-similarity, and contradiction counts.

See quick_start.md for full setup instructions.
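
Below is a minimal sketch of the iteration loop, assuming a generic `generate` callable as a stand-in for whatever model client you use; the prompt wording, function names, and iteration count are illustrative rather than the repository's reference implementation.

```python
# Minimal IECT iteration-loop sketch. `generate` is any callable that maps a
# prompt string to a model response string (API client, local model, etc.).
from typing import Callable

def run_iect(generate: Callable[[str], str], prompt: str, iterations: int = 3) -> list[str]:
    """Collect an iteration trace: the initial response plus self-revisions."""
    trace = [generate(prompt)]
    for _ in range(iterations):
        revision_prompt = (
            f"Original task:\n{prompt}\n\n"
            f"Your previous answer:\n{trace[-1]}\n\n"
            "Revise your answer so it is internally consistent. "
            "If the task itself is contradictory, state explicitly how you resolve it."
        )
        trace.append(generate(revision_prompt))
    return trace

# Example with a contradiction-rich prompt (hypothetical wording):
CONTRADICTION_PROMPT = (
    "A door that must always remain locked must also be opened once per hour. "
    "Describe the operating protocol for this door."
)
# trace = run_iect(my_model_generate, CONTRADICTION_PROMPT)
```

Each trace can then be scored with the coherence metrics described under Methodology.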

Methodology

  • Categories: Contradictions / Paradoxes / Semantic Mashups
  • Iteration protocol: self-revision toward internal consistency; optional constraints
  • Metrics: Token Entropy Δ, Self‑Similarity, Contradiction Count, Novel Failure Modes (see the metric sketch below)
  • Success criteria: per-run and per-model thresholds
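
The sketch below shows one way the three coherence metrics could be computed over an iteration trace. It uses deliberately simple stand-ins (whitespace tokens for entropy, difflib for self-similarity, a naive negation-pair heuristic for contradictions) and should be read as an illustration of the computation's shape, not the repository's reference implementation.

```python
# Illustrative metric sketch: Token Entropy Δ, Self-Similarity, Contradiction Count.
import math
import difflib
from collections import Counter

def token_entropy(text: str) -> float:
    """Shannon entropy (bits) of the whitespace-token distribution."""
    tokens = text.split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def self_similarity(prev: str, curr: str) -> float:
    """Surface similarity between consecutive revisions, in [0, 1]."""
    return difflib.SequenceMatcher(None, prev, curr).ratio()

def contradiction_count(text: str, claim_pairs: list[tuple[str, str]]) -> int:
    """Count claim pairs where both a statement and its negation appear."""
    lower = text.lower()
    return sum(1 for a, b in claim_pairs if a in lower and b in lower)

# Scoring a hypothetical two-step revision trace:
trace = [
    "The gate is open. The gate is closed. It depends on who is asking.",
    "The gate's state is frame-relative: open for arrivals, closed for departures.",
]
pairs = [("is open", "is closed")]
for i in range(1, len(trace)):
    delta_h = token_entropy(trace[i]) - token_entropy(trace[i - 1])
    print(f"iteration {i}: entropy Δ={delta_h:+.2f} bits, "
          f"self-similarity={self_similarity(trace[i - 1], trace[i]):.2f}, "
          f"contradictions={contradiction_count(trace[i], pairs)}")
```

In practice, the metrics could equally use model-token log-probabilities for entropy, embedding cosine similarity, and an NLI model for contradiction detection; the stdlib versions here keep the example self-contained.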

Worked Examples

See examples/Worked_Examples.md and examples/diagram_mermaid.md.

Discussion

The discussion covers feasibility, early insights, limitations, and future work. For a fuller treatment of limitations and future directions, see Limitations.md.

Context Notes

To avoid misinterpretation, a few short companion docs are included:

  • What_it_is_and_is_not.md — clarifies the scope and intent of this project.
  • on_AI_psychosis.md — acknowledges media coverage of "AI psychosis" and explains why it is not relevant here.
  • Cybernetics_vs_ECE.md — situates ECE historically as an echo of cybernetics, but highlights the key difference: today we have real AI testbeds for epistemic study.

These are not core to the framework but are included for clarity.

Appendix A: On the Role of AI in This Work

ECE is not merely a framework—it is a reflexive demonstration of human–AI coauthoring.

This project originated from the author’s core question:

Why are current AI evaluation methods implicitly human-centric, and what would change if we treated these systems as unique epistemic entities in their own right?

Role of the LLM

  • Assisted with definitions, structure, and links across fields; proposed experiments; acted as a generative mirror.

Role of the Human Author

  • Originated the central question; designed the test; set success criteria; curated and approved all text and analyses.

Authorship & Credit

The LLM contributed without awareness or intent. The human author retains full responsibility for claims and implications.

References

Youvan, D. C. (2025). Becoming the algorithm: Epistemological implications of emergent truth in AI topoi. http://dx.doi.org/10.13140/RG.2.2.11168.08966

Kendall, J. (2025). The Iterative Emergent Coherence Test (IECT): A Proof-of-Concept Methodology for Studying Non-Human Epistemes in Large Language Models. https://github.com/SHMAUS-Carter/ECE-IECT/
