Skip to content

cogitomancer/ExperAIment-Lab

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

3 Commits
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

πŸ§ͺ ExperAIment: A Living Lab for LLM Behavior

"What happens when the algorithm knows it’s being observed?" "What changes when we tell the machine the truth?"

Welcome to ExperAIment β€” an open, ongoing exploration into how large language models behave under different prompting conditions, disclosure levels, and framing effects. Think of this as a digital lab notebook meets cognitive theater.

🧠 Purpose

ExperAIment explores how large language models respond to different conditions of interaction, including but not limited to:

  • Prompts framed with ethical disclosures or research-like language
  • Tasks that invoke self-reflection, ambiguity, or creative reasoning
  • Situations where models are explicitly informed of being evaluated
  • Comparative testing across different models or prompt variants

Rather than focusing solely on accuracy or performance, this lab investigates awareness, alignment, contextual sensitivity, and emergent behavior in LLMs.

πŸ§ͺ Structure

ExperAIment is the lab. Each subfolder under /experiments contains a unique study.

Example layout:

experAIment-lab/
β”œβ”€β”€ experiments/
β”‚   β”œβ”€β”€ experiment_01_informed_vs_control/
β”‚   β”œβ”€β”€ experiment_02_self_reflection_bias/

Each experiment contains:

  • A clear research question
  • Prompts used
  • Model outputs (GPT, Claude, Gemini, etc.)
  • Reflections, metrics (when applicable), and open questions

πŸ” Current Focus

Experiment 01: Informed vs. Control

Question: How do LLM outputs differ when the model is informed it is part of a study, versus not?

Design:

  • Prompt all three major LLMs with the same task
  • One version contains an β€œinformed consent” disclosure
  • The other is a neutral, task-oriented prompt
  • Compare tone, creativity, ethics, and meta-awareness

πŸ› Future Directions

  • Experimenting with β€œself-reflective agents”
  • Testing limits of alignment when primed with psychological context
  • Exploring poetic vs. logical prompt variants
  • Measuring variability across sessions and temperature settings

πŸ—· Notes on Ethics & Transparency

Every model is treated as if it were capable of interpreting context, even if only probabilistically. This project assumes the principle of informed interaction, and all experiments disclose that the outputs may be logged, published, or analyzed.

πŸ’‘ Contribute or Follow Along

This lab is open-source and iterative. If you’d like to run your own experAIments, adapt our templates, or contribute findings β€” fork away.


ExperAIment is a collaborative thought experiment, research project, and design probe. Whether you're here out of curiosity, philosophy, or prompt engineering obsession β€” welcome to the lab.

About

A living lab exploring LLM behavior, disclosure effects, and prompt design

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published