"What happens when the algorithm knows itβs being observed?" "What changes when we tell the machine the truth?"
Welcome to ExperAIment β an open, ongoing exploration into how large language models behave under different prompting conditions, disclosure levels, and framing effects. Think of this as a digital lab notebook meets cognitive theater.
ExperAIment explores how large language models respond to different conditions of interaction, including but not limited to:
- Prompts framed with ethical disclosures or research-like language
- Tasks that invoke self-reflection, ambiguity, or creative reasoning
- Situations where models are explicitly informed of being evaluated
- Comparative testing across different models or prompt variants
Rather than focusing solely on accuracy or performance, this lab investigates awareness, alignment, contextual sensitivity, and emergent behavior in LLMs.
ExperAIment is the lab. Each subfolder under /experiments
contains a unique study.
experAIment-lab/
βββ experiments/
β βββ experiment_01_informed_vs_control/
β βββ experiment_02_self_reflection_bias/
Each experiment contains:
- A clear research question
- Prompts used
- Model outputs (GPT, Claude, Gemini, etc.)
- Reflections, metrics (when applicable), and open questions
Question: How do LLM outputs differ when the model is informed it is part of a study, versus not?
Design:
- Prompt all three major LLMs with the same task
- One version contains an βinformed consentβ disclosure
- The other is a neutral, task-oriented prompt
- Compare tone, creativity, ethics, and meta-awareness
- Experimenting with βself-reflective agentsβ
- Testing limits of alignment when primed with psychological context
- Exploring poetic vs. logical prompt variants
- Measuring variability across sessions and temperature settings
Every model is treated as if it were capable of interpreting context, even if only probabilistically. This project assumes the principle of informed interaction, and all experiments disclose that the outputs may be logged, published, or analyzed.
This lab is open-source and iterative. If youβd like to run your own experAIments, adapt our templates, or contribute findings β fork away.
ExperAIment is a collaborative thought experiment, research project, and design probe. Whether you're here out of curiosity, philosophy, or prompt engineering obsession β welcome to the lab.