This system generates authentic, engaging TED talks by combining your brainstormed ideas with powerful visuals, all while preserving and evolving a unique speaking voice. Through the Contextual DNA paradigm, every generated talk makes the system smarter about what resonates with audiences.
- Voice Preservation: Captures and maintains Science Stanley's (or any speaker's) authentic voice
- Visual Storytelling: Builds talks around 10 compelling images for maximum retention
- Adaptive Intelligence: Reorganizes content for optimal narrative flow
- Multi-Perspective Evaluation: Tests content against diverse audience profiles
- Continuous Evolution: Every talk improves the system's understanding
- Interactive Development: Natural conversation with Claude-Code guides the entire process
When you run claude-code in this directory, it immediately:
- Loads System Context - Understands the TED Talk DNA architecture
- Reads Brand Voice - Absorbs Science Stanley's personality markers
- Scans Content - Checks for existing brainstorms and talks
- Enters Dialogue Mode - Begins interactive session with you
Example first interaction:
$ claude-code
Claude: "Welcome to the TED Talk DNA System! I've loaded:
✓ System architecture and pipelines
✓ Science Stanley brand voice
✓ 12 evaluator profiles
✓ 3 existing brainstorms found
I'm ready to help you create amazing talks. Would you like to:
1. Generate a new talk from existing brainstorms
2. Create a new brainstorm together
3. Review and improve previous talks
4. Customize the system settings
What interests you most?"
# Navigate to project root
cd ted-talk-system/
# Initialize Claude-Code
claude-code
# Claude-Code will read context files and greet you:
# "Hello! I've loaded the TED Talk DNA System. I can help you:
# • Generate new talk versions
# • Modify your brainstorm or brand voice
# • Adjust evaluator profiles
# • Analyze existing talks
# What would you like to do?"
You: "Let's generate a new talk about quantum biology"
Claude: "I'll help you create a talk about quantum biology. First, let me check:
✓ Found brainstorm: quantum_biology.md (10 images, 47 topics)
✓ Brand voice loaded: Science Stanley
✓ 12 evaluator profiles active
Would you like me to:
1. Generate a complete talk with current settings
2. Review/modify the brainstorm first
3. Adjust Science Stanley's voice parameters
4. Add new evaluator perspectives"
You: "Generate a complete talk"
Claude: "Starting generation pipeline...
- Reorganizing topics for optimal flow
- Generating 5 versions of each segment
- Running multi-perspective evaluations
- Assembling top-rated segments
[Progress updates...]
Generation complete! Here are the results:
• Version 1: 8.7/10 average (strong opening, weak middle)
• Version 2: 9.1/10 average (consistent throughout)
• Version 3: 8.9/10 average (powerful ending)
Would you like me to:
1. Show detailed feedback from evaluators
2. Generate more versions
3. Optimize the top performer
4. Adjust parameters and regenerate"
Claude-Code will guide you through iterations:
- After each generation: "I noticed patterns that could improve scores. Should I update the context?"
- When scores plateau: "Would you like to add new evaluator perspectives or modify the brand voice?"
- On discovering patterns: "Segments with personal stories score 15% higher. Should I update generation prompts?"
The system learns through your choices:
Claude: "Based on 5 generation cycles, I've discovered:
• Image 3 works better as the opener (tried in version 4)
• 75-second segments optimal for complex topics
• Science Stanley's humor lands better with 'But wait!' transitions
Should I update the system context with these insights?"
Brainstorm (ideas) → Topic Organization (smart reordering) → Segment Generation (60-90 sec chunks) → Evaluation (multi-perspective review) → Assembly (full talk) → Optimization (polish)
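As a rough mental model only (not the system's actual implementation), that flow can be sketched as a plain Python pipeline; every stage body below is a trivial placeholder:

```python
# Hypothetical sketch of the stage order; each stage body is a placeholder,
# not the system's real logic.
def reorganize_topics(brainstorm):                 # smart reordering for narrative flow
    return list(brainstorm["topics"])

def generate_segments(topics):                     # draft 60-90 second chunks
    return [{"topic": t, "script": f"Draft segment about {t}"} for t in topics]

def evaluate_segments(segments):                   # multi-perspective review (placeholder score)
    return [dict(segment, score=8.0) for segment in segments]

def assemble_talk(scored_segments):                # keep the top-rated versions
    return sorted(scored_segments, key=lambda s: s["score"], reverse=True)

def optimize(talk):                                # final polish pass (placeholder)
    return talk

def run_pipeline(brainstorm):
    return optimize(assemble_talk(evaluate_segments(generate_segments(reorganize_topics(brainstorm)))))
```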
Each 60-90 second segment:
- Anchors to one visual image
- Covers 1-3 related topics
- Maintains narrative connection
- Includes speaker cues and timing
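A hypothetical data shape for one segment, reflecting the constraints above (the field names are assumptions, not a documented schema):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """Illustrative record for one segment; field names are assumptions, not the real schema."""
    image_id: int                                           # the single visual this segment anchors to
    topics: list[str]                                       # 1-3 related topics covered
    script: str                                             # spoken text in the speaker's voice
    duration_seconds: int = 75                              # target within the 60-90 second window
    speaker_cues: list[str] = field(default_factory=list)   # pauses, gestures, emphasis marks
```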
Every segment is evaluated by:
- Audience Profiles: general public, experts, skeptics, students
- Professional Coaches: speech, storytelling, engagement specialists
- Technical Metrics: pacing, clarity, visual alignment
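A minimal sketch of how per-profile scores could be averaged into the segment ratings reported above; the profile names and numbers are illustrative, and the real evaluators are prompt files in editor_prompt_bank/, not Python functions:

```python
# Hypothetical averaging of per-profile scores for one segment.
def average_score(scores_by_profile: dict[str, float]) -> float:
    return round(sum(scores_by_profile.values()) / len(scores_by_profile), 1)

segment_scores = {
    "general_public": 8.6,
    "domain_expert": 8.9,
    "skeptic": 8.1,
    "student": 9.0,
    "speech_coach": 8.8,
}
print(average_score(segment_scores))  # 8.7
```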
ted-talk-system/
├── brand_bible_context.md # Voice DNA (START HERE)
├── pipeline_context.md # AI agent instructions
├── README.md # You are here
│
├── brainstorms/ # Your input ideas
├── segment_prompts/ # Generation instructions
├── generated_segments/ # Individual pieces
├── complete_talks/ # Assembled presentations
└── editor_prompt_bank/ # Evaluation perspectives
- Clear visual connection (8.5+ visual alignment score)
- Authentic voice (9+ authenticity score)
- Engaging narrative (8+ across all audiences)
- Optimal pacing (respects image complexity)
- Compelling arc (builds to revelation)
- Visual journey (images enhance understanding)
- Consistent voice (sounds like one person throughout)
- Universal appeal (8+ average across all evaluators)
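As an illustration, the segment-level thresholds above could be expressed as a simple quality gate; the check itself is hypothetical, only the threshold values come from the list:

```python
# Hypothetical quality gate built from the thresholds listed above.
SEGMENT_THRESHOLDS = {"visual_alignment": 8.5, "authenticity": 9.0, "engagement": 8.0}

def segment_passes(scores: dict[str, float]) -> bool:
    return all(scores.get(metric, 0.0) >= minimum
               for metric, minimum in SEGMENT_THRESHOLDS.items())

print(segment_passes({"visual_alignment": 8.7, "authenticity": 9.2, "engagement": 8.4}))  # True
```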
Claude-Code acts as your creative partner:
You: "The talks feel too technical"
Claude: "I can help adjust the technical level. Would you like me to:
1. Modify Science Stanley's voice to use more everyday analogies
2. Add a 'general_public_simplified' evaluator profile
3. Regenerate with complexity constraints
4. Show examples of technical vs. accessible segments"
You: "Let's see examples first"
Claude: "Here's a technical version vs. accessible version of the quantum tunneling explanation:
Technical (current): 'Quantum tunneling allows particles to traverse energy barriers
through probabilistic wave function collapse...'
Accessible (proposed): 'Imagine throwing a ball at a wall and sometimes it just
appears on the other side. That's quantum tunneling, and your DNA does it
billions of times per second...'
The accessible version scores +2.3 points with general audiences.
Should I update the voice parameters?"
Claude-Code helps you evolve the system:
- Brainstorm Enhancement: "I notice you have 3 images about cellular processes. Want to add topics about [suggested themes]?"
- Voice Refinement: "Science Stanley uses 'fascinating' 12 times. Should we diversify the wonder words?"
- Evaluator Gaps: "No evaluator represents visual learners. Should I create one?"
Each iteration provides actionable insights:
Claude: "Generation cycle 3 complete. Key findings:
• Image 7 (enzyme animation) consistently causes energy dips
• Segments mentioning 'your body' score higher than 'the body'
• 82-second segments have optimal retention
Recommendations:
1. Move enzyme discussion earlier when energy is high
2. Update prompts to use personal pronouns
3. Set default target to 82 seconds
Which would you like to implement?"
Claude: "I see this is your first talk. Let's set up together:
1. Do you have a brainstorm document ready?"
You: "Yes, quantum_biology.md"
Claude: "Great! I'll analyze it... Found 10 images and 52 topics.
The brand bible shows Science Stanley voice. Should I generate
your first talk or would you like to adjust anything first?"
You: "The last talk scored 8.2, can we do better?"
Claude: "Let me analyze what held back the score:
- Minutes 6-8 had engagement dips (too technical)
- Image 4 transition was abrupt
- Missing personal connection moments
I can:
1. Regenerate with simpler language in the middle
2. Reorder images 4 and 5 for smoother flow
3. Add 'personal story' requirements to prompts
What's your priority?"
You: "I want to adapt this for younger audiences"
Claude: "I'll help you tune for younger audiences:
1. Add youth-focused evaluator profiles
2. Modify Science Stanley's references (TikTok over Twitter)
3. Adjust pacing for shorter attention spans
4. Include more interactive moments
Should I implement all changes or walk through each?"
- Pattern Recognition: Identifies what works across talks
- Context Updates: Successful elements update voice DNA
- Prompt Evolution: Generation prompts improve over time
- Evaluator Expansion: New perspectives added based on gaps
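One way a confirmed insight could flow back into the context files, shown as a hedged sketch; the dated-bullet format is an assumption, not a documented convention:

```python
# Hypothetical sketch: append a confirmed insight to pipeline_context.md so
# future generation runs can see it.
from datetime import date

def record_insight(insight: str, context_file: str = "pipeline_context.md") -> None:
    with open(context_file, "a", encoding="utf-8") as handle:
        handle.write(f"- {date.today().isoformat()}: {insight}\n")

record_insight("Segments with personal stories score about 15% higher.")
```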
- Run talks on diverse topics to discover patterns
- Add new evaluator profiles for missing perspectives
- Document audience reactions in feedback files
- Share successful modifications to voice DNA
- Replace brand_bible_context.md with your voice markers
- Update example phrases and energy patterns
- Adjust humor style and cultural references
- Modify evaluation weights for your audience
Add evaluators to editor_prompt_bank/ for your specific needs:
- Industry-specific experts
- Regional audience variations
- Age-group perspectives
- Cultural context reviewers
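For example, a new evaluator could be dropped into editor_prompt_bank/ as a small markdown profile. The helper below is only a convenience sketch, and the profile wording is illustrative; mirror the format of the existing profiles:

```python
# Hypothetical helper that writes a new evaluator profile into editor_prompt_bank/.
from pathlib import Path

PROFILE = """\
# Evaluator: visual_learner
Perspective: Learns best through imagery and spatial metaphors.
Focus: Does each segment make its anchor image do real explanatory work?
Scoring emphasis: visual alignment, concrete analogies, minimal abstract jargon.
"""

Path("editor_prompt_bank").mkdir(exist_ok=True)
Path("editor_prompt_bank/visual_learner.md").write_text(PROFILE, encoding="utf-8")
```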
When you initialize Claude-Code in this directory:
- Reads pipeline_context.md for system understanding
- Loads brand_bible_context.md for voice parameters
- Scans all directories for existing content
- Presents interactive menu based on current state
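A rough illustration of that startup scan, assuming the file and directory names from the project structure; Claude-Code performs this itself, so this script is only a mental model of the steps listed above:

```python
# Hypothetical illustration of the startup scan.
from pathlib import Path

def scan_project(root: str = ".") -> dict:
    root_path = Path(root)
    return {
        "pipeline_context": (root_path / "pipeline_context.md").read_text(encoding="utf-8"),
        "brand_voice": (root_path / "brand_bible_context.md").read_text(encoding="utf-8"),
        "brainstorms": sorted(p.name for p in (root_path / "brainstorms").glob("*.md")),
        "complete_talks": sorted(p.name for p in (root_path / "complete_talks").glob("*.md")),
    }
```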
Claude-Code operates in several modes:
Generation Mode
- Guides you through creating new talks
- Reports scores and feedback in real-time
- Suggests optimizations based on results
Modification Mode
- Helps edit brainstorms with topic suggestions
- Refines brand voice based on performance
- Creates new evaluator profiles
Analysis Mode
- Compares talk performances
- Identifies successful patterns
- Tracks system evolution
Learning Mode
- Proposes context updates
- Documents new patterns
- Evolves generation prompts
Claude-Code maintains context throughout your session:
Claude: "In our last 3 generations, talks about 'personal discovery'
scored highest. Should I prioritize these themes in the next iteration?"
Different team members interact naturally with Claude-Code:
Content Creators:
You: "I have new ideas about quantum consciousness"
Claude: "I'll help you add those to the brainstorm. Should they
connect to existing images or do you envision new visuals?"
Voice Coaches:
You: "Science Stanley sounds too formal lately"
Claude: "I'll analyze recent talks... You're right, formality
increased 23%. Want to review specific examples and adjust
the brand bible together?"
Presenters:
You: "The audience loved the enzyme joke but the DNA section dragged"
Claude: "Thanks for the real-world feedback! I'll update evaluation
weights and flag DNA explanations for simplification. Should I
regenerate that section with more energy?"
When you discover effective patterns:
- Document in system observations
- Update relevant context files
- Share successful prompts
- Contribute evaluator profiles
- Inconsistent Voice: Review and update brand_bible_context.md
- Poor Visual Alignment: Check the clarity of your image descriptions
- Low Engagement Scores: Add more diverse evaluator profiles
- Repetitive Patterns: Increase generation variation parameters
Claude-Code provides contextual assistance:
You: "Why are scores dropping?"
Claude: "Let me diagnose... I see three potential issues:
1. Brand voice drift - recent generations less 'Science Stanley'
2. Evaluator fatigue - same profiles may be too familiar
3. Topic complexity increase in latest brainstorms
I can show specific examples and help fix any of these.
What would you like to explore first?"
This system represents a new paradigm in content creation where:
- Interaction drives improvement through natural conversation
- Quality emerges from dialogue between human creativity and AI analysis
- Voice remains authentic while continuously evolving
- Learning happens transparently with every choice you make
Every conversation with Claude-Code teaches the system about your preferences. Every decision refines its understanding. Every generation builds on past insights.
Welcome to the future of intelligent content creation - where you and AI collaborate through natural dialogue to create presentations that improve with every interaction.
"The best talk emerges from the conversation between human vision and AI capability."
- v1.0: Initial visual-driven architecture
- v1.1: Added flexible segment timing (60-90s)
- v1.2: Implemented topic reorganization intelligence
- v1.3: Enhanced multi-perspective evaluation
- Current: Continuous pattern learning active
This README evolves with the system. Last updated by: [human/agent] on [date]