We're moving from traditional PRDs to Agent Development Briefs (ADBs) - a streamlined approach that treats AI as a development partner from day one.
An Agent Development Brief replaces traditional Product Requirements Documents with a collaborative framework designed for AI-human development teams. Instead of trying to predict everything upfront, ADBs define the problem space and let implementation drive specification refinement.
Key Differences from Traditional PRDs:
- Minimal upfront specification - Start building sooner with less documentation debt
- Living documentation - Documents evolve through implementation learnings
- Agent collaboration - AI contributes to architecture and technical decisions
- Outcome-focused - Behavioral success metrics instead of feature checklists
The documents in this ADB workflow are living, not static. They are continuously updated as your team of humans and agents builds, deploys, and tests the system.
This approach acknowledges that most detailed upfront specifications are wrong and embraces learning through building.
- main.md - Orchestrates the review process and sets expectations
- product-requirements.md - Problem definition, user stories, success metrics, scope boundaries
- agent-operating-guidelines.md - Decision-making framework, development philosophy, authority levels
- decision-log.md - Timestamped record of all architectural and product decisions
- technical-guidelines.md - Technology stack, constraints, and requirements
- design-requirements.md - Design goals and user experience guidelines
- PROJECT_STATUS.md - Current implementation plan and progress tracking
- Others as your team sees fit...
decision-log.md
This file contains all decisions made about the project, each with a timestamp and a short description. The agent maintains this log in real time as architectural and product decisions are made during development.
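For example, a single entry might look like the sketch below. The fields and heading format are not prescribed by the framework; this is just one reasonable shape for a timestamped, reviewable record.

```markdown
## 2025-06-01 14:30 - Move session storage to Redis
<!-- Illustrative entry only; adapt the fields to your project -->
- Category: Architecture [Agent-Decides-Human-Review]
- Context: In-memory sessions were lost on every redeploy
- Decision: Store sessions in Redis, with an in-memory fallback for local dev
- Made by: Agent, flagged for human review
```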
- Agent reads main.md → understands the project scope and process
- Agent reads product-requirements.md → understands the problem and scope
- Agent reads agent-operating-guidelines.md → knows how to make decisions and ask questions
- Agent asks clarifying questions → refines understanding and updates documentation
- Agent generates supporting docs as needed → technical specs, setup guides, etc.
- Agent creates PROJECT_STATUS.md → actionable implementation plan
- Iterative building and documentation updates → continuous refinement through implementation
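To make the last two steps concrete, here is a rough sketch of what an agent-generated PROJECT_STATUS.md might look like early in a project. The headings and items are hypothetical; the framework only requires that the file capture the current plan and progress.

```markdown
# PROJECT_STATUS
<!-- Hypothetical snapshot; real content comes from your product-requirements.md -->

## Current milestone
MVP: core user flow working end to end

## In progress
- [ ] Data model and persistence layer (Agent-Autonomous)
- [ ] Public API surface (Agent-Decides-Human-Review)

## Open questions for humans
- Which authentication provider should we standardize on? (Collaborative)

## Done
- [x] Project scaffolding, CI, and local dev environment
```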
Each project should define decision-making authority using standardized categories:
- Architecture decisions: [Agent-Decides-Human-Review]
- UX decisions: [Agent-Proposes-Human-Decides]
- Implementation details: [Agent-Autonomous]
- Scope changes: [Human-Decides]
- Technology choices: [Collaborative]
Available Authority Levels:
- [Human-Decides] - Human makes the call, agent implements
- [Agent-Proposes-Human-Decides] - Agent suggests options, human chooses
- [Agent-Decides-Human-Review] - Agent decides and implements, human can override
- [Agent-Autonomous] - Agent has full authority, no review needed
- [Collaborative] - Both parties discuss until consensus reached
- [Already-Decided] - Pre-established in project docs
- Copy the template files from this repository
- Fill out the three core documents (main.md, product-requirements.md, agent-operating-guidelines.md); a sketch of a minimal main.md follows the file tree below
- Configure decision authority levels for your team
- Hand the brief to your AI agent and begin collaborative development
/agent-development-brief/
├── main.md
├── product-requirements.md
├── agent-operating-guidelines.md
├── decision-log.md
├── technical-guidelines.md (optional)
├── design-requirements.md (optional)
├── <your-own-custom-project-docs.md> (optional)
└── PROJECT_STATUS.md (generated by agent)
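For step 2 above, a minimal main.md might read something like the sketch below. This is an assumed outline, not the actual template from the repository, but it shows how main.md can orchestrate the reading order and set expectations for the agent.

```markdown
# <Project Name> - Agent Development Brief
<!-- Assumed skeleton; use the repository template as the source of truth -->

Read these documents in order before writing any code:

1. product-requirements.md - the problem, users, success metrics, and scope
2. agent-operating-guidelines.md - how decisions are made and when to ask questions
3. decision-log.md - decisions that are already settled

Then ask your clarifying questions, generate any supporting docs you need,
and create PROJECT_STATUS.md with an actionable implementation plan.
Keep decision-log.md and PROJECT_STATUS.md up to date as you build.
```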
Best for:
- Greenfield projects with AI development partners
- Small, agile teams that can iterate quickly
- Projects where requirements are likely to evolve
- Teams comfortable with emergent architecture
Not ideal for:
- Heavily regulated environments requiring extensive upfront documentation
- Large teams needing detailed coordination protocols
- Projects with completely fixed requirements
- Teams without access to capable AI development tools
An effective ADB should enable:
- Faster time-to-first-implementation (days, not weeks)
- More accurate final products (less gap between specification and reality)
- Reduced documentation debt (docs reflect what was built, not what was predicted)
- Improved human-AI collaboration (agent as partner, not just executor)
See the /examples folder for complete ADB implementations:
- CatGram - Instagram for cats (social media platform)
This framework is actively evolving. Share your experiences, improvements, and adaptations to help refine the ADB approach for different team sizes and project types.