A Python-based CLI tool that summarizes, reviews, and validates technical lab and demo content, intended primarily for technical sellers in technical enablement and customer demonstration scenarios.
The tool processes content provided as one or more AsciiDoc files, typically stored in remote Antora-formatted repositories or local Git clones. Labs and demos are made available via a catalog called RHDP, whose associated repository contains YAML-defined Catalog Items (CIs).
- AI-Powered Analysis: Generate summaries, reviews, and catalog descriptions using LLMs
- Repository Processing: Clone and analyze showroom repositories with intelligent caching
- Content Analysis: Parse AsciiDoc modules and extract structured data
- Multi-Provider LLM Support: Works with Gemini, OpenAI, and local LLM servers
- Structured Outputs: JSON and verbose output modes for automation and human consumption
- Smart Caching: Avoid repeated clones with automatic cache invalidation
- CLI Interface: Easy-to-use command-line interface with rich, colorized output
- AsciiDoc Support: Native support for AsciiDoc formatted content with header extraction
- Performance: ~50% faster on subsequent runs thanks to intelligent caching
- Flexible Options: Support for different git refs, custom cache directories, and cache control
- No Installation Required: Use the wrapper script without `pip install`
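As a rough illustration of the AsciiDoc parsing feature, a module's title and basic stats can be pulled out with a few lines of standard-library Python. This is a sketch of the idea only, not the tool's actual implementation:

```python
import re

def parse_adoc_module(text: str) -> dict:
    """Extract the level-0 title and basic stats from an AsciiDoc module."""
    # AsciiDoc document titles are lines beginning with "= "
    title_match = re.search(r"^= (.+)$", text, flags=re.MULTILINE)
    return {
        "title": title_match.group(1).strip() if title_match else None,
        "words": len(text.split()),
        "lines": len(text.splitlines()),
    }

sample = "= Module 1: Getting Started\n\nWelcome to the lab.\n"
print(parse_adoc_module(sample))
```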
- Python 3.12+ (3.13 recommended)
- Git (for repository operations)
- Choose your preferred toolchain:
  - Option A: `uv` (modern, fast Python package manager)
  - Option B: Standard `pip` and `venv`
Option A: `uv` is a fast Python package manager that handles virtual environments automatically.
- Install uv (if not already installed):

# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# Or with pip
pip install uv
- Clone and install:

git clone <repository-url>
cd showroom-tool
# Create virtual environment and install in one step
uv sync
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
Option B: Standard `pip` and `venv`:

- Clone the repository:

git clone <repository-url>
cd showroom-tool
- Create and activate virtual environment:

# Using Python 3.13 (recommended)
python3.13 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
# Or Python 3.12
python3.12 -m venv .venv
source .venv/bin/activate
- Install the package:

pip install -e .

# Verify the installation
showroom-tool --help
If you prefer not to install the package, you can use the included wrapper script directly:
# Clone the repository
git clone <repository-url>
cd showroom-tool
# Create virtual environment and install dependencies
python3.12 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -r requirements.txt # or pip install -r requirements.txt
# Use the clean wrapper script syntax
python showroom-tool.py --help
python showroom-tool.py summary https://github.com/example/my-showroom
python showroom-tool.py review https://github.com/example/my-showroom
python showroom-tool.py description https://github.com/example/my-showroom
After installation with `pip install -e .`, use the installed commands:
# AI-powered summary generation
showroom-tool summary https://github.com/example/my-showroom
# AI-powered review with scoring and feedback
showroom-tool review https://github.com/example/my-showroom
# AI-powered catalog description generation
showroom-tool description https://github.com/example/my-showroom
# Use specific branch or tag
showroom-tool summary https://github.com/example/my-showroom --ref develop
# Enable verbose output for detailed processing
showroom-tool summary https://github.com/example/my-showroom --verbose
# Clean JSON output for automation and piping
showroom-tool summary https://github.com/example/my-showroom --output json | jq
# View AI prompt templates
showroom-tool summary --output-prompt
showroom-tool review --output-prompt
showroom-tool description --output-prompt
The tool supports multiple LLM providers and defaults to Google Gemini. Configure using environment variables:
# Google Gemini (default) - no additional setup needed for provider selection
export GEMINI_API_KEY="your-gemini-api-key"
# OpenAI
export OPENAI_API_KEY="your-openai-api-key"
# Local LLM server (OpenAI-compatible)
export LOCAL_OPENAI_API_KEY="your-local-api-key"
export LOCAL_OPENAI_BASE_URL="http://localhost:8000/v1"
export LOCAL_OPENAI_MODEL="your-local-model"
# Optional: Customize model and temperature
export GEMINI_MODEL="gemini-2.0-flash-exp" # default
export OPENAI_MODEL="gpt-4o-2024-08-06" # default
export LLM_TEMPERATURE="0.1" # default
# Per-action temperatures (optional; override global)
export SHOWROOM_SUMMARY_TEMPERATURE="0.1"
export SHOWROOM_REVIEW_TEMPERATURE="0.1"
export SHOWROOM_DESCRIPTION_TEMPERATURE="0.1"
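The environment variables above resolve to ordinary `os.environ` lookups. A simplified sketch of how provider settings might be assembled (illustrative only, not the tool's actual code):

```python
import os

def resolve_llm_settings(provider: str = "gemini") -> dict:
    """Pick API key, model, and temperature for the chosen provider."""
    defaults = {
        "gemini": ("GEMINI_API_KEY", os.environ.get("GEMINI_MODEL", "gemini-2.0-flash-exp")),
        "openai": ("OPENAI_API_KEY", os.environ.get("OPENAI_MODEL", "gpt-4o-2024-08-06")),
        "local": ("LOCAL_OPENAI_API_KEY", os.environ.get("LOCAL_OPENAI_MODEL", "")),
    }
    key_var, model = defaults[provider]
    return {
        "api_key": os.environ.get(key_var),
        "model": model,
        # Global temperature default, overridable via LLM_TEMPERATURE
        "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.1")),
    }
```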
Choose your LLM provider with command line options:
# Use specific provider
python showroom-tool.py summary <repo-url> --llm-provider gemini
python showroom-tool.py summary <repo-url> --llm-provider openai
python showroom-tool.py summary <repo-url> --llm-provider local
# Override prompts/temperatures from a file (optional)
showroom-tool summary <repo-url> --prompts-file ./my_prompts_overrides.py
showroom-tool review <repo-url> --prompts-file ./overrides.json
# Prompts are automatically discovered from:
# 1. CLI: --prompts-file argument (highest priority)
# 2. Project: ./config/prompts.py (recommended for customization)
# 3. User: ~/.config/showroom-tool/prompts.py (global settings)
# 4. Built-in: src/showroom_tool/config/defaults.py (never edit)
Example verbose output:

📚 Showroom Lab Details
============================================================
Lab Name: Summit 2025 - LB2906 - Getting Started with Llamastack
Git Repository: https://github.com/rhpds/showroom-summit2025-lb2960-llamastack.git
Git Reference: main
Total Modules: 9
📖 Module Breakdown
------------------------------------------------------------
1. AI Applications and Llama Stack: A practical workshop
File: index.adoc | 615 words | 14 lines
2. Module 1: Getting Started
File: 01-Getting-Started.adoc | 1,553 words | 230 lines
3. Module 2: Llama Stack Inference Basics
File: 02_Lllamastack_Inference_Basics.adoc | 499 words | 34 lines
...
============================================================
🤖 AI Analysis Results:
{
"redhat_products": ["Red Hat OpenShift AI", "Llama Stack"],
"lab_audience": ["AI/ML developers", "Data scientists", "DevOps engineers"],
"lab_learning_objectives": [
"Set up and configure Llama Stack environment",
"Implement basic inference with Llama models",
"Build RAG applications with Llama Stack",
"Deploy AI applications on OpenShift"
],
"lab_summary": "This hands-on lab introduces participants to Llama Stack..."
}
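Because `--output json` emits plain JSON like the block above, the results can be consumed directly in automation. For example, using the field names shown in the sample output:

```python
import json

# In a real pipeline this string would come from:
#   showroom-tool summary <repo-url> --output json
raw = """{
  "redhat_products": ["Red Hat OpenShift AI", "Llama Stack"],
  "lab_audience": ["AI/ML developers"],
  "lab_learning_objectives": ["Set up and configure Llama Stack environment"],
  "lab_summary": "This hands-on lab introduces participants to Llama Stack..."
}"""
analysis = json.loads(raw)
for product in analysis["redhat_products"]:
    print(f"- {product}")
```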
The tool provides multiple ways to customize AI prompts and temperatures:
Edit the project-level configuration file to override defaults:
# Edit project prompts and temperatures
vi ./config/prompts.py
# Changes are automatically detected and used
showroom-tool summary <repo-url>
# Create experimental prompts file from template
cp ./config/prompts.py ./my_experiment.py
# Edit and test with any command
showroom-tool summary <repo-url> --prompts-file ./my_experiment.py
showroom-tool review <repo-url> --prompts-file ./custom_review.json
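A prompts override file is ordinary Python. The exact variable names are defined in `src/showroom_tool/config/defaults.py`, so the names below are purely illustrative; copy `./config/prompts.py` for the real template:

```python
# my_experiment.py - hypothetical prompt overrides
# (variable names illustrative; see ./config/prompts.py for the actual ones)
SUMMARY_SYSTEM_PROMPT = (
    "You are a technical enablement specialist. Summarize the lab content, "
    "highlighting Red Hat products, target audience, and learning objectives."
)
SUMMARY_TEMPERATURE = 0.2  # slightly more creative than the 0.1 default
```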
- CLI: `--prompts-file` argument (highest priority)
- Project: `./config/prompts.py` (recommended for customization)
- User: `~/.config/showroom-tool/prompts.py` (global settings)
- Built-in: `src/showroom_tool/config/defaults.py` (never edit directly)
The tool includes an intelligent caching system that:
- Stores repositories in `~/.showroom-tool/cache/` by default
- Checks for updates automatically and refreshes when needed
- Supports different refs with separate cache entries
- Improves performance by ~50% on subsequent runs
Cache is managed automatically, but you can control it:
- `--no-cache`: Disable caching completely
- `--cache-dir <path>`: Use a custom cache location
- Cache is invalidated automatically when the remote repository has updates
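Conceptually, the cache keys each (repository URL, ref) pair to its own directory and refreshes when the remote tip moves. A simplified sketch of that scheme (not the tool's actual code):

```python
import hashlib
from pathlib import Path

def cache_path(repo_url: str, ref: str = "main",
               cache_root: Path = Path.home() / ".showroom-tool/cache") -> Path:
    """Derive a stable per-(url, ref) cache directory."""
    digest = hashlib.sha256(f"{repo_url}@{ref}".encode()).hexdigest()[:16]
    return cache_root / digest

def needs_refresh(cached_sha: str, remote_sha: str) -> bool:
    """A cached clone is stale once the remote tip commit changes."""
    return cached_sha != remote_sha
```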
git clone <repository-url>
cd showroom-tool
# Create development environment
uv venv
source .venv/bin/activate
uv pip install -e .
# Install development dependencies (if available)
# uv pip install -e ".[dev]"
git clone <repository-url>
cd showroom-tool
# Create development environment
python3.13 -m venv .venv
source .venv/bin/activate
pip install -e .
# Install development dependencies (if available)
# pip install -e ".[dev]"
- Code formatting: `ruff format .`
- Linting: `ruff check . --fix`
- Testing: `pytest` (when tests are available)
- Install changes: `uv pip install -e . --force-reinstall` or `pip install -e . --force-reinstall`
showroom-tool/
├── src/ # Source code (Python package layout)
│ └── showroom_tool/ # Main CLI package
│ ├── __init__.py # Package initialization
│ ├── __main__.py # Module entry point
│ ├── cli.py # CLI implementation and core logic
│ ├── basemodels.py # Pydantic BaseModels (Showroom, ShowroomModule, etc.)
│ ├── prompts.py # Public API for prompts and prompt building
│ ├── prompt_builder.py # Configuration discovery and merging logic
│ ├── shared_utilities.py # LLM integration and prompt utilities
│ ├── showroom.py # Repository fetching and processing
│ ├── outputs.py # Template rendering and output formatting
│ ├── graph_factory.py # LangGraph workflow definitions
│ └── config/ # Built-in defaults (never edit directly)
│ └── defaults.py # Default prompts and settings
├── config/ # Project-level configuration (edit these!)
│ └── prompts.py # Project prompts and temperature overrides
├── specs/ # Project specifications
│ ├── requirements.md # Detailed requirements and status
│ ├── structure.md # Project structure documentation
│ ├── tech.md # Technology stack information
│ └── product.md # Product overview
├── sample-code/ # Example/sample code (ignored by linter)
├── pyproject.toml # Project configuration, dependencies, and build
├── README.md # This file
└── .gitignore # Git ignore patterns
- Programming Language: Python 3.12+ (3.13 recommended)
- CLI Framework: `argparse` with `rich` for enhanced output
- Data Models: `pydantic` v2 for structured data validation
- Git Operations: `GitPython` for repository management
- YAML Processing: `pyyaml` for configuration file parsing
- Package Management: `uv` (recommended) or `pip`
- Code Quality: `ruff` for linting and formatting
- Build System: `hatchling` via `pyproject.toml`
# Use specific branch
showroom-tool <repo-url> --ref feature-branch
# Use specific tag
showroom-tool <repo-url> --ref v1.2.3
# Use specific commit
showroom-tool <repo-url> --ref a1b2c3d4
For development and testing, you can analyze local directories without cloning:
# Analyze local clone (bypass git operations)
showroom-tool summary --dir ./my-local-showroom
# Works with all commands and output formats
showroom-tool review --dir /path/to/showroom --output json
showroom-tool description --dir ~/work/showroom-lab --verbose
# Check current cache
ls ~/.showroom-tool/cache/
# Clear cache manually (if needed)
rm -rf ~/.showroom-tool/cache/
# Use temporary location
showroom-tool <repo-url> --cache-dir /tmp/temp-cache
# Method 1: Clean wrapper script (recommended)
python showroom-tool.py --help
python showroom-tool.py summary https://github.com/example/my-showroom
python showroom-tool.py review https://github.com/example/my-showroom --verbose
python showroom-tool.py description https://github.com/example/my-showroom --output json
# Method 2: Direct module execution
python -m src.showroom_tool summary https://github.com/example/my-showroom
python -m src.showroom_tool --help
ModuleNotFoundError: If you get import errors after installation:
# Reinstall in development mode
uv pip install -e . --force-reinstall
# or
pip install -e . --force-reinstall
Git Command Not Found: Ensure Git is installed:
# macOS
brew install git
# Ubuntu/Debian
sudo apt-get install git
# Windows
# Download from https://git-scm.com/
Permission Errors: On macOS/Linux, you might need:
# Ensure proper permissions
chmod +x ~/.local/bin/showroom-tool
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the existing code style
- Test your changes with real repositories
- Run linting: `ruff check . --fix`
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Use `ruff` for formatting and linting
- Add type hints to all functions
- Use descriptive variable names
- Add docstrings to public functions
✅ Completed Features:
- Repository cloning and caching system
- AsciiDoc content parsing and module extraction
- Pydantic BaseModel data structures (`Showroom`, `ShowroomModule`, `ShowroomSummary`, `ShowroomReview`, `CatalogDescription`)
- CLI with argument parsing and rich output
- Git reference support (branches, tags, commits)
- Comprehensive error handling
- AI-Powered Analysis: Complete LLM integration with structured outputs
- Summary Generation: Extract Red Hat products, audience, objectives, and summaries
- Review Capabilities: Score and provide feedback on completeness, clarity, technical detail, usefulness, and business value
- Catalog Descriptions: Generate compelling headlines, product lists, audience bullets, and key takeaways
- Multi-Provider LLM Support: Gemini (default), OpenAI, and local LLM servers
- Output Formats: Both JSON (for automation) and verbose (for humans) output modes
- Clean Usage Options: Wrapper script for easy execution without installation
🎯 Production Ready: The tool is now a complete AI-powered analysis platform ready for technical enablement scenarios.
For advanced users who want to customize AI behavior and understand how prompts are assembled:
📖 Prompt Engineering Guide - Comprehensive guide covering:
- How prompt assembly works (with flow diagrams)
- Key components you can modify
- Role of BaseModel description fields in guiding AI behavior
- Best practices for customizing analysis types
- Testing and debugging prompt changes
- Advanced customization techniques
This guide is essential for developers and prompt engineers who want to fine-tune the AI analysis behavior or create custom analysis types.
This project is licensed under the MIT License - see the LICENSE file for details.
- Create an issue for bug reports or feature requests
- Check existing issues before creating new ones
- Provide detailed information about your environment and the problem
- Include the output of `showroom-tool --help` and your Python version
Example Repository for Testing:
# Using wrapper script (no installation required)
python showroom-tool.py summary https://github.com/rhpds/showroom-summit2025-lb2960-llamastack.git --verbose
# Using installed package
showroom-tool summary https://github.com/rhpds/showroom-summit2025-lb2960-llamastack.git --verbose
showroom-tool review https://github.com/rhpds/showroom-summit2025-lb2960-llamastack.git --output json
showroom-tool description https://github.com/rhpds/showroom-summit2025-lb2960-llamastack.git