A comprehensive demonstration of MLflow 3's GenAI capabilities for observing, evaluating, monitoring, and improving GenAI application quality. This interactive demo showcases a sales email generation use case with end-to-end quality assessment workflows.
This interactive demo is deployed as a Databricks app in your Databricks workspace. A guided UI experience is accompanied by notebooks that show you the end-to-end workflow of evaluating quality, iterating to improve it, and monitoring it in production.
Learn more about MLflow 3:
- Read the blog post
- View our website
- Get started via the documentation
Choose your installation method:
Estimated time: 2 minutes user input + 15 minutes waiting for scripts to run
The automated setup handles resource creation, configuration, and deployment for you using the Databricks Workspace SDK.
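Under the hood, the setup script drives your workspace through the same SDK you can call yourself. Below is a minimal sketch of that pattern, assuming the `databricks-sdk` Python package and an authenticated `DEFAULT` profile; the experiment path is hypothetical, not the resource the script actually creates.

```python
# Minimal sketch of the SDK pattern auto-setup.sh relies on; the experiment
# path below is a hypothetical example, not the demo's actual resource name.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up your authenticated DEFAULT profile

me = w.current_user.me()
print(f"Authenticated as {me.user_name}")

# Create an MLflow experiment to hold traces and evaluation results.
created = w.experiments.create_experiment(
    name=f"/Users/{me.user_name}/email-demo-experiment"  # hypothetical path
)
print(f"Created experiment {created.experiment_id}")
```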
- Databricks workspace access - Create one here if needed
- Install Python >= 3.10.16
The `./auto-setup.sh` script will run all the steps outlined in the Manual Setup workflow.
1. Install the Databricks CLI >= 0.262.0
   - Follow the installation guide
   - Verify installation: run `databricks --version` to confirm it's installed
2. Install Python >= 3.10.16
3. Authenticate with your workspace (a quick verification sketch follows these steps)
   - Run `databricks auth login` and follow the prompts
   - Configure a profile named `DEFAULT`
4. Clone the repo and run the setup script

   ```bash
   git clone https://github.com/databricks-solutions/mlflow-demo.git
   cd mlflow-demo
   ./auto-setup.sh
   ```
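If the script fails partway through, a quick way to confirm that the authentication from step 3 took effect is to hit the workspace with the same profile. A minimal check, assuming the `databricks-sdk` Python package is installed (this snippet is not part of the repo's scripts):

```python
# Sanity check (not part of the repo's scripts): confirm the DEFAULT
# profile from step 3 can reach your workspace.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(profile="DEFAULT")
print(w.current_user.me().user_name)  # prints your workspace user if auth succeeded
```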
Estimated time: 10 minutes work + 15 minutes waiting for scripts to run
For step-by-step manual installation instructions, see MANUAL_SETUP.md.
The manual setup includes:
- Phase 1: Prerequisites setup (workspace, app creation, MLflow experiment, etc.)
- Phase 2: Local installation and testing
- Phase 3: Deployment and permission configuration
MLflow 3.0 has been redesigned for the GenAI era. If your team is building GenAI-powered apps, this update makes it dramatically easier to evaluate, monitor, and improve them in production.
- 🔍 GenAI Observability at Scale: Monitor & debug GenAI apps anywhere - deployed on Databricks or ANY cloud - with production-scale real-time tracing and enhanced UIs (see the sketch after this list). Link
- 📊 Revamped GenAI Evaluation: Evaluate app quality using a brand-new SDK, a simpler evaluation interface, and a refreshed UI (also sketched below). Link
- ⚙️ Customizable Evaluation: Tailor AI judges or custom metrics to your use case. Link
- 👀 Monitoring: Schedule automatic quality evaluations (beta). Link
- 🧪 Leverage Production Logs to Improve Quality: Turn real user traces into curated, versioned evaluation datasets to continuously improve app performance. Link
- 📝 Close the Loop with Feedback: Capture end-user feedback from your app’s UI. Link
- 👥 Domain Expert Labeling: Send traces to human experts for ground truth or target output labeling. Link
- 📝 Prompt Management: Version and manage prompts with the Prompt Registry. Link
- 🧩 App Version Tracking: Link app versions to quality evaluations. Link
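To make the tracing and evaluation bullets above concrete, here is a minimal sketch based on MLflow 3's documented `mlflow.trace` decorator and `mlflow.genai.evaluate` entry point; the generator function, experiment path, and guideline text are invented for illustration and are not this demo's actual code.

```python
# Illustrative only: a toy "sales email" generator traced and evaluated
# with MLflow 3. Function name, experiment path, and data are hypothetical.
import mlflow
from mlflow.genai.scorers import Guidelines

mlflow.set_experiment("/Shared/email-demo")  # hypothetical experiment path


@mlflow.trace  # captures inputs, outputs, and latency for each call
def generate_email(customer_name: str) -> str:
    return f"Hi {customer_name}, thanks for your interest in our product!"


# Evaluate the traced app against a tiny hand-written dataset using an
# LLM judge configured with plain-language guidelines.
results = mlflow.genai.evaluate(
    data=[{"inputs": {"customer_name": "Alice"}}],
    predict_fn=generate_email,
    scorers=[
        Guidelines(
            name="tone",
            guidelines="The email must be polite and professional.",
        )
    ],
)
```

Running a script like this should produce an evaluation run in the MLflow experiment UI, with each row's judge assessment attached to its trace.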