
Commit 37988b4

Enhance DSPy GEPA notebook structure and formatting
Add author attribution and comprehensive section headers following cookbook standards:

- Include author credit with GitHub profile link
- Add descriptive markdown headers for each major section
- Update metadata with Colab GPU configuration
- Improve overall notebook organization and readability

Sections include:

- Installation and Setup
- Language Model Configuration (Ollama/OpenRouter)
- Dataset Loading and Filtering
- Dataset Preparation Functions
- Baseline Chain-of-Thought Program
- Evaluation Metric
- Baseline Evaluation
- GEPA Optimization
- Optimized Program Evaluation

The enhanced structure makes the notebook more accessible and easier to follow while maintaining consistency with other cookbook tutorials.
1 parent cd33934 commit 37988b4

File tree

1 file changed: +99 -2 lines changed


notebooks/en/dspy_gepa.ipynb

Lines changed: 99 additions & 2 deletions
@@ -7,6 +7,8 @@
"source": [
"# Optimizing Language Models with DSPy GEPA: From 42% to 64% Accuracy\n",
"\n",
+"_Authored by: [Behrooz Azarkhalili](https://github.com/behroozazarkhalili)_\n",
+"\n",
"This notebook demonstrates how to use DSPy's GEPA (Generalized Error-driven Prompt Augmentation) optimizer to improve language model performance on mathematical reasoning tasks. We'll work with the NuminaMath-1.5 dataset and show how GEPA can boost accuracy from 42% to 64% through automated prompt optimization.\n",
"\n",
"**What you'll learn:**\n",
@@ -24,6 +26,16 @@
"GEPA works by analyzing errors, generating targeted feedback, and automatically refining prompts to address common failure patterns. This makes it particularly effective for complex reasoning tasks where prompt quality significantly impacts performance."
]
},
+{
+"cell_type": "markdown",
+"id": "99b369f9",
+"metadata": {},
+"source": [
+"## Installation and Setup\n",
+"\n",
+"Install required dependencies and import libraries for DSPy, dataset processing, and model configuration."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -67,6 +79,16 @@
"print(\"🔄 Make sure Ollama is running: ollama run qwen3:8b\")"
]
},
+{
+"cell_type": "markdown",
+"id": "ee1fa682",
+"metadata": {},
+"source": [
+"## Language Model Configuration\n",
+"\n",
+"Configure your language model - either local (Ollama) or cloud-based (OpenRouter) - for use with DSPy."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -99,6 +121,16 @@
"train_split = load_dataset(\"AI-MO/NuminaMath-1.5\")['train']"
]
},
+{
+"cell_type": "markdown",
+"id": "aca72fbc",
+"metadata": {},
+"source": [
+"## Dataset Loading and Filtering\n",
+"\n",
+"Load the NuminaMath-1.5 dataset and filter for problems with numeric answers suitable for evaluation."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -180,6 +212,16 @@
" return train_set, val_set, test_set"
]
},
+{
+"cell_type": "markdown",
+"id": "e6d6b6f9",
+"metadata": {},
+"source": [
+"## Dataset Preparation Functions\n",
+"\n",
+"Helper functions to process the dataset, split it into train/val/test sets, and preview examples."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -234,6 +276,16 @@
"program = dspy.ChainOfThought(GenerateResponse)"
]
},
+{
+"cell_type": "markdown",
+"id": "3659214d",
+"metadata": {},
+"source": [
+"## Baseline Chain-of-Thought Program\n",
+"\n",
+"Create a simple baseline using DSPy's Chain-of-Thought module to establish initial performance."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -269,6 +321,16 @@
"evaluate(program)"
]
},
+{
+"cell_type": "markdown",
+"id": "329bacee",
+"metadata": {},
+"source": [
+"## Evaluation Metric\n",
+"\n",
+"Define the evaluation metric to compare model predictions against ground truth answers."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -303,6 +365,16 @@
"outputs": [],
"source": []
},
+{
+"cell_type": "markdown",
+"id": "07134dea",
+"metadata": {},
+"source": [
+"## Baseline Evaluation\n",
+"\n",
+"Evaluate the baseline Chain-of-Thought program to establish our starting accuracy before optimization."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -357,6 +429,16 @@
")\n"
]
},
+{
+"cell_type": "markdown",
+"id": "e5fe6dd8",
+"metadata": {},
+"source": [
+"## GEPA Optimization\n",
+"\n",
+"Apply GEPA optimizer with error-driven feedback to automatically improve the prompt and boost performance."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -381,6 +463,16 @@
"print(optimized_program.predict.signature.instructions)"
]
},
+{
+"cell_type": "markdown",
+"id": "74c7476f",
+"metadata": {},
+"source": [
+"## Optimized Program Evaluation\n",
+"\n",
+"Evaluate the GEPA-optimized program to measure the improvement in accuracy and effectiveness."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -393,8 +485,13 @@
}
],
"metadata": {
+"accelerator": "GPU",
+"colab": {
+"gpuType": "L4",
+"provenance": []
+},
"kernelspec": {
-"display_name": "behrooz",
+"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -408,7 +505,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.11"
+"version": "3.11.0"
}
},
"nbformat": 4,
