
Commit 70c7f74

Enhance documentation and add new notebook for structured generation using Vision Language Models
- Updated the table of contents to include a new section for "Structured Generation from Documents Using Vision Language Models".
- Added a new Jupyter notebook that demonstrates how to extract structured information from documents using the SmolVLM-500M-Instruct model, including installation instructions, model initialization, and example usage.
1 parent 55892e0 commit 70c7f74

2 files changed (+247, -1 lines)

notebooks/en/_toctree.yml

Lines changed: 3 additions & 1 deletion
@@ -72,7 +72,7 @@
       title: Phoenix Observability Dashboard on HF Spaces
     - local: search_and_learn
       title: Scaling Test-Time Compute for Longer Thinking in LLMs
-
+
 - title: Computer Vision Recipes
   isExpanded: false
   sections:
@@ -108,6 +108,8 @@
       title: Smol Multimodal RAG, Building with ColSmolVLM and SmolVLM on Colab's Free-Tier GPU
     - local: fine_tuning_vlm_dpo_smolvlm_instruct
       title: Fine-tuning SmolVLM using direct preference optimization (DPO) with TRL on a consumer GPU
+    - local: structured_generation_vision_languag_models
+      title: Structured Generation from Documents Using Vision Language Models
 
 - title: Search Recipes
   isExpanded: false
notebooks/en/structured_generation_vision_languag_models.ipynb

Lines changed: 244 additions & 0 deletions
@@ -0,0 +1,244 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Structured Generation from Documents Using Vision Language Models\n",
"\n",
"We will be using the SmolVLM-500M-Instruct model from HuggingFaceTB to extract structured information from documents. We will do so using the Hugging Face Transformers library and the Outlines library, which facilitates structured generation by constraining token sampling probabilities. We will also use the Gradio library to create a simple UI for uploading documents and extracting structured information from them.\n",
"\n",
"## Dependencies and imports\n",
"\n",
"First, let's install the necessary libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install outlines transformers torch flash-attn datasets sentencepiece gradio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's continue with importing the necessary libraries."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import outlines\n",
"import torch\n",
"\n",
"from io import BytesIO\n",
"from urllib.request import urlopen\n",
"from PIL import Image\n",
"from outlines.models.transformers_vision import transformers_vision\n",
"from transformers import AutoModelForImageTextToText, AutoProcessor\n",
"from pydantic import BaseModel, Field\n",
"from typing import List\n",
"from enum import StrEnum"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialising our model\n",
"\n",
"We will start by initialising our model from [HuggingFaceTB/SmolVLM-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct). Outlines expects us to pass in a model class and a processor class, so we will make this example a bit more generic by creating a function that returns those. Alternatively, you could look at the model and tokenizer config within the [Hub repo files](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct/tree/main) and import those classes directly; a sketch of that alternative follows the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"HuggingFaceTB/SmolVLM-Instruct\"  # the original model loads without issue\n",
"\n",
"\n",
"def get_model_and_processor_class(model_name: str):\n",
"    # Load the model and processor once to discover their concrete classes,\n",
"    # then free them so Outlines can load the model itself.\n",
"    model = AutoModelForImageTextToText.from_pretrained(model_name)\n",
"    processor = AutoProcessor.from_pretrained(model_name)\n",
"    classes = model.__class__, processor.__class__\n",
"    del model, processor\n",
"    return classes\n",
"\n",
"\n",
"model_class, processor_class = get_model_and_processor_class(model_name)\n",
"\n",
"# Pick the best available device.\n",
"if torch.cuda.is_available():\n",
"    device = \"cuda\"\n",
"elif torch.backends.mps.is_available():\n",
"    device = \"mps\"\n",
"else:\n",
"    device = \"cpu\"\n",
"\n",
"model = transformers_vision(\n",
"    model_name,\n",
"    model_class=model_class,\n",
"    device=device,\n",
"    model_kwargs={\"torch_dtype\": torch.bfloat16, \"device_map\": \"auto\"},\n",
"    processor_kwargs={\"device\": device},\n",
"    processor_class=processor_class,\n",
")\n",
"model"
]
},
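{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, you could also import the concrete classes directly instead of resolving them through the `Auto*` classes. The cell below is a minimal sketch that assumes `HuggingFaceTB/SmolVLM-Instruct` maps to the Idefics3 classes in Transformers; check the config in the Hub repo files before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the direct-import alternative (assumes an Idefics3-based checkpoint).\n",
"from transformers import Idefics3ForConditionalGeneration, Idefics3Processor\n",
"\n",
"# These could be passed to transformers_vision() in place of the classes\n",
"# resolved by get_model_and_processor_class() above.\n",
"model_class, processor_class = Idefics3ForConditionalGeneration, Idefics3Processor"
]
},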
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we are going to define how the output of our model should be structured. We want to extract a list of tags for the objects in the image, each with a name, a description, a type, and a confidence score, along with a short caption."
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"class TagType(StrEnum):\n",
"    ENTITY = \"Entity\"\n",
"    RELATIONSHIP = \"Relationship\"\n",
"    STYLE = \"Style\"\n",
"    ATTRIBUTE = \"Attribute\"\n",
"    COMPOSITION = \"Composition\"\n",
"    CONTEXTUAL = \"Contextual\"\n",
"    TECHNICAL = \"Technical\"\n",
"    SEMANTIC = \"Semantic\"\n",
"\n",
"\n",
"class ImageTag(BaseModel):\n",
"    tag_name: str\n",
"    tag_description: str\n",
"    tag_type: TagType\n",
"    confidence_score: float\n",
"\n",
"\n",
"class ImageData(BaseModel):\n",
"    tags_list: List[ImageTag] = Field(min_items=1)\n",
"    short_caption: str\n",
"\n",
"\n",
"# Generator that constrains the model's output to valid ImageData JSON.\n",
"image_objects_generator = outlines.generate.json(model, ImageData)"
]
},
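{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what Outlines actually constrains the sampler against, you can inspect the JSON Schema derived from the Pydantic model. This is an optional check, and the call below assumes Pydantic v2 (on v1, `ImageData.schema()` would be the equivalent)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect the JSON Schema produced from ImageData.\n",
"# Outlines uses this schema to restrict which tokens can be sampled.\n",
"ImageData.model_json_schema()"
]
},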
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's come up with an extraction prompt. We want to ask the model for the tags, caption, and confidence scores defined above, and give it some guidance about the different tag types and the expected structure."
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"prompt = \"\"\"\n",
"You are a structured image analysis assistant. Generate a comprehensive tag list for an image classification system. Use at least 1 tag per type. Return the results as a valid JSON object.\n",
"\"\"\".strip()"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ImageData(tags_list=[ImageTag(tag_name='spacecraft', tag_description='You are an EVA astronaut standing on the moon', tag_type=<TagType.STYLE: 'Style'>, confidence_score=0.9471130702150571), ImageTag(tag_name='tire track', tag_description='You think tike this used to lead your way here', tag_type=<TagType.ENTITY: 'Entity'>, confidence_score=1.0), ImageTag(tag_name='space helmet', tag_description='Ozone spacesuit with white metal visor', tag_type=<TagType.ENTITY: 'Entity'>, confidence_score=0.9737292349276361), ImageTag(tag_name='space suit', tag_description='White Astronaut', tag_type=<TagType.ENTITY: 'Entity'>, confidence_score=0.9749979480665247), ImageTag(tag_name='astronaut', tag_description='Astronaut', tag_type=<TagType.ENTITY: 'Entity'>, confidence_score=0.8412833526756263)], short_caption=\"An astronaut from space sits on the lunar surface at around 200 feet below him, over a tan lunar ground with bays leading to his original path and some rocks oncrete having a shiny armor. Both left and right have a sphere that is used for eyes and protection. Left is wearing a baseball with playing field across, and other articles, the heavy one having a shiny metal visor drum on top. The astronaut's grin can be seen over the helmet as he comes out with his right arm out of the sat gadget and leaves it as leaving the shining metal bars as he is from the center of the image.\")"
]
},
"execution_count": 95,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def img_from_url(url):\n",
"    img_byte_stream = BytesIO(urlopen(url).read())\n",
"    return Image.open(img_byte_stream).convert(\"RGB\")\n",
"\n",
"\n",
"image_url = (\n",
"    \"https://upload.wikimedia.org/wikipedia/commons/9/98/Aldrin_Apollo_11_original.jpg\"\n",
")\n",
"image = img_from_url(image_url)\n",
"\n",
"\n",
"def extract_objects(image, prompt):\n",
"    messages = [\n",
"        {\n",
"            \"role\": \"user\",\n",
"            \"content\": [{\"type\": \"image\"}, {\"type\": \"text\", \"text\": prompt}],\n",
"        },\n",
"    ]\n",
"\n",
"    formatted_prompt = model.processor.apply_chat_template(\n",
"        messages, add_generation_prompt=True\n",
"    )\n",
"\n",
"    result = image_objects_generator(formatted_prompt, [image])\n",
"    return result\n",
"\n",
"\n",
"extract_objects(image, prompt)"
]
},
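{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we installed Gradio for a simple upload-and-extract UI, here is a minimal sketch of how the extraction function could be wrapped in an interface. The wrapper name `analyze_image` and the output handling are illustrative assumptions rather than part of the extraction code above, and `model_dump()` assumes Pydantic v2."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gradio as gr\n",
"\n",
"\n",
"def analyze_image(image):\n",
"    # Run the structured generator and return a plain dict for the JSON output component.\n",
"    return extract_objects(image, prompt).model_dump()\n",
"\n",
"\n",
"demo = gr.Interface(fn=analyze_image, inputs=gr.Image(type=\"pil\"), outputs=\"json\")\n",
"# demo.launch()  # uncomment to start the UI"
]
},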
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"We've seen how to extract structured information from documents using a vision language model. We can apply the same extraction approach to multi-page documents by using something like `pdf2image` to convert each page of a PDF into an image and running the information extraction on each page image.\n",
"\n",
"```python\n",
"from pdf2image import convert_from_path\n",
"\n",
"pdf_path = \"path/to/your/pdf/file.pdf\"\n",
"pages = convert_from_path(pdf_path)\n",
"for page in pages:\n",
"    page_data = extract_objects(page, prompt)\n",
"```\n",
"\n",
"## Next Steps\n",
"\n",
"- Take a look at the [Outlines](https://github.com/outlines-ai/outlines) library for more information on how to use it, and explore the different methods and parameters.\n",
"- Explore extraction on your own use case.\n",
"- Try a different method of extracting structured information from documents."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
