"This notebook provides examples of setting up an Annotate Project using annotations generated by the [Ultralytics](https://docs.ultralytics.com/) library of YOLOv8. In this guide, we will show you how to:\n",
35
+
"\n",
36
+
"1. Import image data rows for labeling\n",
37
+
"\n",
38
+
"2. Set up an ontology that matches the YOLOv8 annotations\n",
39
+
"\n",
40
+
"3. Import data rows and attach the ontology to a project\n",
"The first few steps of this guide will demonstrate a basic workflow of creating data rows and setting up a project. For a quick, complete overview of this process, see [Quick start](https://docs.labelbox.com/reference/quick-start)."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"### Import an Image Data Row\n",
112
+
"In this example, we use YOLOv8 to annotate this [image](https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg), which contains many objects that YOLOv8 can detect. Later in this guide, we will provide more details on the specific annotations."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "global_key = str(uuid.uuid4())\n\n# create data row\ndata_row = {\n\"row_data\":\n\"https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg\",\n\"global_key\":\n global_key,\n\"media_type\":\n\"IMAGE\",\n}\n\n# create dataset and import data row\ndataset = client.create_dataset(name=\"YOLOv8 Demo Dataset\")\ntask = dataset.create_data_rows([data_row])\ntask.wait_till_done()\n\nprint(f\"Errors: {task.errors}\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Set Up an Ontology and Project\n",
127
+
"\n",
128
+
"You need to create an ontology and project that match the data rows you are labeling. The ontology needs to include the annotations you want to derive from YOLOv8. Each feature name must be unique because Labelbox does not support ontologies with duplicate feature names at the first level.\n",
129
+
"\n",
130
+
"We will include bounding boxes, segment masks, and polygon tools to demonstrate converting each type of annotation from YOLOv8. We will also explain class mapping later in this guide.\n"
"Now we can export the data row from our project. Then add the row_data and global_key to a list to make our predictions."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Export data"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "export_task = project.export()\nexport_task.wait_till_done()\n\n# prediction list we will be populating\nurl_list = []\nglobal_keys = []\n\n\n# callback that is ran on each data row\ndef export_callback(output: lb.BufferedJsonConverterOutput):\n\n data_row = output.json\n\n url_list.append(data_row[\"data_row\"][\"row_data\"])\n\n global_keys.append(data_row[\"data_row\"][\"global_key\"])\n\n\n# check if export has errors\nif export_task.has_errors():\n export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start()\n\nif export_task.has_result():\n export_task.get_buffered_stream().start(stream_handler=export_callback)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Import YOLOv8 Annotations to a Project\n",
189
+
"\n",
190
+
"Now that you have finished your initial setup, we can create predictions using YOLOv8 and import the annotations into our project. In this step, we will:\n",
191
+
"\n",
192
+
"1. Define our import functions\n",
193
+
"\n",
194
+
"2. Create our labels\n",
195
+
"\n",
196
+
"3. Import our labels as either ground truths or MAL labels (pre-labels)"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Define Import Functions\n",
204
+
"\n",
205
+
"YOLOv8 supports a wide range of annotations. In this guide, we only import bounding boxes, polygons, and segment masks that match the ontology we created earlier. The following functions handle each annotation type by navigating through the YOLOv8 result payload and converting it to the Labelbox annotation format.\n",
206
+
"\n",
207
+
"All these functions support class mapping, which aligns YOLOv8 annotation names with Labelbox feature names. This mapping allows for different names in Labelbox and YOLOv8 and enables common YOLOv8 names to correspond to the same Labelbox feature in our ontology. We will define this mapping first. In our example, we map `bus` and `truck` to the Labelbox feature name `Vehicle` and person to `Person`. We will create a mapping for each tool type."
"source": "def get_yolo_bbox_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str:str]) -> list[lb_types.ObjectAnnotation]:\n \"\"\"Convert YOLOV8 model bbox prediction results to Labelbox annotations format.\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for bbox in yolo_result.boxes:\n class_name = model.names[int(bbox.cls)]\n\n # ignore bboxes that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n # get bbox coordinates\n start_x, start_y, end_x, end_y = bbox.xyxy.tolist()[0]\n\n bbox_source = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Rectangle(\n start=lb_types.Point(x=start_x, y=start_y),\n end=lb_types.Point(x=end_x, y=end_y),\n ),\n )\n\n annotations.append(bbox_source)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"##### Segment Mask"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_segment_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str:str]) -> list[lb_types.Label]:\n \"\"\"Convert YOLOV8 segment mask prediction results to Labelbox annotations format\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for i, mask in enumerate(yolo_result.masks.data):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore segment masks that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n # get binary numpy array to byte array. You must resize mask to match image.\n mask = (mask.numpy() * 255).astype(\"uint8\")\n img = Image.fromarray(mask, \"L\")\n img = img.resize(\n (yolo_result.orig_shape[1], yolo_result.orig_shape[0]))\n img_byte_arr = io.BytesIO()\n img.save(img_byte_arr, format=\"PNG\")\n encoded_image_bytes = img_byte_arr.getvalue()\n\n mask_data = lb_types.MaskData(im_bytes=encoded_image_bytes)\n mask_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Mask(mask=mask_data, color=(255, 255, 255)),\n )\n annotations.append(mask_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"##### Polygon"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_polygon_annotation_predictions(\n yolo_results, model, ontology_mapping: dict[str:str]) -> list[lb.Label]:\n \"\"\"Convert YOLOv8 model results to Labelbox polygon annotations format.\n\n Args:\n yolo_result (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n for yolo_result in yolo_results:\n for i, coordinates in enumerate(yolo_result.masks.xy):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore polygons that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n polygon_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Polygon(points=[\n lb_types.Point(x=coordinate[0], y=coordinate[1])\n for coordinate in coordinates\n ]),\n )\n annotations.append(polygon_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Creating our Labels\n",
264
+
"Now that we have defined our functions to create our Labelbox annotations, we can run each image through YOLOv8 to obtain our predictions and then use those results with our global keys to create our labels. "
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# label list that will be populated\nlabels = []\n\nfor i, global_key in enumerate(global_keys):\n annotations = []\n\n # make YOLOv8 predictions\n result = model.predict(url_list[i])\n\n # run result through each function and adding them to our annotation list\n annotations += get_yolo_bbox_annotation_predictions(result, model,\n bbox_class_mapping)\n annotations += get_yolo_polygon_annotation_predictions(\n result, model, polygon_class_mapping)\n annotations += get_yolo_segment_annotation_predictions(\n result, model, mask_class_mapping)\n\n labels.append(\n lb_types.Label(data={\"global_key\": global_key},\n annotations=annotations))",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Import Annotations to Labelbox\n",
279
+
"We have created our labels and can import them to our project. For more information on importing annotations, see [import image annotations](https://docs.labelbox.com/reference/import-image-annotations)."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"##### Option A: Upload as [Pre-labels (Model Assisted Labeling)](https://docs.labelbox.com/docs/model-assisted-labeling)\n",
287
+
"\n",
288
+
"This option is helpful for speeding up the initial labeling process and reducing the manual labeling workload for high-volume datasets."
"#### Option B: Upload to a Labeling Project as [Ground Truths](https://docs.labelbox.com/docs/import-ground-truth)\n",
303
+
"\n",
304
+
"This option is helpful for loading high-confidence labels from another platform or previous projects that just need review rather than manual labeling effort."