[SN-153] YOLOv8 notebook #1673

Merged
25 commits merged on Jun 14, 2024

5 changes: 5 additions & 0 deletions examples/README.md
@@ -227,6 +227,11 @@
<td><a href="https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/sam/meta_sam.ipynb" target="_blank"><img src="https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white" alt="Open In Github"></a></td>
<td><a href="https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/sam/meta_sam.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></td>
</tr>
<tr>
<td>Import YOLOv8 Annotations</td>
<td><a href="https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/yolo/import_yolov8_annotations.ipynb" target="_blank"><img src="https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white" alt="Open In Github"></a></td>
<td><a href="https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/yolo/import_yolov8_annotations.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></td>
</tr>
</tbody>
</table>

331 changes: 331 additions & 0 deletions examples/integrations/yolo/import_yolov8_annotations.ipynb
@@ -0,0 +1,331 @@
{
"nbformat": 4,
"nbformat_minor": 2,
"metadata": {},
"cells": [
{
"metadata": {},
"source": [
"<td>",
" <a target=\"_blank\" href=\"https://labelbox.com\" ><img src=\"https://labelbox.com/blog/content/images/2021/02/logo-v4.svg\" width=256/></a>",
"</td>\n"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"<td>\n",
"<a href=\"https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/yolo/import_yolov8_annotations.ipynb\" target=\"_blank\"><img\n",
"src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n",
"</td>\n",
"\n",
"<td>\n",
"<a href=\"https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/yolo/import_yolov8_annotations.ipynb\" target=\"_blank\"><img\n",
"src=\"https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white\" alt=\"GitHub\"></a>\n",
"</td>"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"# Import YOLOv8 Annotations\n",
"This notebook provides examples of setting up an Annotate Project using annotations generated by the [Ultralytics](https://docs.ultralytics.com/) library of YOLOv8. In this guide, we will show you how to:\n",
"\n",
"1. Import image data rows for labeling\n",
"\n",
"2. Set up an ontology that matches the YOLOv8 annotations\n",
"\n",
"3. Import data rows and attach the ontology to a project\n",
"\n",
"4. Process images using Ultralytics\n",
"\n",
"5. Import the annotations generated"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"## Set Up"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "%pip install -q --upgrade \"labelbox[data]\"\n%pip install -q --upgrade ultralytics",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": "import labelbox as lb\nimport labelbox.types as lb_types\n\nimport ultralytics\nfrom PIL import Image\n\nimport uuid\nimport io",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## API Key and Client\n",
"Replace the value of `API_KEY` with a valid [API key](https://docs.labelbox.com/reference/create-api-key) to connect to the Labelbox client."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "API_KEY = None\nclient = lb.Client(api_key=API_KEY)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## Set Up a YOLOv8 model\n",
"Initialize our model for image data rows using `yolov8n-seg.pt`, which supports segmentation masks."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "model = ultralytics.YOLO(\"yolov8n-seg.pt\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
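{
"metadata": {},
"source": [
"As an optional check, the cell below prints the class names the model was trained to detect. This is a minimal sketch; the names printed here (for example `person`, `bus`, and `truck`) are the ones we will map to Labelbox features later in this guide."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# optional: print the class id -> class name mapping the model was trained with\n# these names (e.g. \"person\", \"bus\", \"truck\") are used for class mapping later in this guide\nprint(model.names)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},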
{
"metadata": {},
"source": [
"## Example: Import YOLOv8 Annotations\n",
"\n",
"The first few steps of this guide will demonstrate a basic workflow of creating data rows and setting up a project. For a quick, complete overview of this process, see [Quick start](https://docs.labelbox.com/reference/quick-start)."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"### Import an Image Data Row\n",
"In this example, we use YOLOv8 to annotate this [image](https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg), which contains many objects that YOLOv8 can detect. Later in this guide, we will provide more details on the specific annotations."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "global_key = str(uuid.uuid4())\n\n# create data row\ndata_row = {\n \"row_data\":\n \"https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg\",\n \"global_key\":\n global_key,\n \"media_type\":\n \"IMAGE\",\n}\n\n# create dataset and import data row\ndataset = client.create_dataset(name=\"YOLOv8 Demo Dataset\")\ntask = dataset.create_data_rows([data_row])\ntask.wait_till_done()\n\nprint(f\"Errors: {task.errors}\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Set Up an Ontology and Project\n",
"\n",
"You need to create an ontology and project that match the data rows you are labeling. The ontology needs to include the annotations you want to derive from YOLOv8. Each feature name must be unique because Labelbox does not support ontologies with duplicate feature names at the first level.\n",
"\n",
"We will include bounding boxes, segment masks, and polygon tools to demonstrate converting each type of annotation from YOLOv8. We will also explain class mapping later in this guide.\n"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Create an Ontology"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "ontology_builder = lb.OntologyBuilder(tools=[\n lb.Tool(tool=lb.Tool.Type.BBOX, name=\"Vehicle_bbox\"),\n lb.Tool(tool=lb.Tool.Type.BBOX, name=\"Person_bbox\"),\n lb.Tool(tool=lb.Tool.Type.RASTER_SEGMENTATION, name=\"Vehicle_mask\"),\n lb.Tool(tool=lb.Tool.Type.RASTER_SEGMENTATION, name=\"Person_mask\"),\n lb.Tool(tool=lb.Tool.Type.POLYGON, name=\"Vehicle_polygon\"),\n lb.Tool(tool=lb.Tool.Type.POLYGON, name=\"Person_polygon\"),\n])\n\nontology = client.create_ontology(\n name=\"YOLOv8 Demo Ontology\",\n normalized=ontology_builder.asdict(),\n media_type=lb.MediaType.Image,\n)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Create and Set Up a Project"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "project = client.create_project(name=\"YOLOv8 Demo Project\",\n media_type=lb.MediaType.Image)\n\nproject.create_batch(name=\"batch 1\", global_keys=[global_key])\n\nproject.setup_editor(ontology)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Export Data Rows and Get Predictions\n",
"\n",
"Now we can export the data row from our project. Then add the row_data and global_key to a list to make our predictions."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Export data"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "export_task = project.export()\nexport_task.wait_till_done()\n\n# prediction list we will be populating\nurl_list = []\nglobal_keys = []\n\n\n# callback that is ran on each data row\ndef export_callback(output: lb.BufferedJsonConverterOutput):\n\n data_row = output.json\n\n url_list.append(data_row[\"data_row\"][\"row_data\"])\n\n global_keys.append(data_row[\"data_row\"][\"global_key\"])\n\n\n# check if export has errors\nif export_task.has_errors():\n export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start()\n\nif export_task.has_result():\n export_task.get_buffered_stream().start(stream_handler=export_callback)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Import YOLOv8 Annotations to a Project\n",
"\n",
"Now that you have finished your initial setup, we can create predictions using YOLOv8 and import the annotations into our project. In this step, we will:\n",
"\n",
"1. Define our import functions\n",
"\n",
"2. Create our labels\n",
"\n",
"3. Import our labels as either ground truths or MAL labels (pre-labels)"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Define Import Functions\n",
"\n",
"YOLOv8 supports a wide range of annotations. In this guide, we only import bounding boxes, polygons, and segment masks that match the ontology we created earlier. The following functions handle each annotation type by navigating through the YOLOv8 result payload and converting it to the Labelbox annotation format.\n",
"\n",
"All these functions support class mapping, which aligns YOLOv8 annotation names with Labelbox feature names. This mapping allows for different names in Labelbox and YOLOv8 and enables common YOLOv8 names to correspond to the same Labelbox feature in our ontology. We will define this mapping first. In our example, we map `bus` and `truck` to the Labelbox feature name `Vehicle` and person to `Person`. We will create a mapping for each tool type."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "bbox_class_mapping = {\n \"person\": \"Person_bbox\",\n \"bus\": \"Vehicle_bbox\",\n \"truck\": \"Vehicle_bbox\",\n}\nmask_class_mapping = {\n \"person\": \"Person_mask\",\n \"bus\": \"Vehicle_mask\",\n \"truck\": \"Vehicle_mask\",\n}\npolygon_class_mapping = {\n \"person\": \"Person_polygon\",\n \"bus\": \"Vehicle_polygon\",\n \"truck\": \"Vehicle_polygon\",\n}",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
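{
"metadata": {},
"source": [
"Before writing the converters, it can help to inspect the YOLOv8 result payload directly. The cell below is an optional, minimal sketch: it assumes `url_list` was populated by the export step above and only prints the fields that the conversion functions read."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# optional: inspect the YOLOv8 result payload that the conversion functions navigate\n# assumes `url_list` was populated by the export cell above\nsample_result = model.predict(url_list[0])[0]\n\nprint(sample_result.names)  # class id -> class name mapping\nprint(sample_result.boxes.cls)  # predicted class ids, one per detection\nprint(sample_result.boxes.xyxy)  # bounding boxes as [x1, y1, x2, y2]\nprint(sample_result.orig_shape)  # (height, width) of the source image",
"cell_type": "code",
"outputs": [],
"execution_count": null
},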
{
"metadata": {},
"source": [
"##### Bounding Box"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_bbox_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str:str]) -> list[lb_types.ObjectAnnotation]:\n \"\"\"Convert YOLOV8 model bbox prediction results to Labelbox annotations format.\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for bbox in yolo_result.boxes:\n class_name = model.names[int(bbox.cls)]\n\n # ignore bboxes that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n # get bbox coordinates\n start_x, start_y, end_x, end_y = bbox.xyxy.tolist()[0]\n\n bbox_source = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Rectangle(\n start=lb_types.Point(x=start_x, y=start_y),\n end=lb_types.Point(x=end_x, y=end_y),\n ),\n )\n\n annotations.append(bbox_source)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"##### Segment Mask"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_segment_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str:str]) -> list[lb_types.Label]:\n \"\"\"Convert YOLOV8 segment mask prediction results to Labelbox annotations format\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for i, mask in enumerate(yolo_result.masks.data):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore segment masks that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n # get binary numpy array to byte array. You must resize mask to match image.\n mask = (mask.numpy() * 255).astype(\"uint8\")\n img = Image.fromarray(mask, \"L\")\n img = img.resize(\n (yolo_result.orig_shape[1], yolo_result.orig_shape[0]))\n img_byte_arr = io.BytesIO()\n img.save(img_byte_arr, format=\"PNG\")\n encoded_image_bytes = img_byte_arr.getvalue()\n\n mask_data = lb_types.MaskData(im_bytes=encoded_image_bytes)\n mask_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Mask(mask=mask_data, color=(255, 255, 255)),\n )\n annotations.append(mask_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"##### Polygon"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_polygon_annotation_predictions(\n yolo_results, model, ontology_mapping: dict[str:str]) -> list[lb.Label]:\n \"\"\"Convert YOLOv8 model results to Labelbox polygon annotations format.\n\n Args:\n yolo_result (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n for yolo_result in yolo_results:\n for i, coordinates in enumerate(yolo_result.masks.xy):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore polygons that are not included in our mapping\n if not class_name in ontology_mapping.keys():\n continue\n\n polygon_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Polygon(points=[\n lb_types.Point(x=coordinate[0], y=coordinate[1])\n for coordinate in coordinates\n ]),\n )\n annotations.append(polygon_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Creating our Labels\n",
"Now that we have defined our functions to create our Labelbox annotations, we can run each image through YOLOv8 to obtain our predictions and then use those results with our global keys to create our labels. "
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# label list that will be populated\nlabels = []\n\nfor i, global_key in enumerate(global_keys):\n annotations = []\n\n # make YOLOv8 predictions\n result = model.predict(url_list[i])\n\n # run result through each function and adding them to our annotation list\n annotations += get_yolo_bbox_annotation_predictions(result, model,\n bbox_class_mapping)\n annotations += get_yolo_polygon_annotation_predictions(\n result, model, polygon_class_mapping)\n annotations += get_yolo_segment_annotation_predictions(\n result, model, mask_class_mapping)\n\n labels.append(\n lb_types.Label(data={\"global_key\": global_key},\n annotations=annotations))",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
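{
"metadata": {},
"source": [
"As an optional sanity check before importing, the cell below prints how many labels were created and how many annotations the first label contains. This is a minimal sketch and can be skipped."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# optional: confirm labels were created before importing them\nprint(f\"Created {len(labels)} label(s)\")\nprint(f\"First label contains {len(labels[0].annotations)} annotation(s)\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},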
{
"metadata": {},
"source": [
"#### Import Annotations to Labelbox\n",
"We have created our labels and can import them to our project. For more information on importing annotations, see [import image annotations](https://docs.labelbox.com/reference/import-image-annotations)."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"##### Option A: Upload as [Pre-labels (Model Assisted Labeling)](https://docs.labelbox.com/docs/model-assisted-labeling)\n",
"\n",
"This option is helpful for speeding up the initial labeling process and reducing the manual labeling workload for high-volume datasets."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "upload_job = lb.MALPredictionImport.create_from_objects(\n client=client,\n project_id=project.uid,\n name=\"mal_job\" + str(uuid.uuid4()),\n predictions=labels,\n)\n\nprint(f\"Errors: {upload_job.errors}\")\nprint(f\"Status of uploads: {upload_job.statuses}\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Option B: Upload to a Labeling Project as [Ground Truths](https://docs.labelbox.com/docs/import-ground-truth)\n",
"\n",
"This option is helpful for loading high-confidence labels from another platform or previous projects that just need review rather than manual labeling effort."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "upload_job = lb.LabelImport.create_from_objects(\n client=client,\n project_id=project.uid,\n name=\"label_import_job\" + str(uuid.uuid4()),\n labels=labels,\n)\n\nprint(f\"Errors: {upload_job.errors}\")\nprint(f\"Status of uploads: {upload_job.statuses}\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## Clean Up\n",
"Uncomment and run the cell below to optionally delete Labelbox objects created."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# batch.delete()\n# project.delete()\n# dataset.delete()",
"cell_type": "code",
"outputs": [],
"execution_count": null
}
]
}