[SN-153] YOLOv8 notebook #1673


Merged: 25 commits, Jun 14, 2024
5 changes: 5 additions & 0 deletions examples/README.md
@@ -227,6 +227,11 @@
<td><a href="https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/sam/meta_sam.ipynb" target="_blank"><img src="https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white" alt="Open In Github"></a></td>
<td><a href="https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/sam/meta_sam.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></td>
</tr>
<tr>
<td>Import Yolo Annotations</td>
<td><a href="https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/yolo/import_yolo_annotations.ipynb" target="_blank"><img src="https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white" alt="Open In Github"></a></td>
<td><a href="https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/yolo/import_yolo_annotations.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></td>
</tr>
</tbody>
</table>

312 changes: 312 additions & 0 deletions examples/integrations/yolo/import_yolo_annotations.ipynb
@@ -0,0 +1,312 @@
{
"nbformat": 4,
"nbformat_minor": 2,
"metadata": {},
"cells": [
{
"metadata": {},
"source": [
"<td>\n",
" <a target=\"_blank\" href=\"https://labelbox.com\" ><img src=\"https://labelbox.com/blog/content/images/2021/02/logo-v4.svg\" width=256/></a>\n",
"</td>\n"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"<td>\n",
"<a href=\"https://colab.research.google.com/github/Labelbox/labelbox-python/blob/develop/examples/integrations/yolo/import_yolo_annotations.ipynb\" target=\"_blank\"><img\n",
"src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n",
"</td>\n",
"\n",
"<td>\n",
"<a href=\"https://github.com/Labelbox/labelbox-python/tree/develop/examples/integrations/yolo/import_yolo_annotations.ipynb\" target=\"_blank\"><img\n",
"src=\"https://img.shields.io/badge/GitHub-100000?logo=github&logoColor=white\" alt=\"GitHub\"></a>\n",
"</td>"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"# Import YOLOv8 annotations\n",
"This notebook provides examples of setting up a project with annotations generated by YOLOv8. We will use the [Ultralytics](https://docs.ultralytics.com/) library to generate annotations. In this guide, we will be:\n",
"1. Importing a demo image data row that will be labeled\n",
"2. Setting up our ontology that matches our YOLOv8 annotations\n",
"3. Importing our data rows and attaching our ontology to a project\n",
"4. Running our images through Ultralytics\n",
"5. Importing the annotations generated\n"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"## Set up"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "%pip install -q --upgrade \"labelbox[data]\"\n%pip install -q --upgrade ultralytics",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": "import labelbox as lb\nimport labelbox.types as lb_types\n\nimport ultralytics\nfrom PIL import Image\n\nimport uuid\nimport io",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## API key and client\n",
"Provide a valid API key below to properly connect to the Labelbox client. Please review [Create API key guide](https://docs.labelbox.com/reference/create-api-key) for more information."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "API_KEY = None\nclient = lb.Client(api_key=API_KEY)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## Set up a YOLOv8 model\n",
"Below, we will initialize our model for our image data rows. We are using `yolov8n-seg.pt` since it supports segmentation masks. "
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "model = ultralytics.YOLO(\"yolov8n-seg.pt\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## Example: Import YOLOv8 Annotations\n",
"\n",
"The first few steps of this guide will demonstrate a basic workflow of creating data rows and setting up a project. For a quick, complete overview of this process, visit our [Quick start](https://docs.labelbox.com/reference/quick-start) guide."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"### Importing an image data row\n",
"\n",
"We will annotate this [image](https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg) with YOLOv8. The image contains many objects that YOLOv8 can detect. Later in this guide, we will go into more detail on the exact annotations."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "global_key = str(uuid.uuid4())\n\n# create data row\ndata_row = {\n \"row_data\":\n \"https://storage.googleapis.com/labelbox-datasets/image_sample_data/2560px-Kitano_Street_Kobe01s5s4110.jpeg\",\n \"global_key\":\n global_key,\n \"media_type\":\n \"IMAGE\",\n}\n\n# create dataset and import data row\ndataset = client.create_dataset(name=\"YOLOv8 Demo Dataset\")\ntask = dataset.create_data_rows([data_row])\ntask.wait_till_done()\n\nprint(f\"Errors: {task.errors}\")",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Setting up an ontology and a project\n",
"You must create an ontology and a project that match the data rows you are trying to label. The ontology should include the annotations that you want to derive from YOLOv8. We will introduce and explain class mappings later in this guide, so feel free to name your ontology features anything you want. In our example, we include a combination of bounding box, segmentation mask, and polygon tools to demonstrate converting each type of YOLOv8 annotation. Labelbox does not support ontologies where the same feature name appears more than once at the first level, so each of our feature names needs to be unique.\n"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Create an ontology"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "ontology_builder = lb.OntologyBuilder(tools=[\n lb.Tool(tool=lb.Tool.Type.BBOX, name=\"Vehicle_bbox\"),\n lb.Tool(tool=lb.Tool.Type.BBOX, name=\"Person_bbox\"),\n lb.Tool(tool=lb.Tool.Type.RASTER_SEGMENTATION, name=\"Vehicle_mask\"),\n lb.Tool(tool=lb.Tool.Type.RASTER_SEGMENTATION, name=\"Person_mask\"),\n lb.Tool(tool=lb.Tool.Type.POLYGON, name=\"Vehicle_polygon\"),\n lb.Tool(tool=lb.Tool.Type.POLYGON, name=\"Person_polygon\"),\n])\n\nontology = client.create_ontology(\n name=\"YOLOv8 Demo Ontology\",\n normalized=ontology_builder.asdict(),\n media_type=lb.MediaType.Image,\n)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"#### Create and set up a project"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "project = client.create_project(name=\"YOLOv8 Demo Project\",\n media_type=lb.MediaType.Image)\n\n# assign the batch so we can reference it in the clean up step\nbatch = project.create_batch(name=\"batch 1\", global_keys=[global_key])\n\nproject.setup_editor(ontology)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"### Export our data rows and get our predictions\n",
"In the step below, we export our data rows from our project and then add each `row_data` and `global_key` to lists that we will use to make our predictions."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Export data"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "export_task = project.export()\nexport_task.wait_till_done()\n\n# lists we will populate from the export\nurl_list = []\nglobal_keys = []\n\n\n# callback that is run on each data row\ndef export_callback(output: lb.BufferedJsonConverterOutput):\n\n data_row = output.json\n\n url_list.append(data_row[\"data_row\"][\"row_data\"])\n\n global_keys.append(data_row[\"data_row\"][\"global_key\"])\n\n\n# check if export has errors\nif export_task.has_errors():\n export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start()\n\nif export_task.has_result():\n export_task.get_buffered_stream().start(stream_handler=export_callback)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
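The callback above only pulls `row_data` and `global_key` out of each exported data row. A minimal sketch of that logic, using a hypothetical sample payload whose dict shape mirrors what the callback reads (it is an illustration, not real Labelbox export output):

```python
# Hypothetical export rows shaped like the payload export_callback reads.
sample_rows = [
    {"data_row": {"row_data": "https://example.com/image-1.jpeg",
                  "global_key": "key-1"}},
    {"data_row": {"row_data": "https://example.com/image-2.jpeg",
                  "global_key": "key-2"}},
]

url_list = []
global_keys = []

def export_callback(data_row: dict) -> None:
    # append the hosted image URL and its global key to our lists
    url_list.append(data_row["data_row"]["row_data"])
    global_keys.append(data_row["data_row"]["global_key"])

for row in sample_rows:
    export_callback(row)

print(global_keys)  # the keys we will pair with predictions later
```

The two lists stay index-aligned, which is why the prediction loop later can use `url_list[i]` together with `global_keys[i]`.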
{
"metadata": {},
"source": [
"### Import YOLOv8 annotations to a project\n",
"Now that you have finished the initial setup, we can create our predictions with YOLOv8 and import the annotations into our project. We will be doing the following in this step:\n",
"1. Defining our import functions\n",
"2. Creating our labels\n",
"3. Importing our labels as either ground truths or MAL labels (pre-labels)"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"#### Defining our import functions\n",
"YOLOv8 supports a wide range of annotations. This guide shows importing bounding boxes, polygons, and segment masks that match our ontology. Below are the functions used for each type. They follow the same pattern: navigate the result payload from YOLOv8 and convert it to the Labelbox annotation format. Each function takes a class mapping that maps YOLOv8 class names to Labelbox feature names. This mapping lets us give Labelbox features different names than the YOLOv8 classes, and it also lets us map several YOLOv8 classes to the same Labelbox feature in our ontology. We will define this mapping first. In our case, we map `bus` and `truck` to our `Vehicle` features and `person` to our `Person` features, with one mapping per tool type."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "bbox_class_mapping = {\n \"person\": \"Person_bbox\",\n \"bus\": \"Vehicle_bbox\",\n \"truck\": \"Vehicle_bbox\",\n}\nmask_class_mapping = {\n \"person\": \"Person_mask\",\n \"bus\": \"Vehicle_mask\",\n \"truck\": \"Vehicle_mask\",\n}\npolygon_class_mapping = {\n \"person\": \"Person_polygon\",\n \"bus\": \"Vehicle_polygon\",\n \"truck\": \"Vehicle_polygon\",\n}",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
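Each conversion function below uses these mappings the same way: look up the predicted YOLOv8 class name and skip any class that is not mapped. A small sketch of that lookup, with a hypothetical list of predicted class names:

```python
# The bbox mapping from the cell above: YOLOv8 class name -> Labelbox feature name.
bbox_class_mapping = {
    "person": "Person_bbox",
    "bus": "Vehicle_bbox",
    "truck": "Vehicle_bbox",
}

# hypothetical predicted class names; "bench" is not mapped and is skipped
predicted_classes = ["person", "truck", "bench", "bus"]

feature_names = [
    bbox_class_mapping[name]
    for name in predicted_classes
    if name in bbox_class_mapping
]

print(feature_names)  # ['Person_bbox', 'Vehicle_bbox', 'Vehicle_bbox']
```

Note that `bus` and `truck` both resolve to the same `Vehicle_bbox` feature, which is the many-to-one mapping described above.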
{
"metadata": {},
"source": [
"##### Bounding box"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_bbox_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str, str]) -> list[lb_types.ObjectAnnotation]:\n \"\"\"Convert YOLOv8 model bbox prediction results to the Labelbox annotation format\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for bbox in yolo_result.boxes:\n class_name = model.names[int(bbox.cls)]\n\n # ignore bboxes that are not included in our mapping\n if class_name not in ontology_mapping:\n continue\n\n # get bbox coordinates\n start_x, start_y, end_x, end_y = bbox.xyxy.tolist()[0]\n\n bbox_source = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Rectangle(\n start=lb_types.Point(x=start_x, y=start_y),\n end=lb_types.Point(x=end_x, y=end_y),\n ),\n )\n\n annotations.append(bbox_source)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
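The coordinate handling above relies on YOLOv8's `xyxy` box format being `[x_min, y_min, x_max, y_max]` in pixels, which maps directly to the top-left start point and bottom-right end point of a Labelbox `Rectangle`. A minimal sketch of just that conversion, with a hypothetical detection box and plain dicts standing in for the Labelbox types:

```python
# Convert an xyxy box into start/end corner points, as the function above does.
# Plain dicts stand in for lb_types.Rectangle / lb_types.Point for illustration.
def xyxy_to_rectangle(xyxy: list) -> dict:
    start_x, start_y, end_x, end_y = xyxy
    return {
        "start": {"x": start_x, "y": start_y},  # top-left corner
        "end": {"x": end_x, "y": end_y},        # bottom-right corner
    }

# hypothetical detection box in pixel coordinates
rect = xyxy_to_rectangle([10.0, 20.0, 110.0, 220.0])
print(rect["start"], rect["end"])
```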
{
"metadata": {},
"source": [
"##### Segment mask"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_segment_annotation_predictions(\n yolo_results, model,\n ontology_mapping: dict[str, str]) -> list[lb_types.ObjectAnnotation]:\n \"\"\"Convert YOLOv8 segment mask prediction results to the Labelbox annotation format\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n\n for yolo_result in yolo_results:\n for i, mask in enumerate(yolo_result.masks.data):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore segment masks that are not included in our mapping\n if class_name not in ontology_mapping:\n continue\n\n # convert the binary mask to a PNG byte array; the mask must be resized to match the original image\n mask = (mask.numpy() * 255).astype(\"uint8\")\n img = Image.fromarray(mask, \"L\")\n img = img.resize(\n (yolo_result.orig_shape[1], yolo_result.orig_shape[0]))\n img_byte_arr = io.BytesIO()\n img.save(img_byte_arr, format=\"PNG\")\n encoded_image_bytes = img_byte_arr.getvalue()\n\n mask_data = lb_types.MaskData(im_bytes=encoded_image_bytes)\n mask_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Mask(mask=mask_data, color=(255, 255, 255)),\n )\n annotations.append(mask_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
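The key preparation step above is scaling the model's 0.0/1.0 mask values to 0/255 so the array can be encoded as a grayscale (`"L"` mode) PNG. A tiny pure-Python sketch of that scaling, using a hypothetical 2x3 binary mask in place of a real tensor:

```python
# Scale a hypothetical binary mask (0.0 = background, 1.0 = object) to the
# 0-255 range expected by a grayscale PNG, mirroring (mask * 255) above.
binary_mask = [
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
]

pixel_mask = [[int(value * 255) for value in row] for row in binary_mask]

print(pixel_mask)  # [[0, 255, 255], [0, 0, 255]]
```

The white color `(255, 255, 255)` passed to `lb_types.Mask` then tells Labelbox which pixel value in the PNG marks the object.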
{
"metadata": {},
"source": [
"##### Polygon"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "def get_yolo_polygon_annotation_predictions(\n yolo_results, model, ontology_mapping: dict[str, str]) -> list[lb_types.ObjectAnnotation]:\n \"\"\"Convert YOLOv8 model results to the Labelbox polygon annotation format\n\n Args:\n yolo_results (Results): YOLOv8 prediction results.\n model (Model): YOLOv8 model.\n ontology_mapping (dict[<yolo_class_name>: <labelbox_feature_name>]): Allows mapping between YOLOv8 class names and different Labelbox feature names.\n Returns:\n list[lb_types.ObjectAnnotation]\n \"\"\"\n annotations = []\n for yolo_result in yolo_results:\n for i, coordinates in enumerate(yolo_result.masks.xy):\n class_name = model.names[int(yolo_result.boxes[i].cls)]\n\n # ignore polygons that are not included in our mapping\n if class_name not in ontology_mapping:\n continue\n\n polygon_annotation = lb_types.ObjectAnnotation(\n name=ontology_mapping[class_name],\n value=lb_types.Polygon(points=[\n lb_types.Point(x=coordinate[0], y=coordinate[1])\n for coordinate in coordinates\n ]),\n )\n annotations.append(polygon_annotation)\n\n return annotations",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
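The polygon function above turns each `(x, y)` pair from YOLOv8's `masks.xy` into one `Point`. A minimal sketch of that inner conversion, with a hypothetical rectangle-shaped polygon and plain dicts standing in for `lb_types.Point`:

```python
# Hypothetical polygon outline from masks.xy: (x, y) pairs in pixels.
coordinates = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]

# One point per coordinate pair, as in the lb_types.Polygon call above.
points = [{"x": x, "y": y} for x, y in coordinates]

print(len(points))  # one Point per coordinate pair
```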
{
"metadata": {},
"source": [
"#### Creating our labels\n",
"Now that we have defined our functions to create our Labelbox annotations, we can run each image through YOLOv8 to obtain our predictions and then use those results with our global keys to create our labels. "
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# label list that will be populated\nlabels = []\n\nfor i, global_key in enumerate(global_keys):\n annotations = []\n\n # make YOLOv8 predictions\n result = model.predict(url_list[i])\n\n # run the result through each conversion function and add the annotations to our list\n annotations += get_yolo_bbox_annotation_predictions(result, model,\n bbox_class_mapping)\n annotations += get_yolo_polygon_annotation_predictions(\n result, model, polygon_class_mapping)\n annotations += get_yolo_segment_annotation_predictions(\n result, model, mask_class_mapping)\n\n labels.append(\n lb_types.Label(data={\"global_key\": global_key},\n annotations=annotations))",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
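The loop above produces exactly one label per data row, bundling every annotation generated for that row under its global key. A stripped-down sketch of that assembly, with hypothetical keys and annotation lists in place of real predictions, and plain dicts standing in for `lb_types.Label`:

```python
# hypothetical global keys and per-row annotation lists for illustration
global_keys = ["key-1", "key-2"]
annotations_per_row = [["bbox", "polygon"], ["bbox"]]

# one label per data row, keyed by its global key
labels = [
    {"data": {"global_key": key}, "annotations": annotations_per_row[i]}
    for i, key in enumerate(global_keys)
]

print(len(labels))  # one label per data row
```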
{
"metadata": {},
"source": [
"#### Import annotations to Labelbox\n",
"We have created our labels and can import them to our project. For more information on importing annotations, visit our [import image annotations](https://docs.labelbox.com/reference/import-image-annotations) guide."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": [
"##### Option A: Upload to a labeling project as pre-labels (MAL)"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# upload MAL labels for this data row in project\nupload_job = lb.MALPredictionImport.create_from_objects(\n client=client,\n project_id=project.uid,\n name=\"mal_job\" + str(uuid.uuid4()),\n predictions=labels,\n)\nupload_job.wait_until_done()\n\nprint(\"Errors: \", upload_job.errors)\nprint(\"Status of uploads: \", upload_job.statuses)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"##### Option B: Upload to a labeling project using ground truth"
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# upload label for this data row in project\nupload_job = lb.LabelImport.create_from_objects(\n client=client,\n project_id=project.uid,\n name=\"label_import_job\" + str(uuid.uuid4()),\n labels=labels,\n)\nupload_job.wait_until_done()\n\nprint(\"Errors:\", upload_job.errors)\nprint(\"Status of uploads: \", upload_job.statuses)",
"cell_type": "code",
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"source": [
"## Clean up\n",
"Uncomment and run the cell below to optionally delete the Labelbox objects created in this guide."
],
"cell_type": "markdown"
},
{
"metadata": {},
"source": "# batch.delete()\n# project.delete()\n# dataset.delete()",
"cell_type": "code",
"outputs": [],
"execution_count": null
}
]
}