Commit e1d5882

Merge pull request #247 from Labelbox/develop

🚒🚒 3.0.0 🚒🚒

2 parents 155e241 + cb5f54e

22 files changed: +813 −767 lines

CHANGELOG.md (52 additions, 0 deletions)

# Changelog

# Version 3.0.0 (2021-08-12)
## Added
* Annotation types
    - A set of Python objects for working with labelbox data
    - Creates a standard interface for both exports and imports
    - See the example notebooks under examples/annotation_types for usage
    - Note that these types are not yet supported for tiled imagery
* MEA support
    - Beta MEA users can now use the latest SDK release
* Metadata support
    - New metadata features are now fully supported by the SDK
* Easier export
    - `project.export_labels()` accepts a boolean indicating whether or not to download the result
    - Create annotation objects directly from exports with `project.label_generator()` or `project.video_label_generator()`
    - `project.video_label_generator()` asynchronously fetches video annotations
* Retry logic on data uploads
    - Bulk creation of data rows is now more reliable
* Datasets
    - Determine the number of data rows by calling `dataset.row_count`
    - Updated threading logic in `create_data_rows()` to make it compatible with AWS Lambdas
* Ontology
    - `OntologyBuilder`, `Classification`, `Option`, and `Tool` can now be imported from `labelbox` instead of `labelbox.schema.ontology`

## Removed
* Deprecated:
    - `project.reviews()`
    - `project.create_prediction()`
    - `project.create_prediction_model()`
    - `project.create_label()`
    - `Project.predictions()`
    - `Project.active_prediction_model`
    - `data_row.predictions`
    - `PredictionModel`
    - `Prediction`
* Replaced:
    - `data_row.metadata()`: use `data_row.attachments()` instead
    - `data_row.create_metadata()`: use `data_row.create_attachments()` instead
    - `AssetMetadata`: use `AssetAttachment` instead

## Fixes
* Support derived classes of ontology objects when using `from_dict`
* Notebooks:
    - Fixed a video export bug where the code would fail if the exported projects had tools other than bounding boxes
    - Fixed MAL demos that were broken due to an image download failing

## Misc
* Data processing dependencies are not installed by default, for users that only want client functionality
* To install all dependencies required for the data modules (annotation types and MEA metric calculation), use `pip install labelbox[data]`
* Decreased the wait time between updates for `BulkImportRequest.wait_until_done()`
* Organization is no longer used to create the LFO in `Project.setup()`

# Version 3.0.0-rc3 (2021-08-11)
## Updates
* `Geometry.raster` now has a consistent interface and improved functionality

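The "retry logic on data uploads" entry above can be illustrated with a generic exponential-backoff wrapper. This is a hypothetical sketch of the pattern, not the SDK's actual implementation; `with_retries` and `flaky_upload` are illustrative names.

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01, sleep=time.sleep):
    """Call fn, retrying with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Back off 0.01s, 0.02s, 0.04s, ... between attempts.
            sleep(base_delay * (2 ** attempt))

# Example: a stand-in upload that fails twice before succeeding.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

# sleep is stubbed out here so the example runs instantly.
result = with_retries(flaky_upload, sleep=lambda s: None)
```

The real SDK applies this kind of policy internally to bulk data row creation; the sketch only shows why transient network failures stop surfacing to the caller.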
examples/annotation_types/basics.ipynb (31 additions, 31 deletions)

@@ -6,12 +6,12 @@
   "metadata": {},
   "source": [
    "## Annotation Types\n",
-   "This is a common format for representing human and machine generated annotations. A standard interface allows us to build tools that only need to work with a single interface. For example, if model predictions and labels are all represented by a common format we can write all of our etl, visualization code, training code to work with a single interface. Annotation types can also provide a seamless transition between local modeling and using labelbox. Some of the helper functions include:\n",
+   "This is a common format for representing human and machine generated annotations. A standard interface allows us to build one set of tools that is compatible with all of our data. For example, if model predictions and labels are all represented by a common format we can write all of our etl, visualization code, training code to work with a single interface. Annotation types can also provide a seamless transition between local modeling and using labelbox. Some of the helper functions include:\n",
    "* Build annotations locally with local file paths, numpy arrays, or urls and create data rows with a single line of code\n",
-   "* Easily upload model predictions by converting predictions to \n",
-   "* Configure project ontology from model inferences\n",
-   "* Easily access video data without having to worry about downloading each frame.\n",
-   "* Helper functions for drawing annotations, converting them into shapely obejects, and much more."
+   "* Easily upload model predictions for MAL or MEA by converting annotation objects to the import format\n",
+   "* Configure project ontologies from a set of model inferences\n",
+   "* Easily access video data without having to worry about downloading each frame's annotations.\n",
+   "* Helper functions for drawing annotations, converting them into shapely objects, and much more."
    ]
   },

@@ -23,7 +23,7 @@
    "## Installation\n",
    "* Installing annotation types requires a slightly different pattern\n",
    "    - `pip install \"labelbox[data]\"`\n",
-   "* `pip install labelbox` is still valid but it won't add the required dependencies. If you only want the client functionality of the SDK then don't add the data extras. However, you will likely get import errors if attempting to use the annotation types"
+   "* `pip install labelbox` is still valid but it won't add the required dependencies. If you only want the client functionality of the SDK then don't add the [data] extras. However, you will likely get import errors if attempting to use the annotation types"
    ]
   },

@@ -33,7 +33,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "!pip install \"labelbox[data]\" --pre"
+   "!pip install \"labelbox[data]\""
    ]
   },

@@ -152,7 +152,7 @@
    "    - `project.label_generator()`\n",
    "    - `project.video_label_generator()`\n",
    "3. Use a converter to load from another format\n",
-   "    - Covered in converters.ipynb notebook."
+   "    - Covered in the converters.ipynb notebook."
    ]
   },

@@ -161,7 +161,7 @@
   "metadata": {},
   "source": [
    "### Basic LabelCollection\n",
-   "* A Label collection is either a labelList or LabelGenerator containing Labels\n",
+   "* A Label collection is either a `labelList` or `LabelGenerator` containing `Labels`\n",
    "    * More on this in label_containers.ipynb\n",
    "* Each label contains:\n",
    "    1. Data\n",

@@ -193,7 +193,7 @@
   "id": "circular-router",
   "metadata": {},
   "source": [
-   "* All models are pydantic so we can easily convert all of our objects to dictionaries and view the schema."
+   "* All models are pydantic models so we can easily convert all of our objects to dictionaries and view the schema."
    ]
   },

@@ -354,8 +354,8 @@
   "source": [
    "#### Non-public urls\n",
    "* If the urls in your data is not publicly accessible you can override the fetching logic\n",
-   "* For TextData and ImageData overwrite the following function and make sure it has the same signature. `data.fetch_remote(self) -> bytes`.\n",
-   "* For VideoData, the signature is `VideoData.fetch_remote(self, local_path)`. This function needs to download the video file locally to that local_path to work."
+   "* For `TextData` and `ImageData` overwrite the following function and make sure it has the same signature. `data.fetch_remote(self) -> bytes`.\n",
+   "* For `VideoData`, the signature is `VideoData.fetch_remote(self, local_path)`. This function needs to download the video file locally to that local_path to work."
    ]
   },

@@ -382,27 +382,27 @@
   "metadata": {},
   "source": [
    "* There are 4 types of annotations\n",
-   "    1. ObjectAnnotation\n",
+   "    1. `ObjectAnnotation`\n",
    "        - Objects with location information\n",
    "        - Annotations that are found in the object field of the labelbox export\n",
-   "        - Classes: Point, Polygon, Mask, Line, Rectangle, Named Entity\n",
-   "    2. ClassificationAnnotation\n",
+   "        - Classes: `Point`, `Polygon`, `Mask`, `Line`, `Rectangle`, `TextEntity`\n",
+   "    2. `ClassificationAnnotation`\n",
    "        - Classifications that can apply to data or another annotation\n",
-   "        - Classes: Checklist, Radio, Text, Dropdown\n",
-   "    3. VideoObjectAnnotation\n",
+   "        - Classes: `Checklist`, `Radio`, `Text`, `Dropdown`\n",
+   "    3. `VideoObjectAnnotation`\n",
    "        - Same as object annotation but there are extra fields for video information\n",
-   "    4. VideoClassificationAnnotation\n",
+   "    4. `VideoClassificationAnnotation`\n",
    "        - Same as classification annotation but there are extra fields for video information\n",
    "--------\n",
    "* Create an annotation by providing the following:\n",
    "1. Value\n",
-   "    - Must be either a Geometry, TextEntity, or Classification\n",
+   "    - Must be either a `Geometry`, `TextEntity`, or `Classification`\n",
    "    - This is the same as a top level tool in labelbox\n",
-   "2. name or feature_schema_id\n",
+   "2. Name or feature_schema_id\n",
    "    - This is the id that corresponds to a particular class or just simply the class name\n",
    "    - If uploading to labelbox this must match a field in an ontology.\n",
    "3. (Optional) Classifications\n",
-   "    - List of ClassificationAnnotations. This self referencing field enables infinite nesting of classifications.\n",
+   "    - List of `ClassificationAnnotations`. This self referencing field enables infinite nesting of classifications.\n",
    "    - Be careful with how you use the tool. Labelbox does not support nesting classifications\n",
    "    - E.g. you can have tool.classifications but not tool.classifications[0].classifications\n",

@@ -652,7 +652,7 @@
    "##### Geometry Utilities\n",
    "* All of the previous objects except TextEntity inherit from the Geometry base class\n",
    "* They have the following properties and functions\n",
-   "    1. raster(height, width, kwargs)\n",
+   "    1. draw(height, width, kwargs)\n",
    "    2. shape - property\n",
    "    3. geometry - property"
    ]

@@ -714,7 +714,7 @@
   "outputs": [],
   "source": [
    "color = (255,255,255)\n",
-   "np_mask = polygon_annotation.value.raster(height = im.size[1], width = im.size[0], color = color)\n",
+   "np_mask = polygon_annotation.value.draw(height = im.size[1], width = im.size[0], color = color)\n",
    "Image.fromarray(np.hstack([np_mask, np_data]))"
    ]

@@ -767,9 +767,9 @@
    "    Polygon(points = [Point(x=x,y=y) for x,y in [[82, 180], [83, 184], [88, 184], [86, 180]]]),\n",
    "    Polygon(points = [Point(x=x,y=y) for x,y in [[97, 182], [99, 184], [102, 183], [101, 180], [98, 180]]]),\n",
    "]\n",
-   "eye_masks = np.max([eye.raster(height = h, width = w) for eye in eyes], axis = 0)\n",
+   "eye_masks = np.max([eye.draw(height = h, width = w) for eye in eyes], axis = 0)\n",
    "nose = Polygon(points =[ Point(x=x,y=y) for x,y in [[95, 192], [93, 197], [96, 198], [100, 197], [100, 194], [100, 192], [96, 192]]])\n",
-   "nose_mask = nose.raster(height = h, width = w, color = nose_color)\n",
+   "nose_mask = nose.draw(height = h, width = w, color = nose_color)\n",
    "# Picks the brighter color if there is overlap.\n",
    "# If you don't want overlap then just simply create separate masks\n",
    "np_seg_mask = np.max([nose_mask, eye_masks], axis = 0)\n",

@@ -801,7 +801,7 @@
   "id": "swiss-storm",
   "metadata": {},
   "source": [
-   "* Calling `mask.raster()` will return a mask with pixels equal to the specified color"
+   "* Calling `mask.draw()` will return a mask with pixels equal to the specified color"
    ]

@@ -811,8 +811,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "eye_raster = eye_mask.raster()\n",
-   "nose_raster = nose_mask.raster()\n",
+   "eye_raster = eye_mask.draw()\n",
+   "nose_raster = nose_mask.draw()\n",
    "Image.fromarray(np.hstack([eye_raster,nose_raster, np_data]))"
    ]

@@ -1027,7 +1027,7 @@
   "outputs": [],
   "source": [
    "def signing_function(obj_bytes: bytes) -> str:\n",
-   "    # WARNING: Do not use this signer. You will not be able to resign these images at a later date\n",
+   "    # Do not use this signer. You will not be able to resign these images at a later date\n",
    "    url = client.upload_data(content=obj_bytes, sign=True)\n",
    "    return url"
    ]

@@ -1138,7 +1138,7 @@
   "metadata": {},
   "source": [
    "### Creating Data Rows\n",
-   "* Our Labels objects are great for working with locally but we might want to upload to labelbox\n",
+   "* `Labels` objects are great for working with locally but we might want to upload to labelbox\n",
    "* This is required for MAL, MEA, and to add additional labels to the data.\n"
    ]

@@ -1253,7 +1253,7 @@
   "source": [
    "### Next Steps\n",
    "* Annotation types should be thought of as low level interfaces\n",
-   "* We are working on a set of tools to make this less verbose. Please provide any feedback!\n",
+   "* We are working on a set of tools to make working with annotation types less verbose. Please provide any feedback!\n",
    "* Checkout other notebooks to see how to use higher level tools that are compatible with these interfaces"
    ]
   },
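The raster-to-draw rename running through this notebook changes only the method name: conceptually, `draw(height, width, ...)` rasterizes a geometry into a pixel mask of the requested size. A toy pure-Python sketch of that idea for a point; `draw_point` is a hypothetical stand-in, and the real `Geometry.draw` returns numpy arrays and also supports colors and an existing canvas.

```python
def draw_point(x, y, height, width, thickness=1, color=255):
    """Rasterize a point as a filled square of side (2*thickness + 1)
    on a height x width grid of zeros. A toy stand-in for Geometry.draw()."""
    mask = [[0] * width for _ in range(height)]
    for row in range(max(0, y - thickness), min(height, y + thickness + 1)):
        for col in range(max(0, x - thickness), min(width, x + thickness + 1)):
            mask[row][col] = color
    return mask

# A 10x12 mask with a 3x3 square centered at (x=5, y=4).
mask = draw_point(x=5, y=4, height=10, width=12, thickness=1)
```

Returning a plain mask grid is what makes the `np.max([...], axis=0)` compositing in the cells above work: overlapping shapes combine by taking the brighter pixel.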

examples/annotation_types/converters.ipynb (12 additions, 14 deletions)

@@ -6,16 +6,16 @@
   "metadata": {},
   "source": [
    "# Converters\n",
-   "* The goal is to create a set of converts that convert to and from the labelbox object format.\n",
+   "* The goal is to create a set of converts that convert to and from labelbox annotation types.\n",
    "* This is automatically used when exporting labels from labelbox with:\n",
-   "    1. label.label_generator()\n",
-   "    2. label.video_label_generator()\n",
+   "    1. `label.label_generator()`\n",
+   "    2. `label.video_label_generator()`\n",
    "* Currently we support:\n",
    "    1. NDJson Converter\n",
    "        - Convert to and from the prediction import format (mea, mal)\n",
    "    2. LabelboxV1 Converter\n",
    "        - Convert to and from the prediction import format (mea, mal)\n",
-   "* Converters use the LabelGenerator by default to minimize memory but are compatible with LabelLists"
+   "* Converters use the `LabelGenerator` by default to minimize memory but are compatible with `LabelList`s"
    ]
   },

@@ -25,7 +25,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "!pip install \"labelbox[data]\" --pre"
+   "!pip install \"labelbox[data]\""
    ]
   },

@@ -104,8 +104,8 @@
   "metadata": {},
   "source": [
    "### Video\n",
-   "* No longer need to download urls for each data row. This happens in the background of the converter\n",
-   "* Easy to draw annotations directly exported from labelbox"
+   "* Users no longer need to download urls for each data row. This happens in the background of the converter\n",
+   "* It is easy to draw annotations directly exported from labelbox"
    ]
   },

@@ -367,7 +367,7 @@
    "    for annotation in annotation_lookup[idx]:\n",
    "        if isinstance(annotation.value, Rectangle):\n",
-   "            frame = annotation.value.raster(canvas = frame.astype(np.uint8), thickness = 10, color= (255,0,0))\n",
+   "            frame = annotation.value.draw(canvas = frame.astype(np.uint8), thickness = 10, color= (255,0,0))\n",
    "    im = Image.fromarray(frame)\n",
    "    w,h = im.size\n",

@@ -439,7 +439,7 @@
    "canvas = np.zeros((h, w, 3), dtype = np.uint8)\n",
    "for annotation in label_list[0].annotations:\n",
    "    if isinstance(annotation.value, Geometry):\n",
-   "        canvas = annotation.value.raster(canvas = canvas)\n",
+   "        canvas = annotation.value.draw(canvas = canvas)\n",
    "Image.fromarray(canvas)"
    ]

@@ -487,7 +487,7 @@
   "source": [
-   "# We can also reserialize:\n",
+   "# We can also serialize back to the original payload:\n",
    "for result in LBV1Converter.serialize(label_list):\n",
    "    print(result)"
    ]

@@ -498,8 +498,8 @@
   "metadata": {},
   "source": [
    "## NDJson Converter\n",
-   "* Converts common annotation types into the ndjson format.\n",
-   "* Only supports MAL tools. So videos annotated with bounding boxes can't be converted"
+   "* Converts common annotation types into the ndjson format\n",
+   "* Only tools that are compatible with MAL are supported"
    ]
   },

@@ -520,8 +520,6 @@
   "source": [
-   "# TODO: Throw an error on these video annotations..\n",
-   "\n",
    "ndjson = []\n",
    "for row in NDJsonConverter.serialize(label_list):\n",
    "    ndjson.append(row)\n",
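The NDJson converter above emits newline-delimited JSON: one self-contained JSON object per line, which is the shape MAL and MEA imports consume. A minimal sketch of the format itself, independent of the SDK; the payload field names here are illustrative only, not the actual import schema.

```python
import json

def to_ndjson(rows):
    """Serialize dicts to newline-delimited JSON: one compact object per line."""
    return "\n".join(json.dumps(row, separators=(",", ":")) for row in rows)

def from_ndjson(text):
    """Parse ndjson text back into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line]

# Hypothetical MAL-style payloads; field names are illustrative only.
rows = [
    {"uuid": "a1", "dataRow": {"id": "ck123"},
     "bbox": {"top": 1, "left": 2, "height": 3, "width": 4}},
    {"uuid": "a2", "dataRow": {"id": "ck456"}, "answer": {"name": "yes"}},
]
ndjson = to_ndjson(rows)
```

Because each line parses on its own, an importer can stream and validate annotations one at a time instead of loading one giant JSON array.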

examples/annotation_types/label_containers.ipynb (9 additions, 9 deletions)

@@ -20,7 +20,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "!pip install \"labelbox[data]\" --pre"
+   "!pip install \"labelbox[data]\""
    ]
   },

@@ -132,10 +132,10 @@
    "    im_h, im_w = 300, 200\n",
    "    image_url = \"https://picsum.photos/id/1003/200/300\"\n",
    "    nose_color, eye_color = (0,255,0), (255,0,0)\n",
-   "    nose_mask = Point(x = 96, y = 194).raster(im_h, im_w, thickness = 3)\n",
+   "    nose_mask = Point(x = 96, y = 194).draw(im_h, im_w, thickness = 3)\n",
    "    eye_masks = [\n",
-   "        Point(x = 84, y = 182).raster(im_h, im_w, thickness = 3),\n",
-   "        Point(x = 99, y = 181).raster(im_h, im_w, thickness = 3),\n",
+   "        Point(x = 84, y = 182).draw(im_h, im_w, thickness = 3),\n",
+   "        Point(x = 99, y = 181).draw(im_h, im_w, thickness = 3),\n",
    "    ]\n",
    "    mask_arr = np.max([*eye_masks,nose_mask] , axis = 0)\n",
    "    mask = MaskData(arr = mask_arr)\n",

@@ -256,8 +256,8 @@
   "source": [
    "# LabelList\n",
    "* This object is essentially a list of Labels with a set of helpful utilties\n",
-   "* This object is simple and fast at the expense of memory\n",
-   "    * Larger datasets shouldn't use label list ( or at least will require more memory ).\n",
+   "* It is simple and fast at the expense of memory\n",
+   "    * Larger datasets shouldn't use label list ( or at least will require more memory )\n",
    "* Why use label list over just a list of labels?\n",
    "    * Multithreaded utilities (faster)\n",
    "    * Compatible with converter functions (functions useful for translating between formats, etl, and training )"

@@ -273,7 +273,7 @@
    "labels = get_labels()\n",
    "label_list = LabelList(labels)\n",
    "\n",
-   "# Also build label lists iteratively\n",
+   "# Also build LabelLists iteratively\n",
    "label_list = LabelList()\n",
    "for label in labels:\n",
    "    label_list.append(label)"

@@ -429,10 +429,10 @@
   "source": [
    "# LabelGenerator\n",
    "* This object generates labels and provides a set of helpful utilties\n",
-   "* This object is complex and slower than LabelList in order to be highly memory efficient\n",
+   "* This object is complex and slower than the `LabelList` in order to be highly memory efficient\n",
    "    * Larger datasets should use label generators\n",
    "* Why use label generator over just a generator that yields labels?\n",
-   "    * This object supports parallel io operations to buffer results in the background.\n",
+   "    * Parallel io operations are run in the background to prepare results\n",
    "    * Compatible with converter functions (functions useful for translating between formats, etl, and training )\n",
    "* The first qsize elements run serially from when the chained functions are added.\n",
    "    * After that iterating will get much faster."
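The LabelList vs LabelGenerator tradeoff described in this notebook mirrors Python's own list vs generator semantics: a list holds everything in memory and can be iterated repeatedly, while a generator is lazy and single-pass. A minimal sketch using plain dicts as hypothetical stand-ins for Label objects (the real containers add multithreaded utilities and background prefetching on top of this):

```python
def make_labels():
    # Stand-ins for Label objects; the real SDK yields labelbox Labels.
    for i in range(3):
        yield {"uid": i}

label_list = list(make_labels())   # LabelList-like: all in memory, reusable
label_generator = make_labels()    # LabelGenerator-like: lazy, single pass

first_pass = [lbl["uid"] for lbl in label_generator]
# The generator is exhausted after one full iteration.
second_pass = [lbl["uid"] for lbl in label_generator]
```

This is why larger datasets favor the generator form: only one label needs to exist in memory at a time, at the cost of not being able to re-iterate or index.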

examples/annotation_types/mal_using_annotation_types.ipynb (1 addition, 1 deletion)

@@ -17,7 +17,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-   "!pip install \"labelbox[data]\" --pre"
+   "!pip install \"labelbox[data]\""
    ]
   },
