"This notebook will provide an example workflow of setting up a Model Chat Evaluation (MCE) Project with the Labelbox-Python SDK.\n",
"Model Chat Evaluation Projects are set up differently from other editors, with their own unique method and modifications to existing methods:\n",
"\n",
"- `client.create_model_evaluation_project`: Main method used to create a model chat evaluation project\n",
"\n",
"- `client.create_ontology`: Method used to create Labelbox ontologies. For MCE projects, this requires an `ontology_kind` parameter set to `lb.OntologyKind.ModelEvaluation`\n",
"\n",
"- `client.create_ontology_from_feature_schemas`: Similar to `client.create_ontology`, but built from a list of feature schema IDs, which lets you reuse existing features instead of creating new ones. This also requires an `ontology_kind` set to `lb.OntologyKind.ModelEvaluation`."
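,
"\n",
"As a rough preview of the workflow covered below (a sketch only; the names, the `ontology_builder` variable, and the exact parameter values are placeholders, not values from this guide):\n",
"\n",
"```python\n",
"# Assumes `ontology_builder` is an lb.OntologyBuilder holding your tools/classifications\n",
"ontology = client.create_ontology(\n",
"    \"MCE ontology\",  # placeholder name\n",
"    ontology_builder.asdict(),\n",
"    media_type=lb.MediaType.Conversational,\n",
"    ontology_kind=lb.OntologyKind.ModelEvaluation,\n",
")\n",
"\n",
"project = client.create_model_evaluation_project(\n",
"    name=\"MCE project\",  # placeholder name\n",
"    media_type=lb.MediaType.Conversational,\n",
"    dataset_name=\"MCE dataset\",  # placeholder dataset name\n",
"    data_row_count=10,\n",
")\n",
"project.setup_editor(ontology)\n",
"```"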
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -q \"labelbox[data]\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import labelbox as lb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API Key and Client\n",
"Provide a valid API key below in order to properly connect to the Labelbox client. Please review the [Create API key guide](https://docs.labelbox.com/reference/create-api-key) for more information."
"## Example: Create Model Chat Evaluation Project\n",
"\n",
"The steps to create a Model Chat Evaluation Project through the Labelbox-Python SDK are similar to creating a regular project. However, they vary slightly, and we will showcase the different methods in this example workflow."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Model Chat Evaluation Ontology\n",
"\n",
"You can create ontologies for Model Evaluation projects the same way you create ontologies for other projects; the only additional requirement is passing an `ontology_kind` parameter, which must be set to `lb.OntologyKind.ModelEvaluation`. You can create ontologies with two methods: `client.create_ontology` and `client.create_ontology_from_feature_schemas`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Option A: `client.create_ontology`\n",
"\n",
"Typically, you create ontologies and generate the associated features at the same time. Below is an example of creating an ontology for your model chat evaluation project using supported tools and classifications. For information on supported annotation types, visit our [model chat evaluation](https://docs.labelbox.com/docs/model-chat-evaluation#supported-annotation-types) guide."
"Ontologies can also be created from feature schema IDs. This builds your ontology from existing features instead of generating new ones. You can find these features in the _Schema_ tab inside Labelbox. (Uncomment the code block below for this option.)"
" - `dataset_name`: The name of the dataset where the generated data rows will be located. Include this parameter only if you want to create a new dataset.\n",
"\n",
" - `dataset_id`: A dataset ID of an existing Labelbox dataset. Include this parameter if you want to append to an existing MCE dataset.\n",
"\n",
" - `data_row_count`: The number of data row assets that will be generated and used with your project.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"project = client.create_model_evaluation_project(\n",
"    name=\"Demo MCE Project\",\n",
"    media_type=lb.MediaType.Conversational,\n",
"    dataset_name=\"Demo MCE dataset\",\n",
"    data_row_count=100,\n",
")\n",
"\n",
"# Setup project with ontology created above\n",
"project.setup_editor(ontology)"
]
"Exporting from a Model Chat Evaluation project works the same as exporting from other projects. In this example, unless you have created labels inside the Labelbox platform, your export will be empty. Please review our [Model Chat Evaluation Export](https://docs.labelbox.com/reference/export-model-chat-evaluation-annotations) guide for a sample export."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Start export from project\n",
"export_task = project.export()\n",
"export_task.wait_till_done()\n",
"\n",
"# Conditional if task has errors\n",
"if export_task.has_errors():\n",
"    export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(\n",
"        stream_handler=lambda error: print(error))\n",
"\n",
"if export_task.has_result():\n",
"    # Start export stream\n",
"    stream = export_task.get_buffered_stream()\n",
"\n",
"    # Iterate through data rows\n",
"    for data_row in stream:\n",
"        print(data_row.json)"
]
"This section serves as an optional clean-up step to delete the Labelbox assets created within this guide. You will need to uncomment the delete methods shown."