From eccaff8cae43df4f4f61dae10b0dd302ffcb0ff4 Mon Sep 17 00:00:00 2001 From: Gabefire <33893811+Gabefire@users.noreply.github.com> Date: Tue, 2 Jul 2024 09:12:24 -0500 Subject: [PATCH 1/4] modified notebook with new methods --- ...ultimodal_chat_project.ipynb => multimodal_chat_project.ipynb} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename examples/project_configuration/{live_multimodal_chat_project.ipynb => multimodal_chat_project.ipynb} (100%) diff --git a/examples/project_configuration/live_multimodal_chat_project.ipynb b/examples/project_configuration/multimodal_chat_project.ipynb similarity index 100% rename from examples/project_configuration/live_multimodal_chat_project.ipynb rename to examples/project_configuration/multimodal_chat_project.ipynb From 8b60baad8d302afa7c026f42937c7fa7f7a90092 Mon Sep 17 00:00:00 2001 From: Gabefire <33893811+Gabefire@users.noreply.github.com> Date: Tue, 2 Jul 2024 09:14:50 -0500 Subject: [PATCH 2/4] modified notebook with new methods --- .../multimodal_chat_project.ipynb | 327 +++++++++++++----- 1 file changed, 242 insertions(+), 85 deletions(-) diff --git a/examples/project_configuration/multimodal_chat_project.ipynb b/examples/project_configuration/multimodal_chat_project.ipynb index 0eab0c809..2ceb76337 100644 --- a/examples/project_configuration/multimodal_chat_project.ipynb +++ b/examples/project_configuration/multimodal_chat_project.ipynb @@ -1,18 +1,16 @@ { - "nbformat": 4, - "nbformat_minor": 2, - "metadata": {}, "cells": [ { + "cell_type": "markdown", "metadata": {}, "source": [ - "", - " ", + "\n", + " \n", "\n" - ], - "cell_type": "markdown" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "\n", @@ -24,111 +22,203 @@ "\n", "" - ], - "cell_type": "markdown" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ - "# Live Multimodal Chat project setup\n", + "# Multimodal chat project setup\n", "\n", - "This notebook will provide an example workflow of setting up a Live Multimodal 
Chat (LMC) Project with the Labelbox-Python SDK.\n",
- "Live Multimodal Chat Projects are set up differently than other projects with its own unique method and modifications to existing methods:\n",
+ "This notebook will provide an example workflow of setting up a multimodal Chat (MMC) Project with the Labelbox-Python SDK.\n",
+ "Multimodal Chat Projects are set up differently than other projects with their own unique methods and modifications to existing methods:\n",
 "\n",
- "- `client.create_model_evaluation_project`: The main method used to create a Live Multimodal Chat project\n",
+ "- `client.create_model_evaluation_project`: The main method used to create a live multimodal Chat project.\n",
+ " \n",
+ "- `client.create_offline_model_evaluation_project`: The main method used to create an offline multimodal Chat project.\n",
 "\n",
- "- `client.create_ontology`: Methods used to create Labelbox ontologies for LMC project this requires an `ontology_kind` parameter set to `lb.OntologyKind.ModelEvaluation`\n",
+ "- `client.create_ontology`: Method used to create Labelbox ontologies for LMC projects; this requires an `ontology_kind` parameter set to `lb.OntologyKind.ModelEvaluation`.\n",
 "\n",
 "- `client.create_ontology_from_feature_schemas`: Similar to `client.create_ontology` but from a list of `feature schema ids` designed to allow you to use existing features instead of creating new features. This also requires an `ontology_kind` set to `lb.OntologyKind.ModelEvaluation`." 
- ], - "cell_type": "markdown" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "## Set up" - ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "%pip install -q --upgrade \"labelbox[data]\"", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "%pip install -q --upgrade \"labelbox[data]\"" + ] }, { - "metadata": {}, - "source": "import labelbox as lb", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "import labelbox as lb" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "## API key and client\n", - "Provide a valid API key below in order to properly connect to the Labelbox client. Please review [Create API key guide](https://docs.labelbox.com/reference/create-api-key) for more information." - ], - "cell_type": "markdown" + "Please provide a valid API key below to connect to the Labelbox client properly. For more information, please review the [Create API key guide](https://docs.labelbox.com/reference/create-api-key)." + ] }, { - "metadata": {}, - "source": "API_KEY = None\nclient = lb.Client(api_key=API_KEY)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "API_KEY = None\n", + "client = lb.Client(api_key=API_KEY)" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ - "## Example: Create Live Multimodal Chat project\n", + "## Example: Create multimodal Chat project\n", "\n", - "The steps to creating a Live Multimodal Chat Project through the Labelbox-Python SDK are similar to creating a regular project. However, they vary slightly, and we will showcase the different methods in this example workflow." - ], - "cell_type": "markdown" + "The steps to creating a multimodal Chat Projects through the Labelbox-Python SDK are similar to creating a regular project. 
However, they vary slightly, and we will showcase the different methods in this example workflow."
+ ]
 },
 {
+ "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### Create a Live Multimodal Chat ontology\n",
+ "### Create a multimodal chat ontology\n",
 "\n",
- "You can create ontologies for Model Evaluation projects the same way as creating ontologies for other projects with the only requirement of passing in a `ontology_kind` parameter which needs set to `lb.OntologyKind.ModelEvaluation`. You can create ontologies with two methods: `client.create_ontology` and `client.create_ontology_from_feature_schemas`."
- ],
- "cell_type": "markdown"
+ "You can create ontologies for multimodal chat projects in the same way as other project ontologies using two methods: `client.create_ontology` and `client.create_ontology_from_feature_schemas`. The only additional requirement is to pass an `ontology_kind` parameter, which needs to be set to `lb.OntologyKind.ModelEvaluation`."
+ ]
 },
 {
+ "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Option A: `client.create_ontology`\n",
 "\n",
- "Typically, you create ontologies and generate the associated features at the same time. Below is an example of creating an ontology for your Live Multimodal Chat project using supported tools and classifications. For information on supported annotation types visit our [Live Multimodal Chat](https://docs.labelbox.com/docs/live-multimodal-chat#supported-annotation-types) guide."
- ],
- "cell_type": "markdown"
+ "Typically, you create ontologies and generate the associated features simultaneously. Below is an example of creating an ontology for your multimodal chat project using supported tools and classifications; for information on supported annotation types, visit our [multimodal chat evaluation guide](https://docs.labelbox.com/docs/multimodal-chat#supported-annotation-types)." 
+ ] }, { - "metadata": {}, - "source": "ontology_builder = lb.OntologyBuilder(\n tools=[\n lb.Tool(\n tool=lb.Tool.Type.MESSAGE_SINGLE_SELECTION,\n name=\"single select feature\",\n ),\n lb.Tool(\n tool=lb.Tool.Type.MESSAGE_MULTI_SELECTION,\n name=\"multi select feature\",\n ),\n lb.Tool(tool=lb.Tool.Type.MESSAGE_RANKING, name=\"ranking feature\"),\n ],\n classifications=[\n lb.Classification(\n class_type=lb.Classification.Type.CHECKLIST,\n name=\"checklist feature\",\n options=[\n lb.Option(value=\"option 1\", label=\"option 1\"),\n lb.Option(value=\"option 2\", label=\"option 2\"),\n ],\n ),\n lb.Classification(\n class_type=lb.Classification.Type.RADIO,\n name=\"radio_question\",\n options=[\n lb.Option(value=\"first_radio_answer\"),\n lb.Option(value=\"second_radio_answer\"),\n ],\n ),\n ],\n)\n\n# Create ontology\nontology = client.create_ontology(\n \"LMC ontology\",\n ontology_builder.asdict(),\n media_type=lb.MediaType.Conversational,\n ontology_kind=lb.OntologyKind.ModelEvaluation,\n)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "ontology_builder = lb.OntologyBuilder(\n", + " tools=[\n", + " lb.Tool(\n", + " tool=lb.Tool.Type.MESSAGE_SINGLE_SELECTION,\n", + " name=\"single select feature\",\n", + " ),\n", + " lb.Tool(\n", + " tool=lb.Tool.Type.MESSAGE_MULTI_SELECTION,\n", + " name=\"multi select feature\",\n", + " ),\n", + " lb.Tool(tool=lb.Tool.Type.MESSAGE_RANKING, name=\"ranking feature\"),\n", + " ],\n", + " classifications=[\n", + " lb.Classification(\n", + " class_type=lb.Classification.Type.CHECKLIST,\n", + " name=\"checklist feature\",\n", + " options=[\n", + " lb.Option(value=\"option 1\", label=\"option 1\"),\n", + " lb.Option(value=\"option 2\", label=\"option 2\"),\n", + " ],\n", + " ),\n", + " lb.Classification(\n", + " class_type=lb.Classification.Type.RADIO,\n", + " name=\"radio_question\",\n", + " options=[\n", + " lb.Option(value=\"first_radio_answer\"),\n", + 
" lb.Option(value=\"second_radio_answer\"),\n", + " ],\n", + " ),\n", + " ],\n", + ")\n", + "\n", + "# Create ontology\n", + "ontology = client.create_ontology(\n", + " \"LMC ontology\",\n", + " ontology_builder.asdict(),\n", + " media_type=lb.MediaType.Conversational,\n", + " ontology_kind=lb.OntologyKind.ModelEvaluation,\n", + ")" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "### Option B: `client.create_ontology_from_feature_schemas`\n", "Ontologies can also be created with feature schema IDs. This makes your ontologies with existing features compared to generating new features. You can get these features by going to the _Schema_ tab inside Labelbox. (uncomment the below code block for this option)" - ], - "cell_type": "markdown" + ] }, { + "cell_type": "code", + "execution_count": null, "metadata": {}, - "source": "# ontology = client.create_ontology_from_feature_schemas(\n# \"LMC ontology\",\n# feature_schema_ids=[\"\",\n", + " description=\"\", # optional\n", + ")" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "### Set Up Live Multimodal Chat project\n", @@ -150,25 +240,35 @@ " - `dataset_id`: An optional dataset ID of an existing Labelbox dataset. 
Include this parameter if you want to append to an existing LMC dataset.\n",
 "\n",
 " - `data_row_count`: The number of data row assets that will be generated and used with your project.\n"
- ],
- "cell_type": "markdown"
+ ]
 },
 {
- "metadata": {},
- "source": "project = client.create_model_evaluation_project(\n name=\"Demo LMC Project\",\n media_type=lb.MediaType.Conversational,\n dataset_name=\"Demo LMC dataset\",\n data_row_count=100,\n)\n\n# Setup project with ontology created above\nproject.setup_editor(ontology)",
 "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
 "outputs": [],
- "execution_count": null
+ "source": [
+ "project = client.create_model_evaluation_project(\n",
+ " name=\"Demo LMC Project\",\n",
+ " media_type=lb.MediaType.Conversational,\n",
+ " dataset_name=\"Demo LMC dataset\",\n",
+ " data_row_count=100,\n",
+ ")\n",
+ "\n",
+ "# Setup project with ontology created above\n",
+ "project.setup_editor(ontology)"
+ ]
 },
 {
+ "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Setting up model config\n",
 "You can create, delete, attach and remove model configs from your Live Multimodal Chat project through the Labelbox-Python SDK. These are the model configs that you will be evaluating for your responses. "
- ],
- "cell_type": "markdown"
+ ]
 },
 {
+ "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Creating model config\n",
@@ -181,83 +281,140 @@
 "- `inference_params`: JSON of model configuration parameters. This will vary depending on the model you are trying to set up. It is recommended to first set up a model config inside the UI to learn all the associated parameters.\n",
 "\n",
 "For the example below, we will be setting up a Google Gemini 1.5 Pro model config." 
- ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "MODEL_ID = \"270a24ba-b983-40d6-9a1f-98a1bbc2fb65\"\n\ninference_params = {\"max_new_tokens\": 1024, \"use_attachments\": True}\n\nmodel_config = client.create_model_config(\n name=\"Example model config\",\n model_id=MODEL_ID,\n inference_params=inference_params,\n)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "MODEL_ID = \"270a24ba-b983-40d6-9a1f-98a1bbc2fb65\"\n", + "\n", + "inference_params = {\"max_new_tokens\": 1024, \"use_attachments\": True}\n", + "\n", + "model_config = client.create_model_config(\n", + " name=\"Example model config\",\n", + " model_id=MODEL_ID,\n", + " inference_params=inference_params,\n", + ")" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "### Attaching model config to project\n", "You can attach and remove model configs to your project using `project.add_model_config` or `project.remove_model_config`. Both methods take just a `model_config` ID." - ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "project.add_model_config(model_config.uid)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "project.add_model_config(model_config.uid)" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "### Delete model config\n", "You can also delete model configs using the `client.delete_model_config`. You just need to pass in the `model_config` ID in order to delete your model config. You can obtain this ID from your created model config above or get the model configs directly from your project using `project.project_model_configs` and then iterating through the list of model configs attached to your project. Uncomment the code below to delete your model configs. 
" - ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "# model_configs = project.project_model_configs()\n\n# for model_config in model_configs:\n# client.delete_model_config(model_config.uid)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "# model_configs = project.project_model_configs()\n", + "\n", + "# for model_config in model_configs:\n", + "# client.delete_model_config(model_config.uid)" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ - "**To finish setting up your LMC project, you will need to navigate to your project overview inside the Labelbox platform and select _Complete setup_ on the left side panel**" - ], - "cell_type": "markdown" + "### Mark project setup as completed\n", + "\n", + "Once you have finalized your project and set up your model configs, you must mark the project setup as completed.\n", + "\n", + "**Once the project is marked as \"setup complete\", a user can not add, modify, or delete existing project model configs.**" + ] }, { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "project.set_project_model_setup_complete()" + ] + }, + { + "cell_type": "markdown", "metadata": {}, "source": [ "## Exporting Live Multimodal Chat project\n", "Exporting from a Live Multimodal Chat project works the same as exporting from other projects. In this example, your export will be shown as empty unless you have created labels inside the Labelbox platform. Please review our [Live Multimodal Chat Export](https://docs.labelbox.com/reference/export-live-multimodal-chat-annotations) guide for a sample export." 
- ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "# Start export from project\nexport_task = project.export()\nexport_task.wait_till_done()\n\n# Conditional if task has errors\nif export_task.has_errors():\n export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(\n stream_handler=lambda error: print(error))\n\nif export_task.has_result():\n # Start export stream\n stream = export_task.get_buffered_stream()\n\n # Iterate through data rows\n for data_row in stream:\n print(data_row.json)", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "# Start export from project\n", + "export_task = project.export()\n", + "export_task.wait_till_done()\n", + "\n", + "# Conditional if task has errors\n", + "if export_task.has_errors():\n", + " export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(\n", + " stream_handler=lambda error: print(error))\n", + "\n", + "if export_task.has_result():\n", + " # Start export stream\n", + " stream = export_task.get_buffered_stream()\n", + "\n", + " # Iterate through data rows\n", + " for data_row in stream:\n", + " print(data_row.json)" + ] }, { + "cell_type": "markdown", "metadata": {}, "source": [ "## Clean up\n", "\n", "This section serves as an optional clean-up step to delete the Labelbox assets created within this guide. You will need to uncomment the delete methods shown." 
- ], - "cell_type": "markdown" + ] }, { - "metadata": {}, - "source": "# project.delete()\n# client.delete_unused_ontology(ontology.uid)\n# dataset.delete()", "cell_type": "code", + "execution_count": null, + "metadata": {}, "outputs": [], - "execution_count": null + "source": [ + "# project.delete()\n", + "# client.delete_unused_ontology(ontology.uid)\n", + "# dataset.delete()" + ] } - ] -} \ No newline at end of file + ], + "metadata": { + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From bdfd54a6c822f339ee024a20cf214fe3848093d0 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 2 Jul 2024 14:15:51 +0000 Subject: [PATCH 3/4] :art: Cleaned --- .../multimodal_chat_project.ipynb | 285 ++++++------------ 1 file changed, 86 insertions(+), 199 deletions(-) diff --git a/examples/project_configuration/multimodal_chat_project.ipynb b/examples/project_configuration/multimodal_chat_project.ipynb index 2ceb76337..c2f741046 100644 --- a/examples/project_configuration/multimodal_chat_project.ipynb +++ b/examples/project_configuration/multimodal_chat_project.ipynb @@ -1,31 +1,33 @@ { + "nbformat": 4, + "nbformat_minor": 2, + "metadata": {}, "cells": [ { - "cell_type": "markdown", "metadata": {}, "source": [ - "\n", - " \n", + "", + " ", "\n" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "\n", - "\n", "\n", "\n", "\n", - "\n", "" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "# Multimodal chat project setup\n", @@ -40,149 +42,95 @@ "- `client.create_ontology`: Methods used to create Labelbox ontologies for LMC project this requires an `ontology_kind` parameter set to `lb.OntologyKind.ModelEvaluation`.\n", "\n", "- `client.create_ontology_from_feature_schemas`: Similar to `client.create_ontology` but from a list of `feature schema ids` designed to allow you to use existing features instead of creating new features. 
This also requires an `ontology_kind` set to `lb.OntologyKind.ModelEvaluation`." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## Set up" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "%pip install -q --upgrade \"labelbox[data]\"", + "cell_type": "code", "outputs": [], - "source": [ - "%pip install -q --upgrade \"labelbox[data]\"" - ] + "execution_count": null }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "import labelbox as lb", + "cell_type": "code", "outputs": [], - "source": [ - "import labelbox as lb" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## API key and client\n", "Please provide a valid API key below to connect to the Labelbox client properly. For more information, please review the [Create API key guide](https://docs.labelbox.com/reference/create-api-key)." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "API_KEY = None\nclient = lb.Client(api_key=API_KEY)", + "cell_type": "code", "outputs": [], - "source": [ - "API_KEY = None\n", - "client = lb.Client(api_key=API_KEY)" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## Example: Create multimodal Chat project\n", "\n", "The steps to creating a multimodal Chat Projects through the Labelbox-Python SDK are similar to creating a regular project. However, they vary slightly, and we will showcase the different methods in this example workflow." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Create a multimodal chat ontology\n", "\n", "You can create ontologies for multimodal chat projects in the same way as other project ontologies using two methods: `client.create_ontology` and `client.create_ontology_from_feature_schemas`. 
The only additional requirement is to pass an ontology_kind parameter, which needs to be set to `lb.OntologyKind.ModelEvaluation`." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "#### Option A: `client.create_ontology`\n", "\n", "Typically, you create ontologies and generate the associated features simultaneously. Below is an example of creating an ontology for your multimodal chat project using supported tools and classifications; for information on supported annotation types, visit our [multimodal chat evaluation guide](https://docs.labelbox.com/docs/multimodal-chat#supported-annotation-types) guide." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "ontology_builder = lb.OntologyBuilder(\n tools=[\n lb.Tool(\n tool=lb.Tool.Type.MESSAGE_SINGLE_SELECTION,\n name=\"single select feature\",\n ),\n lb.Tool(\n tool=lb.Tool.Type.MESSAGE_MULTI_SELECTION,\n name=\"multi select feature\",\n ),\n lb.Tool(tool=lb.Tool.Type.MESSAGE_RANKING, name=\"ranking feature\"),\n ],\n classifications=[\n lb.Classification(\n class_type=lb.Classification.Type.CHECKLIST,\n name=\"checklist feature\",\n options=[\n lb.Option(value=\"option 1\", label=\"option 1\"),\n lb.Option(value=\"option 2\", label=\"option 2\"),\n ],\n ),\n lb.Classification(\n class_type=lb.Classification.Type.RADIO,\n name=\"radio_question\",\n options=[\n lb.Option(value=\"first_radio_answer\"),\n lb.Option(value=\"second_radio_answer\"),\n ],\n ),\n ],\n)\n\n# Create ontology\nontology = client.create_ontology(\n \"LMC ontology\",\n ontology_builder.asdict(),\n media_type=lb.MediaType.Conversational,\n ontology_kind=lb.OntologyKind.ModelEvaluation,\n)", + "cell_type": "code", "outputs": [], - "source": [ - "ontology_builder = lb.OntologyBuilder(\n", - " tools=[\n", - " lb.Tool(\n", - " tool=lb.Tool.Type.MESSAGE_SINGLE_SELECTION,\n", - " name=\"single select feature\",\n", - " ),\n", - " 
lb.Tool(\n", - " tool=lb.Tool.Type.MESSAGE_MULTI_SELECTION,\n", - " name=\"multi select feature\",\n", - " ),\n", - " lb.Tool(tool=lb.Tool.Type.MESSAGE_RANKING, name=\"ranking feature\"),\n", - " ],\n", - " classifications=[\n", - " lb.Classification(\n", - " class_type=lb.Classification.Type.CHECKLIST,\n", - " name=\"checklist feature\",\n", - " options=[\n", - " lb.Option(value=\"option 1\", label=\"option 1\"),\n", - " lb.Option(value=\"option 2\", label=\"option 2\"),\n", - " ],\n", - " ),\n", - " lb.Classification(\n", - " class_type=lb.Classification.Type.RADIO,\n", - " name=\"radio_question\",\n", - " options=[\n", - " lb.Option(value=\"first_radio_answer\"),\n", - " lb.Option(value=\"second_radio_answer\"),\n", - " ],\n", - " ),\n", - " ],\n", - ")\n", - "\n", - "# Create ontology\n", - "ontology = client.create_ontology(\n", - " \"LMC ontology\",\n", - " ontology_builder.asdict(),\n", - " media_type=lb.MediaType.Conversational,\n", - " ontology_kind=lb.OntologyKind.ModelEvaluation,\n", - ")" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Option B: `client.create_ontology_from_feature_schemas`\n", "Ontologies can also be created with feature schema IDs. This makes your ontologies with existing features compared to generating new features. You can get these features by going to the _Schema_ tab inside Labelbox. 
(uncomment the below code block for this option)" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "# ontology = client.create_ontology_from_feature_schemas(\n# \"LMC ontology\",\n# feature_schema_ids=[\"\",\n description=\"\", # optional\n)", + "cell_type": "code", "outputs": [], - "source": [ - "project = client.create_offline_model_evaluation_project(\n", - " name=\"\",\n", - " description=\"\", # optional\n", - ")" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Set Up Live Multimodal Chat project\n", @@ -240,35 +183,25 @@ " - `dataset_id`: An optional dataset ID of an existing Labelbox dataset. Include this parameter if you are wanting to append to an existing LMC dataset.\n", "\n", " - `data_row_count`: The number of data row assets that will be generated and used with your project.\n" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "project = client.create_model_evaluation_project(\n name=\"Demo LMC Project\",\n media_type=lb.MediaType.Conversational,\n dataset_name=\"Demo LMC dataset\",\n data_row_count=100,\n)\n\n# Setup project with ontology created above\nproject.setup_editor(ontology)", + "cell_type": "code", "outputs": [], - "source": [ - "project = client.create_model_evaluation_project(\n", - " name=\"Demo LMC Project\",\n", - " media_type=lb.MediaType.Conversational,\n", - " dataset_name=\"Demo LMC dataset\",\n", - " data_row_count=100,\n", - ")\n", - "\n", - "# Setup project with ontology created above\n", - "project.setup_editor(ontology)" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## Setting up model config\n", "You can create, delete, attach and remove model configs from your Live Multimodal Chat project through the Labelbox-Python SDK. These are the model configs that you will be evaluating for your responses. 
" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Creating model config\n", @@ -281,64 +214,47 @@ "- `inference_params`: JSON of model configuration parameters. This will vary depending on the model you are trying to set up. It is recommended to first set up a model config inside the UI to learn all the associated parameters.\n", "\n", "For the example below, we will be setting up a Google Gemini 1.5 Pro model config." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "MODEL_ID = \"270a24ba-b983-40d6-9a1f-98a1bbc2fb65\"\n\ninference_params = {\"max_new_tokens\": 1024, \"use_attachments\": True}\n\nmodel_config = client.create_model_config(\n name=\"Example model config\",\n model_id=MODEL_ID,\n inference_params=inference_params,\n)", + "cell_type": "code", "outputs": [], - "source": [ - "MODEL_ID = \"270a24ba-b983-40d6-9a1f-98a1bbc2fb65\"\n", - "\n", - "inference_params = {\"max_new_tokens\": 1024, \"use_attachments\": True}\n", - "\n", - "model_config = client.create_model_config(\n", - " name=\"Example model config\",\n", - " model_id=MODEL_ID,\n", - " inference_params=inference_params,\n", - ")" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Attaching model config to project\n", "You can attach and remove model configs to your project using `project.add_model_config` or `project.remove_model_config`. Both methods take just a `model_config` ID." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "project.add_model_config(model_config.uid)", + "cell_type": "code", "outputs": [], - "source": [ - "project.add_model_config(model_config.uid)" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Delete model config\n", "You can also delete model configs using the `client.delete_model_config`. 
You just need to pass in the `model_config` ID in order to delete your model config. You can obtain this ID from your created model config above or get the model configs directly from your project using `project.project_model_configs` and then iterating through the list of model configs attached to your project. Uncomment the code below to delete your model configs. " - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "# model_configs = project.project_model_configs()\n\n# for model_config in model_configs:\n# client.delete_model_config(model_config.uid)", + "cell_type": "code", "outputs": [], - "source": [ - "# model_configs = project.project_model_configs()\n", - "\n", - "# for model_config in model_configs:\n", - "# client.delete_model_config(model_config.uid)" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "### Mark project setup as completed\n", @@ -346,75 +262,46 @@ "Once you have finalized your project and set up your model configs, you must mark the project setup as completed.\n", "\n", "**Once the project is marked as \"setup complete\", a user can not add, modify, or delete existing project model configs.**" - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "project.set_project_model_setup_complete()", + "cell_type": "code", "outputs": [], - "source": [ - "project.set_project_model_setup_complete()" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## Exporting Live Multimodal Chat project\n", "Exporting from a Live Multimodal Chat project works the same as exporting from other projects. In this example, your export will be shown as empty unless you have created labels inside the Labelbox platform. 
Please review our [Live Multimodal Chat Export](https://docs.labelbox.com/reference/export-live-multimodal-chat-annotations) guide for a sample export." - ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "# Start export from project\nexport_task = project.export()\nexport_task.wait_till_done()\n\n# Conditional if task has errors\nif export_task.has_errors():\n export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(\n stream_handler=lambda error: print(error))\n\nif export_task.has_result():\n # Start export stream\n stream = export_task.get_buffered_stream()\n\n # Iterate through data rows\n for data_row in stream:\n print(data_row.json)", + "cell_type": "code", "outputs": [], - "source": [ - "# Start export from project\n", - "export_task = project.export()\n", - "export_task.wait_till_done()\n", - "\n", - "# Conditional if task has errors\n", - "if export_task.has_errors():\n", - " export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(\n", - " stream_handler=lambda error: print(error))\n", - "\n", - "if export_task.has_result():\n", - " # Start export stream\n", - " stream = export_task.get_buffered_stream()\n", - "\n", - " # Iterate through data rows\n", - " for data_row in stream:\n", - " print(data_row.json)" - ] + "execution_count": null }, { - "cell_type": "markdown", "metadata": {}, "source": [ "## Clean up\n", "\n", "This section serves as an optional clean-up step to delete the Labelbox assets created within this guide. You will need to uncomment the delete methods shown." 
- ] + ], + "cell_type": "markdown" }, { - "cell_type": "code", - "execution_count": null, "metadata": {}, + "source": "# project.delete()\n# client.delete_unused_ontology(ontology.uid)\n# dataset.delete()", + "cell_type": "code", "outputs": [], - "source": [ - "# project.delete()\n", - "# client.delete_unused_ontology(ontology.uid)\n", - "# dataset.delete()" - ] - } - ], - "metadata": { - "language_info": { - "name": "python" + "execution_count": null } - }, - "nbformat": 4, - "nbformat_minor": 2 -} + ] +} \ No newline at end of file From a650a93c3d020439a2b91c334143659496d78bad Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 2 Jul 2024 14:16:35 +0000 Subject: [PATCH 4/4] :memo: README updated --- examples/README.md | 180 ++++++++++++++++++++++----------------------- 1 file changed, 90 insertions(+), 90 deletions(-) diff --git a/examples/README.md b/examples/README.md index 30f615bc2..38452d9d5 100644 --- a/examples/README.md +++ b/examples/README.md @@ -17,14 +17,9 @@ - Ontologies - Open In Github - Open In Colab - - - Data Rows - Open In Github - Open In Colab + Custom Embeddings + Open In Github + Open In Colab Batches @@ -32,34 +27,39 @@ Open In Colab - Projects - Open In Github - Open In Colab + User Management + Open In Github + Open In Colab - Custom Embeddings - Open In Github - Open In Colab + Basics + Open In Github + Open In Colab Data Row Metadata Open In Github Open In Colab + + Data Rows + Open In Github + Open In Colab + Quick Start Open In Github Open In Colab - Basics - Open In Github - Open In Colab + Ontologies + Open In Github + Open In Colab - User Management - Open In Github - Open In Colab + Projects + Open In Github + Open In Colab @@ -75,16 +75,16 @@ - - Composite Mask Export - Open In Github - Open In Colab - Export Data Open In Github Open In Colab + + Composite Mask Export + Open In Github + Open In Colab + Exporting to CSV Open In Github @@ -105,9 +105,14 @@ - Live Multimodal Chat Project - Open In Github - Open In Colab 
+ Queue Management + Open In Github + Open In Colab + + + Multimodal Chat Project + Open In Github + Open In Colab Project Setup @@ -119,11 +124,6 @@ Open In Github Open In Colab - - Queue Management - Open In Github - Open In Colab - @@ -138,6 +138,11 @@ + + Conversational + Open In Github + Open In Colab + Conversational LLM Data Generation Open In Github @@ -154,19 +159,9 @@ Open In Colab - Audio - Open In Github - Open In Colab - - - Conversational - Open In Github - Open In Colab - - - PDF - Open In Github - Open In Colab + DICOM + Open In Github + Open In Colab Image @@ -174,14 +169,14 @@ Open In Colab - DICOM - Open In Github - Open In Colab + Tiled + Open In Github + Open In Colab - Conversational LLM - Open In Github - Open In Colab + Audio + Open In Github + Open In Colab HTML @@ -189,9 +184,14 @@ Open In Colab - Tiled - Open In Github - Open In Colab + Conversational LLM + Open In Github + Open In Colab + + + PDF + Open In Github + Open In Colab @@ -217,16 +217,16 @@ Open In Github Open In Colab - - Meta SAM Video - Open In Github - Open In Colab - Meta SAM Open In Github Open In Colab + + Meta SAM Video + Open In Github + Open In Colab + Import YOLOv8 Annotations Open In Github @@ -247,14 +247,9 @@ - Custom Metrics Demo - Open In Github - Open In Colab - - - Model Slices - Open In Github - Open In Colab + Model Predictions to Project + Open In Github + Open In Colab Custom Metrics Basics @@ -262,9 +257,14 @@ Open In Colab - Model Predictions to Project - Open In Github - Open In Colab + Model Slices + Open In Github + Open In Colab + + + Custom Metrics Demo + Open In Github + Open In Colab @@ -281,19 +281,9 @@ - PDF Predictions - Open In Github - Open In Colab - - - HTML Predictions - Open In Github - Open In Colab - - - Conversational Predictions - Open In Github - Open In Colab + Video Predictions + Open In Github + Open In Colab Image Predictions @@ -306,9 +296,14 @@ Open In Colab - Geospatial Predictions - Open In Github - Open In Colab + HTML 
Predictions + Open In Github + Open In Colab + + + Conversational Predictions + Open In Github + Open In Colab Conversational LLM Predictions @@ -316,9 +311,14 @@ Open In Colab - Video Predictions - Open In Github - Open In Colab + Geospatial Predictions + Open In Github + Open In Colab + + + PDF Predictions + Open In Github + Open In Colab
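The export cell in the notebook diff above wires an error callback into a buffered stream (`get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(stream_handler=...)`) and then iterates result rows. As a rough, self-contained sketch of that callback-plus-iterator pattern — `FakeExportTask` and its methods are stand-ins invented here for illustration, not the real Labelbox SDK:

```python
from dataclasses import dataclass, field
from typing import Callable, Iterator, List


@dataclass
class FakeExportTask:
    """Stand-in for an export task: holds results and errors once the task is done."""

    results: List[dict] = field(default_factory=list)
    errors: List[str] = field(default_factory=list)

    def has_errors(self) -> bool:
        return bool(self.errors)

    def has_result(self) -> bool:
        return bool(self.results)

    def stream_errors(self, handler: Callable[[str], None]) -> None:
        # Mirrors the callback style of .start(stream_handler=...): the caller
        # supplies a function that is invoked once per error.
        for err in self.errors:
            handler(err)

    def get_buffered_stream(self) -> Iterator[dict]:
        # Mirrors iterating the buffered result stream one data row at a time.
        yield from self.results


task = FakeExportTask(results=[{"id": "row-1"}, {"id": "row-2"}])

seen_errors: List[str] = []
if task.has_errors():
    task.stream_errors(seen_errors.append)

rows: List[str] = []
if task.has_result():
    for data_row in task.get_buffered_stream():
        rows.append(data_row["id"])

print(rows)  # -> ['row-1', 'row-2']
```

Checking `has_errors()` before touching the result stream follows the same order as the notebook cell: errors are drained through the handler first, and rows are only iterated when a result actually exists.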