Commit 0bf8223

Merge pull request #467 from microsoft/workshop-page
fix: workshop doc fixes
2 parents: 0950602 + c41ddf9

7 files changed: +16, -13 lines

.github/workflows/deploy.yml

Lines changed: 1 addition & 2 deletions
@@ -39,5 +39,4 @@ jobs:
       with:
         github_token: ${{ secrets.GITHUB_TOKEN }}
         publish_dir: docs/workshop/site
-        # optionally set the branch to deploy to:
-        # publish_branch: gh-pages
+
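For orientation, the step edited in this hunk matches the shape of a GitHub Pages publish action. A minimal sketch, assuming the commonly used `peaceiris/actions-gh-pages` action (the action name, version, and step name are assumptions, not stated in this diff):

```yaml
# Hypothetical deploy step; the action name and version are assumptions.
- name: Deploy workshop site
  uses: peaceiris/actions-gh-pages@v3
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_dir: docs/workshop/site
    # publish_branch defaults to gh-pages when omitted, which is why the
    # commented-out override removed in this hunk was safe to drop.
```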

docs/workshop/README.md

Lines changed: 2 additions & 2 deletions
@@ -23,13 +23,13 @@ The current repository is instrumented with a `workshop/docs` folder that contains
 1. Install the `mkdocs-material` package
 
    ```bash
-   pip install mkdocs-material
+   pip install mkdocs-material mkdocs-jupyter
    ```
 
 2. Run the `mkdocs serve` command from the `workshop` folder
 
    ```bash
-   cd workshop/docs
+   cd docs/workshop
    mkdocs serve -a localhost:5000
    ```

docs/workshop/docs/workshop/Challenge-1/Code_Walkthrough/02_Frontend.md

Lines changed: 3 additions & 1 deletion
@@ -5,7 +5,9 @@
 
 The frontend is a **React-based web interface** that allows users to explore insights from conversations, interact with an AI-powered chatbot, and view dynamic visualizations.
 
-![image](../../../../../../documents/Images/ReadMe/ui.png)
+
+![image](../../img/ReadMe/ckm-ui.png)
+
 
 ### Features

docs/workshop/docs/workshop/Challenge-1/Solution_Overview.md

Lines changed: 3 additions & 1 deletion
@@ -2,7 +2,9 @@
 <!-- ## Overview -->
 The Conversation Knowledge Mining Solution Accelerator is a robust application designed to extract actionable insights from conversational data. It leverages Azure AI services and provides an interactive user interface for querying and visualizing data. The solution is built with a modular architecture, combining a React-based frontend, a FastAPI backend, and Azure services for data processing and storage.
 
-![image](../../../../../documents/Images/ReadMe/solution-architecture.png)
+
+![image](../img/ReadMe/ckm-sol-arch.png)
+
 
 The solution extracts insights from call audio files or transcripts and enables users to interact with the data via a chatbot and dynamic charts:
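The image-path change in this hunk can be sanity-checked with plain path arithmetic: the old link climbs out of the `docs/` tree that MkDocs serves, while the new one stays inside it. A small sketch (no MkDocs APIs; the document location is taken from the file paths in this commit):

```python
import posixpath

# Directory of Solution_Overview.md, per the file path in this commit.
doc_dir = "docs/workshop/docs/workshop/Challenge-1"

old_link = "../../../../../documents/Images/ReadMe/solution-architecture.png"
new_link = "../img/ReadMe/ckm-sol-arch.png"

old_target = posixpath.normpath(posixpath.join(doc_dir, old_link))
new_target = posixpath.normpath(posixpath.join(doc_dir, new_link))

# The old target escapes the MkDocs docs tree entirely, so the built site
# cannot serve it; the new target stays inside it.
print(old_target)  # documents/Images/ReadMe/solution-architecture.png
print(new_target)  # docs/workshop/docs/workshop/img/ReadMe/ckm-sol-arch.png
```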

docs/workshop/docs/workshop/Challenge-5/notebooks/video_chapter_generation.ipynb

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@
 "source": [
 "\n",
 "## Pre-requisites\n",
-"1. Follow [README](../README.md#configure-azure-ai-service-resource) to create essential resource that will be used in this sample.\n",
+"1. Follow [README](../docs/create_azure_ai_service.md) to create essential resource that will be used in this sample.\n",
 "1. Install required packages"
 ]
 },
@@ -92,7 +92,7 @@
 "metadata": {},
 "source": [
 "## Create a custom analyzer and submit the video to generate the description\n",
-"The custom analyzer schema is defined in [../analyzer_templates/video_content_understanding.json](../analyzer_templates/video_content_understanding.json). The main custom field is `segmentDescription` as we need to get the descriptions of video segments and feed them into chatGPT to generate the scenes and chapters. Adding transcripts will help to increase the accuracy of scenes/chapters segmentation results. To get transcripts, we will need to set the `returnDetails` parameter in the `config` field to `True`.\n",
+"The custom analyzer schema is defined in **../analyzer_templates/video_content_understanding.json**. The main custom field is `segmentDescription` as we need to get the descriptions of video segments and feed them into chatGPT to generate the scenes and chapters. Adding transcripts will help to increase the accuracy of scenes/chapters segmentation results. To get transcripts, we will need to set the `returnDetails` parameter in the `config` field to `True`.\n",
 "\n",
 "In this example, we will use the utility class `AzureContentUnderstandingClient` to load the analyzer schema from the template file and submit it to Azure Content Understanding service. Then, we will analyze the video to get the segment descriptions and transcripts.\n"
 ]
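The hunk above keeps the notebook's instruction to set `returnDetails` to `True` in the analyzer's `config` so transcripts come back with the analysis. A minimal sketch of that tweak on a loaded template (the dict here is a hypothetical, trimmed-down stand-in for the real `video_content_understanding.json`):

```python
import json

# Hypothetical stand-in for the analyzer template JSON; only the fields
# named in the notebook text are shown.
template = {
    "fieldSchema": {
        "fields": {"segmentDescription": {"type": "string"}}
    },
    "config": {"returnDetails": False},
}

# Ask the service to return details (including transcripts), as the
# notebook instructs.
template["config"]["returnDetails"] = True

print(json.dumps(template["config"]))  # {"returnDetails": true}
```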

docs/workshop/docs/workshop/Challenge-5/notebooks/video_tag_generation.ipynb

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@
 "source": [
 "\n",
 "## Pre-requisites\n",
-"1. Follow [README](../README.md#configure-azure-ai-service-resource) to create essential resource that will be used in this sample.\n",
+"1. Follow [README](../docs/create_azure_ai_service.md) to create essential resource that will be used in this sample.\n",
 "1. Install required packages"
 ]
 },
@@ -92,7 +92,7 @@
 "metadata": {},
 "source": [
 "## Create a custom analyzer and submit the video to generate tags\n",
-"The custom analyzer schema is defined in [../analyzer_templates/video_tag.json](../analyzer_templates/video_tag.json). The custom fields are `segmentDescription`, `transcript` and `tags`. Adding description and transcripts helps to increase the accuracy of tag generation results. To get transcripts, we will need to set the `returnDetails` parameter in the `config` field to `True`.\n",
+"The custom analyzer schema is defined in **../analyzer_templates/video_tag.json**. The custom fields are `segmentDescription`, `transcript` and `tags`. Adding description and transcripts helps to increase the accuracy of tag generation results. To get transcripts, we will need to set the `returnDetails` parameter in the `config` field to `True`.\n",
 "\n",
 "In this example, we will use the utility class `AzureContentUnderstandingClient` to load the analyzer schema from the template file and submit it to Azure Content Understanding service. Then, we will analyze the video to get the segment tags.\n"
 ]

docs/workshop/mkdocs.yml

Lines changed: 3 additions & 3 deletions
@@ -117,7 +117,7 @@ nav:
   - CU-AI Foundry: workshop/Challenge-0/CU-Challenge.md
   - Challenge 1:
     - Solution Overview: workshop/Challenge-1/Solution_Overview.md
-    - Deployment: workshop/Challenge-1/index.md
+    - Deployment: workshop/Challenge-1/Deployment.md
     # - App Authentication: workshop/Challenge-1/App_Authentication.md
     - Explore Data: workshop/Challenge-1/Code_Walkthrough/01_Data_Explore.md
     - Frontend: workshop/Challenge-1/Code_Walkthrough/02_Frontend.md
@@ -134,8 +134,8 @@ nav:
     - Knowledge Mining API Notebook: workshop/Challenge-3-and-4/knowledge_mining_api.ipynb
   - Challenge 5:
     - Overview: workshop/Challenge-5/index.md
-    - Chapter Generation Notebook: workshop/Challenge-5/video_chapter_generation.ipynb
-    - Tag Generation Notebook: workshop/Challenge-5/video_tag_generation.ipynb
+    - Chapter Generation Notebook: workshop/Challenge-5/notebooks/video_chapter_generation.ipynb
+    - Tag Generation Notebook: workshop/Challenge-5/notebooks/video_tag_generation.ipynb
   - Challenge 6:
     - Overview: workshop/Challenge-6/index.md
     - Content Safety Evaluation Notebook: workshop/Challenge-6/Content_safety_evaluation.ipynb
