
Commit 265caa4

Merge pull request #26 from rafvasq/update-lab3

Revamps Workshop

2 parents 465c2c9 + 327c76b

File tree

12 files changed

+534
-517
lines changed


docs/README.md

Lines changed: 17 additions & 21 deletions

````diff
@@ -4,10 +4,11 @@ description: Learn how to leverage Open Source AI
 logo: images/ibm-blue-background.png
 ---
 
-## Open Source AI workshop
+## Open Source AI Workshop
 
-Welcome to the Open Source AI workshop! Thank you for trusting us to help you learn about this
-new and exciting space. In this workshop, you'll gain the skills and confidence to effectively use LLMs locally through simple exercises and experimentation, and learn best practices for leveraging open source AI.
+You've probably heard how tools like ChatGPT are changing workflows -- but when it comes to privacy, security, and control, using public AI tools isn't always an option. In this hands-on workshop, you'll learn how to run your own local, open-source LLMs -- no cloud, no cost, and no compromise.
+
+We'll walk through installing and running models with tools like ollama, AnythingLLM, and Continue using familiar environments like VS Code. By the end, you'll have a fully functional local AI assistant, ready to support your work securely and offline.
 
 The overarching goals of this workshop are as follows:
 
@@ -16,31 +17,25 @@ The overarching goals of this workshop are as follows:
 * Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.
 
 !!! tip
-    This workshop may seem short, but a lot of working with AI is exploration and engagement.
-    These labs are set up for you to get "everything you need to start" put together so you
-    can share in a collaborative learning environment and shared exploration. Don't hesitate
-    to raise your hand, ask questions, and engage with the other students.
-
-    By the time you leave today, you'll have everything you need to leverage this on your laptop
-    at home, without internet access, in a secure manner.
+    Working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started -- so you can collaborate, experiment, and learn together. Don't hesitate to ask questions, raise your hand, and connect with other participants.
 
 ## Agenda
 
 | Lab | Description |
 | :--- | :--- |
-| [Lab 0: Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
+| [Lab 0: Workshop Pre-work](pre-work/README.md) | Install pre-requisites for the workshop |
 | [Lab 1: Configuring AnythingLLM](lab-1/README.md) | Set up AnythingLLM to start using an LLM locally |
-| [Lab 2: Using the local LLM](lab-2/README.md) | Test some general prompt templates |
-| [Lab 3: Engineering prompts](lab-3/README.md) | Learn and apply Prompt Engineering concepts |
-| [Lab 4: Using AnythingLLM for a local RAG](lab-4/README.md) | Build a simple local RAG |
-| [Lab 5: Building an AI co-pilot](lab-5/README.md) | Build a coding assistant |
-| [Lab 6: Using your coding co-pilot](lab-6/README.md) | Use your coding assistant for tasks |
-
-Thank you SO MUCH for joining us in this workshop! If you have any thoughts or questions at any point,
-the TAs would love to answer them for you. If you found any issues or bugs, don't hesitate
+| [Lab 1.5: Configuring Open-WebUI](lab-1.5/README.md) | Set up Open-WebUI to start using an LLM locally |
+| [Lab 2: Chatting with Your Local AI](lab-2/README.md) | Get acquainted with your local LLM |
+| [Lab 3: Prompt Engineering](lab-3/README.md) | Learn about prompt engineering techniques |
+| [Lab 4: Applying What You Learned](lab-4/README.md) | Refine your prompting skills |
+| [Lab 5: Building a Local AI Assistant](lab-5/README.md) | Build a Granite coding assistant |
+| [Lab 6: Coding with an AI Assistant](lab-6/README.md) | Write code using Continue and Granite |
+
+Thank you SO MUCH for joining us in this workshop! If you have any questions or feedback,
+the TAs would love to answer them for you. If you come across any issues or bugs, don't hesitate
 to open a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
-[Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) and we'll get to it
-ASAP.
+[Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) -- we'll take a look as soon as we can.
 
 ## Compatibility
 
@@ -55,3 +50,4 @@ This workshop has been tested on the following platforms:
 * [JJ Asghar](https://github.com/jjasghar)
 * [Gabe Goodhart](https://github.com/gabe-l-hart)
 * [Ming Zhao](https://github.com/mingxzhao)
+* [Rafael Vasquez](https://github.com/rafvasq)
````

docs/images/continue.png

55.2 KB

docs/lab-1.5/README.md

Lines changed: 11 additions & 14 deletions

````diff
@@ -1,28 +1,25 @@
 ---
 title: Configuring Open-WebUI
-description: Steps to configure Open-WebUI for usage
+description: Set up Open-WebUI to start using an LLM locally
 logo: images/ibm-blue-background.png
 ---
 
-!!! warning
-    This is **optional**. You don't need Open-WebUI if you have AnythingLLM already running.
+Let's start by configuring [Open-WebUI](../pre-work/README.md#installing-open-webui) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.
 
-Now that you have [Open-WebUI installed](../pre-work/README.md#installing-open-webui), let's configure `ollama` and Open-WebUI to talk to one another. The following screenshots will be from a Mac, but the gist of this should be the same on Windows and Linux.
-
-Open up Open-WebUI (assuming you've run `open-webui serve` and nothing else), and you should see something like the following:
-
-![default screen](../images/openwebui_open_screen.png)
-
-If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
-
-Before clicking the *Getting Started* button, make sure that `ollama` has `granite3.1-dense` downloaded:
+First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
 !!! note
-    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
+
+Open up Open-WebUI (assuming you've run `open-webui serve`):
+
+![default screen](../images/openwebui_open_screen.png)
+
+If you see something similar, Open-WebUI is installed correctly! Continue on; if not, please find a workshop TA or raise your hand for some help.
 
 Click *Getting Started*. Fill out the next screen and click *Create Admin Account*. This will be your login for your local machine. Remember it, because it will be your Open-WebUI configuration login if you want to dig deeper into it after this workshop.
 
@@ -41,4 +38,4 @@ The first response may take a minute to process. This is because `ollama` is spi
 
 You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!
 
-**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
````
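The pull step above is safe to repeat, but it can be wrapped in a small guard so re-running the lab doesn't re-download anything. A minimal sketch, assuming `ollama` is on your PATH and its server is running (the `MODEL` variable is just the name used throughout these labs):

```shell
#!/bin/sh
# Sketch: pull the workshop model only if it is not already present.
# Assumes `ollama` is installed and `ollama serve` is running.
MODEL="granite3.1-dense:8b"

if command -v ollama >/dev/null 2>&1; then
  # `ollama list` prints one row per local model; check for the base name.
  if ollama list | grep -q "${MODEL%%:*}"; then
    echo "$MODEL already downloaded"
  else
    ollama pull "$MODEL"
  fi
else
  echo "ollama not found on PATH; see the pre-work lab" >&2
fi
```

Either way the script exits cleanly, so it can be run before each lab as a quick readiness check.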

docs/lab-1/README.md

Lines changed: 7 additions & 7 deletions

````diff
@@ -1,25 +1,25 @@
 ---
 title: Configuring AnythingLLM
-description: Steps to configure AnythingLLM for usage
+description: Set up AnythingLLM to start using an LLM locally
 logo: images/ibm-blue-background.png
 ---
 
-Now that you've got [AnythingLLM installed](../pre-work/README.md#anythingllm), we need to configure it with `ollama`. The following screenshots are taken from a Mac, but the gist of this should be the same on Windows and Linux.
+Let's start by configuring [AnythingLLM](../pre-work/README.md#anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.
 
-First, if you haven't already, download the Granite 3.1 model. Open up a terminal and run the following command:
+First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:
 
 ```bash
 ollama pull granite3.1-dense:8b
 ```
 
 !!! note
-    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for.
+    The download may take a few minutes depending on your internet connection. In the meantime, you can check out information about the model we're using [here](https://ollama.com/library/granite3.1-dense). Check out how many languages it supports and take note of its capabilities. It'll help you decide what tasks you might want to use it for in the future.
 
-Either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change them in the future.
+Open the AnythingLLM desktop application and either click on the *Get Started* button or open up settings (the 🔧 button). For now, we are going to configure the global settings for `ollama`, but you can always change them in the future.
 
 ![wrench icon](../images/anythingllm_wrench_icon.png)
 
-Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3-dense:8b` model you downloaded. You'll be able to see all the models you have access to through `ollama` here.
+Click on the *LLM* section, and select **Ollama** as the LLM Provider. Select the `granite3.1-dense:8b` model you downloaded. You'll be able to see all the models you have access to through `ollama` here.
 
 ![llm configuration](../images/anythingllm_llm_config.png)
 
@@ -41,4 +41,4 @@ The first response may take a minute to process. This is because `ollama` is spi
 
 You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!
 
-**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Have a quick chat with your model before moving on to the next lab!
+**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
````
