Commit 33f5983
Merge pull request #27 from rafvasq/final-touches
Fix typos/links, add conclusions, rm references tab
2 parents: 265caa4 + 1f69f3b

File tree: 10 files changed (+48, -41 lines)


docs/README.md
Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ Our overarching goals of this workshop is as follows:
 * Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.

 !!! tip
-    working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started — so you can collaborate, experiment, and learn together. Don’t hesitate to ask questions, raise your hand, and connect with other participants.
+    Working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started — so you can collaborate, experiment, and learn together. Don’t hesitate to ask questions, raise your hand, and connect with other participants.

 ## Agenda

docs/lab-1.5/README.md
Lines changed: 8 additions & 3 deletions

@@ -4,6 +4,8 @@ description: Set up Open-WebUI to start using an LLM locally
 logo: images/ibm-blue-background.png
 ---

+## Setup
+
 Let's start by configuring [Open-WebUI](../pre-work/README.md#installing-open-webui) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.

 First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:

@@ -25,11 +27,12 @@ Click *Getting Started*. Fill out the next screen and click the *Create Admin Ac
 ![user setup screen](../images/openwebui_user_setup_screen.png)

-You should see the Open-WebUI main page now, with `granite3.1-dense:latest` right there in
-the center!
+You should see the Open-WebUI main page now, with `granite3.1-dense:latest` right there in the center!

 ![main screen](../images/openwebui_main_screen.png)

+## Testing the Connection
+
 Test it out! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

 The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.

@@ -38,4 +41,6 @@ The first response may take a minute to process. This is because `ollama` is spi
 You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

-**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
+## Conclusion
+
+**Congratulations!** Now you have Open-WebUI running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!
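As a side note for readers following the lab: the connection test described in the hunks above can also be checked programmatically, since `ollama` exposes a small REST API (port 11434 by default). A minimal sketch — the helper name and structure are ours, not part of the workshop or this commit:

```python
import json
import urllib.error
import urllib.request


def is_ollama_running(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an ollama server answers at base_url.

    GET /api/tags lists the locally pulled models; any HTTP response
    means the server is up, while a connection error means it is not.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            models = json.load(resp).get("models", [])
            print("ollama is up; models:", [m.get("name") for m in models])
        return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, start `ollama serve` in a terminal before retrying the Open-WebUI setup.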

docs/lab-1/README.md
Lines changed: 7 additions & 1 deletion

@@ -4,6 +4,8 @@ description: Set up AnythingLLM to start using an LLM locally
 logo: images/ibm-blue-background.png
 ---

+## Setup
+
 Let's start by configuring [AnythingLLM](../pre-work/README.md#anythingllm) and `ollama` to talk to one another. The following screenshots will be from a Mac, but this should be similar on Windows and Linux.

 First, if you haven't already, download the Granite 3.1 model. Make sure that `ollama` is running in the background (you may have to run `ollama serve` in its own terminal depending on how you installed it) and in another terminal run the following command:

@@ -33,6 +35,8 @@ Give it a name (e.g. the event you're attending today):
 ![naming new workspace](../images/anythingllm_naming_workspace.png)

+## Testing the Connection
+
 Now, let's test our connection to AnythingLLM! I like asking the question, "Who is Batman?" as a sanity check. Every LLM should know who Batman is.

 The first response may take a minute to process. This is because `ollama` is spinning up to serve the model. Subsequent responses should be much faster.

@@ -41,4 +45,6 @@ The first response may take a minute to process. This is because `ollama` is spi
 You may notice that your answer is slightly different than the screenshot above. This is expected and nothing to worry about!

-**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to the next lab and have a chat with your model!
+## Conclusion
+
+**Congratulations!** Now you have AnythingLLM running and it's configured to work with `granite3.1-dense` and `ollama`. Move on to [Lab 2](https://ibm.github.io/opensource-ai-workshop/lab-2/) and have a chat with your model!

docs/lab-2/README.md
Lines changed: 8 additions & 7 deletions

@@ -4,7 +4,9 @@ description: Get acquainted with your local LLM
 logo: images/ibm-blue-background.png
 ---

-It's time for the fun exploration part of your Prompt Engineering (PE) journey.
+It's time for the fun exploration part of your Prompt Engineering (PE) journey. In this lab, you're encouraged to spend as much time as you can chatting with the model, especially if you have little experience doing so. Keep some questions in mind: can you make it speak in a different tone? Can it provide a recipe for a cake or a poem about technology? Is it self-aware?
+
+## Chatting with the Model

 Open a brand _new_ Workspace in AnythingLLM (or Open-WebUI) called "Learning Prompt Engineering".

@@ -15,15 +17,14 @@ For some inspiration, I like to start with `Who is Batman?` then work from there
 Batman's top 10 enemies are, or what was the most creative way Batman saved the day? Some example responses to those questions are below.

 !!! note
-    If you treat the LLM like a knowledge repository, you can get a lot of useful information out of it. But remember not to
+    If you treat the LLM like a knowledge repository, you can get a lot of useful information out of it. But, remember not to
     blindly accept its output. You should always cross-reference important things. Treat it like a confident librarian! They've read
     a lot and they can be very fast at finding books, but they can mix things up too!

-## Example Output using the `ollama` CLI
+## Using the `ollama` CLI

 This is an example of using the CLI with vanilla ollama:

-
 ```
 $ ollama run granite3.1-dense
 >>> Who is Batman?

@@ -99,8 +100,8 @@ good - all hallmarks of his character. The innovative approach to saving the day
 in Batman's extensive history.
 ```

-## Try it Yourself
+## Conclusion

-Spend some time asking your LLM about any topic and exploring how you can alter its output to provide you with more interesting or satisfying responses.
+Spend as much time as you want asking your LLM about any topic and exploring how you can alter its output to provide you with more interesting or satisfying responses.

-When you feel acquainted with your model, move on to [Lab 3](/docs/lab-3/README.md) to learn about Prompt Engineering.
+When you are acquainted with your model, move on to [Lab 3](https://ibm.github.io/opensource-ai-workshop/lab-3/) to learn about Prompt Engineering.

docs/lab-3/README.md
Lines changed: 5 additions & 7 deletions

@@ -11,10 +11,9 @@ logo: images/ibm-blue-background.png

 Prompt engineering is the practice of designing clear, intentional instructions to guide the behavior of an AI model.

-It involves crafting prompts—usually in natural language—that help a model identify what task to perform, how to perform it, and if there are considerations in style or format.
-This can include specifying tone, structure, context, or even assigning the AI a particular role.
-Prompt engineering is essential because the quality and precision of the prompt can significantly influence the quality, relevance, and creativity of the generated output.
-As generative models become more powerful, skillful prompting becomes a key tool for unlocking their full potential.
+It involves crafting prompts that help a model identify what task to perform, how to perform it, and whether there are considerations in style or format. This can include specifying tone, structure, context, or even assigning the AI a particular role.
+
+Prompt engineering is essential because the quality and precision of the prompt can significantly influence the quality, relevance, and creativity of the generated output. As generative models become more powerful, skillful prompting becomes a key tool for unlocking their full potential.

 ### The Three Key Principles of PE

@@ -105,7 +104,6 @@ to be repaired and we should be able to reach out in a couple weeks.

 So much better! By providing more context and more insight into what you are expecting in a response, we can improve the quality of our responses greatly. Also, by providing **multiple** examples, you're achieving *multi-shot prompting*!

-Let's move on to the next lab and apply what you've learned with some exercises.
+## Conclusion

-!!! tip
-    You could even use `ollama`'s CLI in a terminal to interact with your model by using `ollama run granite3.1-dense`
+Now that you know the basics of prompt engineering and simple techniques you can use to level up your prompts, let's move on to [Lab 4](https://ibm.github.io/opensource-ai-workshop/lab-4/) and apply what you've learned with some exercises.
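The *multi-shot prompting* mentioned in the hunks above boils down to stacking worked examples ahead of the real input. A small sketch in Python — the helper and the sentiment examples are illustrative, not taken from the workshop:

```python
def build_multishot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a multi-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction, ""]
    for sample_input, sample_output in examples:
        parts.append(f"Input: {sample_input}")
        parts.append(f"Output: {sample_output}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # leave the final answer for the model to complete
    return "\n".join(parts)


prompt = build_multishot_prompt(
    "Classify the sentiment of each customer message as positive or negative.",
    [
        ("The repair was quick and the staff were lovely.", "positive"),
        ("Two weeks and still no reply about my laptop.", "negative"),
    ],
    "My device came back working perfectly, thank you!",
)
print(prompt)
```

A prompt built this way can be pasted into AnythingLLM or Open-WebUI, or fed to `ollama run granite3.1-dense` in a terminal.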

docs/lab-4/README.md
Lines changed: 6 additions & 2 deletions

@@ -4,11 +4,11 @@ description: Refine your prompting skills
 logo: images/ibm-blue-background.png
 ---

-Complete the following exercises using your local LLM.
+Complete the following exercises using your local LLM. Try to come up with your own prompts from scratch! Take note of what works and what doesn't.

 - **Be curious!** If you ask the same question but in a different way, does the response significantly change?
 - **Be creative!** Do you want the response to be organized in a numbered or bulleted list instead of sentences?
-- **Be specific!** Aim for perfection. Use descriptive language, examples, and parameters to perfect your output.
+- **Be specific!** Aim for perfection. Use descriptive language and examples to perfect your output.

 !!! note
     Discovered something cool or unexpected? Don’t keep it to yourself, raise your hand or let the TA know!

@@ -209,3 +209,7 @@ all designed to be completed in a single session of gameplay
 The best part of this prompt is that you can take the output and extend or shorten the portions it starts with, and tailor the story to your adventurers' needs!
 </details>
+
+## Conclusion
+
+Well done! By completing these exercises, you're well on your way to becoming a prompt expert. In [Lab 5](https://ibm.github.io/opensource-ai-workshop/lab-5/), we'll move towards code generation and learn how to use a local coding assistant.

docs/lab-5/README.md
Lines changed: 4 additions & 1 deletion

@@ -69,11 +69,14 @@ For inline code suggestions, it's generally recommended that you use smaller mod

 Now that you have everything configured in VSCode, let's make sure that it works. Ensure that `ollama` is running in the background, either as a status bar item or in the terminal using `ollama serve`.

-
 Open the Continue extension and test your local assistant.

 ```text
 What language is popular for backend development?
 ```

 Additionally, if you open a file for editing you should see possible tab completions to the right of your cursor (it may take a few seconds to show up).
+
+## Conclusion
+
+With your AI coding assistant now set up, move on to [Lab 6](https://ibm.github.io/opensource-ai-workshop/lab-6/) and actually use it!

docs/lab-6/README.md
Lines changed: 7 additions & 14 deletions

@@ -36,7 +36,7 @@ Write the code for conway's game of life using pygame
 !!! note
     [What is Conway's Game of Life?](https://en.wikipedia.org/wiki/Conway's_Game_of_Life)

-After a few moments, the mode should start writing code in the file, it might look something like:
+After a few moments, the model should start writing code in the file; it might look something like:
 ![gameoflife_v1](../images/gameoflife_v1.png)

 ## AI-Generated Code

@@ -56,7 +56,7 @@ At this point, you can practice debugging or refactoring code with the AI co-pil
 In the example generated code, a "main" entry point to the script is missing. In this case, using `cmd+I` again and trying the prompt: "write a main function for my game that plays ten rounds of Conway's
 game of life using the `board()` function." might help. What happens?

-It's hard to read the generated case in the example case, making it hard to read the logic. To clean it up, I'll define a `main` function so the entry point exists. There was also a `tkinter` section in the generated code, I decided to put the main game loop there:
+It's hard to read the generated code in the example case, making it difficult to understand the logic. To clean it up, I'll define a `main` function so the entry point exists. There's also a `tkinter` section in the generated code; I decided to put the main game loop there:

 ```python
 if __name__ == '__main__':

@@ -78,7 +78,7 @@ It looks like the code is improving:

 ## Explaining the Code

-To debug further, use Granite-Code to explain what the different functions do. Simply highlight one of them, and use `cmd+L` to add it to the context window of your assistant and write a prompt similar to:
+To debug further, use Granite-Code to explain what the different functions do. Simply highlight one of them, use `cmd+L` to add it to the context window of your assistant, and write a prompt similar to:

 ```text
 what does this function do?

@@ -98,24 +98,17 @@ Assuming you still have a function you wanted explained above in the context-win
 write a pytest test for this function
 ```

-Now I got a good framework for a test here:
+The model generated a great framework for a test here:
 ![lazy pytest](../images/pytest_test.png)

-Notice that my test only spans what is provided in the context, so the test isn't integrated into my project yet. But, the code provides a good start. I'll need to create a new test file and integrate `pytest` into my project.
+Notice that the test only spans what is provided in the context, so it isn't integrated into my project yet. But the code provides a good start. I'll need to create a new test file and integrate `pytest` into my project to use it.

 ## Adding Comments

-Continue also provides the ability to automatically add comments to code:
+Continue also provides the ability to automatically add comments to code. Try it out!

 ![comment_code](../images/comment_code.png)

-
 ## Conclusion

-
-!!! success
-    Thank you SO MUCH for joining us on this workshop, if you have any thoughts or questions
-    the TAs would love answer them for you. If you found any issues or bugs, don't hesitate
-    to put a [Pull Request](https://github.com/IBM/opensource-ai-workshop/pulls) or an
-    [Issue](https://github.com/IBM/opensource-ai-workshop/issues/new) in and we'll get to it
-    ASAP.
+This lab was all about using our local, open-source AI co-pilot to write complex code in Python. By combining Continue and Granite-Code, we were able to generate code, explain functions, write tests, and add comments to our code!
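For readers who want to sanity-check what the co-pilot produces in this lab: the core Game of Life rule can be written without pygame in a few lines. This sketch is ours, not the generated code shown in the lab's screenshots:

```python
from collections import Counter


def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance Conway's Game of Life one generation on an unbounded grid.

    `live` is the set of (x, y) coordinates of live cells. A live cell
    survives with 2 or 3 live neighbours; a dead cell is born with exactly 3.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live)
    }


# A "blinker" oscillates between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

A function like this also makes a natural target for the lab's "write a pytest test for this function" prompt.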

docs/pre-work/README.md
Lines changed: 2 additions & 2 deletions

@@ -96,6 +96,6 @@ pip install open-webui
 open-webui serve
 ```

-Now that you have all of the tools you need, let's start building our local AI co-pilot.
+## Conclusion

-**Head over to [Lab 1](/docs/lab-1/README.md) if you have AnythingLLM or [Lab 1.5](/docs/lab-1.5/README.md) for Open-WebUI.**
+Now that you have all of the tools you need, head over to [Lab 1](https://ibm.github.io/opensource-ai-workshop/lab-1/) if you have AnythingLLM or [Lab 1.5](https://ibm.github.io/opensource-ai-workshop/lab-1.5/) for Open-WebUI.

mkdocs.yml
Lines changed: 0 additions & 3 deletions

@@ -21,9 +21,6 @@ nav:
   - Lab 4. Applying What You Learned: lab-4/README.md
   - Lab 5. Configuring an AI Co-pilot: lab-5/README.md
   - Lab 6. Coding with an AI Co-pilot: lab-6/README.md
-  - References:
-    - Additional Resources: resources/RESOURCES.md
-    - MkDocs Cheatsheet: resources/MKDOCS.md

 ## DO NOT CHANGE BELOW THIS LINE
