Commit df60815

Add correct image rendering to langchain ollama article
1 parent e0d48dd

3 files changed, +53 -60 lines

llm/__marimo__/session/lchain_ollama.py.json

Lines changed: 23 additions & 54 deletions
Large diffs are not rendered by default.

llm/lchain_ollama.py

Lines changed: 27 additions & 3 deletions
```diff
@@ -47,9 +47,21 @@ def _(mo):
     Before diving into integration steps, let's understand the two key technologies we'll be working with in this tutorial.
 
     ### What is Ollama?
+    """
+    )
+    return
+
+
+@app.cell
+def _(mo):
+    mo.image(src="images/ollama.png", alt="Ollama logo")
+    return
 
-    ![Ollama logo](https://ollama.com/public/blog/embedding-models.png)
 
+@app.cell
+def _(mo):
+    mo.md(
+        r"""
     Ollama is an open-source framework designed to run large language models locally on your machine. It provides a simplified interface for downloading, running, and interacting with various open-source LLMs without needing extensive technical setup. Ollama handles the complex infrastructure requirements so developers can focus on using LLMs rather than managing them.
 
     - Provides a simple CLI and REST API for running models locally
```
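The first hunk ends on the article's bullet about Ollama's simple CLI and REST API. As a minimal sketch of what that API looks like in practice: this assumes a local `ollama serve` listening on its default port 11434, and `llama3.2` is purely a placeholder for whatever model you have pulled.

```python
import requests

# Assumes Ollama is running locally (`ollama serve`) and a model has been
# pulled, e.g. `ollama pull llama3.2` -- the model name is a placeholder.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Explain what Ollama is in one sentence.",
        "stream": False,  # request a single JSON reply instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```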
```diff
@@ -165,10 +177,22 @@ def _(mo):
     - Large models (70B+) typically require a dedicated GPU with 24GB+ VRAM
 
     For a full list of models you can serve locally, check out [the Ollama model library](https://ollama.com/search). Before pulling a model and potentially wasting your hardware resources, check out [the VRAM calculator](https://apxml.com/tools/vram-calculator), which tells you whether you can run a specific model on your machine:
+    """
+    )
+    return
+
+
+@app.cell
+def _(mo):
+    mo.image(src="images/vram.png", alt="VRAM Calculator showing memory requirements for different LLM models across various quantization levels")
+    return
 
-    ![VRAM Calculator showing memory requirements for different LLM models across various quantization levels](images/vram.png)
 
-    ### Basic Chat Integration - continue from here
+@app.cell
+def _(mo):
+    mo.md(
+        r"""
+    ### Basic Chat Integration
 
     LangChain provides dedicated classes for working with Ollama chat models:
 
```
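As a rough sanity check on the hardware guidance in the second hunk: VRAM needs are dominated by the model weights, i.e. parameter count times bytes per parameter at the chosen quantization, plus runtime overhead. A back-of-the-envelope sketch follows; the 20% overhead factor is an assumption, not a measured value.

```python
def approx_vram_gb(params_b: float, bits_per_param: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: weights at the given quantization,
    padded ~20% for KV cache and runtime overhead (an assumed factor)."""
    weight_gb = params_b * bits_per_param / 8  # billions of params * bytes each
    return weight_gb * overhead

# A 7B model at 4-bit quantization fits comfortably on most modern GPUs.
print(f"7B  @ 4-bit: ~{approx_vram_gb(7, 4):.0f} GB")   # ~4 GB
# A 70B model at 4-bit is ~35 GB of weights before overhead, past any 24 GB card.
print(f"70B @ 4-bit: ~{approx_vram_gb(70, 4):.0f} GB")  # ~42 GB
```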

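The diff cuts off exactly where the article begins introducing LangChain's dedicated Ollama chat classes. For context, here is a minimal sketch of that integration; it assumes the `langchain-ollama` package is installed, a local Ollama server is running, and `llama3.2` again stands in for whatever model you have pulled.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ollama import ChatOllama

# Assumes `pip install langchain-ollama` and a running `ollama serve`;
# the model name is a placeholder for a locally pulled model.
llm = ChatOllama(model="llama3.2", temperature=0)

reply = llm.invoke(
    [
        SystemMessage(content="You are a concise assistant."),
        HumanMessage(content="In one sentence, what is LangChain?"),
    ]
)
print(reply.content)
```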
public/llm/lchain_ollama.html

Lines changed: 3 additions & 3 deletions
Large diffs are not rendered by default.
