Commit 2c83a13

Update the configuration manual for the AI section.
1 parent 43b71b7 commit 2c83a13

4 files changed: 35 additions, 3 deletions

_config.yml

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 title: Pledger.io

 release:
-  version: 4.0.1
+  version: 4.0.2
   date: 2025-02-21

 github:

_release-notes/v4.x/v4.0.0.adoc

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@

-== 4.0.1
+== 4.0.2

 *Release date: 2025-02-21*


docs/getting-started/advanced/using-an-llm.adoc

Lines changed: 33 additions & 1 deletion

@@ -37,10 +37,17 @@ NOTE: Use this docker image `ghcr.io/pledger-io/amd64-embedded-llm`.
 When starting the docker image, it will automatically download the model configured in the environment.
 Please see the page `xref:{document-root}/how-to/installation/configuration.adoc#configure_llm[Large Language Model options]` for how to configure {application} correctly.

+=== Choosing different AI models
+
 For most users it is advised not to change the model used by Ollama, as the chosen model offers the best trade-off between accuracy and performance.
 If you have an Nvidia or AMD GPU, you may be able to utilize it to power Ollama.
 This would allow you to pick a larger model like `llama3.3` or `mistral`.

+You can view the full list of available models on the link:https://ollama.com/search[Ollama website].
+
+=== Enhance performance by using your GPU
+
+Ollama supports using the link:https://github.com/ollama/ollama/blob/main/docs/gpu.md[GPU] to speed up the responses of the LLM.
 To utilize the GPU you would have to adapt the docker command to something like this:

 [source,shell,linenums]
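The exact GPU-enabled command for the embedded image sits in the unchanged lines that this hunk truncates. As a rough illustration of what GPU pass-through looks like, the sketch below uses the stock `ollama/ollama` image and Ollama's documented `--gpus=all` flag for NVIDIA cards (the NVIDIA Container Toolkit must be installed); it is not the command from the manual.

[source,shell]
----
# Sketch only: stock Ollama image with NVIDIA GPU pass-through.
# The pledger-io embedded-LLM image takes the same --gpus=all flag;
# its ports and volumes come from the manual's own example, not from here.
docker run -d \
  --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
----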
@@ -62,4 +69,29 @@ To utilize the GPU you would have to adapt the docker command to something like

 == OpenAI based AI

-TIP: Still to be documented....
+As an alternative to hosting the AI agent yourself, you could choose to use OpenAI.
+The results from OpenAI will be more accurate than the Ollama-based ones, as the model used is far more advanced.
+
+WARNING: Be aware that information will be sent from {application} to OpenAI.
+{application} will allow access to the list of budgets, categories and tags, as well as the information of any transaction that you are editing.
+
+[[open_ai_generate_token]]
+=== Generating an API key
+
+1. Visit the link:https://platform.openai.com[OpenAI website].
+2. Click on "Sign Up" if you don't have an account, or "Log In" if you already have one.
+3. Complete the required verification steps.
+4. Visit the link:https://platform.openai.com/api-keys[API key page].
+5. Hit the "Create new key" option.
+6. Set a name and a project, and choose "All" permissions.
+7. Hit "Create" and copy the key somewhere safe for use in {application}.
+
+=== Configuring {application} to use OpenAI
+
+For OpenAI to be used, you will have to set the following environment variables:
+
+- `AI_ENGINE` should be set to `open-ai`.
+- `OPENAI_TOKEN` should contain the key that xref:#open_ai_generate_token[was created before].
+
+NOTE: If you are not using the docker image with the embedded LLM, you *must* also set the `MICRAUT_PROFILES` variable to `ai`.
+This will enable the AI features inside {application}.
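To confirm that a freshly generated key from the steps above is valid before wiring it into the application, a single call to OpenAI's models endpoint is enough; this is a generic check, not something the manual prescribes.

[source,shell]
----
# Lists the models available to the key; an invalid key returns a 401 error.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_TOKEN"
----

And as a rough sketch of how the variables from this section could be passed to a container, assuming a hypothetical image name and port mapping (neither is taken from the manual):

[source,shell]
----
# Sketch only: the image name and host port are placeholders.
# MICRAUT_PROFILES=ai is only needed when not using the embedded-LLM image.
docker run -d \
  -e AI_ENGINE=open-ai \
  -e OPENAI_TOKEN=sk-your-key-here \
  -e MICRAUT_PROFILES=ai \
  -p 8080:8080 \
  ghcr.io/pledger-io/pledger-io:latest
----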
(Fourth changed file: binary content, 17.2 KB, not rendered.)
