Replies: 5 comments 3 replies
-
In private_gpt\components\llm\llm_component.py: look at ...\py3.11\Lib\site-packages\llama_index\llms\llama_utils.py (messages_to_prompt). Use llama_utils.py as an example and define your own copied-and-crafted version inside private_gpt\components\llm\llm_component.py.
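Below is a minimal sketch of what such a hand-crafted template could look like, modeled loosely on the Llama-2 chat format that llama_utils.py implements. The function names mirror what llama_index expects (messages_to_prompt / completion_to_prompt), but the default system prompt, the role handling, and the exact string layout here are illustrative assumptions, not the library's exact code:

```python
# Hand-rolled prompt builders modeled on llama_index's llama_utils.py
# (Llama-2 chat format). Assumes `messages` is a sequence of
# ChatMessage-like objects exposing `.role` and `.content`.
from typing import Sequence

BOS, EOS = "<s>", "</s>"
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."  # replace with your own


def _role(message) -> str:
    # Works whether .role is a plain string or an enum exposing .value
    return getattr(message.role, "value", message.role)


def messages_to_prompt(messages: Sequence) -> str:
    """Turn a chat history into a single Llama-2 style prompt string."""
    system_prompt = DEFAULT_SYSTEM_PROMPT
    chat = list(messages)

    # If the first message is a system message, fold it into the <<SYS>> block.
    if chat and _role(chat[0]) == "system":
        system_prompt = chat[0].content or system_prompt
        chat = chat[1:]

    prompt = ""
    first_user_turn = True
    for message in chat:
        if _role(message) == "user":
            if first_user_turn:
                # The first user turn carries the system prompt.
                prompt += f"{BOS}{B_INST} {B_SYS}{system_prompt}{E_SYS}{message.content} {E_INST}"
                first_user_turn = False
            else:
                prompt += f"{BOS}{B_INST} {message.content} {E_INST}"
        else:  # assistant turn
            prompt += f" {message.content} {EOS}"
    return prompt


def completion_to_prompt(completion: str) -> str:
    """Wrap a bare completion request in the same instruction template."""
    return (
        f"{BOS}{B_INST} {B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}"
        f"{completion.strip()} {E_INST}"
    )
```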
-
Hey, I have found the temperature setting.
-
Stop privateGPT and run it again; if you pass a wrong param it will respond with an error saying an invalid param was passed. See the reference for the other params. For message templates, check the link below; it will give you an idea of what I meant about crafting your own messages_to_prompt and completion_to_prompt: https://github.com/run-llama/llama_index/blob/dbefde942434dde6d1d2a3eef87ed366028da07b/llama_index/llms/llama_utils.py#L4
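For reference, here is a rough sketch of how those two functions could be plugged into the LlamaCPP constructor inside llm_component.py. The prompt-helpers import, the model path, and the numeric values are placeholders assumed for illustration; check your own llm_component.py for the fields it already passes:

```python
from llama_index.llms import LlamaCPP

# Hypothetical module holding the custom template functions sketched above.
from .prompt_helpers import messages_to_prompt, completion_to_prompt

llm = LlamaCPP(
    model_path="models/your-model.gguf",    # placeholder path
    temperature=0.1,
    max_new_tokens=512,
    context_window=3900,                    # must stay within the model's limit
    generate_kwargs={},                     # extra llama.cpp sampling params go here
    model_kwargs={"n_gpu_layers": -1},      # offload layers to GPU if available
    messages_to_prompt=messages_to_prompt,      # your hand-crafted chat template
    completion_to_prompt=completion_to_prompt,  # and its completion counterpart
    verbose=True,
)
```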
-
Thanks, I added top_p and top_k, but the other stuff is too much for me :D. Still, the response token count seems very low. Never mind that only 3 models run fine with privateGPT^^, the responses are always at most ~1200 characters, and that is not much... Maybe you have an idea? And if you have time...
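If it helps, here is a sketch of where the response length and the sampling params could live, assuming the llama_index LlamaCPP wrapper that privateGPT uses: max_new_tokens caps how many tokens the answer may contain (roughly 3 to 4 characters per token for English, so ~1200 characters suggests a cap in the 300 to 400 token range), while top_p / top_k are forwarded to llama.cpp through generate_kwargs. All values below are examples, not recommendations:

```python
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path="models/your-model.gguf",   # placeholder path
    temperature=0.7,
    max_new_tokens=1024,                   # raise this if answers are cut short
    context_window=3900,                   # prompt + answer must both fit in here
    generate_kwargs={
        "top_p": 0.9,                      # nucleus sampling cutoff
        "top_k": 40,                       # sample only from the 40 most likely tokens
    },
    model_kwargs={"n_gpu_layers": -1},
    verbose=True,
)
```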
-
Hi, is there any update here? Can we go beyond the llama.cpp limitation, in this case 4096 tokens for the answer prompt?
-
Where can I add those parameters, and how? (I know where I can change the model in settings.yaml.) Thanks a lot!