When you have a simple, single prompt for ChatGPT, GPT-4, Anthropic Claude, Google Gemini, Llama 3, or whatever, it doesn't matter how you integrate it. Whether it's calling a REST API directly, using the SDK, hardcoding the prompt into the source code, or importing a text file, the process remains the same.
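For illustration, a single-prompt integration really can be this small. Here is a minimal sketch calling the OpenAI chat completions REST endpoint directly with `fetch` (the model name is a placeholder; the API key is assumed to be in the environment):

```ts
// Minimal single-prompt integration: one POST to the OpenAI REST API.
// Assumes OPENAI_API_KEY is set in the environment (Node 18+ for global fetch).
async function askOnce(prompt: string): Promise<string> {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
            model: 'gpt-4o', // placeholder; any chat model works the same way
            messages: [{ role: 'user', content: prompt }],
        }),
    });
    const data = await response.json();
    return data.choices[0].message.content;
}
```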
But often you will struggle with the limitations of LLMs, such as hallucinations, off-topic responses, poor quality output, language and prompt drift, word repetition repetition repetition repetition or misuse, lack of context, or just plain w𝒆𝐢rd resp0nses. When this happens, you generally have three options:
1. Fine-tune the model to your specifications, or even train your own.
2. Prompt-engineer the prompt into the best shape you can achieve.
3. Orchestrate multiple prompts in a pipeline to get the best result (a sketch of what this means follows the list).
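To make option 3 concrete, here is what "orchestrating multiple prompts in a pipeline" means in plain code, reusing the `askOnce` helper from the sketch above. This is generic plumbing written by hand, not Promptbook's actual API:

```ts
// A naive three-step pipeline: outline first, then draft, then polish.
// Each step's output feeds the next step's prompt.
async function writeArticle(topic: string): Promise<string> {
    const outline = await askOnce(`Write a bullet-point outline for an article about ${topic}.`);
    const draft = await askOnce(`Expand this outline into a full article:\n\n${outline}`);
    const polished = await askOnce(`Fix grammar and tighten the wording of this article:\n\n${draft}`);
    return polished;
}
```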
In all of these situations, but especially in option 3, the ✨ Promptbook can make your life waaaaaaaaaay easier.
Promptbook separates concerns between the prompt engineer and the programmer, between code files and prompt files, and between prompts and their execution logic. For this purpose, it introduces a new language called the 💙 Book.
Book allows you to focus on the business logic without having to write code or deal with the technicalities of LLMs.
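As a rough illustration of that separation (not Promptbook's actual API), the prompt can live in its own file that a prompt engineer edits, while the program only loads and fills it. The file path and the `{topic}` placeholder convention here are hypothetical:

```ts
import { readFile } from 'node:fs/promises';

// The prompt template lives in its own file (hypothetical path) so a
// prompt engineer can edit it without touching any code.
async function loadPrompt(topic: string): Promise<string> {
    const template = await readFile('./prompts/write-article.prompt.txt', 'utf-8');
    return template.replaceAll('{topic}', topic); // naive {parameter} substitution
}
```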
Forget about low-level details like choosing the right model, token limits, context size, temperature, top-k, top-p, or nucleus sampling. Just write your intent and the persona who should be responsible for the task, and let the library do the rest.
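For contrast, these are the knobs you would otherwise tune by hand in every raw API call (a sketch using OpenAI-style parameter names; `top_k` appears in Anthropic's API rather than OpenAI's):

```ts
// The low-level sampling details Promptbook abstracts away:
// in a raw call, you pick every one of these yourself.
const rawRequestBody = {
    model: 'gpt-4o',   // which model?
    temperature: 0.7,  // how random should the output be?
    top_p: 0.9,        // nucleus sampling cutoff
    max_tokens: 1024,  // how long may the answer be?
    messages: [{ role: 'user', content: 'Write an article about books.' }],
};
```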
Sometimes even the best prompts in the best framework (like Promptbook :) ) can't avoid problems. For this case, the library has built-in anomaly detection and logging to help you find and fix them.
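What such a check might look like, as a toy sketch (the library's real detectors are more involved; this one only flags the word-repetition failure mode mentioned above):

```ts
// Toy anomaly check: flag an output that repeats the same word
// three or more times in a row, a common LLM failure mode.
function hasWordRepetition(output: string): boolean {
    return /\b(\w+)(?:\s+\1\b){2,}/i.test(output);
}

console.log(hasWordRepetition('word repetition repetition repetition repetition')); // true
console.log(hasWordRepetition('a perfectly normal sentence'));                      // false
```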
Versioning is built in. You can test multiple A/B versions of pipelines and see which one works best.
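A minimal illustration of pipeline A/B testing (generic plumbing again, not Promptbook's built-in versioning API): route each request to one of two pipeline versions and tally which one users rate higher.

```ts
// Toy A/B harness, reusing askOnce() and writeArticle() from the sketches above.
type Pipeline = (topic: string) => Promise<string>;

const versions: Record<'A' | 'B', Pipeline> = {
    A: (topic) => askOnce(`Write an article about ${topic}.`), // single prompt
    B: (topic) => writeArticle(topic),                         // multi-step pipeline
};

const scores = { A: { wins: 0, runs: 0 }, B: { wins: 0, runs: 0 } };

async function handleRequest(topic: string, userLikedIt: (article: string) => Promise<boolean>) {
    const version = Math.random() < 0.5 ? 'A' : 'B'; // random assignment
    const article = await versions[version](topic);
    scores[version].runs += 1;
    if (await userLikedIt(article)) scores[version].wins += 1;
}
```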
Promptbook is designed to use RAG (retrieval-augmented generation) and other advanced techniques to bring the context of your business to a generic LLM. You can use your own knowledge to improve the quality of the output.
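At its core, RAG means "look up relevant knowledge first, then paste it into the prompt". A toy sketch using keyword overlap in place of real vector embeddings (a production setup would embed the texts and query a vector store); it reuses `askOnce` from the first sketch:

```ts
// Toy RAG: retrieve the most relevant knowledge snippet by keyword overlap,
// then augment the prompt with it before calling the model.
const knowledge = [
    'Our company sells handmade ceramic mugs and ships only within the EU.',
    'Refunds are accepted within 30 days of purchase with a receipt.',
];

function retrieve(question: string): string {
    const words = new Set(question.toLowerCase().split(/\W+/));
    const overlap = (text: string) =>
        text.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
    return knowledge.reduce((best, next) => (overlap(next) > overlap(best) ? next : best));
}

async function answerWithContext(question: string): Promise<string> {
    const context = retrieve(question);
    return askOnce(`Answer using this company knowledge:\n${context}\n\nQuestion: ${question}`);
}
```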