Hey @vladdie, I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together! To use a custom tokenizer with ConversationSummaryBufferMemory, you can wrap the tokenizer in a custom LLM class that overrides the token-counting method:
```python
from transformers import AutoTokenizer

class CustomLLM:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def get_num_tokens_from_messages(self, messages):
        # Custom logic to count tokens using the provided tokenizer
        return sum(len(self.tokenizer.tokenize(message.content)) for message in messages)

# Initialize your custom tokenizer
custom_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Create an instance of your custom LLM with the tokenizer
custom_llm = CustomLLM(tokenizer=custom_tokenizer)
```
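To sanity-check the token-counting hook in isolation, here is a minimal, self-contained sketch. The `WhitespaceTokenizer` and `Message` classes are hypothetical stand-ins (so the example runs without downloading a Hugging Face model); they expose the same `tokenize`/`content` interface the class above relies on:

```python
# Hypothetical stand-in for a Hugging Face tokenizer: exposes .tokenize(text)
class WhitespaceTokenizer:
    def tokenize(self, text):
        return text.split()

# Hypothetical stand-in for a chat message: exposes .content
class Message:
    def __init__(self, content):
        self.content = content

class CustomLLM:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def get_num_tokens_from_messages(self, messages):
        # Sum the token counts of every message's content
        return sum(len(self.tokenizer.tokenize(m.content)) for m in messages)

llm = CustomLLM(tokenizer=WhitespaceTokenizer())
msgs = [Message("Hello there"), Message("What's up")]
print(llm.get_num_tokens_from_messages(msgs))  # 4
```

Swapping `WhitespaceTokenizer` for the real `AutoTokenizer` changes only the counts, not the interface.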
```python
from langchain.memory import ConversationSummaryBufferMemory

# Initialize ConversationSummaryBufferMemory with the custom LLM
memory = ConversationSummaryBufferMemory(llm=custom_llm, max_token_limit=100)

# Save context to memory
memory.save_context({"input": "Hello"}, {"output": "What's up"})
```
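For intuition about what `max_token_limit` does, the pruning behavior can be sketched roughly as follows. This is an illustrative simplification, not LangChain's actual implementation: the oldest messages are moved out of the buffer (to be folded into the running summary) until what remains fits under the limit:

```python
def prune(buffer, count_tokens, max_token_limit):
    # Rough sketch of summary-buffer pruning: pop oldest messages
    # until the remaining buffer fits under the token limit.
    overflow = []
    while buffer and count_tokens(buffer) > max_token_limit:
        overflow.append(buffer.pop(0))
    return overflow  # these would be summarized by the LLM

# Toy token counter: one token per whitespace-separated word
count = lambda msgs: sum(len(m.split()) for m in msgs)

buffer = ["one two three", "four five", "six"]
overflow = prune(buffer, count, max_token_limit=3)
print(overflow)  # ['one two three']
print(buffer)    # ['four five', 'six']
```

This is why the memory needs a working token counter: without one, it cannot decide when to start summarizing.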
```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, AIMessage

# Initialize the ChatOllama model
ollama = ChatOllama(model="llama3:8b-instruct-fp16")

# Example messages as LangChain message objects
messages = [
    HumanMessage(content="Hello"),
    AIMessage(content="What's up"),
]

# Add the messages to the memory's underlying chat history
for message in messages:
    memory.chat_memory.add_message(message)

# Now you can use the stored history with the ChatOllama model
response = ollama.invoke(memory.chat_memory.messages)
print(response.content)
```

This setup ensures that your custom tokenizer is used for token counting instead of the default gpt2 tokenizer.
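For reference, the conversion from chat messages to the payload Ollama's chat API expects amounts to mapping each message to a role/content dict. The sketch below is hypothetical (`to_ollama_format` and its role mapping are illustrative, not ChatOllama's internal code):

```python
# Hypothetical converter: LangChain-style message dicts -> Ollama chat dicts
def to_ollama_format(messages):
    # Map LangChain message types onto the roles Ollama's chat API uses
    role_map = {"human": "user", "ai": "assistant", "system": "system"}
    return [
        {"role": role_map.get(m["type"], m["type"]), "content": m["content"]}
        for m in messages
    ]

msgs = [
    {"type": "human", "content": "Hello"},
    {"type": "ai", "content": "What's up"},
]
print(to_ollama_format(msgs))
# [{'role': 'user', 'content': 'Hello'}, {'role': 'assistant', 'content': "What's up"}]
```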
Description
I tried to use ConversationSummaryBufferMemory with a local open-source LLM via Ollama, but I got the error: "Can't load tokenizer for 'gpt2'." It seems that ConversationSummaryBufferMemory only supports ChatOpenAI()? Is there any way I can set a customized tokenizer?
Many thanks!
System Info
langchain==0.2.3
Windows 11