How can we use a custom open-source Hugging Face LLM with GraphCypherQAChain in LangChain and a Neo4j DB? #17626
Replies: 2 comments 1 reply
-
🤖 Hey @KaifAhmad1! Great to see you back. Diving into another adventure with LangChain, I see? Let's get cracking!

You're interested in using a custom open-source language model from Hugging Face instead of the default LLM. You can load the model with `transformers`, wrap it in a `HuggingFacePipeline`, and pass that to `GraphCypherQAChain`:

```python
import warnings
import torch  # needed for the torch_dtype arguments below
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

warnings.filterwarnings('ignore')

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", use_auth_token=True
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
    torch_dtype=torch.float16,
    use_auth_token=True,
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_new_tokens=720,
    do_sample=True,
    top_k=30,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

from langchain import HuggingFacePipeline

llm = HuggingFacePipeline(
    pipeline=pipe,
    model_kwargs={'temperature': 0.25, 'max_tokens': 4000, 'stop_sequence': "\n\n"},
)
```

Then you can build the chain:

```python
chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)
```

This code authenticates with Hugging Face (via `use_auth_token=True`), suppresses warnings, loads the model and tokenizer, creates a pipeline, and then wraps the pipeline in a `HuggingFacePipeline` for use with LangChain. You can find more details in the discussion on the LangChain repository. I hope this helps! If you have any other questions, feel free to ask.

Sources
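For completeness, the `graph` object passed to `GraphCypherQAChain.from_llm` is not shown in the reply above. A minimal wiring sketch, assuming a local Neo4j instance; the bolt URL, credentials, and the question string are placeholders, not values from the thread:

```python
from langchain.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain

# Placeholder connection details for a locally running Neo4j instance.
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)

# `llm` is the HuggingFacePipeline built earlier in the thread.
chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)

# The chain generates a Cypher query from the question, runs it against
# Neo4j, and has the LLM phrase the answer from the query results.
answer = chain.run("Which movies did Tom Hanks act in?")
```

This requires a live Neo4j database, so treat it as a wiring sketch rather than a copy-paste snippet.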
-
I am using the above example, but somehow it does not work: the model runs on CPU, and `torch_dtype` stays at the default instead of `torch.bfloat16`. The stack trace is below.
Python - 3.9.13
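The behavior described (model landing on CPU with the default dtype) typically means PyTorch cannot see a GPU, in which case `device_map="auto"` falls back to CPU. A quick diagnostic sketch, assuming `torch` is installed; `describe_runtime` is a hypothetical helper name, not part of any library:

```python
import torch

def describe_runtime():
    """Report whether CUDA is visible to PyTorch and whether bf16 is usable."""
    info = {
        "cuda_available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
    }
    # bf16 on NVIDIA GPUs needs Ampere (compute capability >= 8.0) or newer.
    if info["cuda_available"]:
        info["bf16_supported"] = torch.cuda.is_bf16_supported()
    return info

print(describe_runtime())
```

If `cuda_available` is `False` despite a GPU being present, the usual culprits are a CPU-only PyTorch build or a CUDA driver/toolkit mismatch.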
-
Checked other resources
Commit to Help
Example Code
System Info
pip == 24.0
python == 3.10.10
CUDA == 12.1