Offline LLMs #813
Replies: 5 comments 2 replies
-
@gventuri Is there a way to connect pandasai with an LLM on our local machine, completely offline? And is there a way to serve the local LLM on a server so that anyone can access it?
-
Same question, I can't find a way to do it.
-
Are you trying with LM Studio?
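If so, LM Studio exposes an OpenAI-compatible server (http://localhost:1234/v1 by default), so a rough sketch of wiring it into pandasai could look like the following — this assumes a langchain-openai install and a pandasai version that accepts langchain models in config, and the model name is just a placeholder for whatever you loaded in LM Studio:
import pandas as pd
from langchain_openai import ChatOpenAI
from pandasai import SmartDataframe

# LM Studio serves an OpenAI-compatible API; the key is a placeholder it ignores.
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio default port; adjust if you changed it
    api_key="not-needed",                 # placeholder, the local server does not check it
    model="local-model",                  # placeholder for the model loaded in LM Studio
)

df = SmartDataframe(pd.DataFrame({"a": [1, 2, 3]}), config={"llm": llm})
To let other people on your network use the same model, the base_url would just point at the serving machine's address instead of localhost.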
-
@Hrishikesh007788 Just import the local LLM you want to use via langchain and pass it to a SmartDataframe. Example:
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from pandasai import SmartDataframe

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=100)
llm = HuggingFacePipeline(pipeline=pipe)

df = SmartDataframe(..., config={"llm": llm})
Hope this helps.
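One note on running this fully offline: if the model weights are already in the local Hugging Face cache, transformers honors its offline environment variables, so a usage sketch (reusing the llm built above; the sample dataframe is made up) could be:
import os
# Must be set before the from_pretrained calls above; forces cache-only loading.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

import pandas as pd
from pandasai import SmartDataframe

sales = pd.DataFrame({"country": ["ES", "FR", "DE"], "revenue": [1200, 950, 1430]})
sdf = SmartDataframe(sales, config={"llm": llm})  # llm from the snippet above
print(sdf.chat("Which country has the highest revenue?"))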
-
I tried this, and I always get an empty LLM object.
-
Okay, what eventually worked for me was to disable caching. If I don't disable caching, the SmartDataframe basically hangs and quits.
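For anyone hitting the same hang, a sketch of that config, assuming a pandasai version whose config exposes enable_cache (clear_cache is also available at the package top level in recent releases):
import pandas as pd
import pandasai
from pandasai import SmartDataframe

pandasai.clear_cache()  # wipe any stale cached answers first

data = pd.DataFrame({"x": [1, 2, 3]})
sdf = SmartDataframe(data, config={"llm": llm, "enable_cache": False})  # llm as built above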