Description
Hi all,
I am fairly new to the LangChain ecosystem and could really use some help. I am trying to build a CSV chatbot by combining an open-source LLM with a pandas DataFrame. Everything works fine except for the last step, where invoking the agent to get a response raises the error 'ExLlamaV2BaseGenerator' object has no attribute 'set_stop_conditions'. I have tried to work around it, but with no luck. The code I have been using is below.
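From the traceback, it looks like something is calling set_stop_conditions on exllamav2's ExLlamaV2BaseGenerator. As far as I can tell, only the streaming generator defines that method, but I am not certain, so here is a small check (assuming the exllamav2 0.0.12 wheel installed in the code below, and that both generator classes import from exllamav2.generator) that should show which class actually exposes it:

from importlib.metadata import version
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2StreamingGenerator

# Print the installed exllamav2 version and whether each generator class defines
# set_stop_conditions (the attribute named in the error message).
print("exllamav2:", version("exllamav2"))
print("ExLlamaV2BaseGenerator has set_stop_conditions:",
      hasattr(ExLlamaV2BaseGenerator, "set_stop_conditions"))
print("ExLlamaV2StreamingGenerator has set_stop_conditions:",
      hasattr(ExLlamaV2StreamingGenerator, "set_stop_conditions"))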
Code
%%capture
!pip install exllamav2
!pip install https://github.com/turboderp/exllamav2/releases/download/v0.0.12/exllamav2-0.0.12+cu121-cp311-cp311-linux_x86_64.whl
!pip install langchain-community
!pip install langchain_experimental

import os
from huggingface_hub import snapshot_download
from langchain.chains.llm import LLMChain
from langchain_community.llms.exllamav2 import ExLlamaV2

def download_GPTQ_model(model_name: str, models_dir: str = "./models/") -> str:
    if not os.path.exists(models_dir):
        os.makedirs(models_dir)
    _model_name = model_name.replace("/", "_")
    model_path = os.path.join(models_dir, _model_name)
    if _model_name not in os.listdir(models_dir):
        snapshot_download(repo_id=model_name, local_dir=model_path, local_dir_use_symlinks=False)
    else:
        print(f"{model_name} already exists in the models directory")
    return model_path

model_path = download_GPTQ_model("TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ")
from exllamav2.generator import ExLlamaV2Sampler

# Set generation settings
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.85
settings.top_k = 50
settings.top_p = 0.8
settings.token_repetition_penalty = 1.05
# Define the stop token ID (use the actual token ID for EOS or a custom stop token)
stop_token_id = 32000  # Replace this with the actual stop token ID, if needed.
# Initialize the model with stop conditions
llm = ExLlamaV2(
    model_path=model_path,  # Specify your model path
    settings=settings,
    verbose=True,
    streaming=False,
    max_new_tokens=200,
    stop_conditions=[stop_token_id],  # Pass stop token ID here
)

import pandas as pd
data = pd.read_csv("/kaggle/input/world-happiness/2019.csv")
df = pd.DataFrame(data)

from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
# from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain.agents.agent_types import AgentType

agent = create_pandas_dataframe_agent(
    llm,
    df,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    allow_dangerous_code=True,
)
agent.invoke("Who is ranked first")
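If the check above shows that only ExLlamaV2StreamingGenerator has set_stop_conditions, one workaround I have been considering is to construct the LLM with streaming=True and test it on its own before handing it to the agent. This is purely a sketch: it assumes the langchain-community ExLlamaV2 wrapper picks the streaming generator when streaming=True, which I have not confirmed.

# Sketch of a possible workaround: identical arguments, but streaming=True.
# Assumption: with streaming=True the langchain-community wrapper builds an
# ExLlamaV2StreamingGenerator, which does define set_stop_conditions.
llm_streaming = ExLlamaV2(
    model_path=model_path,
    settings=settings,
    verbose=True,
    streaming=True,  # the only change from the failing setup above
    max_new_tokens=200,
    stop_conditions=[stop_token_id],
)

# Test the LLM by itself first, to see whether the error comes from the wrapper
# or only appears once the pandas DataFrame agent is involved.
print(llm_streaming.invoke("Say hello in one sentence."))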
System Info
I am working on Kaggle with the free GPU that they provide.
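If it helps, the exact environment can be captured from the notebook like this (just the packages installed above plus the Python and GPU details):

# Record exact package versions, the Python version, and the GPU Kaggle assigned.
!pip list | grep -Ei "exllamav2|langchain|pandas"
!python --version
!nvidia-smi -L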