Where is the LLM configured? #2202
Unanswered
habsfanongit asked this question in Q&A
Replies: 1 comment
In case anyone has a similar question, I ended up using:

```python
from crewai.llm import LLM

llm = LLM(model='ollama/gemma2:27b', base_url="http://localhost:11434")
```

Then using it in the agent:

```python
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        verbose=True,
        llm=llm
    )
```
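If you want one agent on a smaller model (as in the original question), you can build several LLM instances and hand each agent its own. A minimal sketch, assuming the same local Ollama server; the deepseek-r1 tags below are illustrative, use whatever models you have pulled locally:

```python
from crewai.llm import LLM

# Both instances point at the same local Ollama server; only the
# model tag differs. Pass big_llm to most agents and small_llm to
# the one that should stay light.
big_llm = LLM(model='ollama/deepseek-r1:70b', base_url="http://localhost:11434")
small_llm = LLM(model='ollama/deepseek-r1:8b', base_url="http://localhost:11434")
```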
Sorry if this is a stupid question, but when I create a crew using the CLI, I don't see where the LLM is configured. All I see is the .env file with the model name and URL. I am assuming crewAI creates some default LLM from these values, but I am wondering how I can change or inspect the default values in the project's Python code.

In my new project using the latest version, I changed the model in the .env file to MODEL=ollama/deepseek-r1:70b, but I want one of my agents to use the smaller version, so I modified most of my agents to use the lm_model property. Is there any way crewAI can tell me which model it used for a specific task, as a debug value or something like that?
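One way to check which model each agent actually ended up with is to print it before kicking off the crew. A minimal sketch, assuming the CLI-generated crew class is named `MyProjectCrew` (a hypothetical name; rename to match your project) and that each Agent keeps a reference to its LLM on an `llm` attribute (verify against your installed crewAI release):

```python
from my_project.crew import MyProjectCrew  # hypothetical module/class name

crew = MyProjectCrew().crew()
for agent in crew.agents:
    # agent.llm may be an LLM instance (with a .model attribute) or a
    # plain model string, depending on how the agent was configured.
    model = getattr(agent.llm, "model", agent.llm)
    print(f"{agent.role}: {model}")
```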