Support for Sonnet-3.7 thinking process through init_chat_model #30018
Closed
smartinezbragado
announced in
Ideas
Replies: 1 comment
- This is already supported:

  ```python
  from langchain.chat_models import init_chat_model

  llm = init_chat_model(
      "anthropic:claude-3-7-sonnet-latest",
      max_tokens=5_000,
      thinking={"type": "enabled", "budget_tokens": 2_000},
  )
  response = llm.invoke("What is 3*3?")
  response.content
  ```

  See the docs for ChatAnthropic. Let me know if this isn't what you meant!
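  With extended thinking enabled, `response.content` is a list of content blocks rather than a plain string, mixing `"thinking"` blocks with the final `"text"` answer. The sketch below shows one way to separate the two; the `sample_content` data is illustrative (not real model output), and `split_thinking` is a hypothetical helper, not part of LangChain:

  ```python
  # Sketch: separating thinking blocks from the final answer.
  # Assumes content blocks shaped like Anthropic's extended-thinking
  # output; sample_content below is made up for illustration.
  sample_content = [
      {"type": "thinking", "thinking": "3*3 means adding 3 three times..."},
      {"type": "text", "text": "3*3 = 9"},
  ]

  def split_thinking(content):
      """Return (thinking_text, answer_text) from a content-block list."""
      thinking = "".join(
          b.get("thinking", "") for b in content if b.get("type") == "thinking"
      )
      answer = "".join(
          b.get("text", "") for b in content if b.get("type") == "text"
      )
      return thinking, answer

  thinking, answer = split_thinking(sample_content)
  print(answer)  # 3*3 = 9
  ```

  In practice you would pass `response.content` from the `invoke` call above instead of `sample_content`.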
-
Feature request
I would love to be able to use the thinking process of Sonnet 3.7 through the `init_chat_model` feature. It's a game changer for our agents.
Motivation
I would love to be able to use the thinking process of Sonnet 3.7 through the `init_chat_model` feature. It's a game changer for our agents.
Proposal (If applicable)
No response