ChatOllama: AIMessageChunk missing reasoning_content in additional_kwargs (concatenated into content instead) #9089

@djholt


Checked other resources

  • This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { ChatOllama } from "@langchain/ollama";
import { HumanMessage } from "@langchain/core/messages";

const llm = new ChatOllama({
  baseUrl: 'http://localhost:11434',
  model: 'gpt-oss:20b',
  think: false // or true, same issue
});

const message = new HumanMessage({
  content: [
    {
      type: 'text',
      text: 'Hello'
    }
  ]
});

const response = await llm.invoke([message]);
// Bug: response.content contains the gpt-oss:20b reasoning/thinking text
// concatenated with the answer; additional_kwargs has no reasoning_content.

Error Message and Stack Trace (if applicable)

No response

Description

Looking at convertOllamaMessagesToLangChain, I can see that the thinking and content fields are simply concatenated into content. This differs from the Python langchain-ollama integration, where thinking is returned separately as reasoning_content in additional_kwargs. That separation is the expected behavior here as well.
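As a sketch of the expected mapping (hypothetical helper; the names OllamaMessage, LangChainLikeMessage, and convertOllamaMessage are illustrative and are not the actual @langchain/ollama internals, which live in convertOllamaMessagesToLangChain):

```typescript
// Hypothetical sketch: keep an Ollama chat message's `thinking` field out of
// the text content and surface it as additional_kwargs.reasoning_content,
// mirroring the Python ChatOllama behavior described above.

interface OllamaMessage {
  role: string;
  content: string;
  thinking?: string;
}

interface LangChainLikeMessage {
  content: string;
  additional_kwargs: { reasoning_content?: string };
}

function convertOllamaMessage(msg: OllamaMessage): LangChainLikeMessage {
  const additional_kwargs: { reasoning_content?: string } = {};
  if (msg.thinking) {
    // Keep reasoning separate instead of concatenating it into content.
    additional_kwargs.reasoning_content = msg.thinking;
  }
  return { content: msg.content, additional_kwargs };
}

const out = convertOllamaMessage({
  role: "assistant",
  content: "Hello!",
  thinking: "The user greeted me; respond politely.",
});
console.log(out.content);                            // "Hello!"
console.log(out.additional_kwargs.reasoning_content); // reasoning kept separate
```

With a mapping like this, response.content would contain only the answer text, and callers could opt in to the reasoning via additional_kwargs, matching the Python integration.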

System Info

"@langchain/core": "^0.3.78",
"@langchain/langgraph": "^0.4.9",
"@langchain/ollama": "^0.2.4",

Metadata

Labels: bug (Something isn't working)