Replies: 2 comments 5 replies
-
On the specific issue you've encountered: perhaps you should consider the examples not as ready-made code that one could copy-paste into a Python file, but as Jupyter notebooks suitable for playing around with the LLM of your choice through LangChain, and then, once you have something working, going back to your Python file. In that light, once you load the example as a notebook it starts making more sense: e.g., the piece of code `AIMessage(content="Why don't bears...")` is just the output of the previous cell of the notebook.

LangChain has set itself a very ambitious goal: to be a framework for interfacing with any LLM. And as we've seen with gen AI models, the output is not deterministic, both across model queries and across models. LangChain makes this experimentation easy, but it comes at the price of some increased surface complexity.

Additionally, there has been a significant refactor of the LangChain codebase, see here. So yes, some parts of the documentation may still use chains, but they are being changed to LCEL. And due to the refactor of LangChain into different packages, some of the imports in the documentation may be wrong (though, specifically, the one you pointed out is correct).

So I'd suggest you plug away at it for a bit. It does provide a powerful abstraction for working with LLMs. And since it is open source, in a couple of weeks you too could be [contributing](https://python.langchain.com/docs/contributing) to the documentation and code! Good luck!
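The LCEL idea mentioned above, composing steps with the `|` operator, can be sketched in plain Python. This is not LangChain code; the `Step` class and all names below are invented purely to illustrate the composition pattern:

```python
# Minimal plain-Python sketch of LCEL-style pipe composition.
# NOT LangChain code: `Step`, `prompt`, and `fake_model` are
# invented stand-ins for illustration only.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `a | b` returns a new Step that runs a, then feeds its
        # result into b.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A toy "chain": fill a prompt template, then fake a model reply.
prompt = Step(lambda d: f"tell me a joke about {d['foo']}")
fake_model = Step(lambda text: f"(model reply to: {text})")

chain = prompt | fake_model
result = chain.invoke({"foo": "bears"})
print(result)  # prints: (model reply to: tell me a joke about bears)
```

In real LCEL the composed objects are Runnables (prompt templates, models, output parsers), but the chaining mechanics are essentially this kind of operator overloading.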
-
Thanks for the feedback here - will make sure the Python maintainers see it!
-
Langchain is extremely difficult to start using.
All the "popular templates" go straight into some advanced topics.
The code snippets in the documentation are fragmented code instead of complete pieces of code that I can copy paste.
It's not obvious what is code and what is the return value. Like here, the example is written in three code blocks for some reason:
The first two blocks are the code I'm supposed to execute, but the third block is just an example return value.
The code uses obsolete imports (`from langchain_openai import ChatOpenAI` no longer works, I think). The code has no comments, it's not type-annotated, and there are no assignments. Nobody really calls code like `chain.invoke({"foo": "bears"})` if that call returns something; an assignment like `result = chain.invoke({"foo": "bears"})` would be much clearer (with some comments), and of course the response block should not look the same as the example code.
This is just a simple example, but the same problem persists throughout.
Some code snippets use LCEL, some use other notation.
It's very discouraging.