Connect to a model via wrappers and API #25540
-
To connect to an LLM via wrappers using LangChain, you can use one of the community integrations. These examples demonstrate how to connect to an LLM using the OpenLLM, CTranslate2, Ollama, and Llama.cpp wrappers in LangChain, either by connecting to a server, running the model locally, or using a converted model for efficient inference [1][2][3][4][5].
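For example, connecting through the Ollama wrapper looks like this; a minimal sketch, assuming a local Ollama server is running and the model name below has already been pulled (the model name is illustrative):

```python
# Minimal sketch: connect to a locally served model through the Ollama wrapper.
# Assumes `ollama serve` is running and the "llama3" model has been pulled.
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")
print(llm.invoke("Explain what an LLM wrapper is in one sentence."))
```

The other wrappers follow the same pattern: instantiate the wrapper class with a model name or server URL, then call `invoke`.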
-
Very helpful!
On Mon, 19 Aug 2024, 6:05 pm, dosubot[bot] wrote:
To convert your provided code for connecting to a model using HMAC authentication and sending requests to an equivalent approach in LangChain, you need to create a custom LLM class:

1. **Create a custom LLM class**: implement the `_call` method to handle the HMAC authentication and send requests to your model.
2. **Implement required methods**: implement the `_call` and `_llm_type` methods as required by LangChain.

Here is an example implementation:
```python
from typing import Any, List, Optional

import base64
import hashlib
import hmac
import json
import time
import uuid

import requests
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class HMACAuthenticatedLLM(LLM):
    """A custom LLM that uses HMAC authentication to connect to a model."""

    api_key: str
    secret_key: str
    endpoint: str

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Run the LLM on the given input with HMAC authentication.

        Args:
            prompt: The prompt to generate from.
            stop: Stop words to use when generating.
            run_manager: Callback manager for the run.
            **kwargs: Arbitrary additional keyword arguments.

        Returns:
            The model output as a string.
        """
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")

        # Build the chat-completions request body.
        request_body = {
            "messages": [
                {"content": "", "role": "system"},
                {"content": prompt, "role": "user"},
            ],
            "frequency_penalty": 0,
            "max_tokens": 1000,
            "n": 1,
            "presence_penalty": 0,
            "response_format": {"type": "text"},
            "stream": False,
            "temperature": 0.6,
            "top_p": 1,
        }

        # Create the HMAC-SHA256 signature over api_key + request id + timestamp + body.
        timestamp = int(time.time() * 1000)
        request_id = uuid.uuid4()
        hmac_source_data = (
            self.api_key + str(request_id) + str(timestamp) + json.dumps(request_body)
        )
        computed_hash = hmac.new(
            self.secret_key.encode(), hmac_source_data.encode(), hashlib.sha256
        )
        hmac_signature = base64.b64encode(computed_hash.digest()).decode()

        # Send the signed request to the model.
        headers = {
            "api-key": self.api_key,
            "Client-Request-Id": str(request_id),
            "Timestamp": str(timestamp),
            "Authorization": hmac_signature,
            "Accept": "application/json",
        }
        response = requests.post(
            self.endpoint + "/chat/completions", headers=headers, json=request_body
        )
        response.raise_for_status()
        return response.json().get("choices", [{}])[0].get("message", {}).get("content", "")

    @property
    def _llm_type(self) -> str:
        """Get the type of language model used by this chat model.

        Used for logging purposes only.
        """
        return "hmac_authenticated"


# Example usage
llm = HMACAuthenticatedLLM(
    api_key="your_api_key",
    secret_key="your_secret_key",
    endpoint="https://api.yourmodel.com",
)
print(llm.invoke("This is a test prompt"))
```
This custom LLM class uses HMAC authentication to connect to a model and send requests. You can now use this class in your LangChain programs [1][2][3][4].

[1] https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_llm.ipynb
[2] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sparkllm.py
[3] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/sparkllm.py
[4] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/chat_models.py
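Because the class implements LangChain's standard `LLM` interface, it composes with the rest of the framework. A minimal sketch of chaining it with a prompt template (the prompt text is illustrative):

```python
# Sketch: use the custom LLM in an LCEL chain with a prompt template.
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | llm  # `llm` is the HMACAuthenticatedLLM instance from above

print(chain.invoke({"text": "LangChain lets you wrap any HTTP model API."}))
```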
-
Also, please help in adding an embedding process to this flow.
-
Please add Embedding with vectorization
On Mon, 19 Aug 2024, 8:59 pm, dosubot[bot] wrote:
To add an embedding process to the custom `HMACAuthenticatedLLM` class in LangChain, you can follow the structure and methods used in the `HypotheticalDocumentEmbedder` class. Here is how you can integrate the embedding process:

1. **Define the embedding class**: create a class that inherits from `BaseModel` and `Embeddings`.
2. **Initialize required parameters**: set up the necessary parameters such as `api_key`, `app_id`, and `api_secret`.
3. **Implement the embedding methods**: implement methods to handle the embedding process, similar to `embed_documents` and `embed_query` in the `HypotheticalDocumentEmbedder` class.

Here is a sample implementation:
```python
import base64
import hashlib
import hmac
import json
import logging
from collections import namedtuple
from datetime import datetime
from time import mktime
from typing import Any, Dict, List, Optional
from urllib.parse import urlencode
from wsgiref.handlers import format_date_time

import numpy as np
import requests
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.utils import secret_from_env
from numpy import ndarray

logger = logging.getLogger(__name__)

# Simple container for the parsed URL parts used by _parse_url below.
Url = namedtuple("Url", ["host", "path", "schema"])


class HMACAuthenticatedLLM(BaseModel, Embeddings):
    """Custom HMAC-authenticated LLM with embedding capabilities."""

    app_id: SecretStr = Field(default_factory=secret_from_env("APP_ID"))
    api_key: Optional[SecretStr] = Field(
        default_factory=secret_from_env("API_KEY", default=None)
    )
    api_secret: Optional[SecretStr] = Field(
        default_factory=secret_from_env("API_SECRET", default=None)
    )
    base_url: str = Field(default="https://api.example.com/")
    domain: str = Field(default="para")

    class Config:
        allow_population_by_field_name = True

    def _assemble_ws_auth_url(self, request_url: str, method: str = "POST") -> str:
        # Sign host, date, and request line with HMAC-SHA256, then append the
        # resulting authorization parameters to the URL as a query string.
        u = self._parse_url(request_url)
        host, path = u.host, u.path
        date = format_date_time(mktime(datetime.now().timetuple()))
        signature_origin = f"host: {host}\ndate: {date}\n{method} {path} HTTP/1.1"
        signature_sha = hmac.new(
            self.api_secret.get_secret_value().encode("utf-8"),
            signature_origin.encode("utf-8"),
            digestmod=hashlib.sha256,
        ).digest()
        signature_sha_str = base64.b64encode(signature_sha).decode("utf-8")
        authorization_origin = (
            f'api_key="{self.api_key.get_secret_value()}", algorithm="hmac-sha256", '
            f'headers="host date request-line", signature="{signature_sha_str}"'
        )
        authorization = base64.b64encode(authorization_origin.encode("utf-8")).decode("utf-8")
        values = {"host": host, "date": date, "authorization": authorization}
        return request_url + "?" + urlencode(values)

    @staticmethod
    def _parse_url(request_url: str) -> Url:
        stidx = request_url.index("://")
        host = request_url[stidx + 3:]
        schema = request_url[: stidx + 3]
        edidx = host.index("/")
        if edidx <= 0:
            raise ValueError("Invalid request URL")
        path = host[edidx:]
        host = host[:edidx]
        return Url(host, path, schema)

    def _get_body(self, appid: str, text: dict) -> Dict[str, Any]:
        return {
            "header": {"app_id": appid, "uid": "39769795890", "status": 3},
            "parameter": {
                "emb": {"domain": self.domain, "feature": {"encoding": "utf8"}}
            },
            "payload": {
                "messages": {
                    "text": base64.b64encode(json.dumps(text).encode("utf-8")).decode()
                }
            },
        }

    @staticmethod
    def _parser_message(message: str) -> Optional[ndarray]:
        data = json.loads(message)
        code = data["header"]["code"]
        if code != 0:
            logger.warning(f"Request error: {code}, {data}")
            return None
        # The embedding comes back base64-encoded as little-endian float32.
        text_base = data["payload"]["feature"]["text"]
        text_data = base64.b64decode(text_base)
        dt = np.dtype(np.float32).newbyteorder("<")
        text = np.frombuffer(text_data, dtype=dt)
        return text[:2560] if len(text) > 2560 else text

    def _embed(self, texts: List[str]) -> Optional[List[List[float]]]:
        app_id = self.app_id.get_secret_value()
        url = self._assemble_ws_auth_url(self.base_url)
        embed_result = []
        for text in texts:
            query_context = {"messages": [{"content": text, "role": "user"}]}
            content = self._get_body(app_id, query_context)
            response = requests.post(
                url, json=content, headers={"content-type": "application/json"}
            ).text
            res_arr = self._parser_message(response)
            embed_result.append(res_arr.tolist() if res_arr is not None else None)
        return embed_result

    def embed_documents(self, texts: List[str]) -> Optional[List[List[float]]]:
        return self._embed(texts)

    def embed_query(self, text: str) -> Optional[List[float]]:
        result = self._embed([text])
        return result[0] if result is not None else None
```
This implementation follows the structure and methods used in the `HypotheticalDocumentEmbedder` class to add an embedding process to the custom `HMACAuthenticatedLLM` class [1][2][3].

[1] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sparkllm.py
[2] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llm_rails.py
[3] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/hyde/base.py
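A minimal usage sketch, assuming `APP_ID`, `API_KEY`, and `API_SECRET` are set in the environment and `base_url` points at a real embedding endpoint (the URL below is a placeholder):

```python
# Sketch: embed a query and a batch of documents with the class above.
embedder = HMACAuthenticatedLLM(base_url="https://api.example.com/")

query_vector = embedder.embed_query("What is LangChain?")
doc_vectors = embedder.embed_documents(["first document", "second document"])
print(len(query_vector) if query_vector is not None else "embedding failed")
```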
-
How to store the embeddings in a vector database and retrieve them?
On Mon, 19 Aug 2024, 10:02 pm, dosubot[bot] wrote:
To add embedding with vectorization to the custom `HMACAuthenticatedLLM` class in LangChain, you can follow the structure and methods used in the `LLMRailsEmbeddings` class. Here is an example of how you might integrate embedding functionality into your custom class:
```python
from typing import Dict, List, Optional

import requests
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init


class HMACAuthenticatedLLM(BaseModel, Embeddings):
    """Custom LLM class with HMAC authentication and embedding capabilities."""

    model: str = "embedding-english-v1"
    """Model name to use."""

    api_key: Optional[SecretStr] = None
    """API key for authentication."""

    class Config:
        extra = "forbid"

    @pre_init
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the API key exists in the environment."""
        api_key = convert_to_secret_str(
            get_from_dict_or_env(values, "api_key", "YOUR_API_KEY_ENV_VARIABLE")
        )
        values["api_key"] = api_key
        return values

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Call out to the embedding endpoint; returns one vector per text."""
        response = requests.post(
            "https://api.yourservice.com/v1/embeddings",
            headers={"X-API-KEY": self.api_key.get_secret_value()},  # type: ignore[union-attr]
            json={"input": texts, "model": self.model},
            timeout=60,
        )
        return [item["embedding"] for item in response.json()["data"]]

    def embed_query(self, text: str) -> List[float]:
        """Call out to the embedding endpoint for a single query text."""
        return self.embed_documents([text])[0]
```
This example shows how to integrate embedding functionality into your custom `HMACAuthenticatedLLM` class by following the pattern used in the `LLMRailsEmbeddings` class. You need to replace `https://api.yourservice.com/v1/embeddings` with the actual endpoint of your embedding service and `YOUR_API_KEY_ENV_VARIABLE` with the appropriate environment variable for your API key. Additionally, you can refer to the `langchain_community.embeddings` module for more examples of embedding models and their implementations [1][2][3].

[1] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llm_rails.py
[2] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/embeddings/__init__.py
[3] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/__init__.py
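Since the class implements the `Embeddings` interface, it can be passed anywhere LangChain expects an embedding model. A minimal sketch wiring it into an in-memory vector store (the sample text is illustrative):

```python
# Sketch: plug the custom embeddings into a vector store and search it.
from langchain_core.vectorstores import InMemoryVectorStore

embeddings = HMACAuthenticatedLLM(api_key="your_api_key")
store = InMemoryVectorStore.from_texts(["hello world"], embedding=embeddings)
print(store.similarity_search("hello", k=1)[0].page_content)
```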
-
Please provide me consolidated code covering reading documents, embedding, storing to a vector database, and retrieving with document names.
On Mon, 19 Aug 2024, 10:10 pm, dosubot[bot] wrote:
To store embeddings in a vector database and retrieve them using LangChain, you can follow these steps:

1. **Create a vector store with texts**: use the `InMemoryVectorStore` to store your text embeddings.
2. **Use the vector store as a retriever**: convert the vector store into a retriever to query and retrieve the most similar texts.

Here is the code to achieve this:
```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ai21 import AI21Embeddings

# Initialize the AI21 embeddings and index a sample text
embeddings = AI21Embeddings()
text = "LangChain is the framework for building context-aware reasoning applications"
vectorstore = InMemoryVectorStore.from_texts([text], embedding=embeddings)

# Use the vector store as a retriever and fetch the most similar text
retriever = vectorstore.as_retriever()
retrieved_documents = retriever.invoke("What is LangChain?")

# Show the retrieved document's content
print(retrieved_documents[0].page_content)
```
This code snippet demonstrates how to index a sample document and retrieve it using the `AI21Embeddings` object initialized earlier. The `InMemoryVectorStore` is used to store the embeddings, and the retriever is used to query the stored embeddings [1][2][3][4][5].

[1] https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/text_embedding.ipynb
[2] https://github.com/langchain-ai/langchain/blob/master/docs/docs/concepts.mdx
[3] https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/text_embedding/mistralai.ipynb
[4] https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/text_embedding/fireworks.ipynb
[5] https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/text_embedding/ai21.ipynb

For more details on how to use vector stores and retrievers, you can refer to the LangChain documentation on [vector stores](https://docs.langchain.com/docs/how_to/#vector-stores) and [retrievers](https://docs.langchain.com/docs/how_to/#retrievers). Additionally, for detailed documentation on `AI21Embeddings` features and configuration options, please refer to the [API reference](https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html).
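To retrieve results together with the source document name, attach the name as metadata when indexing. A hedged sketch building on the snippet above (the file names are illustrative):

```python
# Sketch: index Document objects with a "source" metadata field so each
# retrieved result carries its originating file name.
from langchain_core.documents import Document

docs = [
    Document(page_content="LangChain overview ...", metadata={"source": "intro.txt"}),
    Document(page_content="Retrieval how-to ...", metadata={"source": "rag.md"}),
]
store = InMemoryVectorStore.from_documents(docs, embedding=embeddings)

for doc in store.as_retriever().invoke("What is LangChain?"):
    print(doc.metadata["source"], "->", doc.page_content[:40])
```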
-
Please help me with consolidated code for the requirements below:
1. Embedding
2. Vector database
3. Retrieve from vector database
4. Source document
-
Able to connect to the model with the solution provided.
-
Description
I need to connect to the LLM model via wrappers using LangChain. Please provide the steps and code in Python.