AutoGen Integration
*******************

ADS provides a custom LLM client for `AutoGen <https://microsoft.github.io/autogen/0.2/>`_. This custom client allows you to use LangChain chat models with AutoGen.

.. admonition:: Requirements
    :class: note

    The LangChain integration requires ``python>=3.9``, ``langchain-community>=0.3`` and ``langchain-openai``.

    .. code-block:: bash

        pip install "langchain-community>0.3" langchain-openai


Custom Client Registration
==========================

AutoGen requires custom clients to be registered with each agent after the agent is created. To simplify the process, ADS provides a global ``register_custom_client()`` method. Once a client is registered with ADS, all agents created subsequently will have the custom client registered automatically.

The following code shows how to import the custom client and register it with AutoGen:

.. code-block:: python3

    from ads.llm.autogen.client_v02 import LangChainModelClient, register_custom_client

    # Register the custom LLM client globally
    register_custom_client(LangChainModelClient)

If you don't want the custom client to be registered for all agents, you may skip the code above and instead call the ``register_model_client()`` method on each agent.


LLM Config
==========

The LLM config for the ``LangChainModelClient`` should have the following keys:

* ``model_client_cls``, the name of the client class, which should always be ``LangChainModelClient``.
* ``langchain_cls``, the LangChain chat model class with the full path.
* ``model``, the model name for AutoGen to identify the model.
* ``client_params``, the parameters for initializing the LangChain client.

The following keys are optional:

* ``invoke_params``, the parameters for invoking the chat model.
* ``function_call_params``, the parameters for invoking the chat model with functions/tools.

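As a quick sanity check, here is a skeleton config combining the required and optional keys. The ``langchain_cls`` path, model name, and parameter values are illustrative placeholders only:

```python
# Skeleton LLM config for LangChainModelClient.
# The class path and model names below are placeholders for illustration.
llm_config = {
    # Required keys
    "model_client_cls": "LangChainModelClient",      # always this class name
    "langchain_cls": "langchain_openai.ChatOpenAI",  # full path to the LangChain chat model class
    "model": "my-model",                             # name AutoGen uses to identify the model
    "client_params": {"model": "my-model"},          # kwargs for initializing the LangChain client
    # Optional keys
    "invoke_params": {},          # kwargs for invoking the chat model
    "function_call_params": {},   # kwargs added only when functions/tools are used
}

required = {"model_client_cls", "langchain_cls", "model", "client_params"}
print(required.issubset(llm_config))  # True
```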
Data Science Model Deployment
-----------------------------

Following is an example LLM config for an LLM deployed with AI Quick Actions on OCI Data Science Model Deployment:

.. code-block:: python3

    import ads
    from ads.llm.chat_template import ChatTemplates

    # You may use ADS to configure the authentication globally
    ads.set_auth("security_token", profile="DEFAULT")

    llm_config = {
        "model_client_cls": "LangChainModelClient",
        "langchain_cls": "ads.llm.ChatOCIModelDeploymentVLLM",
        # Note that you may use a model name different from the one in `client_params`.
        "model": "Mistral-7B",
        # client_params will be used to initialize the LangChain ChatOCIModelDeploymentVLLM class.
        "client_params": {
            "model": "odsc-llm",
            "endpoint": "<ODSC_ENDPOINT>",
            "model_kwargs": {
                "temperature": 0,
                "max_tokens": 500
            },
        },
        # function_call_params will only be added to the API call when functions/tools are added.
        "function_call_params": {
            "tool_choice": "auto",
            "chat_template": ChatTemplates.hermes()
        },
    }


OCI Generative AI
-----------------

Following is an example LLM config for the OCI Generative AI service:

.. code-block:: python3

    llm_config = {
        "model_client_cls": "LangChainModelClient",
        "langchain_cls": "langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI",
        "model": "cohere.command-r-plus",
        # client_params will be used to initialize the LangChain ChatOCIGenAI class.
        "client_params": {
            "model_id": "cohere.command-r-plus",
            "compartment_id": COMPARTMENT_OCID,
            "model_kwargs": {"temperature": 0, "max_tokens": 4000},
            "service_endpoint": "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
            "auth_type": "SECURITY_TOKEN",
            "auth_profile": "DEFAULT",
        },
    }
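Either config is then passed to an agent through ``llm_config``, typically inside a ``config_list``. The sketch below only builds the dictionaries; the commented-out agent construction assumes the ``autogen`` package is installed, and the ``COMPARTMENT_OCID`` value is a placeholder:

```python
# Placeholder value -- replace with your own compartment OCID.
COMPARTMENT_OCID = "ocid1.compartment.oc1..example"

oci_genai_config = {
    "model_client_cls": "LangChainModelClient",
    "langchain_cls": "langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI",
    "model": "cohere.command-r-plus",
    "client_params": {
        "model_id": "cohere.command-r-plus",
        "compartment_id": COMPARTMENT_OCID,
        "model_kwargs": {"temperature": 0, "max_tokens": 4000},
        "service_endpoint": "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
        "auth_type": "SECURITY_TOKEN",
        "auth_profile": "DEFAULT",
    },
}

# AutoGen agents take the config through `llm_config`:
llm_config = {"config_list": [oci_genai_config]}

# With autogen installed, the agent would be created along these lines:
# from autogen import AssistantAgent
# assistant = AssistantAgent("assistant", llm_config=llm_config)

print(llm_config["config_list"][0]["model"])  # cohere.command-r-plus
```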