Our agents can be deployed with either the OpenAI API or your local models.
## [Option 1] Using OpenAI API

Accessing the OpenAI API requires an API key, which you can obtain from [here](https://platform.openai.com/account/api-keys). Below we provide instructions for different operating systems.
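On macOS and Linux, for example, the key can be set for the current shell session (a minimal sketch; the key value shown is a placeholder):

```bash
# Set the key for the current shell session only (dummy value shown)
export OPENAI_API_KEY="sk-your-key-here"

# Verify it is visible in this session
echo "$OPENAI_API_KEY"
```

To make the change permanent, add the `export` line to your shell profile (e.g. `~/.bashrc` or `~/.zshrc`).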
On Windows, the equivalent commands set the environment variable only for the current Command Prompt or PowerShell session. Use `setx` or the System Properties dialog to make the change persist in new sessions.
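For example, in PowerShell (a sketch of the two approaches above; the key value is a placeholder):

```powershell
# Current PowerShell session only
$env:OPENAI_API_KEY = "sk-your-key-here"

# Persist for future sessions (takes effect in windows opened afterwards)
setx OPENAI_API_KEY "sk-your-key-here"
```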
### General method
Create a file named `.env` in your project directory with the following setting:
```bash
OPENAI_API_KEY=<your-openai-api-key>
```
Then, load the environment variables in your Python script:
```python
import os

from dotenv import load_dotenv

load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
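After loading, it can help to fail fast when the key is missing rather than hitting an authentication error later. The helper below is a hypothetical addition, not part of CAMEL:

```python
import os


def require_api_key() -> str:
    """Return the OpenAI API key, or raise a clear error if it is missing."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; check that your .env file exists "
            "and that load_dotenv() was called."
        )
    return key
```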
## [Option 2] Using other APIs

If you are using other APIs that are not provided by OpenAI, you can refer to [Models/Using Models by API calling](../key_modules/models.md#using-models-by-api-calling).

## [Option 3] Using Local Models

If you are using local models, you can refer to [Models/Using Local Models](../key_modules/models.md#using-on-device-open-source-models).
Here is example code using a specific model (gpt-4o-mini). To use a different model, simply change the three parameters that define it: `model_platform`, `model_type`, and `model_config_dict`.
```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory

model = ModelFactory.create(
    # ... set model_platform, model_type, and model_config_dict here ...
)

# Define an assistant message
system_msg = BaseMessage.make_assistant_message(
    role_name="Assistant",
    content="You are a helpful assistant.",
)

# Initialize the agent
ChatAgent(system_msg, model=model)
```
And if you want to use an OpenAI-compatible API, you can instead create the `model` with the following code:
```python
from camel.models.openai_compatibility_model import OpenAICompatibilityModel
```
In the current landscape, for those seeking highly stable content generation, OpenAI's gpt-4o-mini and gpt-4o are often recommended. However, the field is rich with many other outstanding open-source models that also yield commendable results. CAMEL supports developers in integrating these open-source large language models (LLMs) to achieve project outputs based on unique input ideas.
While proprietary models like gpt-4o-mini and gpt-4o have set high standards for content generation, open-source alternatives offer viable solutions for experimentation and practical use. These models, supported by active communities and continuous improvements, provide flexibility and cost-effectiveness for developers and researchers.