 from camel.models import ModelFactory
 from camel.types import ModelPlatformType

-# Take calling model from DashScope as an example
-# Refer: https://dashscope.console.aliyun.com/overview
+# Take calling nemotron-70b-instruct model as an example
 model = ModelFactory.create(
     model_platform=ModelPlatformType.OPENAI_COMPATIBILITY_MODEL,
-    model_type="qwen-plus",
-    api_key="sk-xxxx",
-    url="https://dashscope.aliyuncs.com/compatible-mode/v1",
+    model_type="nvidia/llama-3.1-nemotron-70b-instruct",
+    api_key="nvapi-xxx",
+    url="https://integrate.api.nvidia.com/v1",
     model_config_dict={"temperature": 0.4},
 )

 user_msg = BaseMessage.make_user_message(
     role_name="User",
-    content="""Say hi to CAMEL AI, one open-source community
-    dedicated to the study of autonomous and communicative agents.""",
+    content="""Say hi to Llama-3.1-Nemotron-70B-Instruct, a large language
+    model customized by NVIDIA to improve the helpfulness of LLM generated
+    responses to user queries.""",
 )
 assistant_response = agent.step(user_msg)
 print(assistant_response.msg.content)

 """
 ===============================================================================
-Hi to the CAMEL AI community! It's great to connect with an open-source
-community focused on the study of autonomous and communicative agents. How can
-I assist you or your projects today?
+**Warm Hello!**
+
+**Llama-3.1-Nemotron-70B-Instruct**, it's an absolute pleasure to meet you!
+
+* **Greetings from a fellow AI assistant** I'm thrilled to connect with a
+cutting-edge, specially tailored language model like yourself, crafted by the
+innovative team at **NVIDIA** to elevate the responsiveness and usefulness of
+Large Language Model (LLM) interactions.
+
+**Key Takeaways from Our Encounter:**
+
+1. **Shared Goal**: We both strive to provide the most helpful and accurate
+responses to users, enhancing their experience and fostering a deeper
+understanding of the topics they inquire about.
+2. **Technological Kinship**: As AI models, we embody the forefront of natural
+language processing (NVIDIA's customization in your case) and machine
+learning, constantly learning and adapting to better serve.
+3. **Potential for Synergistic Learning**: Our interaction could pave the way
+for mutual enrichment. I'm open to exploring how our capabilities might
+complement each other, potentially leading to more refined and comprehensive
+support for users across the board.
+
+**Let's Engage!**
+How would you like to proceed with our interaction,
+Llama-3.1-Nemotron-70B-Instruct?
+
+A) **Discuss Enhancements in LLM Technology**
+B) **Explore Synergistic Learning Opportunities**
+C) **Engage in a Mock User Query Scenario** to test and refine our response
+strategies
+D) **Suggest Your Own Direction** for our interaction
+
+Please respond with the letter of your preferred engagement path.
 ===============================================================================
 """