Issue Description
Compared to OpenAI gpt-4o (or gpt-4), llama3.1 (via ollama) is very unstable at generating an executable command from the user request. The generated answers are often off, and sometimes make no sense at all, as shown below.
root@tomoyafujita:~/ros2_ws/colcon_ws# unset OPENAI_API_KEY
root@tomoyafujita:~/ros2_ws/colcon_ws# export OPENAI_MODEL_NAME=llama3.1
root@tomoyafujita:~/ros2_ws/colcon_ws# export OPENAI_ENDPOINT=http://localhost:11434/v1
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock /tf /tf_static /parameter_events /rosout /rosout_agg /rosgraph/initial_node_config /topic_info /param_changed /class_loader/class_list /cmd_vel /odom /imu_data /joint_states'
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock'
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock '
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock /cmd_vel /image /joint_states /topic /type_support_msgs/string__multiarray____1_5 /rosout / rosgraph /clock /parameter_events /time'
The AI models can behave quite differently, and the model sizes also differ significantly: the local llama3.1 model is only 4.7GB. Even though I am not sure how much the parameters can be adjusted, it would be worth trying an Ollama Modelfile.
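As a starting point, a Modelfile could derive a stricter variant of llama3.1 for command generation. This is only a sketch: the parameter values and the system prompt text are illustrative assumptions, not tuned settings.

```
# Modelfile sketch: derive a stricter llama3.1 variant for ros2 command generation.
FROM llama3.1

# Lower temperature to reduce the run-to-run variance seen above.
PARAMETER temperature 0.2
PARAMETER top_p 0.9

# Pin a system prompt so the model emits a bare command only
# (hypothetical prompt; ros2ai's actual system prompt may differ).
SYSTEM """You are a ROS 2 CLI assistant. Reply with exactly one executable ros2 command and nothing else."""
```

The variant could then be built with `ollama create ros2ai-llama3.1 -f Modelfile` and selected via `export OPENAI_MODEL_NAME=ros2ai-llama3.1`.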
Consideration
Originally I thought the System Role configuration was missing for llama3.1 compared to gpt-4o, but according to https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1#supported-roles and https://ollama.com/blog/openai-compatibility, both support the same system role for chat completion.
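Since both backends are OpenAI-compatible, the same chat-completion payload with a system role should work against either endpoint. A minimal sketch of that payload follows; the system prompt text is an illustrative assumption, not ros2ai's actual prompt.

```python
import json

def build_payload(model: str, user_request: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload. Ollama's
    /v1/chat/completions accepts the same "system" role as the OpenAI API."""
    return {
        "model": model,
        "messages": [
            # Hypothetical system prompt for illustration.
            {"role": "system", "content": "Reply with one executable ros2 command only."},
            {"role": "user", "content": user_request},
        ],
    }

payload = build_payload("llama3.1", "give me all topics")
print(json.dumps(payload, indent=2))
# The same payload can be POSTed to http://localhost:11434/v1/chat/completions
# (Ollama) or to the OpenAI endpoint with model="gpt-4o".
```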