Getting Codex Working with Ollama #864
As an experiment I am trying to use a local LLM, namely qwen:14b and deepseek-r1-tool-calling:14b, running against Ollama. I run Codex in an activated Python venv and give it a simple prompt such as "make me a hello world Space Invaders game in Python". Codex thinks and then comes back with a workable solution, but stalls when it tries to call a tool to patch the file it is trying to create. When it stalls, Codex returns the prompt to the user and displays a lump of JSON with the patch details in it. The output looks slightly different between DeepSeek and Qwen; the Qwen output (abbreviated for brevity) is below. Is there anything I can do to address this? My guess is that Codex does not support the tool calls that Ollama is returning.
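For anyone hitting the same thing, one way to narrow it down is to check whether the model itself emits structured tool calls through Ollama's OpenAI-compatible endpoint, independently of Codex. Below is a minimal sketch, assuming Ollama is running on its default port, the `openai` Python package is installed, and that the `write_file` tool definition is just a stand-in for this test (it is not a real Codex tool):

```python
# Minimal sketch: probe whether a local Ollama model returns structured tool
# calls via the OpenAI-compatible endpoint. Assumptions: Ollama on the default
# port, `pip install openai`, and a placeholder `write_file` tool.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; Ollama ignores the key
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "write_file",  # hypothetical tool, just for this probe
            "description": "Write text to a file on disk.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string"},
                },
                "required": ["path", "content"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="qwen:14b",  # substitute whichever local model you are testing
    messages=[
        {"role": "user", "content": "Create hello.py containing a hello world program."}
    ],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model produced a structured tool call that a client could dispatch.
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    # The model only produced text (often JSON pasted into the reply),
    # which matches the stalling behaviour described above.
    print(message.content)
```

If `tool_calls` comes back empty and the patch JSON appears in the plain text content instead, the model is not really doing structured tool calling, regardless of how Codex handles it.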
Replies: 1 comment
I have answered my own question. I had gone directly from using the default OpenAI model to a local model on my Mac mini, so my expectations were a little off :-) Eventually I managed to get a local model to write a one-line hello world in Python, after a lot of thinking and trying to work out how to write a file.
So my answer to myself is: Codex CLI is fine, you just need a much more powerful model!
Thanks so much to the entire team and community for Codex and all things open source.