Replies: 4 comments 16 replies
-
This isn't built into the plugin. It's a great suggestion though, and something I would like to add for models that support it. It's on my roadmap, but I'm not sure when I'll get around to it. Could you share an ideal workflow for how you'd send an image to an LLM with CodeCompanion? It would be good to understand more about how people would like it to function.
-
There is some really interesting info here: Using vision input in Copilot Chat with Claude and Gemini is now in public preview. Also:
-
hey guys, I tried to work on this. I tried to build the simplest form of it, which is just inputting an image URL and sharing it with the AI, and only in the OpenAI-compatible adapter. It seems I can get it working. I tried sharing this image with the AI: https://images.unsplash.com/photo-1744167602422-245a1d448e5e?q=80&w=1887&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D (demo video attached: image2.mp4)

I feel the code is a bit hacky though. Here is the snippet of the specific part that adds the image:

```lua
---@param chat CodeCompanion.Chat
function M.slash_add_image_url(chat)
  local function callback(input)
    if input then
      local id = "<image_url>" .. input .. "</image_url>"
      local new_message = {
        {
          type = "text",
          text = "the user is sharing this image with you. be ready for a query or task regarding this image",
        },
        {
          type = "image_url",
          image_url = {
            url = input,
          },
        },
      }
      local constants = require("codecompanion.config").config.constants
      chat:add_message({
        role = constants.USER_ROLE,
        content = vim.fn.json_encode(new_message),
      }, { reference = id, visible = false })
      chat.references:add({
        id = id,
        source = "adapter.image_url",
      })
    end
  end
  vim.ui.input({ prompt = "> Enter image url", default = "", completion = "dir" }, callback)
end
```
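For context, the message this builds follows the OpenAI chat completions vision format, where `content` is an array of parts instead of a plain string. Assuming an OpenAI-compatible endpoint, the request body would end up looking roughly like this (model name and image URL are placeholders):

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "the user is sharing this image with you. be ready for a query or task regarding this image"
        },
        {
          "type": "image_url",
          "image_url": { "url": "https://example.com/photo.jpg" }
        }
      ]
    }
  ]
}
```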
Because `content` (or maybe internally `message`?) only supports strings, I need to `json_encode` in the `add_message` call and re-parse it back from JSON in the `form_messages` handler:

```lua
form_messages = function(self, messages)
  local result = openai.handlers.form_messages(self, messages)
  local fun = require("hanipcode.local.fun")
  fun.map(result.messages, function(v)
    -- content that was json_encoded above is decoded back into a table;
    -- plain string content fails the decode and passes through unchanged
    local ok, json_res = pcall(function()
      return vim.fn.json_decode(v.content)
    end)
    if ok then
      v.content = json_res
    end
    return v
  end)
  return result
end
```

It works and doesn't need an interface/type change, but I am not sure if this will work with other, non-OpenAI-compatible models. I'm also not sure if there are other unintended implications (I am not fully familiar with the CodeCompanion internals). I'll try to tinker with it more to make it work with images from the clipboard.
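For local or clipboard images, one possible direction (a rough sketch, not tested against CodeCompanion; `read_image_as_data_url` is a hypothetical helper name) is to base64-encode the file into a `data:` URI, which OpenAI-compatible vision endpoints accept in the same `image_url.url` field as a regular `https://` URL:

```lua
-- Hypothetical helper: turn a local image file into a data: URI that can be
-- passed wherever the snippet above passes a plain https:// URL.
-- Assumes Neovim 0.10+ for vim.base64.encode.
local function read_image_as_data_url(path)
  local f = io.open(path, "rb")
  if not f then
    return nil
  end
  local bytes = f:read("*a")
  f:close()
  -- naive MIME guess from the file extension; adjust as needed
  local ext = path:match("%.(%w+)$") or "png"
  local mime = (ext == "jpg" or ext == "jpeg") and "image/jpeg" or ("image/" .. ext)
  return string.format("data:%s;base64,%s", mime, vim.base64.encode(bytes))
end
```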
-
What about creating images? How are you all working with that? I guess Neovim doesn't display a generated image response.
-
In frontend applications we always need to attach images, but I couldn't find this information anywhere.
I'm using `/file`, but I'm not sure if there is any other solution.