First check
- I added a descriptive title to this issue.
- I used the GitHub search to try to find a similar issue and didn't find one.
- I searched the Marvin documentation for this issue.
Bug summary
I've been trying to use marvin.fn, but I never get the expected return value; every call raises an error. Switching between different models and running the examples from the README makes no difference.
Reproduction
import marvin
marvin.settings.log_verbose = True
marvin.settings.log_level = "DEBUG"
marvin.settings.openai.chat.completions.model = "gpt-4"
@marvin.fn
def add(a: int, b: int) -> int:
    """
    Add two numbers together
    """

print(add(1, 2))
Error
[05/24/24 19:38:25] DEBUG marvin.ai.text: Request: { logging.py:89
"tools": [
{
"type": "function",
"function": {
"name": "FormatResponse",
"description": "Format the response with valid JSON.",
"parameters": {
"description": "Format the response with valid JSON.",
"properties": {
"value": {
"description": "The formatted response",
"title": "Value",
"type": "integer"
}
},
"required": [
"value"
],
"type": "object"
}
}
}
],
"tool_choice": {
"type": "function",
"function": {
"name": "FormatResponse"
}
},
"logit_bias": null,
"max_tokens": null,
"response_format": null,
"messages": [
{
"content": [
{
"type": "text",
"text": "Your job is to generate likely outputs for a Python
function with the\nfollowing definition:\n\ndef add(a: int, b: int) -> int:\n
\"\"\"\n Add two numbers together\n \"\"\"\n\nThe user will provide
function inputs (if any) and you must respond with\nthe most likely
result.\n\ne.g. `list_fruits(n: int) -> list[str]` (3) -> \"apple\",
\"banana\", \"cherry\""
}
],
"role": "system"
},
{
"content": [
{
"type": "text",
"text": "## Function inputs\n\nThe function was called with the
following inputs:\n- a: 1\n- b: 2\n\n\nWhat is the function's output?"
}
],
"role": "user"
},
{
"content": [
{
"type": "text",
"text": "The output is"
}
],
"role": "assistant"
}
],
"model": "gpt-4",
"frequency_penalty": 0.0,
"n": 1,
"presence_penalty": 0.0,
"seed": null,
"stop": null,
"stream": false,
"temperature": 1.0,
"top_p": 1.0,
"user": null
}
[05/24/24 19:38:30] DEBUG marvin.ai.text: Response: { logging.py:89
"id": "chatcmpl-7ulw8DACFdtFgXuWXd7UUhUtu1q8C",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "I'm sorry, but without more information about the function
you're referring to, I can't provide an accurate output. The inputs you've
provided are 'a' with a value of 1 and 'b' with a value of 2, but the
function's behavior or formula is not specified. Could you please provide more
details about the function?",
"role": "assistant",
"function_call": null,
"tool_calls": null
}
}
],
"created": 1716550705,
"model": "gpt-4",
"object": "chat.completion",
"system_fingerprint": "fp_9ca467b813",
"usage": {
"completion_tokens": 0,
"prompt_tokens": 0,
"total_tokens": 0
}
}
Traceback (most recent call last):
File "/home/admin/zhengshoujian.zsj/pluto-lang/pluto/examples/prog-sig/app/main.py", line 19, in <module>
print(add(1, 2))
^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/site-packages/marvin/ai/text.py", line 614, in sync_wrapper
return run_sync(async_wrapper(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/site-packages/marvin/utilities/asyncio.py", line 106, in run_sync
return context.run(asyncio.run, coroutine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/site-packages/marvin/ai/text.py", line 592, in async_wrapper
result = await _generate_typed_llm_response_with_tool(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/admin/.pyenv/versions/3.12.3/lib/python3.12/site-packages/marvin/ai/text.py", line 194, in _generate_typed_llm_response_with_tool
return response.tool_outputs[0]
~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
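For context on where the IndexError comes from: the request log above forces tool_choice onto the FormatResponse function, yet the response came back as plain assistant text with "tool_calls": null, so response.tool_outputs is empty when Marvin indexes it. Below is a minimal, hypothetical sketch (not Marvin code) that replays an equivalent request with the openai client directly, to check whether the configured endpoint honors a forced tool call at all. The tool schema is copied from the request log, the prompt text is a placeholder of my own, and the client is assumed to pick up the same OPENAI_API_KEY that Marvin uses.

# Standalone diagnostic sketch, not part of Marvin: replay a forced tool call
# against the same model/endpoint to see whether tool_calls ever comes back.
from openai import OpenAI

client = OpenAI()  # assumes the same OPENAI_API_KEY / base URL Marvin uses

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Placeholder prompt; Marvin's actual prompt is shown in the request log above.
        {"role": "user", "content": "What is 1 + 2? Answer via the FormatResponse tool."},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "FormatResponse",
                "description": "Format the response with valid JSON.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "value": {
                            "description": "The formatted response",
                            "type": "integer",
                        }
                    },
                    "required": ["value"],
                },
            },
        }
    ],
    tool_choice={"type": "function", "function": {"name": "FormatResponse"}},
)

# If this prints None, the endpoint is ignoring the forced tool_choice,
# which would explain the empty tool_outputs and the IndexError above.
print(resp.choices[0].message.tool_calls)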
Versions
Version: 2.3.4
Python version: 3.12.3
OS/Arch: linux/x86_64
Additional context
No response