
BUG: llm/openai fails with 400 error in multi-turn conversations #53

Open
@iambenkay

Description

While testing the durability behavior in multi-turn conversations, I discovered that the OpenAI component fails with a 400 Bad Request.
The root cause is unclear and the error message is not very descriptive, but the WIT contract is upheld, so I expect it to respond correctly. I ran the same test against my Bedrock implementation (#27) and it responds as expected.
I will run the same test on the other crates to see how they behave.
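
For context, the second llm::stream call in the reproduction below carries a three-message history (user, assistant, user). Assuming the component maps the WIT messages one-to-one onto OpenAI's Chat Completions API, that call would correspond to a request body roughly like the sketch below; the model value is a placeholder and the exact mapping inside the crate may differ.

// Hypothetical sketch of the second turn's request body, not the crate's actual
// request-building code. Requires the serde_json crate.
fn second_turn_body(first_reply: &str) -> serde_json::Value {
    serde_json::json!({
        "model": "gpt-4o",        // placeholder; the test uses a MODEL constant
        "temperature": 0.2,
        "stream": true,
        "messages": [
            { "role": "user", "name": "vigoo", "content": "Do you know what a haiku is?" },
            { "role": "assistant", "name": "assistant", "content": first_reply },
            { "role": "user", "name": "vigoo", "content": "Can you write one for me?" }
        ]
    })
}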

How to reproduce?

Run this test case:

fn test8() -> String {
      let config = llm::Config {
          model: MODEL.to_string(),
          temperature: Some(0.2),
          max_tokens: None,
          stop_sequences: None,
          tools: vec![],
          tool_choice: None,
          provider_options: vec![],
      };

      let mut messages = vec![llm::Message {
          role: llm::Role::User,
          name: Some("vigoo".to_string()),
          content: vec![llm::ContentPart::Text(
              "Do you know what a haiku is?".to_string(),
          )],
      }];

      let stream = llm::stream(&messages, &config);

      let mut result = String::new();

      // Drain the first turn's stream completely before building the second turn.
      loop {
          match consume_next_event(&stream) {
              Some(delta) => {
                  result.push_str(&delta);
              }
              None => break,
          }
      }

      // Feed the assistant's reply back into the history for the second turn.
      messages.push(llm::Message {
          role: llm::Role::Assistant,
          name: Some("assistant".to_string()),
          content: vec![llm::ContentPart::Text(result)],
      });

      messages.push(llm::Message {
          role: llm::Role::User,
          name: Some("vigoo".to_string()),
          content: vec![llm::ContentPart::Text(
              "Can you write one for me?".to_string(),
          )],
      });

      println!("Message: {messages:?}");

      let stream = llm::stream(&messages, &config);

      let mut result = String::new();

      let name = std::env::var("GOLEM_WORKER_NAME").unwrap();
      let mut round = 0;

      loop {
          match consume_next_event(&stream) {
              Some(delta) => {
                  result.push_str(&delta);
              }
              None => break,
          }

          // After the third batch of stream events, simulate a crash exactly once:
          // the helper counter is 1 on the first run (panic) and higher after Golem
          // recovers the worker, so the replayed run continues normally.
          if round == 2 {
              atomically(|| {
                  let client = TestHelperApi::new(&name);
                  let answer = client.blocking_inc_and_get();
                  if answer == 1 {
                      panic!("Simulating crash")
                  }
              });
          }

          round += 1;
      }

      result
  }

This helper function, used to consume the event stream, is also needed for the snippet above:

fn consume_next_event(stream: &llm::ChatStream) -> Option<String> {
    let events = stream.blocking_get_next();

    if events.is_empty() {
        return None;
    }

    let mut result = String::new();

    for event in events {
        println!("Received {event:?}");

        match event {
            llm::StreamEvent::Delta(delta) => {
                for content in delta.content.unwrap_or_default() {
                    match content {
                        llm::ContentPart::Text(txt) => {
                            result.push_str(&txt);
                        }
                        llm::ContentPart::Image(image_ref) => match image_ref {
                            llm::ImageReference::Url(url_data) => {
                                result.push_str(&format!(
                                    "IMAGE URL: {} ({:?})\n",
                                    url_data.url, url_data.detail
                                ));
                            }
                            llm::ImageReference::Inline(inline_data) => {
                                result.push_str(&format!(
                                    "INLINE IMAGE: {} bytes, mime: {}, detail: {:?}\n",
                                    inline_data.data.len(),
                                    inline_data.mime_type,
                                    inline_data.detail
                                ));
                            }
                        },
                    }
                }
            }
            llm::StreamEvent::Finish(_) => {}
            llm::StreamEvent::Error(error) => {
                result.push_str(&format!(
                    "\nERROR: {:?} {} ({})\n",
                    error.code,
                    error.message,
                    error.provider_error_json.unwrap_or_default()
                ));
            }
        }
    }

    Some(result)
}

Expected Output

Given the prompts above, the second turn of the multi-turn conversation should return a haiku.

Actual Output

It returns a 400 error, failing to create the SSE stream.
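
One way to narrow down the root cause could be to replay the same conversation body directly against the Chat Completions endpoint from a plain native test, outside the component, and inspect the full error body. A minimal sketch, assuming the reqwest crate (blocking and json features), an OPENAI_API_KEY environment variable, and the hypothetical second_turn_body helper from above:

// Debugging aid only: sends the sketched request body straight to OpenAI and
// prints the HTTP status together with the raw response body.
fn replay_request(body: &serde_json::Value) -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let response = reqwest::blocking::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(api_key)
        .json(body)
        .send()?;
    println!("status: {}", response.status());
    println!("body: {}", response.text()?);
    Ok(())
}

If the same body succeeds there, the problem is more likely in how the component serializes the history than in the conversation shape itself.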
