hostOverride setting causes incorrect requests in AI Gateway #11225

Open
zhengkezhou1 opened this issue May 16, 2025 · 3 comments · May be fixed by #11282

Labels
Type: Bug (Something isn't working)

Comments

zhengkezhou1 commented May 16, 2025

kgateway version

v1.2.1

Kubernetes Version

kind@v0.27.0

Describe the bug

I am exploring the AI Gateway, and in the "configure an LLM provider" step I want to use OpenRouter instead of calling OpenAI directly. From EP-10494: Add Support for AI Gateway APIs, I learned about proxying requests with hostOverride:

apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  labels:
    app: ai-kgateway
  name: openrouter
  namespace: kgateway-system
spec:
  type: AI
  ai:
    llm:
      hostOverride:
        host: openrouter.ai
        port: 443
      provider:
        openai:
          model: gpt-4o
          authToken:
            kind: SecretRef
            secretRef:
              name: openrouter-secret

The HTTPRoute is as follows:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openrouter
  namespace: kgateway-system
  labels:
    app: ai-kgateway
spec:
  parentRefs:
    - name: ai-gateway
      namespace: kgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /openai
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /api/v1/chat/completions
    backendRefs:
    - name: openrouter
      namespace: kgateway-system
      group: gateway.kgateway.dev
      kind: Backend

However, the request never actually reaches the OpenRouter API; the response comes back with content-type: text/html:

* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8080...
* Connected to localhost (::1) port 8080
> POST /openai HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 133
>
* upload completely sent off: 133 bytes
< HTTP/1.1 200 OK
< date: Fri, 16 May 2025 14:13:53 GMT
< content-type: text/html; charset=utf-8
< cache-control: private, no-cache, no-store, max-age=0, must-revalidate

Expected Behavior

The response should come from the OpenRouter chat completions API with Content-Type: application/json, not content-type: text/html; charset=utf-8.

Steps to reproduce the bug

  1. Apply the following config:
apiVersion: v1
kind: Secret
metadata:
  name: openrouter-secret
  namespace: kgateway-system
  labels:
    app: ai-kgateway
type: Opaque
stringData:
  Authorization: 

---
apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  labels:
    app: ai-kgateway
  name: openrouter
  namespace: kgateway-system
spec:
  type: AI
  ai:
    llm:
      hostOverride:
        host: openrouter.ai
        port: 443
      provider:
        openai:
          model: gpt-4o
          authToken:
            kind: SecretRef
            secretRef:
              name: openrouter-secret

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openrouter
  namespace: kgateway-system
  labels:
    app: ai-kgateway
spec:
  parentRefs:
    - name: ai-gateway
      namespace: kgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /openai
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /api/v1/chat/completions
    backendRefs:
    - name: openrouter
      namespace: kgateway-system
      group: gateway.kgateway.dev
      kind: Backend
  2. Send the following request:
 curl -v "localhost:8080/openai" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "openai/gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
}'

Additional Environment Detail

No response

Additional Context

[screenshot attached]
zhengkezhou1 added the Type: Bug label on May 16, 2025

haoqixu commented May 19, 2025

This is because the openai LLM provider always replaces the full path with the hard-coded /v1/chat/completions. I think we need an option to allow users to specify the path.
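
For context, the path mismatch can be checked outside the gateway. A rough sketch, assuming OpenRouter serves its chat completions API under /api/v1 and its web frontend (HTML) on other paths; $OPENROUTER_API_KEY is a placeholder, and the expected content types are assumptions, not captured output:

# Hard-coded OpenAI-style path: served by the website (text/html), matching the gateway's response above
curl -si https://openrouter.ai/v1/chat/completions | grep -i '^content-type'

# OpenRouter's documented API path: returns JSON
curl -si https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"openai/gpt-4o","messages":[{"role":"user","content":"hi"}]}' \
  | grep -i '^content-type'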

zhengkezhou1 (Author)

Yeah, but I found that Ollama works fine.

npolshakova (Contributor) commented May 22, 2025

Have you tried doing the path replacement to match OpenRouter's expected /api/v1 route?

apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  labels:
    app: ai-kgateway
  name: openrouter
  namespace: kgateway-system
spec:
  type: AI
  ai:
    llm:
      hostOverride:
        host: openrouter.ai
        port: 443
      provider:
        openai:
          model: gpt-4o
          authToken:
            kind: SecretRef
            secretRef:
              name: openrouter-secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openrouter
  namespace: kgateway-system
  labels:
    app: ai-kgateway
spec:
  parentRefs:
    - name: ai-gateway
      namespace: kgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /openai
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /api/v1
    backendRefs:
    - name: openrouter
      namespace: kgateway-system
      group: gateway.kgateway.dev
      kind: Backend

Edit: Ah, never mind, I see you're already setting replaceFullPath in the example.

Right now we force the path to /v1/chat/completions, and that happens at the upstream filter. That means you cannot rewrite it with a normal URL rewrite in the HTTP filter chain.

There are a couple options:

  • Add a hostOverride path field for overriding the path for AI Backends (see the sketch after this list)
  • Use another transformation filter AFTER the AI Upstream Transformation to rewrite the URL
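
A minimal sketch of what the first option could look like, reusing the Backend from this issue. The pathOverride field is hypothetical (its name and placement are invented here for illustration) and does not exist in the current API:

apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  name: openrouter
  namespace: kgateway-system
spec:
  type: AI
  ai:
    llm:
      hostOverride:
        host: openrouter.ai
        port: 443
        # Hypothetical field: override the /v1/chat/completions path that the
        # openai provider currently hard-codes at the upstream filter.
        pathOverride: /api/v1/chat/completions
      provider:
        openai:
          model: gpt-4o
          authToken:
            kind: SecretRef
            secretRef:
              name: openrouter-secret

If something like this existed, the HTTPRoute would no longer need a URLRewrite filter just to reach OpenRouter's /api/v1/chat/completions endpoint.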

@andy-fong might also have thoughts on the best approach here.
