Add support for OpenRouter #1

@malaksedarous

Description

Feature: Add support for OpenRouter

Original request reference: https://www.reddit.com/r/mcp/comments/1mmmw16/comment/n81wiwe/


Goal

Add "openrouter" as a first-class LLM provider so users can route requests through OpenRouter’s unified API and access many upstream models (OpenAI, Anthropic, Google, open models, etc.) with a single key.

High-Level Overview

OpenRouter exposes an OpenAI-compatible chat completions endpoint at:

POST https://openrouter.ai/api/v1/chat/completions

Headers:

  • Authorization: Bearer <OPENROUTER_API_KEY>
  • HTTP-Referer: <your app URL> (optional)
  • X-Title: <your app name> (optional)
  • Content-Type: application/json

Body (OpenAI-style):

{
  "model": "openai/gpt-4o",  // or any supported id (e.g. "anthropic/claude-3.5-sonnet", "google/gemini-2.5-flash")
  "messages": [{"role": "user", "content": "Hello"}],
  "temperature": 0.1,
  "max_tokens": 4000
}

If model is omitted, OpenRouter uses the account default (model routing). For the initial implementation we will REQUIRE a model to reduce ambiguity (this aligns with existing provider defaults), but we will supply a defaultModel constant.


Scope (MVP)

Supported in this first PR:

  1. New provider: OpenRouterProvider implementing BaseLLMProvider using native fetch rather than the OpenAI SDK with a baseURL override; native fetch adds no dependency, keeps the footprint light, and gives explicit control over the request.
  2. Config changes to allow CONTEXT_OPT_LLM_PROVIDER=openrouter and CONTEXT_OPT_OPENROUTER_KEY.
  3. Schema + validation updates (add provider union member + key). Fail fast if key missing.
  4. Provider factory registration.
  5. Basic request (non-streaming) returning first text completion.
  6. Error normalization (network errors, HTTP non-2xx, malformed response, empty choices); see the response-shape sketch after this list.
  7. Tests (unit + integration style behind env guard).
  8. Docs (README, API keys reference, changelog entry).
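
For reference, items 5 and 6 above assume a small normalized response contract. A minimal sketch of what the LLMResponse type in src/providers/base.ts presumably looks like (the real type may differ; this only fixes terminology for the rest of the issue):

// Assumed (not confirmed) shape of LLMResponse from src/providers/base.ts.
interface LLMResponse {
  success: boolean;   // true when a completion was returned
  content?: string;   // first text completion on success
  error?: string;     // normalized error message on failure
}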

Deferred (future issues):

  • Streaming (SSE) support (stream: true).
  • Dynamic model listing via GET https://openrouter.ai/api/v1/models with caching.
  • Automatic retry on transient 5xx / rate limit responses.
  • Usage/token accounting mapping to internal metrics.
  • Assistant prefill / multi-turn context management.
  • Passing through advanced parameters (top_p, frequency_penalty, etc.).

Files to Modify / Add

  1. src/config/schema.ts
    • Extend provider union: 'gemini' | 'claude' | 'openai' | 'openrouter'.
    • Add optional openrouterKey?: string; to the llm block. (A config/factory sketch follows this list.)
  2. src/config/manager.ts
    • Accept openrouter in getLLMProvider() valid list and error messages.
    • Include ...(process.env.CONTEXT_OPT_OPENROUTER_KEY && { openrouterKey: process.env.CONTEXT_OPT_OPENROUTER_KEY }) when building config.
    • validProviders arrays updated to include openrouter.
    • Validation: ensure openrouterKey required if provider is openrouter.
    • getSanitizedConfig() add hasOpenrouterKey boolean.
  3. src/providers/openrouter.ts (NEW)
    • Class OpenRouterProvider extends BaseLLMProvider.
    • name = 'OpenRouter'.
    • defaultModel = 'openai/gpt-4o-mini' (rationale: widely available, cheaper than openai/gpt-4o, and aligned with the existing OpenAI default style; can be adjusted later).
    • apiKeyUrl = 'https://openrouter.ai/' (landing page where keys managed).
    • apiKeyPrefix = undefined (keys aren’t standardized with a fixed prefix, so leave it undefined rather than an empty string).
    • processRequest(prompt: string, model?: string, apiKey?: string):
      1. Validate apiKey presence.
      2. Construct body using createStandardRequest helper for consistency (but adapt property names: max_tokens, messages).
      3. Use fetch('https://openrouter.ai/api/v1/chat/completions', {...}) with method POST.
      4. Headers: Authorization, Content-Type, and optionally pass HTTP-Referer + X-Title if environment vars present (define optional env vars: CONTEXT_OPT_APP_URL, CONTEXT_OPT_APP_NAME — OPTIONAL; only send if defined, do NOT add to schema for now).
      5. Parse JSON. Expected shape (subset): { choices: [{ message: { content: string } }] } similar to OpenAI. Fallback if not found -> error.
      6. On non-2xx: attempt to parse error JSON: maybe shape { error: { message } } else text.
      7. Return success/error via helper methods.
    • Consider small timeout (e.g., use AbortController with 60s) — OPTIONAL. For MVP rely on global fetch; leave todo comment.
  4. src/providers/factory.ts
    • Add case 'openrouter' mapping to new provider.
  5. Tests:
    • test/openrouter.test.ts (unit):
      • Mocks global.fetch to return a sample success JSON.
      • Tests error when API key missing.
      • Tests error path when response has no content.
      • Tests non-2xx status handling.
    • test/openrouter.integration.test.ts (optional) behind process.env.CONTEXT_OPT_OPENROUTER_KEY presence and maybe a TEST_LIVE_OPENROUTER flag. Skip if not set.
    • Update test/config-test.ts if it asserts provider lists.
  6. Docs:
    • README.md: add OpenRouter in provider list + quick start env var snippet.
    • docs/reference/api-keys.md: add section: "OpenRouter" with instructions to obtain key & note optional headers.
    • docs/reference/changelog.md: New entry e.g. Added OpenRouter provider (#1).
    • (Optional) docs/architecture.md: brief note providers are pluggable and now includes OpenRouter.
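
A minimal sketch of the config and factory changes from items 1, 2, and 4 above. The type and class names (LLMProviderName, LLMConfig) are assumptions based on the file names and acceptance criteria in this issue; adapt to the real code:

// src/config/schema.ts (sketch): extend the provider union and add the key.
export type LLMProviderName = 'gemini' | 'claude' | 'openai' | 'openrouter';

export interface LLMConfig {
  provider: LLMProviderName;
  // ...existing keys (geminiKey, claudeKey, openaiKey) stay as-is...
  openrouterKey?: string;
}

// src/config/manager.ts (sketch): load the key and fail fast when missing.
const openrouterKey = process.env.CONTEXT_OPT_OPENROUTER_KEY;
if (process.env.CONTEXT_OPT_LLM_PROVIDER === 'openrouter' && !openrouterKey) {
  throw new Error(
    'CONTEXT_OPT_OPENROUTER_KEY is required when CONTEXT_OPT_LLM_PROVIDER=openrouter'
  );
}

// src/providers/factory.ts (sketch): register the new case.
import { BaseLLMProvider } from './base';
import { OpenRouterProvider } from './openrouter';

export class LLMProviderFactory {
  static createProvider(name: string): BaseLLMProvider {
    switch (name) {
      // ...existing cases ('gemini', 'claude', 'openai')...
      case 'openrouter':
        return new OpenRouterProvider();
      default:
        throw new Error(`Unknown LLM provider: ${name}`);
    }
  }
}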

Environment Variables (New / Updated)

Required when using OpenRouter:

  • CONTEXT_OPT_LLM_PROVIDER=openrouter
  • CONTEXT_OPT_OPENROUTER_KEY=<your key>

Optional (if we choose to support branding headers):

  • CONTEXT_OPT_APP_URL=https://your-site.example -> sent as HTTP-Referer
  • CONTEXT_OPT_APP_NAME=Context Optimizer -> sent as X-Title

(No changes needed to existing keys for other providers.)


Acceptance Criteria

  • Selecting openrouter provider with valid key returns model output for a simple prompt.
  • Missing key triggers clear configuration error on startup.
  • Invalid HTTP response returns a structured error (success=false, error populated) without throwing unhandled exceptions.
  • Factory can instantiate OpenRouter provider via LLMProviderFactory.createProvider('openrouter').
  • All existing tests still pass; new tests added and green.
  • Documentation updated (README + api-keys + changelog).
  • No sensitive key values logged (sanitized config shows only boolean flags).

Implementation Steps

  1. Update config schema (src/config/schema.ts). Add 'openrouter' to provider union, plus openrouterKey?: string;.
  2. Update configuration manager (src/config/manager.ts):
    • Add environment variable load line for CONTEXT_OPT_OPENROUTER_KEY.
    • Update provider validation arrays to include openrouter.
    • Ensure openrouterKey is required when provider is openrouter (mirrors existing logic).
    • Add hasOpenrouterKey in getSanitizedConfig output.
  3. Create new provider file src/providers/openrouter.ts implementing class as described.
  4. Add provider registration in src/providers/factory.ts switch.
  5. Write unit tests (a fetch-mock sketch appears under Testing Notes below):
    • Create test/openrouter.test.ts.
    • Mock global.fetch (store original, restore after). Provide sample JSON: { choices: [{ message: { content: "Test reply" } }] }.
    • Test: success case returns success=true and expected content.
    • Test: missing apiKey returns error message from provider.
    • Test: non-2xx (e.g., 400) returns structured error (simulate { error: { message: 'Bad Request' } }).
    • Test: malformed JSON (e.g., empty object) returns the error 'No response from OpenRouter'.
  6. Integration test (optional in this PR; can be skipped if no live key is available):
    • If the environment variable CONTEXT_OPT_OPENROUTER_KEY is set, perform a real request with a minimal prompt to ensure the pipeline works; mark with it.skip if not defined. (A guard sketch follows this list.)
  7. Update docs & changelog.
  8. Run test suite and ensure all pass.
  9. Self-review for style consistency (naming, error messages match patterns in other providers).
  10. Open PR referencing this issue and summarizing changes.
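
A sketch of the env-guarded integration test from step 6, assuming a Jest-style runner (adapt if the project uses another framework):

// test/openrouter.integration.test.ts (sketch)
import { OpenRouterProvider } from '../src/providers/openrouter';

// Only run against the live API when a key is present.
const liveKey = process.env.CONTEXT_OPT_OPENROUTER_KEY;
const maybeDescribe = liveKey ? describe : describe.skip;

maybeDescribe('OpenRouter (live)', () => {
  it('returns a completion for a minimal prompt', async () => {
    const provider = new OpenRouterProvider();
    const res = await provider.processRequest('Say "ok".', undefined, liveKey);
    expect(res.success).toBe(true);
    expect(res.content).toBeTruthy();
  }, 60_000); // generous timeout for a live network call
});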

Sample Provider Implementation (Skeleton)

// src/providers/openrouter.ts
import { BaseLLMProvider, LLMResponse } from './base';

export class OpenRouterProvider extends BaseLLMProvider {
  readonly name = 'OpenRouter';
  readonly defaultModel = 'openai/gpt-4o-mini';
  readonly apiKeyUrl = 'https://openrouter.ai/';
  readonly apiKeyPrefix = undefined; // Not standardized

  async processRequest(prompt: string, model?: string, apiKey?: string): Promise<LLMResponse> {
    if (!apiKey) {
      return this.createErrorResponse('OpenRouter API key not configured');
    }
    try {
      const body = this.createStandardRequest(prompt, model || this.defaultModel);
      const headers: Record<string,string> = {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      };
      if (process.env.CONTEXT_OPT_APP_URL) headers['HTTP-Referer'] = process.env.CONTEXT_OPT_APP_URL;
      if (process.env.CONTEXT_OPT_APP_NAME) headers['X-Title'] = process.env.CONTEXT_OPT_APP_NAME;

      const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
        method: 'POST',
        headers,
        body: JSON.stringify(body)
      });

      if (!res.ok) {
        let errorMsg = `HTTP ${res.status}`;
        try { const errJson: any = await res.json(); errorMsg = errJson?.error?.message || errorMsg; } catch { /* ignore */ }
        return this.createErrorResponse(`OpenRouter request failed: ${errorMsg}`);
      }

      const json: any = await res.json();
      const content = json?.choices?.[0]?.message?.content;
      if (!content) {
        return this.createErrorResponse('No response from OpenRouter');
      }
      return this.createSuccessResponse(content);
    } catch (e: any) {
      return this.createErrorResponse(`OpenRouter processing failed: ${e?.message || 'Unknown error'}`);
    }
  }
}
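
For illustration, a hypothetical end-to-end call matching the acceptance criteria (factory and response field names as assumed above):

// Sketch: wiring it together.
async function demo() {
  const provider = LLMProviderFactory.createProvider('openrouter');
  const result = await provider.processRequest(
    'Summarize this repository in one sentence.',
    'openai/gpt-4o-mini', // any supported OpenRouter model id
    process.env.CONTEXT_OPT_OPENROUTER_KEY
  );
  if (result.success) {
    console.log(result.content);
  } else {
    console.error(result.error);
  }
}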

Testing Notes

  • Follow the existing test style (see the openai or claude provider tests for patterns); provider tests already exist, so mimic their structure.
  • Ensure the fetch mock counts invocations and that headers include Authorization (do NOT assert the exact key value; check only for the Bearer prefix/presence).
  • Validate error messaging consistency with other providers (prefix with provider name in failure path).
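
A sketch of the fetch-mock pattern described above, again assuming a Jest-style runner; the sample JSON matches step 5 of the implementation steps:

// test/openrouter.test.ts (sketch)
import { OpenRouterProvider } from '../src/providers/openrouter';

describe('OpenRouterProvider', () => {
  const originalFetch = global.fetch;
  afterEach(() => { global.fetch = originalFetch; });

  it('returns success=true with the completion text', async () => {
    global.fetch = jest.fn().mockResolvedValue({
      ok: true,
      json: async () => ({ choices: [{ message: { content: 'Test reply' } }] }),
    }) as any;

    const res = await new OpenRouterProvider().processRequest('hi', undefined, 'test-key');
    expect(res.success).toBe(true);
    expect(res.content).toBe('Test reply');
    // Assert the Authorization header is present without checking the key value.
    const [, init] = (global.fetch as jest.Mock).mock.calls[0];
    expect(init.headers['Authorization']).toMatch(/^Bearer /);
  });

  it('returns a structured error on non-2xx', async () => {
    global.fetch = jest.fn().mockResolvedValue({
      ok: false,
      status: 400,
      json: async () => ({ error: { message: 'Bad Request' } }),
    }) as any;

    const res = await new OpenRouterProvider().processRequest('hi', undefined, 'test-key');
    expect(res.success).toBe(false);
    expect(res.error).toContain('Bad Request');
  });
});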

Security / Privacy Considerations

  • Never log the raw API key (only existence booleans).
  • Keep calls server-side (no exposure to client code).
  • Provide guidance in docs that optional headers (HTTP-Referer, X-Title) are purely metadata and safe to include.

Future Enhancements (Follow-up Issues)

  1. Streaming support (stream: true) using EventSource or manual SSE parsing: split lines, ignore lines starting with ':' (comments), and assemble delta tokens. (A sketch follows this list.)
  2. Model metadata cache (GET /api/v1/models) with refresh interval (e.g., 24h) and filtering by supported parameters.
  3. Retry/backoff on 5xx or rate-limits (respect Retry-After header if provided).
  4. Parameter passthrough (temperature, top_p, stop, etc.) via configuration or request options.
  5. Usage stats surfaced in responses (token counts) for user display or logging.
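
For future reference, a rough sketch of the manual SSE parsing mentioned in item 1 (illustrative only, not part of this PR; assumes OpenAI-compatible stream chunks and the data: [DONE] sentinel):

// Sketch: minimal SSE line parsing for a future stream: true mode.
async function* streamTokens(res: Response): AsyncGenerator<string> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.startsWith(':')) continue;     // SSE comment / keep-alive
      if (!line.startsWith('data: ')) continue;
      const data = line.slice('data: '.length);
      if (data === '[DONE]') return;          // end-of-stream sentinel
      const delta = JSON.parse(data)?.choices?.[0]?.delta?.content;
      if (delta) yield delta;                 // assemble delta tokens
    }
  }
}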

Definition of Done

  • Code merged to main with green CI.
  • Documentation updated and published.
  • Changelog entry present.
  • Able to run a manual prompt using OpenRouter provider and receive coherent output.

Open Questions

  • Default model final choice (openai/gpt-4o-mini vs a cheaper open model). (Assume openai/gpt-4o-mini unless directed otherwise.)
  • Include optional branding headers now? (Plan: yes, conditionally if env vars present.)
