feat: add new generic AI communication model #15409

Merged: 1 commit, May 13, 2025
2 changes: 1 addition & 1 deletion packages/ai-chat/src/common/chat-session-naming-service.ts
@@ -106,7 +106,7 @@ export class ChatSessionNamingAgent implements Agent {

const sessionId = generateUuid();
const requestId = generateUuid();
-    const request: UserRequest = {
+    const request: UserRequest & { agentId: string } = {
messages: [{
actor: 'user',
text: message,
@@ -54,7 +54,7 @@ export class FrontendLanguageModelServiceImpl extends LanguageModelServiceImpl {
}
}

- export const mergeRequestSettings = (requestSettings: RequestSetting[], modelId: string, providerId: string, agentId: string): RequestSetting => {
+ export const mergeRequestSettings = (requestSettings: RequestSetting[], modelId: string, providerId: string, agentId?: string): RequestSetting => {
const prioritizedSettings = Prioritizeable.prioritizeAllSync(requestSettings,
setting => getRequestSettingSpecificity(setting, {
modelId,
93 changes: 93 additions & 0 deletions packages/ai-core/src/common/language-model-interaction-model.ts
@@ -0,0 +1,93 @@
// *****************************************************************************
// Copyright (C) 2025 STMicroelectronics and others.
//
// This program and the accompanying materials are made available under the
// terms of the Eclipse Public License v. 2.0 which is available at
// http://www.eclipse.org/legal/epl-2.0.
//
// This Source Code may also be made available under the following Secondary
// Licenses when the conditions for such availability set forth in the Eclipse
// Public License v. 2.0 are satisfied: GNU General Public License, version 2
// with the GNU Classpath Exception which is available at
// https://www.gnu.org/software/classpath/license.html.
//
// SPDX-License-Identifier: EPL-2.0 OR GPL-2.0-only WITH Classpath-exception-2.0
// *****************************************************************************
import {
LanguageModelParsedResponse,
LanguageModelRequest,
LanguageModelStreamResponsePart,
LanguageModelTextResponse
} from './language-model';

/**
* A session tracking raw exchanges with language models, organized into exchange units.
*/
export interface LanguageModelSession {
/**
* Identifier of this Language Model Session. Corresponds to Chat session ids
*/
id: string;
/**
* All exchange units part of this session
*/
exchanges: LanguageModelExchange[];
}

/**
* An exchange unit representing a logical operation which may involve multiple model requests.
*/
export interface LanguageModelExchange {
Review comment (Contributor):
Just a nitpick: I am ok with LanguageModelExchange but I was wondering what you think about LanguageModelCompletion as a more direct term for what actually happens (i.e. an LLM request completion). Exchange sounds to me a bit bi-directional and not a perfect fit.

Reply (Member Author, @sdirix, May 13, 2025):
The exchange consists of request-response pairs, so it feels bidirectional to me. Personally I like "exchange" better than "completion". I would have never guessed that a "LanguageModelCompletion" is a set of request-response pairs, unrelated to "normal" LLM completion.

/**
* Identifier of the exchange unit.
*/
id: string;
/**
* All requests that constitute this exchange
*/
requests: LanguageModelExchangeRequest[];
/**
* Arbitrary metadata for the exchange
*/
metadata: {
agent?: string;
[key: string]: unknown;
}
}

/**
* Alternative to the LanguageModelStreamResponse, suited for inspection
*/
export interface LanguageModelMonitoredStreamResponse {
parts: LanguageModelStreamResponsePart[];
}

/**
* Represents a request to a language model within an exchange unit, capturing the request and its response.
*/
export interface LanguageModelExchangeRequest {
/**
* Identifier of the request. Might share the id with the parent exchange if there's only one request.
*/
id: string;
/**
* The actual request sent to the language model
*/
request: LanguageModelRequest;
/**
* Arbitrary metadata for the request. Might contain an agent id and timestamp.
*/
metadata: {
agent?: string;
timestamp?: number;
[key: string]: unknown;
};
/**
* The identifier of the language model the request was sent to
*/
languageModel: string;
/**
* The recorded response
*/
response: LanguageModelTextResponse | LanguageModelParsedResponse | LanguageModelMonitoredStreamResponse;
}
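To make the shape of the new model concrete, here is a small self-contained sketch. The interfaces are re-declared locally in trimmed form (`request` and `response` are reduced to `unknown`), and the session id, agent name, and model id are illustrative assumptions, not values from this PR:

```typescript
// Trimmed local re-declarations of the interfaces above, for illustration only.
interface LanguageModelExchangeRequest {
    id: string;
    request: unknown;
    metadata: { agent?: string; timestamp?: number; [key: string]: unknown };
    languageModel: string;
    response: unknown;
}

interface LanguageModelExchange {
    id: string;
    requests: LanguageModelExchangeRequest[];
    metadata: { agent?: string; [key: string]: unknown };
}

interface LanguageModelSession {
    id: string;
    exchanges: LanguageModelExchange[];
}

// One chat session containing a single exchange with one recorded request.
const session: LanguageModelSession = {
    id: 'chat-session-1',
    exchanges: [{
        id: 'request-1',
        metadata: { agent: 'chat-session-naming-agent' },
        requests: [{
            // Shares the id with the parent exchange: there is only one request.
            id: 'request-1',
            request: { messages: [{ actor: 'user', text: 'Hello' }] },
            languageModel: 'openai/gpt-4o',
            metadata: { agent: 'chat-session-naming-agent', timestamp: Date.now() },
            response: { text: 'Hi there!' }
        }]
    }]
};
console.log(session.exchanges[0].requests.length); // 1
```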
107 changes: 101 additions & 6 deletions packages/ai-core/src/common/language-model-service.ts
@@ -15,13 +15,29 @@
// *****************************************************************************

import { inject } from '@theia/core/shared/inversify';
-import { LanguageModel, LanguageModelRegistry, LanguageModelResponse, UserRequest } from './language-model';
-import { CommunicationRecordingService } from './communication-recording-service';
+import { isLanguageModelStreamResponse, LanguageModel, LanguageModelRegistry, LanguageModelResponse, LanguageModelStreamResponsePart, UserRequest } from './language-model';
+import { LanguageModelExchangeRequest, LanguageModelSession } from './language-model-interaction-model';
+import { Emitter } from '@theia/core';

export interface RequestAddedEvent {
type: 'requestAdded',
id: string;
}
export interface ResponseCompletedEvent {
type: 'responseCompleted',
requestId: string;
}
export type SessionEvent = RequestAddedEvent | ResponseCompletedEvent;
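Because `SessionEvent` is a discriminated union on `type`, consumers can switch on the discriminant and TypeScript narrows each branch to the right fields. A minimal sketch with the event types re-declared locally; the `describe` helper is hypothetical:

```typescript
// Local re-declarations of the event types above.
interface RequestAddedEvent { type: 'requestAdded'; id: string; }
interface ResponseCompletedEvent { type: 'responseCompleted'; requestId: string; }
type SessionEvent = RequestAddedEvent | ResponseCompletedEvent;

// The `type` discriminant narrows the union: each case sees only its own fields.
function describe(event: SessionEvent): string {
    switch (event.type) {
        case 'requestAdded':
            return `request ${event.id} added`;
        case 'responseCompleted':
            return `response for ${event.requestId} completed`;
    }
}

console.log(describe({ type: 'requestAdded', id: 'req-1' })); // request req-1 added
```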

export const LanguageModelService = Symbol('LanguageModelService');
export interface LanguageModelService {
onSessionChanged: Emitter<SessionEvent>['event'];
/**
* Collection of all recorded LanguageModelSessions.
*/
sessions: LanguageModelSession[];
/**
-     * Submit a language model request in the context of the given `chatRequest`.
+     * Submit a language model request; it will automatically be recorded within a LanguageModelSession.
*/
sendRequest(
languageModel: LanguageModel,
@@ -33,8 +49,10 @@ export class LanguageModelServiceImpl implements LanguageModelService {
@inject(LanguageModelRegistry)
protected languageModelRegistry: LanguageModelRegistry;

-    @inject(CommunicationRecordingService)
-    protected recordingService: CommunicationRecordingService;
+    sessions: LanguageModelSession[] = [];
+
+    protected sessionChangedEmitter = new Emitter<SessionEvent>();
+    onSessionChanged = this.sessionChangedEmitter.event;

async sendRequest(
languageModel: LanguageModel,
@@ -53,7 +71,84 @@
return true;
});

-        return languageModel.request(languageModelRequest, languageModelRequest.cancellationToken);
+        let response = await languageModel.request(languageModelRequest, languageModelRequest.cancellationToken);
let storedResponse: LanguageModelExchangeRequest['response'];
if (isLanguageModelStreamResponse(response)) {
const parts: LanguageModelStreamResponsePart[] = [];
response = {
...response,
stream: createLoggingAsyncIterable(response.stream,
parts,
() => this.sessionChangedEmitter.fire({ type: 'responseCompleted', requestId: languageModelRequest.subRequestId ?? languageModelRequest.requestId }))
};
storedResponse = { parts };
} else {
storedResponse = response;
}
this.storeRequest(languageModel, languageModelRequest, storedResponse);

return response;
}

protected storeRequest(languageModel: LanguageModel, languageModelRequest: UserRequest, response: LanguageModelExchangeRequest['response']): void {
// Find or create the session for this request
let session = this.sessions.find(s => s.id === languageModelRequest.sessionId);
if (!session) {
session = {
id: languageModelRequest.sessionId,
exchanges: []
};
this.sessions.push(session);
}

// Find or create the exchange for this request
let exchange = session.exchanges.find(r => r.id === languageModelRequest.requestId);
if (!exchange) {
exchange = {
id: languageModelRequest.requestId,
requests: [],
metadata: { agent: languageModelRequest.agentId }
};
session.exchanges.push(exchange);
}

// Create and add the LanguageModelExchangeRequest to the exchange
const exchangeRequest: LanguageModelExchangeRequest = {
id: languageModelRequest.subRequestId ?? languageModelRequest.requestId,
request: languageModelRequest,
languageModel: languageModel.id,
response: response,
metadata: {}
};

exchange.requests.push(exchangeRequest);

exchangeRequest.metadata.agent = languageModelRequest.agentId;
exchangeRequest.metadata.timestamp = Date.now();

this.sessionChangedEmitter.fire({ type: 'requestAdded', id: languageModelRequest.subRequestId ?? languageModelRequest.requestId });
}

}

/**
* Creates an AsyncIterable wrapper that stores each yielded item while preserving the
* original AsyncIterable behavior.
*/
async function* createLoggingAsyncIterable(
stream: AsyncIterable<LanguageModelStreamResponsePart>,
parts: LanguageModelStreamResponsePart[],
streamFinished: () => void
): AsyncIterable<LanguageModelStreamResponsePart> {
try {
for await (const part of stream) {
parts.push(part);
yield part;
}
} catch (error) {
parts.push({ content: `[NOT FROM LLM] An error occurred: ${error.message}` });
throw error;
} finally {
streamFinished();
}
}
24 changes: 23 additions & 1 deletion packages/ai-core/src/common/language-model.ts
@@ -159,10 +159,32 @@ export interface ResponseFormatJsonSchema {
};
}

/**
 * The UserRequest extends the "pure" LanguageModelRequest with cancellation support as well as
 * logging metadata.
 * The additional metadata might also be used for other purposes, for example to query default
 * request settings based on the agent id and merge them with the request settings handed over.
 */
export interface UserRequest extends LanguageModelRequest {
/**
* Identifier of the Ai/ChatSession
*/
sessionId: string;
/**
* Identifier of the semantic request. Corresponds to request id in Chat sessions
*/
requestId: string;
- agentId: string;
/**
* Id of a sub request in case a semantic request consists of multiple sub requests
*/
subRequestId?: string;
/**
* Optional agent identifier in case the request was sent by an agent
*/
agentId?: string;
/**
* Cancellation support
*/
cancellationToken?: CancellationToken;
}
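With `agentId` now optional, a request sent from outside any agent type-checks without it. A minimal sketch in which `LanguageModelRequest` is trimmed to just its messages for illustration; the field values are assumptions:

```typescript
// Trimmed local stand-ins for the interfaces above.
interface LanguageModelRequest {
    messages: { actor: 'user' | 'ai' | 'system'; text: string }[];
}

interface UserRequest extends LanguageModelRequest {
    sessionId: string;
    requestId: string;
    subRequestId?: string;
    agentId?: string;
}

// A request sent without an agent: agentId is simply omitted.
const request: UserRequest = {
    sessionId: 'chat-session-1',
    requestId: 'request-1',
    messages: [{ actor: 'user', text: 'Summarize this session' }]
};
console.log(request.agentId === undefined); // true
```

This is the change that makes the communication model generic: previously every `UserRequest` had to name an agent, which tied the recording infrastructure to agent-based callers.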
