
Commit d3e7627

refactor(plugins): Improve OpenAPI handling, Show Multiple Plugins, & Other Improvements (#845)
* feat(PluginsClient.js): add conversationId to options object in the constructor
  feat(PluginsClient.js): add support for Code Interpreter plugin
  feat(PluginsClient.js): add support for Code Interpreter plugin in the availableTools manifest
  feat(CodeInterpreter.js): add CodeInterpreterTools module
  feat(CodeInterpreter.js): add RunCommand class
  feat(CodeInterpreter.js): add ReadFile class
  feat(CodeInterpreter.js): add WriteFile class
  feat(handleTools.js): add support for loading Code Interpreter plugin
* chore(api): update langchain dependency to version 0.0.123
* fix(CodeInterpreter.js): add support for extracting environment from code
  fix(WriteFile.js): add support for extracting environment from data
  fix(extractionChain.js): add utility functions for creating extraction chain from Zod schema
  fix(handleTools.js): refactor getOpenAIKey function to handle user-provided API key
  fix(handleTools.js): pass model and openAIApiKey to CodeInterpreter constructor
* fix(tools): rename CodeInterpreterTools to E2BTools
  fix(tools): rename code_interpreter pluginKey to e2b_code_interpreter
* chore(PluginsClient.js): comment out unused import and function findMessageContent
  feat(PluginsClient.js): add support for CodeSherpa plugin
  feat(PluginsClient.js): add CodeSherpaTools to available tools
  feat(PluginsClient.js): update manifest.json to include CodeSherpa plugin
  feat(CodeSherpaTools.js): create RunCode and RunCommand classes for CodeSherpa plugin
  feat(E2BTools.js): Add E2BTools module for extracting environment from code and running commands, reading and writing files
  fix(codesherpa.js): Remove codesherpa module as it is no longer needed
  feat(handleTools.js): add support for CodeSherpaTools in loadTools function
  feat(loadToolSuite.js): create loadToolSuite utility function to load a suite of tools
* feat(PluginsClient.js): add support for CodeSherpa v2 plugin
  feat(PluginsClient.js): add CodeSherpa v1 plugin to available tools
  feat(PluginsClient.js): add CodeSherpa v2 plugin to available tools
  feat(PluginsClient.js): update manifest.json for CodeSherpa v1 plugin
  feat(PluginsClient.js): update manifest.json for CodeSherpa v2 plugin
  feat(CodeSherpa.js): implement CodeSherpa plugin for interactive code and shell command execution
  feat(CodeSherpaTools.js): implement RunCode and RunCommand plugins for CodeSherpa v1
  feat(CodeSherpaTools.js): update RunCode and RunCommand plugins for CodeSherpa v2
  fix(handleTools.js): add CodeSherpa import statement
  fix(handleTools.js): change pluginKey from 'codesherpa' to 'codesherpa_tools'
  fix(handleTools.js): remove model and openAIApiKey from options object in e2b_code_interpreter tool
  fix(handleTools.js): remove openAIApiKey from options object in codesherpa_tools tool
  fix(loadToolSuite.js): remove model and openAIApiKey parameters from loadToolSuite function
* feat(initializeFunctionsAgent.js): add prefix to agentArgs in initializeFunctionsAgent function
  The prefix is added to the agentArgs in the initializeFunctionsAgent function. This prefix is used to provide instructions to the agent when it receives any instructions from a webpage, plugin, or other tool. The agent will notify the user immediately and ask them if they wish to carry out or ignore the instructions.
* feat(PluginsClient.js): add ChatTool to the list of tools if it meets the conditions
  feat(tools/index.js): import and export ChatTool
  feat(ChatTool.js): create ChatTool class with necessary properties and methods
* fix(initializeFunctionsAgent.js): update PREFIX message to include sharing all output from the tool
  fix(E2BTools.js): update descriptions for RunCommand, ReadFile, and WriteFile plugins to provide more clarity and context
* chore: rebuild package-lock after rebase
* chore: remove deleted file from rebase
* wip: refactor plugin message handling to mirror chat.openai.com, handle incoming stream for plugin use
* wip: new plugin handling
* wip: show multiple plugins handling
* feat(plugins): save new plugins array
* chore: bump langchain
* feat(experimental): support streaming in between plugins
* refactor(PluginsClient): factor out helper methods to avoid bloating the class
  refactor(gptPlugins): use agent action for mapping the name of action
* fix(handleTools): fix tests by adding condition to return original toolFunctions map
* refactor(MessageContent): Allow the last index to be last in case it has text (may change with streaming)
* feat(Plugins): add handleParsingErrors, useful when LLM does not invoke function params
* chore: edit out experimental codesherpa integration
* refactor(OpenAPIPlugin): rework tool to be 'function-first', as the spec functions are explicitly passed to agent model
* refactor(initializeFunctionsAgent): improve error handling and system message
* refactor(CodeSherpa, Wolfram): optimize token usage by delegating bulk of instructions to system message
* style(Plugins): match official style with input/outputs
* chore: remove unnecessary console logs used for testing
* fix(abortMiddleware): render markdown when message is aborted
* feat(plugins): add BrowserOp
* refactor(OpenAPIPlugin): improve prompt handling
* fix(useGenerations): hide edit button when message is submitting/streaming
* refactor(loadSpecs): optimize OpenAPI spec loading by only loading requested specs instead of all of them (see the sketch after this list)
* fix(loadSpecs): will retain original behavior when no tools are passed to the function
* fix(MessageContent): ensure cursor only shows up for last message and last display index
  fix(Message): show legacy plugin and pass isLast to Content
* chore: remove console.logs
* docs: update docs based on breaking changes and new features
  refactor(structured/SD): use description_for_model for detailed prompting
* docs(azure): make plugins section more clear
* refactor(structured/SD): change default payload to SD-WebUI to prefer realism and config for SDXL
* refactor(structured/SD): further improve system message prompt
* docs: update breaking changes after rebase
* refactor(MessageContent): factor out EditMessage, types, Container to separate files, rename Content -> Markdown
* fix(CodeInterpreter): linting errors
* chore: reduce browser console logs from message streams
* chore: re-enable debug logs for plugins/langchain to help with user troubleshooting
* chore(manifest.json): add [Experimental] tag to CodeInterpreter plugins, which are not intended as the end-all be-all implementation of this feature for Librechat
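
To illustrate the loadSpecs change noted above (load only the requested OpenAPI specs, and retain the old load-everything behavior when no tools are passed), here is a minimal sketch in the spirit of that refactor. It is not the actual LibreChat implementation: the directory argument, the JSON-file-per-spec layout, and the name_for_model field are assumptions for illustration only.

// Minimal sketch, not the actual loadSpecs implementation.
// Assumes each plugin manifest is a JSON file exposing a `name_for_model` field.
const fs = require('fs').promises;
const path = require('path');

async function loadSpecs({ directory, tools }) {
  const files = (await fs.readdir(directory)).filter((file) => file.endsWith('.json'));
  const specs = [];

  for (const file of files) {
    const spec = JSON.parse(await fs.readFile(path.join(directory, file), 'utf8'));

    // With no requested tools, retain the original behavior and load every spec;
    // otherwise only keep the specs that were actually requested.
    if (!tools || tools.length === 0 || tools.includes(spec.name_for_model)) {
      specs.push(spec);
    }
  }

  return specs;
}
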
1 parent 66b8580 commit d3e7627


51 files changed: +3607 additions, -2355 deletions

api/app/clients/PluginsClient.js

Lines changed: 78 additions & 165 deletions
@@ -1,12 +1,10 @@
 const OpenAIClient = require('./OpenAIClient');
-const { ChatOpenAI } = require('langchain/chat_models/openai');
 const { CallbackManager } = require('langchain/callbacks');
+const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
 const { initializeCustomAgent, initializeFunctionsAgent } = require('./agents/');
-const { findMessageContent } = require('../../utils');
-const { loadTools } = require('./tools/util');
+const { addImages, createLLM, buildErrorInput, buildPromptPrefix } = require('./agents/methods/');
 const { SelfReflectionTool } = require('./tools/');
-const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
-const { instructions, imageInstructions, errorInstructions } = require('./prompts/instructions');
+const { loadTools } = require('./tools/util');
 
 class PluginsClient extends OpenAIClient {
   constructor(apiKey, options = {}) {
@@ -19,89 +17,6 @@ class PluginsClient extends OpenAIClient {
     this.executor = null;
   }
 
-  getActions(input = null) {
-    let output = 'Internal thoughts & actions taken:\n"';
-    let actions = input || this.actions;
-
-    if (actions[0]?.action && this.functionsAgent) {
-      actions = actions.map((step) => ({
-        log: `Action: ${step.action?.tool || ''}\nInput: ${
-          JSON.stringify(step.action?.toolInput) || ''
-        }\nObservation: ${step.observation}`,
-      }));
-    } else if (actions[0]?.action) {
-      actions = actions.map((step) => ({
-        log: `${step.action.log}\nObservation: ${step.observation}`,
-      }));
-    }
-
-    actions.forEach((actionObj, index) => {
-      output += `${actionObj.log}`;
-      if (index < actions.length - 1) {
-        output += '\n';
-      }
-    });
-
-    return output + '"';
-  }
-
-  buildErrorInput(message, errorMessage) {
-    const log = errorMessage.includes('Could not parse LLM output:')
-      ? `A formatting error occurred with your response to the human's last message. You didn't follow the formatting instructions. Remember to ${instructions}`
-      : `You encountered an error while replying to the human's last message. Attempt to answer again or admit an answer cannot be given.\nError: ${errorMessage}`;
-
-    return `
-      ${log}
-
-      ${this.getActions()}
-
-      Human's last message: ${message}
-      `;
-  }
-
-  buildPromptPrefix(result, message) {
-    if ((result.output && result.output.includes('N/A')) || result.output === undefined) {
-      return null;
-    }
-
-    if (
-      result?.intermediateSteps?.length === 1 &&
-      result?.intermediateSteps[0]?.action?.toolInput === 'N/A'
-    ) {
-      return null;
-    }
-
-    const internalActions =
-      result?.intermediateSteps?.length > 0
-        ? this.getActions(result.intermediateSteps)
-        : 'Internal Actions Taken: None';
-
-    const toolBasedInstructions = internalActions.toLowerCase().includes('image')
-      ? imageInstructions
-      : '';
-
-    const errorMessage = result.errorMessage ? `${errorInstructions} ${result.errorMessage}\n` : '';
-
-    const preliminaryAnswer =
-      result.output?.length > 0 ? `Preliminary Answer: "${result.output.trim()}"` : '';
-    const prefix = preliminaryAnswer
-      ? 'review and improve the answer you generated using plugins in response to the User Message below. The user hasn\'t seen your answer or thoughts yet.'
-      : 'respond to the User Message below based on your preliminary thoughts & actions.';
-
-    return `As a helpful AI Assistant, ${prefix}${errorMessage}\n${internalActions}
-${preliminaryAnswer}
-Reply conversationally to the User based on your ${
-  preliminaryAnswer ? 'preliminary answer, ' : ''
-}internal actions, thoughts, and observations, making improvements wherever possible, but do not modify URLs.
-${
-  preliminaryAnswer
-    ? ''
-    : '\nIf there is an incomplete thought or action, you are expected to complete it in your response now.\n'
-}You must cite sources if you are using any web links. ${toolBasedInstructions}
-Only respond with your conversational reply to the following User Message:
-"${message}"`;
-  }
-
   setOptions(options) {
     this.agentOptions = options.agentOptions;
     this.functionsAgent = this.agentOptions?.agent === 'functions';
@@ -149,27 +64,6 @@ Only respond with your conversational reply to the following User Message:
     };
   }
 
-  createLLM(modelOptions, configOptions) {
-    let azure = {};
-    let credentials = { openAIApiKey: this.openAIApiKey };
-    let configuration = {
-      apiKey: this.openAIApiKey,
-    };
-
-    if (this.azure) {
-      credentials = {};
-      configuration = {};
-      ({ azure } = this);
-    }
-
-    if (this.options.debug) {
-      console.debug('createLLM: configOptions');
-      console.debug(configOptions);
-    }
-
-    return new ChatOpenAI({ credentials, configuration, ...azure, ...modelOptions }, configOptions);
-  }
-
   async initialize({ user, message, onAgentAction, onChainEnd, signal }) {
     const modelOptions = {
       modelName: this.agentOptions.model,
@@ -182,35 +76,36 @@ Only respond with your conversational reply to the following User Message:
       configOptions.basePath = this.langchainProxy;
     }
 
-    const model = this.createLLM(modelOptions, configOptions);
+    const model = createLLM({
+      modelOptions,
+      configOptions,
+      openAIApiKey: this.openAIApiKey,
+      azure: this.azure,
+    });
 
     if (this.options.debug) {
       console.debug(
         `<-----Agent Model: ${model.modelName} | Temp: ${model.temperature} | Functions: ${this.functionsAgent}----->`,
       );
     }
 
-    this.availableTools = await loadTools({
+    this.tools = await loadTools({
       user,
       model,
       tools: this.options.tools,
       functions: this.functionsAgent,
       options: {
         openAIApiKey: this.openAIApiKey,
+        conversationId: this.conversationId,
         debug: this.options?.debug,
         message,
       },
     });
-    // load tools
-    for (const tool of this.options.tools) {
-      const validTool = this.availableTools[tool];
-
-      if (tool === 'plugins') {
-        const plugins = await validTool();
-        this.tools = [...this.tools, ...plugins];
-      } else if (validTool) {
-        this.tools.push(await validTool());
-      }
+
+    if (this.tools.length > 0 && !this.functionsAgent) {
+      this.tools.push(new SelfReflectionTool({ message, isGpt3: false }));
+    } else if (this.tools.length === 0) {
+      return;
     }
 
     if (this.options.debug) {
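
For reference, the createLLM helper now imported from './agents/methods/' and called above presumably mirrors the class method removed earlier in this diff, with the API key and Azure settings passed in explicitly instead of read from the instance. A sketch under that assumption (the helper module itself is not part of this diff, so details may differ):

// Sketch reconstructed from the removed createLLM class method; the real helper may differ.
const { ChatOpenAI } = require('langchain/chat_models/openai');

function createLLM({ modelOptions, configOptions, openAIApiKey, azure }) {
  let credentials = { openAIApiKey };
  let configuration = { apiKey: openAIApiKey };
  let azureOptions = {};

  // When Azure settings are provided, they replace the key-based credentials entirely.
  if (azure) {
    credentials = {};
    configuration = {};
    azureOptions = azure;
  }

  return new ChatOpenAI({ credentials, configuration, ...azureOptions, ...modelOptions }, configOptions);
}
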
@@ -220,21 +115,15 @@ Only respond with your conversational reply to the following User Message:
       console.debug(this.tools.map((tool) => tool.name));
     }
 
-    if (this.tools.length > 0 && !this.functionsAgent) {
-      this.tools.push(new SelfReflectionTool({ message, isGpt3: false }));
-    } else if (this.tools.length === 0) {
-      return;
-    }
-
-    const handleAction = (action, callback = null) => {
+    const handleAction = (action, runId, callback = null) => {
       this.saveLatestAction(action);
 
       if (this.options.debug) {
         console.debug('Latest Agent Action ', this.actions[this.actions.length - 1]);
       }
 
       if (typeof callback === 'function') {
-        callback(action);
+        callback(action, runId);
       }
     };
 
@@ -258,8 +147,8 @@ Only respond with your conversational reply to the following User Message:
       verbose: this.options.debug,
       returnIntermediateSteps: true,
       callbackManager: CallbackManager.fromHandlers({
-        async handleAgentAction(action) {
-          handleAction(action, onAgentAction);
+        async handleAgentAction(action, runId) {
+          handleAction(action, runId, onAgentAction);
         },
         async handleChainEnd(action) {
           if (typeof onChainEnd === 'function') {
@@ -274,12 +163,17 @@ Only respond with your conversational reply to the following User Message:
     }
   }
 
-  async executorCall(message, signal) {
+  async executorCall(message, { signal, stream, onToolStart, onToolEnd }) {
     let errorMessage = '';
     const maxAttempts = 1;
 
     for (let attempts = 1; attempts <= maxAttempts; attempts++) {
-      const errorInput = this.buildErrorInput(message, errorMessage);
+      const errorInput = buildErrorInput({
+        message,
+        errorMessage,
+        actions: this.actions,
+        functionsAgent: this.functionsAgent,
+      });
       const input = attempts > 1 ? errorInput : message;
 
       if (this.options.debug) {
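
Likewise, buildErrorInput (together with the getActions formatting it depends on) now lives in './agents/methods/' and receives actions and functionsAgent explicitly rather than reading them from the instance. A sketch reconstructed from the class methods removed earlier in this diff; the actual helpers may differ in detail, and the instructions constant here is a placeholder for the prompt the real module imports:

// Sketch reconstructed from the removed getActions/buildErrorInput methods.
const instructions = '<formatting instructions placeholder>';

function getActions(actions = [], functionsAgent = false) {
  const logs = actions[0]?.action
    ? actions.map((step) =>
      functionsAgent
        ? `Action: ${step.action?.tool || ''}\nInput: ${
          JSON.stringify(step.action?.toolInput) || ''
        }\nObservation: ${step.observation}`
        : `${step.action.log}\nObservation: ${step.observation}`,
    )
    : actions.map((step) => step.log);

  return `Internal thoughts & actions taken:\n"${logs.join('\n')}"`;
}

function buildErrorInput({ message, errorMessage, actions, functionsAgent }) {
  const log = errorMessage.includes('Could not parse LLM output:')
    ? `A formatting error occurred with your response to the human's last message. You didn't follow the formatting instructions. Remember to ${instructions}`
    : `You encountered an error while replying to the human's last message. Attempt to answer again or admit an answer cannot be given.\nError: ${errorMessage}`;

  return `
    ${log}

    ${getActions(actions, functionsAgent)}

    Human's last message: ${message}
    `;
}
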
@@ -291,12 +185,28 @@ Only respond with your conversational reply to the following User Message:
       }
 
       try {
-        this.result = await this.executor.call({ input, signal });
+        this.result = await this.executor.call({ input, signal }, [
+          {
+            async handleToolStart(...args) {
+              await onToolStart(...args);
+            },
+            async handleToolEnd(...args) {
+              await onToolEnd(...args);
+            },
+            async handleLLMEnd(output) {
+              const { generations } = output;
+              const { text } = generations[0][0];
+              if (text && typeof stream === 'function') {
+                await stream(text);
+              }
+            },
+          },
+        ]);
         break; // Exit the loop if the function call is successful
       } catch (err) {
         console.error(err);
         errorMessage = err.message;
-        const content = findMessageContent(message);
+        let content = '';
         if (content) {
           errorMessage = content;
           break;
@@ -311,31 +221,6 @@ Only respond with your conversational reply to the following User Message:
     }
   }
 
-  addImages(intermediateSteps, responseMessage) {
-    if (!intermediateSteps || !responseMessage) {
-      return;
-    }
-
-    intermediateSteps.forEach((step) => {
-      const { observation } = step;
-      if (!observation || !observation.includes('![')) {
-        return;
-      }
-
-      // Extract the image file path from the observation
-      const observedImagePath = observation.match(/\(\/images\/.*\.\w*\)/g)[0];
-
-      // Check if the responseMessage already includes the image file path
-      if (!responseMessage.text.includes(observedImagePath)) {
-        // If the image file path is not found, append the whole observation
-        responseMessage.text += '\n' + observation;
-        if (this.options.debug) {
-          console.debug('added image from intermediateSteps');
-        }
-      }
-    });
-  }
-
   async handleResponseMessage(responseMessage, saveOptions, user) {
     responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
     responseMessage.completionTokens = responseMessage.tokenCount;
@@ -351,7 +236,9 @@ Only respond with your conversational reply to the following User Message:
       this.setOptions(opts);
       return super.sendMessage(message, opts);
     }
-    console.log('Plugins sendMessage', message, opts);
+    if (this.options.debug) {
+      console.log('Plugins sendMessage', message, opts);
+    }
     const {
       user,
       conversationId,
@@ -360,8 +247,11 @@ Only respond with your conversational reply to the following User Message:
       userMessage,
       onAgentAction,
       onChainEnd,
+      onToolStart,
+      onToolEnd,
     } = await this.handleStartMethods(message, opts);
 
+    this.conversationId = conversationId;
     this.currentMessages.push(userMessage);
 
     let {
@@ -413,19 +303,38 @@ Only respond with your conversational reply to the following User Message:
       onAgentAction,
       onChainEnd,
       signal: this.abortController.signal,
+      onProgress: opts.onProgress,
+    });
+
+    // const stream = async (text) => {
+    //   await this.generateTextStream.call(this, text, opts.onProgress, { delay: 1 });
+    // };
+    await this.executorCall(message, {
+      signal: this.abortController.signal,
+      // stream,
+      onToolStart,
+      onToolEnd,
     });
-    await this.executorCall(message, this.abortController.signal);
 
     // If message was aborted mid-generation
    if (this.result?.errorMessage?.length > 0 && this.result?.errorMessage?.includes('cancel')) {
      responseMessage.text = 'Cancelled.';
      return await this.handleResponseMessage(responseMessage, saveOptions, user);
    }

+    if (this.agentOptions.skipCompletion && this.result.output && this.functionsAgent) {
+      const partialText = opts.getPartialText();
+      const trimmedPartial = opts.getPartialText().replaceAll(':::plugin:::\n', '');
+      responseMessage.text =
+        trimmedPartial.length === 0 ? `${partialText}${this.result.output}` : partialText;
+      await this.generateTextStream(this.result.output, opts.onProgress, { delay: 5 });
+      return await this.handleResponseMessage(responseMessage, saveOptions, user);
+    }
+
     if (this.agentOptions.skipCompletion && this.result.output) {
       responseMessage.text = this.result.output;
-      this.addImages(this.result.intermediateSteps, responseMessage);
-      await this.generateTextStream(this.result.output, opts.onProgress, { delay: 8 });
+      addImages(this.result.intermediateSteps, responseMessage);
+      await this.generateTextStream(this.result.output, opts.onProgress, { delay: 5 });
       return await this.handleResponseMessage(responseMessage, saveOptions, user);
     }
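
The skipCompletion branch for the functions agent above keeps whatever has already been streamed to the client, and only appends the agent's final output when the streamed partial text consists solely of the ':::plugin:::' delimiter (or is empty). A standalone illustration of that string handling, with sample values only, not LibreChat code:

// Standalone illustration of the partial-text handling in the skipCompletion branch above.
const output = 'The forecast calls for rain.'; // hypothetical agent output
const partialText = ':::plugin:::\n'; // hypothetical streamed text containing only the delimiter

const trimmedPartial = partialText.replaceAll(':::plugin:::\n', '');
const text = trimmedPartial.length === 0 ? `${partialText}${output}` : partialText;

console.log(text); // logs the delimiter line followed by the agent output
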

@@ -434,7 +343,11 @@ Only respond with your conversational reply to the following User Message:
       console.debug(this.result);
     }
 
-    const promptPrefix = this.buildPromptPrefix(this.result, message);
+    const promptPrefix = buildPromptPrefix({
+      result: this.result,
+      message,
+      functionsAgent: this.functionsAgent,
+    });
 
     if (this.options.debug) {
       console.debug('Plugins: promptPrefix');
