
Commit b680d67

docs: Add examples for llm connector

1 parent 47f99ad commit b680d67

File tree

10 files changed: +3792 -4548 lines changed

docs/examples/gemini_integration.md

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
+---
+sidebar_position: 9
+title: Gemini Integration
+description: gemini integration chatbot example
+keywords: [react, chat, chatbot, chatbotify]
+---
+
+# Gemini Integration
+
+The following is an example showing how to integrate [**Google Gemini**](https://ai.google.dev/gemini-api/docs) into React ChatBotify. It leverages the [**LLM Connector Plugin**](https://www.npmjs.com/package/@rcb-plugins/llm-connector), which is maintained separately in the [**React ChatBotify Plugins**](https://github.com/orgs/React-ChatBotify-Plugins) organization. This example also uses the [**GeminiProvider**](https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/Gemini.md), which ships by default with the LLM Connector Plugin. If you require support with the plugin, please reach out on the [**plugins discord**](https://discord.gg/J6pA4v3AMW) instead.
+
+:::tip
+
+The plugin also comes with other default providers, which you can try out in the [**LLM Conversation Example**](/docs/examples/llm_conversation.md) and [**OpenAI Integration Example**](/docs/examples/openai_integration.md).
+
+:::
+
+:::tip
+
+If you expect your LLM responses to contain markdown, consider using the [**Markdown Renderer Plugin**](https://www.npmjs.com/package/@rcb-plugins/markdown-renderer) as well!
+
+:::
+
+:::caution
+
+This example uses 'direct' mode for demonstration purposes, which exposes API keys client-side. In production, you should proxy your requests and keep your API keys stored server-side. A lightweight demo project for an LLM proxy can be found [**here**](https://github.com/tjtanjin/llm-proxy). You may also refer to [**this article**](https://tjtanjin.medium.com/how-to-build-and-integrate-a-react-chatbot-with-llms-a-react-chatbotify-guide-part-4-b40cd59fd6e6) for more details.
+
+:::
+
+```jsx live noInline title=MyChatBot.js
+const MyChatBot = () => {
+  // gemini api key, required since we're using 'direct' mode for testing
+  let apiKey = "";
+
+  // initialize the plugin
+  const plugins = [LlmConnector()];
+
+  // example flow for testing
+  const flow: Flow = {
+    start: {
+      message: "Hello! Make sure you've set your API key before getting started!",
+      options: ["I am ready!"],
+      chatDisabled: true,
+      path: async (params) => {
+        if (!apiKey) {
+          await params.simulateStreamMessage("You have not set your API key!");
+          return "start";
+        }
+        await params.simulateStreamMessage("Ask away!");
+        return "gemini";
+      },
+    },
+    gemini: {
+      llmConnector: {
+        // provider configuration guide:
+        // https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/Gemini.md
+        provider: new GeminiProvider({
+          mode: 'direct',
+          model: 'gemini-1.5-flash',
+          responseFormat: 'stream',
+          apiKey: apiKey,
+        }),
+        outputType: 'character',
+      },
+    },
+  };
+
+  return (
+    <ChatBot
+      settings={{general: {embedded: true}, chatHistory: {storageKey: "example_gemini_integration"}}}
+      plugins={plugins}
+      flow={flow}
+    ></ChatBot>
+  );
+};
+
+render(<MyChatBot/>)
+```

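A note on the caution block in the new page above: it recommends proxying requests so the Gemini API key never reaches the browser. The sketch below is only a rough illustration of that advice; it is not part of this commit and not the linked llm-proxy project, and the Express setup, the `/api/gemini` route name, and the error handling are assumptions made for the example, with the key read from an environment variable.

```js
// Illustrative sketch of a server-side proxy for the Gemini call (assumed names, not part of this commit).
// Requires Node 18+ for the global fetch, plus `npm install express`.
const express = require('express');

const app = express();
app.use(express.json());

// The chatbot posts the user's message here instead of calling Gemini directly,
// so the API key stays on the server in an environment variable.
app.post('/api/gemini', async (req, res) => {
  try {
    const apiKey = process.env.GEMINI_API_KEY;
    const response = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ contents: [{ parts: [{ text: req.body.message }] }] }),
      }
    );
    const data = await response.json();
    res.json({ text: data.candidates?.[0]?.content?.parts?.[0]?.text ?? '' });
  } catch (err) {
    res.status(500).json({ error: 'Upstream request failed' });
  }
});

app.listen(3001);
```

With something like this in place, the provider would be pointed at the proxy rather than run in 'direct' mode; see the linked article and llm-proxy project for a fuller treatment.
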
docs/examples/llm_conversation.md

Lines changed: 79 additions & 48 deletions
@@ -1,75 +1,106 @@
 ---
-sidebar_position: 10
+sidebar_position: 8
 title: LLM Conversation
 description: llm conversation chatbot example
 keywords: [react, chat, chatbot, chatbotify]
 ---
 
 # LLM Conversation
 
-The following is an example showing how to use React ChatBotify to front conversations with LLMs (demonstrated using OpenAI/ChatGPT). If you wish to try out this example, you will have to obtain and provide an [OpenAI API key](https://platform.openai.com/docs/introduction) (note that OpenAI charges for API key use). Alternatively, you may refer to the [**real-time stream**](/docs/examples/real_time_stream) example which uses [**Google Gemini**](https://ai.google.dev/) that comes with free API keys.
+The following is an example showing how to integrate in-browser models (e.g. via [**WebLlm**](https://webllm.mlc.ai/)/[**Wllama**](https://www.npmjs.com/package/@wllama/wllama)) into React ChatBotify. It leverages the [**LLM Connector Plugin**](https://www.npmjs.com/package/@rcb-plugins/llm-connector), which is maintained separately in the [**React ChatBotify Plugins**](https://github.com/orgs/React-ChatBotify-Plugins) organization. This example also uses the [**WebLlmProvider**](https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/WebLlm.md) and [**WllamaProvider**](https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/Wllama.md), both of which ship by default with the LLM Connector Plugin. If you require support with the plugin, please reach out on the [**plugins discord**](https://discord.gg/J6pA4v3AMW) instead.
+
+:::tip
+
+The plugin also comes with other default providers, which you can try out in the [**OpenAI Integration Example**](/docs/examples/openai_integration.md) and [**Gemini Integration Example**](/docs/examples/gemini_integration.md).
+
+:::
+
+:::tip
+
+If you expect your LLM responses to contain markdown, consider using the [**Markdown Renderer Plugin**](https://www.npmjs.com/package/@rcb-plugins/markdown-renderer) as well!
+
+:::
 
 :::caution
 
-This is for testing purposes only, **do not** embed your API keys on your website in production. You may refer to [**this article**](https://tjtanjin.medium.com/how-to-build-and-integrate-a-react-chatbot-with-llms-a-react-chatbotify-guide-part-4-b40cd59fd6e6) for more details.
+Running models in the browser can be sluggish (especially if a large model is chosen). In production, you should pick a reasonably sized model or proxy your requests to a backend. A lightweight demo project for an LLM proxy can be found [**here**](https://github.com/tjtanjin/llm-proxy). You may also refer to [**this article**](https://tjtanjin.medium.com/how-to-build-and-integrate-a-react-chatbot-with-llms-a-react-chatbotify-guide-part-4-b40cd59fd6e6) for more details.
 
 :::
 
 ```jsx live noInline title=MyChatBot.js
 const MyChatBot = () => {
-  let apiKey = null;
-  let modelType = "gpt-3.5-turbo";
-  let hasError = false;
-
-  // example openai conversation
-  // you can replace with other LLMs such as Google Gemini
-  const call_openai = async (params) => {
-    try {
-      const openai = new OpenAI({
-        apiKey: apiKey,
-        dangerouslyAllowBrowser: true // required for testing on browser side, not recommended
-      });
-
-      // for streaming responses in parts (real-time), refer to real-time stream example
-      const chatCompletion = await openai.chat.completions.create({
-        // conversation history is not shown in this example as message length is kept to 1
-        messages: [{ role: 'user', content: params.userInput }],
-        model: modelType,
-      });
-
-      await params.injectMessage(chatCompletion.choices[0].message.content);
-    } catch (error) {
-      await params.injectMessage("Unable to load model, is your API Key valid?");
-      hasError = true;
+  // initialize the plugin
+  const plugins = [LlmConnector()];
+
+  // checks user message stop condition to end llm conversation
+  const onUserMessageCheck = async (message: Message) => {
+    if (
+      typeof message.content === 'string' &&
+      message.content.toUpperCase() === 'RESTART'
+    ) {
+      return 'start';
     }
+  };
+
+  // checks key down stop condition to end llm conversation
+  const onKeyDownCheck = async (event: KeyboardEvent) => {
+    if (event.key === 'Escape') {
+      return 'start';
+    }
+    return null;
   }
-  const flow={
+
+  // example flow for testing
+  const flow: Flow = {
     start: {
-      message: "Enter your OpenAI api key and start asking away!",
-      path: "api_key",
-      isSensitive: true
+      message: "Hello, pick a model runtime to get started!",
+      options: ["WebLlm", "Wllama"],
+      chatDisabled: true,
+      path: async (params) => {
+        await params.simulateStreamMessage("Type 'RESTART' or hit 'ESC' to pick another runtime!");
+        await params.simulateStreamMessage("Ask away!");
+        return params.userInput.toLowerCase();
+      },
     },
-    api_key: {
-      message: (params) => {
-        apiKey = params.userInput.trim();
-        return "Ask me anything!";
+    webllm: {
+      llmConnector: {
+        // provider configuration guide:
+        // https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/WebLlm.md
+        provider: new WebLlmProvider({
+          model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
+        }),
+        outputType: 'character',
+        stopConditions: {
+          onUserMessage: onUserMessageCheck,
+          onKeyDown: onKeyDownCheck,
+        },
       },
-      path: "loop",
     },
-    loop: {
-      message: async (params) => {
-        await call_openai(params);
+    wllama: {
+      llmConnector: {
+        // provider configuration guide:
+        // https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/Wllama.md
+        provider: new WllamaProvider({
+          modelUrl: 'https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct-GGUF/resolve/main/smollm2-360m-instruct-q8_0.gguf',
+          loadModelConfig: {
+            n_ctx: 8192,
+          },
+        }),
+        outputType: 'character',
+        stopConditions: {
+          onUserMessage: onUserMessageCheck,
+          onKeyDown: onKeyDownCheck,
+        },
       },
-      path: () => {
-        if (hasError) {
-          return "start"
-        }
-        return "loop"
-      }
-    }
-  }
+    },
+  };
+
   return (
-    <ChatBot settings={{general: {embedded: true}, chatHistory: {storageKey: "example_llm_conversation"}}} flow={flow}/>
+    <ChatBot
+      settings={{general: {embedded: true}, chatHistory: {storageKey: "example_llm_conversation"}}}
+      plugins={plugins}
+      flow={flow}
+    ></ChatBot>
   );
 };

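The stop conditions in the diff above end the LLM conversation when the user types 'RESTART' or presses Escape. Following the same callback shape shown there (return a path string to leave the block, return null to keep chatting), a further check could cap the number of user messages. The sketch below is illustrative only; the factory name and the cap of 5 are made-up values, not part of the plugin's documented API.

```js
// Illustrative sketch: a capped-length stop condition reusing the onUserMessage shape from the example above.
// The helper name and the default cap are assumptions for demonstration.
const createMessageCapCheck = (maxUserMessages = 5) => {
  let count = 0;
  return async (message) => {
    if (typeof message.content === 'string') {
      count += 1;
    }
    // return a path string to exit the llmConnector block, or null to continue the conversation
    return count >= maxUserMessages ? 'start' : null;
  };
};

// usage alongside the checks defined in the example:
// stopConditions: {
//   onUserMessage: createMessageCapCheck(5),
//   onKeyDown: onKeyDownCheck,
// },
```
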
docs/examples/openai_integration.md

Lines changed: 45 additions & 50 deletions
@@ -1,20 +1,26 @@
 ---
-sidebar_position: 9
+sidebar_position: 10
 title: OpenAI Integration
 description: openai integration chatbot example
 keywords: [react, chat, chatbot, chatbotify]
 ---
 
 # OpenAI Integration
 
-The following is an example showing how to integrate OpenAI into React ChatBotify. It leverages on the [**LLM Connector Plugin**](https://www.npmjs.com/package/@rcb-plugins/llm-connector), which is maintained separately on the [**React ChatBotify Plugins**](https://github.com/orgs/React-ChatBotify-Plugins) organization. If you require support with the plugin, please reach out to support on the [**plugins discord**](https://discord.gg/J6pA4v3AMW) instead.
+The following is an example showing how to integrate [**OpenAI**](https://platform.openai.com/) into React ChatBotify. It leverages the [**LLM Connector Plugin**](https://www.npmjs.com/package/@rcb-plugins/llm-connector), which is maintained separately in the [**React ChatBotify Plugins**](https://github.com/orgs/React-ChatBotify-Plugins) organization. This example also uses the [**OpenaiProvider**](https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/OpenAI.md), which ships by default with the LLM Connector Plugin. If you require support with the plugin, please reach out on the [**plugins discord**](https://discord.gg/J6pA4v3AMW) instead.
 
 :::tip
 
 The plugin also comes with other default providers, which you can try out in the [**LLM Conversation Example**](/docs/examples/llm_conversation.md) and [**Gemini Integration Example**](/docs/examples/gemini_integration.md).
 
 :::
 
+:::tip
+
+If you expect your LLM responses to contain markdown, consider using the [**Markdown Renderer Plugin**](https://www.npmjs.com/package/@rcb-plugins/markdown-renderer) as well!
+
+:::
+
 :::caution
 
 This example uses 'direct' mode for demonstration purposes which exposes API keys client-side. In production, you should look to proxy your request and have your API keys stored server-side. A lightweight demo project for an LLM proxy can be found [**here**](https://github.com/tjtanjin/llm-proxy). You may also refer to [**this article**](https://tjtanjin.medium.com/how-to-build-and-integrate-a-react-chatbot-with-llms-a-react-chatbotify-guide-part-4-b40cd59fd6e6) for more details.
@@ -23,59 +29,48 @@ This example uses 'direct' mode for demonstration purposes which exposes API key
 
 ```jsx live noInline title=MyChatBot.js
 const MyChatBot = () => {
-  let apiKey = null;
-  let modelType = "gpt-3.5-turbo";
-  let hasError = false;
-
-  // example openai conversation
-  // you can replace with other LLMs such as Google Gemini
-  const call_openai = async (params) => {
-    try {
-      const openai = new OpenAI({
-        apiKey: apiKey,
-        dangerouslyAllowBrowser: true // required for testing on browser side, not recommended
-      });
-
-      // for streaming responses in parts (real-time), refer to real-time stream example
-      const chatCompletion = await openai.chat.completions.create({
-        // conversation history is not shown in this example as message length is kept to 1
-        messages: [{ role: 'user', content: params.userInput }],
-        model: modelType,
-      });
-
-      await params.injectMessage(chatCompletion.choices[0].message.content);
-    } catch (error) {
-      await params.injectMessage("Unable to load model, is your API Key valid?");
-      hasError = true;
-    }
-  }
-  const flow={
+  // openai api key, required since we're using 'direct' mode for testing
+  let apiKey = "";
+
+  // initialize the plugin
+  const plugins = [LlmConnector()];
+
+  // example flow for testing
+  const flow: Flow = {
     start: {
-      message: "Enter your OpenAI api key and start asking away!",
-      path: "api_key",
-      isSensitive: true
-    },
-    api_key: {
-      message: (params) => {
-        apiKey = params.userInput.trim();
-        return "Ask me anything!";
+      message: "Hello! Make sure you've set your API key before getting started!",
+      options: ["I am ready!"],
+      chatDisabled: true,
+      path: async (params) => {
+        if (!apiKey) {
+          await params.simulateStreamMessage("You have not set your API key!");
+          return "start";
+        }
+        await params.simulateStreamMessage("Ask away!");
+        return "openai";
       },
-      path: "loop",
     },
-    loop: {
-      message: async (params) => {
-        await call_openai(params);
+    openai: {
+      llmConnector: {
+        // provider configuration guide:
+        // https://github.com/React-ChatBotify-Plugins/llm-connnector/blob/main/docs/providers/OpenAI.md
+        provider: new OpenaiProvider({
+          mode: 'direct',
+          model: 'gpt-4.1-nano',
+          responseFormat: 'stream',
+          apiKey: apiKey,
+        }),
+        outputType: 'character',
      },
-      path: () => {
-        if (hasError) {
-          return "start"
-        }
-        return "loop"
-      }
-    }
-  }
+    },
+  };
+
   return (
-    <ChatBot settings={{general: {embedded: true}, chatHistory: {storageKey: "example_llm_conversation"}}} flow={flow}/>
+    <ChatBot
+      settings={{general: {embedded: true}, chatHistory: {storageKey: "example_openai_integration"}}}
+      plugins={plugins}
+      flow={flow}
+    ></ChatBot>
   );
 };

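Both new pages carry a tip pointing at the Markdown Renderer Plugin for markdown-formatted responses. The sketch below shows one plausible way to register it alongside the LLM Connector; the package names follow the npm links above, but the zero-argument initialization and default behaviour are assumptions, so check that plugin's own documentation for its actual options.

```jsx
// Illustrative sketch: pairing the LLM Connector with the Markdown Renderer Plugin.
// Package names follow the npm links above; initialization options are assumed defaults.
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";
import MarkdownRenderer from "@rcb-plugins/markdown-renderer";

const MyMarkdownChatBot = () => {
  // both plugins go into the same plugins array
  const plugins = [LlmConnector(), MarkdownRenderer()];

  // any of the flows from the examples above can be dropped in here
  const flow = {
    start: {
      message: "LLM responses containing markdown should now render as rich text.",
    },
  };

  return <ChatBot plugins={plugins} flow={flow} />;
};

export default MyMarkdownChatBot;
```
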
docusaurus.config.js

Lines changed: 26 additions & 0 deletions
@@ -7,6 +7,32 @@ const darkCodeTheme = themes.dracula;
 
 /** @type {import('@docusaurus/types').Config} */
 const config = {
+  // Extend the Webpack configuration
+  plugins: [
+    function myWebpackPlugin() {
+      return {
+        name: 'custom-webpack-plugin',
+        configureWebpack(config, isServer) {
+          return {
+            module: {
+              rules: [
+                {
+                  test: /\.tsx?$/, // Match TypeScript files
+                  use: {
+                    loader: require.resolve('babel-loader'),
+                    options: {
+                      presets: [require.resolve('@babel/preset-typescript')],
+                    },
+                  },
+                },
+              ],
+            },
+          };
+        },
+      };
+    },
+  ],
+
   title: 'React ChatBotify',
   tagline: 'A modern React library for creating flexible and extensible chatbots.',
   favicon: 'img/favicon.ico',

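The webpack rule added above runs `.ts`/`.tsx` sources through babel-loader with `@babel/preset-typescript`, which is what lets the docs site bundle typed example code such as the `Flow`-annotated flows in the live snippets. The file below is a hypothetical illustration of such a source; it is not part of this commit, and it assumes the `Flow` type is exported by react-chatbotify as the live examples imply.

```ts
// docs/examples/exampleFlow.ts — hypothetical file, shown only to illustrate what the new rule transpiles
import { Flow } from "react-chatbotify";

// a typed flow object, in the same style as the live examples in this commit
export const exampleFlow: Flow = {
  start: {
    message: "Hello from a TypeScript example!",
  },
};
```
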