Commit f86b724

Feat #748 custom instruction merge with main

2 parents c49f804 + 21c9b41 · commit f86b724

57 files changed: +2373 additions, −322 deletions

.env

Lines changed: 3 additions & 1 deletion

```diff
@@ -109,7 +109,9 @@ PUBLIC_ANNOUNCEMENT_BANNERS=`[
 
 PARQUET_EXPORT_DATASET=
 PARQUET_EXPORT_HF_TOKEN=
-PARQUET_EXPORT_SECRET=
+ADMIN_API_SECRET=# secret to admin API calls, like computing usage stats or exporting parquet data
+
+PARQUET_EXPORT_SECRET=#DEPRECATED, use ADMIN_API_SECRET instead
 
 RATE_LIMIT= # requests per minute
 MESSAGES_BEFORE_LOGIN=# how many messages a user can send in a conversation before having to login. set to 0 to force login right away
```
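Since `PARQUET_EXPORT_SECRET` is deprecated rather than removed, existing deployments presumably keep working while new ones use `ADMIN_API_SECRET`. A minimal sketch of that precedence rule (an assumption for illustration, not the repo's actual code):

```typescript
// Sketch: prefer the new ADMIN_API_SECRET, fall back to the deprecated
// PARQUET_EXPORT_SECRET so older deployments keep working.
// (Hypothetical helper; the function name is not from the repo.)
function resolveAdminSecret(env: Record<string, string | undefined>): string | undefined {
  return env.ADMIN_API_SECRET || env.PARQUET_EXPORT_SECRET || undefined;
}
```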

.env.template

Lines changed: 42 additions & 1 deletion

```diff
@@ -4,7 +4,9 @@ MODELS=`[
 {
 "name" : "mistralai/Mixtral-8x7B-Instruct-v0.1",
 "description" : "The latest MoE model from Mistral AI! 8x7B and outperforms Llama 2 70B in most benchmarks.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/mistral-logo.png",
 "websiteUrl" : "https://mistral.ai/news/mixtral-of-experts/",
+"modelUrl": "https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1",
 "preprompt" : "",
 "chatPromptTemplate": "<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}}",
 "parameters" : {
@@ -29,10 +31,39 @@ MODELS=`[
 }
 ]
 },
-{
+{
+"name" : "google/gemma-7b-it",
+"description": "Gemma 7B belongs to a family of lightweight models built by Google, based on the same research and technology used to create the Gemini models.",
+"websiteUrl" : "https://blog.google/technology/developers/gemma-open-models/",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/google-logo.png",
+"modelUrl": "https://huggingface.co/google/gemma-7b-it",
+"preprompt": "",
+"chatPromptTemplate" : "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
+"promptExamples": [
+{
+"title": "Write an email from bullet list",
+"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
+}, {
+"title": "Code a snake game",
+"prompt": "Code a basic snake game in python, give explanations for each step."
+}, {
+"title": "Assist in a task",
+"prompt": "How do I make a delicious lemon cheesecake?"
+}
+],
+"parameters": {
+"do_sample": true,
+"truncate": 7168,
+"max_new_tokens": 1024,
+"stop" : ["<end_of_turn>"]
+}
+},
+{
 "name": "meta-llama/Llama-2-70b-chat-hf",
 "description": "The latest and biggest model from Meta, fine-tuned for chat.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/meta-logo.png",
 "websiteUrl": "https://ai.meta.com/llama/",
+"modelUrl": "https://huggingface.co/meta-llama/Llama-2-70b-chat-hf",
 "preprompt": " ",
 "chatPromptTemplate" : "<s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}",
 "promptExamples": [
@@ -60,7 +91,9 @@ MODELS=`[
 {
 "name" : "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
 "description" : "Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the Mixtral 8x7B MoE LLM.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/nous-logo.png",
 "websiteUrl" : "https://nousresearch.com/",
+"modelUrl": "https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
 "chatPromptTemplate" : "{{#if @root.preprompt}}<|im_start|>system\n{{@root.preprompt}}<|im_end|>\n{{/if}}{{#each messages}}{{#ifUser}}<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n{{/ifUser}}{{#ifAssistant}}{{content}}<|im_end|>\n{{/ifAssistant}}{{/each}}",
 "promptExamples": [
 {
@@ -88,7 +121,9 @@ MODELS=`[
 "name": "codellama/CodeLlama-70b-Instruct-hf",
 "displayName": "codellama/CodeLlama-70b-Instruct-hf",
 "description": "Code Llama, a state of the art code model from Meta. Now in 70B!",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/meta-logo.png",
 "websiteUrl": "https://ai.meta.com/blog/code-llama-large-language-model-coding/",
+"modelUrl": "https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf",
 "preprompt": "",
 "chatPromptTemplate" : "<s>{{#if @root.preprompt}}Source: system\n\n {{@root.preprompt}} <step> {{/if}}{{#each messages}}{{#ifUser}}Source: user\n\n {{content}} <step> {{/ifUser}}{{#ifAssistant}}Source: assistant\n\n {{content}} <step> {{/ifAssistant}}{{/each}}Source: assistant\nDestination: user\n\n ",
 "promptExamples": [
@@ -117,7 +152,9 @@ MODELS=`[
 "name": "mistralai/Mistral-7B-Instruct-v0.1",
 "displayName": "mistralai/Mistral-7B-Instruct-v0.1",
 "description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/mistral-logo.png",
 "websiteUrl": "https://mistral.ai/news/announcing-mistral-7b/",
+"modelUrl": "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1",
 "preprompt": "",
 "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
 "parameters": {
@@ -147,7 +184,9 @@ MODELS=`[
 "name": "mistralai/Mistral-7B-Instruct-v0.2",
 "displayName": "mistralai/Mistral-7B-Instruct-v0.2",
 "description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/mistral-logo.png",
 "websiteUrl": "https://mistral.ai/news/announcing-mistral-7b/",
+"modelUrl": "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2",
 "preprompt": "",
 "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
 "parameters": {
@@ -176,7 +215,9 @@ MODELS=`[
 "name": "openchat/openchat-3.5-0106",
 "displayName": "openchat/openchat-3.5-0106",
 "description": "OpenChat 3.5 is the #1 model on MT-Bench, with only 7B parameters.",
+"logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/openchat-logo.png",
 "websiteUrl": "https://huggingface.co/openchat/openchat-3.5-0106",
+"modelUrl": "https://huggingface.co/openchat/openchat-3.5-0106",
 "preprompt": "",
 "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}",
 "parameters": {
```

.github/workflows/deploy-release.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -26,6 +26,7 @@ jobs:
 MONGODB_URL: ${{ secrets.MONGODB_URL }}
 HF_DEPLOYMENT_TOKEN: ${{ secrets.HF_DEPLOYMENT_TOKEN }}
 WEBHOOK_URL_REPORT_ASSISTANT: ${{ secrets.WEBHOOK_URL_REPORT_ASSISTANT }}
+ADMIN_API_SECRET: ${{ secrets.ADMIN_API_SECRET }}
 run: npm run updateProdEnv
 sync-to-hub:
 runs-on: ubuntu-latest
```

PROMPTS.md

Lines changed: 6 additions & 0 deletions

````diff
@@ -61,3 +61,9 @@ System: {{preprompt}}\nUser:{{#each messages}}{{#ifUser}}{{content}}\nFalcon:{{/
 ```env
 <s>{{#if @root.preprompt}}Source: system\n\n {{@root.preprompt}} <step> {{/if}}{{#each messages}}{{#ifUser}}Source: user\n\n {{content}} <step> {{/ifUser}}{{#ifAssistant}}Source: assistant\n\n {{content}} <step> {{/ifAssistant}}{{/each}}Source: assistant\nDestination: user\n\n 
 ```
+
+## Gemma
+
+```env
+{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}
+```
````
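To make the new Gemma template concrete, here is a sketch (illustrative only, not the project's Handlebars renderer) of the string the template expands to for a conversation:

```typescript
// Sketch of what the Gemma chatPromptTemplate produces: each user turn is
// wrapped in <start_of_turn>user ... <end_of_turn> and opens a model turn;
// the preprompt is prepended only inside the first user message ({{#if @first}}).
type Message = { from: "user" | "assistant"; content: string };

function renderGemmaPrompt(messages: Message[], preprompt = ""): string {
  let out = "";
  for (let i = 0; i < messages.length; i++) {
    const m = messages[i];
    if (m.from === "user") {
      const sys = i === 0 && preprompt ? preprompt + "\n" : "";
      out += `<start_of_turn>user\n${sys}${m.content}<end_of_turn>\n<start_of_turn>model\n`;
    } else {
      out += `${m.content}<end_of_turn>\n`;
    }
  }
  return out;
}
```

The `"stop": ["<end_of_turn>"]` parameter in `.env.template` pairs with this format: generation halts when the model emits its own end-of-turn marker.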

README.md

Lines changed: 34 additions & 0 deletions

````diff
@@ -616,3 +616,37 @@ npm run updateLocalEnv
 ```
 
 This will replace your `.env.local` file with the one that will be used in prod (simply taking `.env.template + .env.SECRET_CONFIG`).
+
+### Populate database
+
+> [!WARNING]
+> The `MONGODB_URL` used for this script will be fetched from `.env.local`. Make sure it's correct! The command runs directly on the database.
+
+You can populate the database using faker data using the `populate` script:
+
+```bash
+npm run populate <flags here>
+```
+
+At least one flag must be specified, the following flags are available:
+
+- `reset` - resets the database
+- `all` - populates all tables
+- `users` - populates the users table
+- `settings` - populates the settings table for existing users
+- `assistants` - populates the assistants table for existing users
+- `conversations` - populates the conversations table for existing users
+
+For example, you could use it like so:
+
+```bash
+npm run populate reset
+```
+
+to clear out the database. Then login in the app to create your user and run the following command:
+
+```bash
+npm run populate users settings assistants conversations
+```
+
+to populate the database with fake data, including fake conversations and assistants for your user.
````
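The "at least one flag" rule from the README above could be enforced with a guard like this (a hypothetical sketch; the flag names come from the README, but the function and its shape are assumptions, not the repo's actual populate script):

```typescript
// Sketch: accept only the documented populate flags and require at least one.
const KNOWN_FLAGS = ["reset", "all", "users", "settings", "assistants", "conversations"];

function parsePopulateFlags(argv: string[]): string[] {
  const flags = argv.filter((f) => KNOWN_FLAGS.includes(f));
  if (flags.length === 0) {
    // Mirrors the README's requirement that at least one flag be specified.
    throw new Error(`Specify at least one of: ${KNOWN_FLAGS.join(", ")}`);
  }
  return flags;
}
```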
