Commit c36e81d

yangli2 and tjleeyon authored
examples : add chat-vicuna.sh (#1854)
Co-authored-by: Yang Li <yangliyl@google.com>
1 parent 3559433 commit c36e81d

File tree

2 files changed: 44 additions & 3 deletions

examples/chat-vicuna.sh

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
+#!/bin/bash
+
+set -e
+
+cd "$(dirname "$0")/.." || exit
+
+MODEL="${MODEL:-./models/ggml-vic13b-uncensored-q5_0.bin}"
+PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat.txt}
+USER_NAME="### Human"
+AI_NAME="### Assistant"
+
+# Adjust to the number of CPU cores you want to use.
+N_THREAD="${N_THREAD:-8}"
+# Number of tokens to predict (made it larger than default because we want a long interaction)
+N_PREDICTS="${N_PREDICTS:-2048}"
+
+# Note: you can also override the generation options by specifying them on the command line:
+# For example, override the context size by doing: ./examples/chat-vicuna.sh --ctx_size 1024
+GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}"
+
+DATE_TIME=$(date +%H:%M)
+DATE_YEAR=$(date +%Y)
+
+PROMPT_FILE=$(mktemp -t llamacpp_prompt.XXXXXXX.txt)
+
+sed -e "s/\[\[USER_NAME\]\]/$USER_NAME/g" \
+    -e "s/\[\[AI_NAME\]\]/$AI_NAME/g" \
+    -e "s/\[\[DATE_TIME\]\]/$DATE_TIME/g" \
+    -e "s/\[\[DATE_YEAR\]\]/$DATE_YEAR/g" \
+    "$PROMPT_TEMPLATE" > "$PROMPT_FILE"
+
+# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS
+./bin/main $GEN_OPTIONS \
+  --model "$MODEL" \
+  --threads "$N_THREAD" \
+  --n_predict "$N_PREDICTS" \
+  --color --interactive \
+  --file "$PROMPT_FILE" \
+  --reverse-prompt "### Human:" \
+  --in-prefix ' ' \
+  "$@"

llama.h

Lines changed: 3 additions & 3 deletions
@@ -244,9 +244,9 @@ extern "C" {
     LLAMA_API const char * llama_token_to_str(const struct llama_context * ctx, llama_token token);
 
     // Special tokens
-    LLAMA_API llama_token llama_token_bos();
-    LLAMA_API llama_token llama_token_eos();
-    LLAMA_API llama_token llama_token_nl();
+    LLAMA_API llama_token llama_token_bos(); // beginning-of-sentence
+    LLAMA_API llama_token llama_token_eos(); // end-of-sentence
+    LLAMA_API llama_token llama_token_nl(); // next-line
 
     // Sampling functions
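For context, a minimal sketch (not from this commit) of how these special-token helpers might be consumed by a caller; the initialized llama_context and the sampling loop that produces tok are assumed:

    #include <stdio.h>
    #include "llama.h"

    // Illustrative helper only: print one sampled token and report whether
    // generation should continue. llama_token_eos() marks end-of-sentence;
    // llama_token_to_str() is declared in the hunk above.
    static int print_token(struct llama_context * ctx, llama_token tok) {
        if (tok == llama_token_eos()) {
            return 0; // end-of-sentence: stop generating
        }
        fputs(llama_token_to_str(ctx, tok), stdout);
        return 1; // keep generating
    }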