
Commit e5c4cd9

Update benchmark results and text
Parent: f0c61bf

6 files changed: +122 -108 lines

crates/bpe/README.md (14 additions & 3 deletions)
```diff
@@ -210,6 +210,7 @@ This benchmark compares several encoders:
 - The backtracking encoder uses the backtracking algorithm with memoisation based on top of a string matching automaton.
 - The heap encoder uses a priority heap and a bitmask to represent token positions to implement the traditional BPE algorithm.
 - The table encoder implements the raw dynamic programming algorithm proposed above.
+- The Huggingface BPE tokenizer.
 
 Two additional encoders are included that are faster but deviate from the original BPE encoding strategy:
 
```
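For readers who want the "traditional BPE algorithm" mentioned in this hunk made concrete, a minimal sketch follows. It repeatedly merges the adjacent token pair whose merged token has the best (lowest) id, the behaviour the heap encoder reproduces efficiently. The name `bpe_encode` and the `ranks` table are illustrative assumptions, not this crate's API, and the quadratic rescan per merge is exactly what the real encoders avoid:

```rust
use std::collections::HashMap;

// Naive reference sketch of the traditional BPE merge loop. `ranks`
// maps a mergeable pair of token ids to the merged token's id; lower
// ids are assumed to have higher merge priority, as in tiktoken-style
// vocabularies. Both names are illustrative, not this crate's API.
fn bpe_encode(bytes: &[u8], ranks: &HashMap<(u32, u32), u32>) -> Vec<u32> {
    // Start with one token per input byte.
    let mut tokens: Vec<u32> = bytes.iter().map(|&b| b as u32).collect();
    loop {
        // Scan all adjacent pairs for the highest-priority merge.
        let best = tokens
            .windows(2)
            .enumerate()
            .filter_map(|(i, w)| ranks.get(&(w[0], w[1])).map(|&t| (t, i)))
            .min();
        match best {
            Some((merged, i)) => {
                // Replace the pair with the merged token and rescan.
                tokens[i] = merged;
                tokens.remove(i + 1);
            }
            None => return tokens, // no mergeable pair left
        }
    }
}

fn main() {
    // Toy rank table: token 256 = "ab", token 257 = "ab" + "c".
    let ranks = HashMap::from([
        ((b'a' as u32, b'b' as u32), 256),
        ((256, b'c' as u32), 257),
    ]);
    assert_eq!(bpe_encode(b"abc", &ranks), vec![257]);
}
```

Per the description in the hunk above, the crate's heap encoder implements this same merge order but keeps candidate pairs in a priority heap, with a bitmask tracking which token positions are still alive, rather than rescanning all pairs after every merge.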
```diff
@@ -219,10 +220,16 @@ Two additional encoders are included that are faster but deviate from the origin
 The benchmark measured the runtime of encoding of slices of lengths 10, 100, 1000, and 10000 from a random 20000 token original text using the o200k token set.
 (All encodings were computed from scratch for each slice.)
 
+Be aware that in this benchmark none of the tokenizers pre-tokenize the input.
+It therefore shows the true performance characteristics of the encoding logic itself.
+Unfortunately, tiktoken does not allow us to disable pre-tokenization, which is why it is not included here.
+Below we have a comparison with pre-tokenization that includes tiktoken as well.
+
 The graph below shows encoding runtime vs slice length.
 All encoders (except the heap encoder) show the expected linear runtime complexity.
 The fully dynamic programming solution and the heap implementation are still quite competitive with the backtracking encoder.
 If the requirement of correct BPE output can be relaxed, then the Greedy approach or the minimal encoding approach are the clear winners.
+The backtracking encoder is about 10x faster than the Huggingface BPE tokenizer.
 
 ![encoding runtime comparison](./images/performance-encoding.svg)
 
```
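As a rough picture of the measurement setup described in this hunk, a harness for "random slices of a fixed text, encoded from scratch" could look like the sketch below. It assumes the criterion and rand crates; `encode` is a hypothetical stand-in for the encoders under test, and the real benchmark slices by tokens rather than bytes:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use rand::Rng;

// Hypothetical stand-in for an encoder under test; the real benchmark
// drives the backtracking, heap, and table encoders instead.
fn encode(bytes: &[u8]) -> Vec<u32> {
    bytes.iter().map(|&b| b as u32).collect()
}

fn bench_encoding(c: &mut Criterion) {
    let mut rng = rand::thread_rng();
    // Stand-in for the random original text; the real benchmark uses
    // a 20000 token text and measures slice lengths in tokens.
    let text: Vec<u8> = (0..200_000).map(|_| rng.gen()).collect();
    let mut group = c.benchmark_group("encoding");
    for len in [10usize, 100, 1_000, 10_000] {
        group.bench_with_input(BenchmarkId::from_parameter(len), &len, |b, &len| {
            b.iter(|| {
                // Encode a fresh random slice from scratch each time.
                let start = rng.gen_range(0..text.len() - len);
                encode(&text[start..start + len])
            });
        });
    }
    group.finish();
}

criterion_group!(benches, bench_encoding);
criterion_main!(benches);
```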
```diff
@@ -264,9 +271,13 @@ The interval encoder counts any interval in typically constant time.
 We compared the encoding performance of our encoder with two popular implementations, tiktoken and Huggingface tokenizers.
 
 The benchmark measured the runtime of encoding of slices of lengths 10, 100, 1000, and 10000 from a random 20000 token original text using the o200k token set.
-In this benchmark, our own encoder includes a pre-tokenization step so that it produces exactly the same results as the other two.
 (All encodings were computed from scratch for each slice.)
 
+In this benchmark, all tokenizers pre-tokenize their input and produce the same tokens and decoded texts as the tiktoken tokenizer.
+One effect of pre-tokenization is that the inputs to the actual BPE logic are typically much smaller than the overall input size, especially for larger inputs.
+It is therefore difficult to judge the performance differences of the BPE logic from this benchmark.
+It does, however, give a good indication of how the algorithms might perform in practice.
+
 The graph below shows encoding runtime vs slice length.
 All encoders (except the heap encoder) show the expected linear runtime complexity.
 The backtracking encoder, the fastest encoder that still returns correct results, shows a performance gain of approximately 3.5x compared to tiktoken.
```
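To see why pre-tokenization shrinks the inputs that reach the BPE logic, consider a minimal sketch, assuming the regex crate and a deliberately simplified split pattern (the real o200k pattern is far more complex); each returned chunk is encoded independently:

```rust
use regex::Regex;

// Split text into pre-tokenization chunks. The pattern used below is
// a deliberately simplified assumption; real tokenizers use much more
// elaborate regexes.
fn pretokenize<'a>(text: &'a str, pattern: &Regex) -> Vec<&'a str> {
    pattern.find_iter(text).map(|m| m.as_str()).collect()
}

fn main() {
    // Words with an optional leading space, digit runs, punctuation
    // runs, or whitespace runs.
    let pattern = Regex::new(r" ?\p{L}+|\p{N}+|[^\s\p{L}\p{N}]+|\s+").unwrap();
    let chunks = pretokenize("Hello world, 42 tokens!", &pattern);
    assert_eq!(chunks, vec!["Hello", " world", ",", " ", "42", " tokens", "!"]);
    // The BPE logic would now encode each short chunk independently.
}
```

Because the chunks stay short regardless of the total input length, a benchmark with pre-tokenization enabled mostly measures splitting and per-chunk overhead rather than how the BPE core scales, which matches the caveat in the hunk above.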
```diff
@@ -277,8 +288,8 @@ If the requirement of correct BPE output can be relaxed, then the Greedy approac
 
 The graph below shows encoding results for input that is particularly challenging for tiktoken.
 The input consists of random ranges taken from the continuous list of all Unicode code points excluding whitespace.
-The performance of tiktoken suffers shows a quadratic growth with the input size.
-The Huggingface encoder scales better, but at a slower pace than our own encoder.
+The performance of tiktoken shows quadratic growth with the input size.
+The Huggingface encoder scales better, but falls further behind our implementation as the input size increases.
 
 ![worst-case encoding runtime comparison](./images/performance-worstcase.svg)
 
```
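For reference, the challenging input described in this hunk (random ranges from the continuous list of non-whitespace Unicode code points) could be generated along the following lines; this is a sketch assuming the rand crate, and `worst_case_input` is a hypothetical helper rather than the benchmark's actual code:

```rust
use rand::Rng;

// Sketch of a generator for the worst-case input: concatenate random
// contiguous ranges taken from the list of all non-whitespace Unicode
// scalar values. `worst_case_input` is a hypothetical helper, not the
// benchmark's actual code.
fn worst_case_input(len: usize, rng: &mut impl Rng) -> String {
    // All Unicode scalar values, excluding whitespace.
    let alphabet: Vec<char> = (0..=char::MAX as u32)
        .filter_map(char::from_u32)
        .filter(|c| !c.is_whitespace())
        .collect();
    let mut text = String::new();
    let mut count = 0;
    while count < len {
        // Append a random contiguous range of the code point list.
        let start = rng.gen_range(0..alphabet.len());
        let end = (start + rng.gen_range(1..=100)).min(alphabet.len());
        for &c in &alphabet[start..end] {
            text.push(c);
        }
        count += end - start;
    }
    text.chars().take(len).collect()
}

fn main() {
    let mut rng = rand::thread_rng();
    let input = worst_case_input(10_000, &mut rng);
    assert_eq!(input.chars().count(), 10_000);
}
```

Presumably such text yields very few pre-tokenization split points, so tiktoken's BPE core sees long chunks where its quadratic behaviour dominates.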