crates/bpe/README.md

This benchmark compares several encoders:
- The backtracking encoder uses the backtracking algorithm with memoization on top of a string matching automaton.
- The heap encoder uses a priority heap and a bitmask to represent token positions to implement the traditional BPE algorithm (see the sketch after this list).
- The table encoder implements the raw dynamic programming algorithm proposed above.
- The Huggingface BPE tokenizer.
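
To make the heap encoder's strategy concrete, here is a minimal, self-contained sketch of the traditional BPE loop it implements (this is not the crate's actual code): adjacent pairs whose concatenation is a known token are kept in a min-heap keyed by rank, and the lowest-ranked merge is applied until no candidates remain. Token ids are assumed to double as merge ranks, and simple prev/next links stand in for the bitmask over token positions.

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

/// Toy sketch of the traditional BPE strategy: repeatedly merge the adjacent
/// token pair whose merged token has the lowest rank, tracking merge
/// candidates in a min-heap and skipping entries that became stale.
fn heap_bpe(vocab: &HashMap<Vec<u8>, u32>, text: &[u8]) -> Vec<u32> {
    fn rank_of(vocab: &HashMap<Vec<u8>, u32>, a: &[u8], b: &[u8]) -> Option<u32> {
        vocab.get(&[a, b].concat()).copied()
    }

    let n = text.len();
    // Start with one token per input byte (assumes every single byte is in the vocab).
    let mut tok: Vec<Vec<u8>> = text.iter().map(|&b| vec![b]).collect();
    let mut next: Vec<usize> = (1..=n).collect(); // next[i] == n marks the end
    let mut prev: Vec<usize> = (0..n).map(|i| i.wrapping_sub(1)).collect(); // usize::MAX marks the start
    let mut alive = vec![true; n];

    // Seed the heap with every adjacent pair that forms a known token.
    let mut heap: BinaryHeap<Reverse<(u32, usize, usize)>> = BinaryHeap::new();
    for i in 0..n.saturating_sub(1) {
        if let Some(r) = rank_of(vocab, &tok[i], &tok[i + 1]) {
            heap.push(Reverse((r, i, i + 1)));
        }
    }

    while let Some(Reverse((r, i, j))) = heap.pop() {
        // Skip candidates that were invalidated by an earlier merge.
        if !alive[i] || !alive[j] || next[i] != j || rank_of(vocab, &tok[i], &tok[j]) != Some(r) {
            continue;
        }
        // Merge token j into token i and relink the neighbours.
        let right = std::mem::take(&mut tok[j]);
        tok[i].extend(right);
        alive[j] = false;
        next[i] = next[j];
        if next[i] < n {
            prev[next[i]] = i;
        }
        // Push the merge candidates formed with the new left and right neighbours.
        if prev[i] != usize::MAX {
            if let Some(r) = rank_of(vocab, &tok[prev[i]], &tok[i]) {
                heap.push(Reverse((r, prev[i], i)));
            }
        }
        if next[i] < n {
            if let Some(r) = rank_of(vocab, &tok[i], &tok[next[i]]) {
                heap.push(Reverse((r, i, next[i])));
            }
        }
    }

    (0..n).filter(|&i| alive[i]).map(|i| vocab[&tok[i]]).collect()
}
```

Each merge costs a few heap operations, which is consistent with the observation below that the heap encoder is the one encoder that does not show linear runtime.
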
Two additional encoders are included that are faster but deviate from the original BPE encoding strategy:

The benchmark measured the runtime of encoding slices of lengths 10, 100, 1000, and 10000 from a random 20000 token original text using the o200k token set.
(All encodings were computed from scratch for each slice.)
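
The sketch below shows the shape of this measurement (it is not the repository's benchmark harness): slices of the four lengths are taken at pseudo-random offsets and every slice is encoded from scratch. Slice lengths are given in bytes here for brevity, whereas the benchmark states them in tokens, and `encode` stands in for whichever encoder is being measured.

```rust
use std::time::{Duration, Instant};

// Rough sketch of the measurement loop: time from-scratch encodings of
// pseudo-random slices of a few fixed lengths.
fn bench_slices(text: &[u8], encode: impl Fn(&[u8]) -> Vec<u32>) {
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // tiny xorshift PRNG, avoids extra dependencies
    let mut rand = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state
    };
    for &len in &[10usize, 100, 1_000, 10_000] {
        if len > text.len() {
            continue;
        }
        let iters: u32 = 100;
        let mut total = Duration::ZERO;
        for _ in 0..iters {
            let start = rand() as usize % (text.len() - len + 1);
            let slice = &text[start..start + len];
            let timer = Instant::now();
            let tokens = encode(slice); // every slice is encoded from scratch
            total += timer.elapsed();
            std::hint::black_box(tokens); // keep the optimizer from discarding the work
        }
        println!("len {:>6}: {:?} per encode", len, total / iters);
    }
}
```
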
Be aware that in this benchmark none of the tokenizers pre-tokenizes the input.
It therefore shows the true performance characteristics of the encoding logic itself.
Unfortunately, tiktoken does not allow us to disable pre-tokenization, which is why it is not included.
Below we have a comparison with pre-tokenization that includes tiktoken as well.

The graph below shows encoding runtime vs slice length.
All encoders (except the heap encoder) show the expected linear runtime complexity.
The full dynamic programming solution and the heap implementation are still quite competitive with the backtracking encoder.
If the requirement of correct BPE output can be relaxed, then the Greedy approach and the minimal encoding approach are the clear winners.
The backtracking encoder is about 10x faster than the Huggingface BPE tokenizer.

We compared the encoding performance of our encoder with two popular implementations, tiktoken and Huggingface tokenizers.

The benchmark measured the runtime of encoding slices of lengths 10, 100, 1000, and 10000 from a random 20000 token original text using the o200k token set.
(All encodings were computed from scratch for each slice.)

In this benchmark all tokenizers pre-tokenize their input and produce the same tokens and decoded texts as the tiktoken tokenizer.
An effect of pre-tokenization is that the inputs to the actual BPE logic are typically much smaller than the overall input size, especially for larger inputs.
It is therefore difficult to judge the performance differences of the BPE logic from this benchmark.
It does give a good indication of how the algorithms might perform in practice.
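
The sketch below illustrates this effect (the real splitter is a far more elaborate regex; a whitespace-based split is used purely for illustration): the input is cut into small pieces and the BPE core, represented here by `encode_piece`, only ever sees one piece at a time.

```rust
// Simplified illustration of pre-tokenization: split the input into small
// pieces and run the BPE core on each piece independently. The concatenation
// of the pieces is exactly the original input.
fn encode_with_pretokenization(text: &str, encode_piece: impl Fn(&[u8]) -> Vec<u32>) -> Vec<u32> {
    let bytes = text.as_bytes();
    let mut out = Vec::new();
    let mut piece_start = 0;
    for i in 1..=bytes.len() {
        // A new piece starts at every transition from non-whitespace to whitespace,
        // so whitespace stays attached to the piece that follows it.
        let boundary = i == bytes.len()
            || (bytes[i].is_ascii_whitespace() && !bytes[i - 1].is_ascii_whitespace());
        if boundary {
            out.extend(encode_piece(&bytes[piece_start..i]));
            piece_start = i;
        }
    }
    out
}
```

Since pieces are typically only a few bytes long, the BPE core only ever runs on tiny inputs, which is why this benchmark says little about the asymptotic behaviour of the core algorithms.
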
The graph below shows encoding runtime vs slice length.
All encoders (except the heap encoder) show the expected linear runtime complexity.
The backtracking encoder, the fastest encoder that still returns correct results, shows a performance gain of approximately 3.5x compared to tiktoken.

The graph below shows encoding results for input that is particularly challenging for tiktoken.
The input consists of random ranges taken from the continuous list of all Unicode code points excluding whitespace.
The performance of tiktoken shows quadratic growth with the input size.
The Huggingface encoder scales better, but becomes slower and slower compared to our implementation as input size increases.
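
For context, the sketch below shows one plausible way to generate such input (an assumption, not necessarily the generator used for the plot). Text without whitespace offers almost no pre-tokenization boundaries, so the pieces handed to the BPE core stay large, which is presumably what exposes the quadratic behaviour.

```rust
// One possible generator for the "challenging" input: glue together short
// runs of consecutive Unicode scalar values, skipping whitespace, so that the
// resulting text contains essentially no pre-tokenization boundaries.
fn challenging_input(total_chars: usize, mut rand: impl FnMut() -> u32) -> String {
    let mut out = String::with_capacity(total_chars * 4);
    let mut count = 0;
    while count < total_chars {
        let start = rand() % 0x11_0000; // anywhere in the code point space
        let run = 1 + rand() % 64;      // a short consecutive range
        for cp in start..start.saturating_add(run) {
            // `from_u32` rejects surrogates and out-of-range values.
            if let Some(c) = char::from_u32(cp) {
                if !c.is_whitespace() {
                    out.push(c);
                    count += 1;
                    if count == total_chars {
                        return out;
                    }
                }
            }
        }
    }
    out
}
```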