ShredWord is a byte-pair encoding (BPE) tokenizer trainer designed for fast, efficient, and flexible text processing and vocabulary training. It offers training and text normalization functionality and is backed by a C/C++ core with a Python interface for easy integration into machine learning workflows.
- Efficient Tokenization: Utilizes BPE for compressing text data and reducing the vocabulary size, making it well-suited for NLP tasks.
- Customizable Vocabulary: Allows users to define the target vocabulary size during training.
- Save and Load Models: Supports saving and loading trained tokenizers for reuse.
- Python Integration: Provides a Python interface for seamless integration and usability.
BPE is a subword tokenization algorithm that compresses a dataset by merging the most frequent pairs of characters or subwords into new tokens. This process continues until a predefined vocabulary size is reached.
Key steps:
- Initialize the vocabulary with all unique characters in the dataset.
- Count the frequency of character pairs.
- Merge the most frequent pair into a new token.
- Repeat until the target vocabulary size is achieved.
ShredWord implements this process efficiently in C/C++, exposing training, encoding, and decoding methods through Python.
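To make the merge loop concrete, here is a minimal byte-level sketch of the algorithm in plain Python. It is intended only as a teaching aid, not ShredWord's C/C++ implementation; the function name `train_bpe` and its parameters are illustrative.

```python
from collections import Counter

def train_bpe(text, target_vocab_size):
    # Start from raw bytes, so the initial vocabulary is the 256 byte values.
    ids = list(text.encode("utf-8"))
    merges = {}  # (id_a, id_b) -> new token id
    next_id = 256
    while next_id < target_vocab_size:
        # Count the frequency of adjacent token pairs.
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges[best] = next_id
        # Replace every occurrence of the most frequent pair with the new token.
        merged, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == best:
                merged.append(next_id)
                i += 2
            else:
                merged.append(ids[i])
                i += 1
        ids = merged
        next_id += 1
    return merges

print(len(train_bpe("low lower lowest " * 50, 270)), "merges learned")
```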
- Python 3.11+
- GCC or a compatible compiler (for compiling the C/C++ code)
Install the Python package from PyPI.org:
```
pip install shredword-trainer
```
Below is a simple example demonstrating how to train a tokenizer with ShredWord and save the resulting model.
```python
from shredword.trainer import BPETrainer

# Configure the trainer, load the corpus, run BPE training, and save the result.
trainer = BPETrainer(target_vocab_size=500, min_pair_freq=1000)
trainer.load_corpus("test data/final.txt")
trainer.train()
trainer.save("model/merges_1k.model", "model/vocab_1k.vocab")
```
- `train(text, vocab_size)`: Train a tokenizer on the input text to a specified vocabulary size.
- `save(file_path)`: Save the trained tokenizer to a file.
- `merges`: View or set the merge rules used for tokenization.
- `vocab`: Access the vocabulary as a dictionary of token IDs to strings.
- `pattern`: View or set the regular expression pattern used for token splitting.
- `special_tokens`: View or set the special tokens used by the tokenizer.
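As a quick illustration, the properties listed above can be read like ordinary attributes on a trained tokenizer instance (assumed here to be named `tokenizer`, matching the examples below):

```python
# Inspect the learned vocabulary (token ID -> string) and the split pattern.
print(len(tokenizer.vocab), "tokens in vocabulary")
print(tokenizer.pattern)

# Merge rules and special tokens are exposed the same way.
print(tokenizer.merges)
print(tokenizer.special_tokens)
```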
Trained tokenizers can be saved to a file and reloaded for use in future tasks. The saved model includes merge rules and any special tokens or patterns defined during training.
```python
# Save the trained model
tokenizer.save("vocab/trained_vocab.model")

# Load the model
tokenizer.load("vocab/trained_vocab.model")
```
Users can define special tokens or modify the merge rules and pattern directly using the provided properties.
```python
# Set special tokens
special_tokens = [("<PAD>", 0), ("<UNK>", 1)]
tokenizer.special_tokens = special_tokens

# Update merge rules; each tuple gives two token IDs and the ID of the merged token
merges = [(101, 32, 256), (32, 116, 257)]
tokenizer.merges = merges
```
We welcome contributions! Feel free to open an issue or submit a pull request if you have ideas for improvement.
This project is licensed under the MIT License. See the LICENSE file for details.
ShredWord was inspired by the need for efficient and flexible tokenization in modern NLP pipelines. Special thanks to contributors and the open-source community for their support.