Feature/hstu #290

Open
wants to merge 69 commits into base: main
Changes from all commits
69 commits
00d16ad
extended data_preparator
teodor-r May 26, 2025
a25c50a
first proba
teodor-r May 28, 2025
016f088
correct attention core $ timestamp preproc on ml
teodor-r Jun 2, 2025
f0cc912
version for benchmarks
teodor-r Jun 4, 2025
452b1ea
drop xavier everywhere
teodor-r Jun 5, 2025
23ddb80
fix gradient damping & added reverse pos encoding
teodor-r Jun 10, 2025
2bea7bd
hot fix
teodor-r Jun 10, 2025
6c7ca6b
+reco, xavier init, drop emb scale multiplicator
teodor-r Jun 11, 2025
0a6b8dc
spliter v2
teodor-r Jun 11, 2025
1abac47
add univeral reco poicy
teodor-r Jun 15, 2025
1fbc991
remove cv_hstu plug
teodor-r Jun 15, 2025
4eee397
add datetime spec
teodor-r Jun 15, 2025
b44f7d3
alpha stable version
teodor-r Jun 20, 2025
71deefe
before change TransformerTorchBackbone
teodor-r Jun 24, 2025
8c6ce83
common preproc extras & torch backbone
teodor-r Jun 24, 2025
3577d13
Revert "common preproc extras & torch backbone"
teodor-r Jun 25, 2025
198f26f
extra cols & prep_recommend & revert to hstu data preparator
teodor-r Jun 25, 2025
df968dc
prefinal hstu & prefinal base data preparator
teodor-r Jun 26, 2025
ce683ab
rude change data_preparator
teodor-r Jun 26, 2025
4858152
final data prepartor & separate logic HSTU init & previous collate fu…
teodor-r Jun 27, 2025
9b23bdc
common torch backbone & data preparator unix_ts
teodor-r Jun 27, 2025
8f036ed
minor fixs
teodor-r Jun 30, 2025
b54b4fe
a few minors & show val mask issue
teodor-r Jul 1, 2025
74b3ef9
solve issue double dropout
teodor-r Jul 2, 2025
c1313e9
for diff
teodor-r Jul 2, 2025
ce23e22
inherit SASRec data preparator & change preproc rec context & drop do…
teodor-r Jul 2, 2025
7885eb2
start guide
teodor-r Jul 3, 2025
28f8308
proba hstu guide
teodor-r Jul 4, 2025
66d1f50
guide & + docstrings
teodor-r Jul 6, 2025
a81b117
+ func quant
teodor-r Jul 7, 2025
692e452
drop optional quant function
teodor-r Jul 7, 2025
0f992e7
correct cosine similarity
teodor-r Jul 8, 2025
19ff846
drop hstu data prep & add tests & add utils in tutorials & changed hs…
teodor-r Jul 9, 2025
dd265dd
correct test & drop changing kwargs hstu config
teodor-r Jul 10, 2025
752f944
+linter
teodor-r Jul 10, 2025
48379d2
+ logits_t option & + clean
teodor-r Jul 10, 2025
27b7b36
swap to logits_t in guide
teodor-r Jul 10, 2025
1d2e9d2
fully correct tests & beta stable version
teodor-r Jul 10, 2025
f70be75
merge correct splitter
teodor-r Jul 11, 2025
a55d9f9
sync data prep versions afret merge
teodor-r Jul 14, 2025
ba96aab
+ pos_encoder_kwargs & changed context preproccesing logic
teodor-r Jul 15, 2025
ec0fa5f
check visual after drop styling tables
teodor-r Jul 15, 2025
57dae90
add dummy pad & extend test & guide change & + docstrings
teodor-r Jul 16, 2025
bb9ef80
linter & drop debug files
teodor-r Jul 16, 2025
65e044c
drop extra lines
teodor-r Jul 16, 2025
2b58d59
change readme
teodor-r Jul 16, 2025
c4a139c
context prep logic & tests
teodor-r Jul 17, 2025
f446500
test hot fix
teodor-r Jul 17, 2025
0ca6993
context tests hot fix
teodor-r Jul 17, 2025
008e32d
loo_mask & + tests & others tests fixed
teodor-r Jul 17, 2025
baabe63
hot fix
teodor-r Jul 17, 2025
75c674f
Feature/loo_mask (#292)
teodor-r Jul 17, 2025
04fa384
Feature/loo_mask (#292)
teodor-r Jul 18, 2025
79b4fb4
universal loo mask
teodor-r Jul 18, 2025
ae76c12
attn dropout force to null & change guide & fix flaws & merged mask
teodor-r Jul 18, 2025
6c08ebd
fix flaws
teodor-r Jul 21, 2025
4a2ca47
ml-20m results & fix flaws & show deter copmute cuda problem
teodor-r Jul 21, 2025
5044d70
fix test cross_val issue & docstring inspect
teodor-r Jul 22, 2025
a1f23b8
refusal reproduce due to unstable sort
teodor-r Jul 22, 2025
8ce54c0
merge loo mask final
teodor-r Jul 22, 2025
929b443
change structure guide
teodor-r Jul 23, 2025
e9a1c88
fix flaws
teodor-r Jul 24, 2025
e51959c
merge lock dependancies
teodor-r Jul 24, 2025
a9fa47b
weaken tests
teodor-r Jul 24, 2025
5035858
resolve ci test falling
teodor-r Jul 24, 2025
aa9fc7e
reviewed guide
teodor-r Jul 25, 2025
c814ef1
drop cell outputs & new bucket notation & rab_t picture
teodor-r Jul 25, 2025
c2337e1
fix flaws
teodor-r Jul 25, 2025
248a14d
updated readme
blondered Jul 25, 2025
17 changes: 8 additions & 9 deletions CHANGELOG.md
@@ -8,12 +8,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## Unreleased

### Added
- HSTU Model from "Actions Speak Louder than Words..." implemented in the class `HSTUModel` ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))
- `leave_one_out_mask` function (`rectools.models.nn.transformers.utils.leave_one_out_mask`) for applying leave-one-out validation during transformer model training; see the sketch after this list ([#292](https://github.com/MobileTeleSystems/RecTools/pull/292))
- `logits_t` argument to `TransformerLightningModuleBase`. It is used to scale logits when computing the loss. ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))
- `use_scale_factor` argument to `LearnableInversePositionalEncoding`. It scales embeddings by the square root of their dimension, following the original approach from "Attention Is All You Need" ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))
- Optional `context` argument for the `recommend` method of models and `get_context` function in `rectools.dataset.context` ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))
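
Below is a minimal sketch of how the new `leave_one_out_mask` could be wired into transformer model training. It assumes the `get_val_mask_func` hook of Transformer-based models and uses a toy interactions dataframe; exact signatures and defaults may differ from the released API.

```python
import pandas as pd

from rectools import Columns
from rectools.dataset import Dataset
from rectools.models import SASRecModel
from rectools.models.nn.transformers.utils import leave_one_out_mask

# Toy interactions with the user, item, weight and datetime columns expected by Dataset.construct
interactions_df = pd.DataFrame(
    {
        Columns.User: [1, 1, 1, 2, 2, 2],
        Columns.Item: [10, 11, 12, 10, 13, 14],
        Columns.Weight: [1, 1, 1, 1, 1, 1],
        Columns.Datetime: pd.to_datetime(
            ["2021-01-01", "2021-01-02", "2021-01-03", "2021-01-01", "2021-01-02", "2021-01-03"]
        ),
    }
)
dataset = Dataset.construct(interactions_df)

model = SASRecModel(
    epochs=1,
    get_val_mask_func=leave_one_out_mask,  # hold out each user's last interaction for validation (assumed hook)
)
model.fit(dataset)
```
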
### Fixed
- [Breaking] Corrected computation of `cosine` distance in `DistanceSimilarityModule`; see the illustrative sketch after this list ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))
- Installation issue with `cupy` extra on macOS ([#293](https://github.com/MobileTeleSystems/RecTools/pull/293))
- `torch.dtype object has no attribute 'kind'` error in `TorchRanker` ([#293](https://github.com/MobileTeleSystems/RecTools/pull/293))
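
For reference, the sketch below shows standard cosine similarity with explicit L2 normalization. It is purely illustrative and is not `DistanceSimilarityModule`'s actual implementation.

```python
import torch


def cosine_similarity(queries: torch.Tensor, items: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pairwise cosine similarity between query and item embeddings (illustrative sketch only)."""
    queries = queries / queries.norm(dim=-1, keepdim=True).clamp_min(eps)
    items = items / items.norm(dim=-1, keepdim=True).clamp_min(eps)
    return queries @ items.T  # shape (n_queries, n_items), values in [-1, 1]
```
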

### Removed
- [Breaking] `Dropout` module from `IdEmbeddingsItemNet`. This changes model behaviour during training, so model results starting from this release might slightly differ from previous RecTools versions even when the random seed is fixed. ([#290](https://github.com/MobileTeleSystems/RecTools/pull/290))

## [0.15.0] - 17.07.2025

@@ -24,7 +29,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- [Breaking] Now `LastNSplitter` guarantees taking the last ordered interaction in dataframe in case of identical timestamps ([#288](https://github.com/MobileTeleSystems/RecTools/pull/288))


## [0.14.0] - 16.05.2025

### Added
@@ -33,7 +37,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `map_location` and `model_params_update` arguments for the function `load_from_checkpoint` for Transformer-based models. Use `map_location` to explicitly specify the computing new device and `model_params_update` to update original model parameters (e.g. remove training-specific parameters that are not needed anymore) ([#281](https://github.com/MobileTeleSystems/RecTools/pull/281))
- `get_val_mask_func_kwargs` and `get_trainer_func_kwargs` arguments for Transformer-based models to allow keyword arguments in custom functions used for model training. ([#280](https://github.com/MobileTeleSystems/RecTools/pull/280))


## [0.13.0] - 10.04.2025

### Added
@@ -53,7 +56,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
- Interactions extra columns are not dropped in `Dataset.filter_interactions` method [#267](https://github.com/MobileTeleSystems/RecTools/pull/267)


## [0.11.0] - 17.02.2025

### Added
@@ -68,14 +70,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- `ImplicitRanker` `rank` method compatible with `Ranker` protocol. `use_gpu` and `num_threads` params moved from `rank` method to `__init__`. [#251](https://github.com/MobileTeleSystems/RecTools/pull/251)


## [0.10.0] - 16.01.2025

### Added
- `ImplicitBPRWrapperModel` model with algorithm description in extended baselines tutorial ([#232](https://github.com/MobileTeleSystems/RecTools/pull/232), [#239](https://github.com/MobileTeleSystems/RecTools/pull/239))
- All vector models and `EASEModel` support for enabling ranking on GPU and selecting number of threads for CPU ranking. Added `recommend_n_threads` and `recommend_use_gpu_ranking` parameters to `EASEModel`, `ImplicitALSWrapperModel`, `ImplicitBPRWrapperModel`, `PureSVDModel` and `DSSMModel`. Added `recommend_use_gpu_ranking` to `LightFMWrapperModel`. GPU and CPU ranking may provide different ordering of items with identical scores in recommendation table, so this could change ordering items in recommendations since GPU ranking is now used as a default one. ([#218](https://github.com/MobileTeleSystems/RecTools/pull/218))


## [0.9.0] - 11.12.2024

### Added
@@ -115,7 +115,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
- [Breaking] `assume_external_ids` parameter in `recommend` and `recommend_to_items` model methods ([#177](https://github.com/MobileTeleSystems/RecTools/pull/177))


## [0.7.0] - 29.07.2024

### Added
49 changes: 24 additions & 25 deletions README.md
@@ -24,17 +24,16 @@
RecTools is an easy-to-use Python library which makes the process of building recommender systems easier and
faster than ever before.

## ✨ Highlights: Transformer models released! ✨
## ✨ Highlights: HSTU model released! ✨

**BERT4Rec and SASRec are now available in RecTools:**
**HSTU architecture from ["Actions speak louder than words..."](https://arxiv.org/abs/2402.17152) is now available in RecTools as `HSTUModel`:**
- Fully compatible with our `fit` / `recommend` paradigm and requires NO special data processing
- Explicitly described in our [Transformers Theory & Practice Tutorial](examples/tutorials/transformers_tutorial.ipynb): loss options, item embedding options, category features utilization and more!
- Supports context-aware recommendations when Relative Time Bias is enabled (see the usage sketch below)
- Supports all loss options, item embedding options, category features utilization and other common modular functionality of RecTools transformer models
- In the [HSTU tutorial](examples/tutorials/transformers_HSTU_tutorial.ipynb) we show that the original metrics reported for HSTU on public Movielens datasets are actually **underestimated**!
- Configurable, customizable, callback-friendly, checkpoints-included, logs-out-of-the-box, custom-validation-ready, multi-gpu-compatible! See [Transformers Advanced Training User Guide](examples/tutorials/transformers_advanced_training_guide.ipynb) and [Transformers Customization Guide](examples/tutorials/transformers_customization_guide.ipynb)
- Public benchmarks which compare RecTools models to other open-source implementations following BERT4Rec replicability paper show that RecTools implementations achieve highest scores on multiple datasets: [Performance on public transformers benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results)




Please note that we always compare the quality of our implementations to results reported in academic papers. [Public benchmarks for transformer models SASRec and BERT4Rec](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) show that RecTools implementations achieve the highest scores on multiple datasets compared to other published results.
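
A hedged usage sketch of `HSTUModel` under the common `fit` / `recommend` interface is shown below. It assumes `ratings` and `dataset` are constructed as in the Get started example that follows; the `epochs` value is an illustrative assumption, not a recommended setting, and the optional `context` argument of `recommend` is only noted in a comment.

```python
from rectools import Columns
from rectools.models import HSTUModel

# `ratings` and `dataset` are assumed to be constructed as in the Get started example below
model = HSTUModel(epochs=1)  # epochs value is illustrative; other hyperparameters left at defaults
model.fit(dataset)

# The optional `context` argument of `recommend` enables context-aware recommendations
# when Relative Time Bias is used (see the changelog entry for this PR); it is omitted here.
recos = model.recommend(
    users=ratings[Columns.User].unique(),
    dataset=dataset,
    k=10,
    filter_viewed=True,
)
```
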


## Get started
@@ -48,11 +47,10 @@ unzip ml-1m.zip

```python
import pandas as pd
from implicit.nearest_neighbours import TFIDFRecommender

from rectools import Columns
from rectools.dataset import Dataset
from rectools.models import ImplicitItemKNNWrapperModel
from rectools.models import SASRecModel

# Read the data
ratings = pd.read_csv(
@@ -67,7 +65,7 @@ ratings = pd.read_csv(
dataset = Dataset.construct(ratings)

# Fit model
model = ImplicitItemKNNWrapperModel(TFIDFRecommender(K=10))
model = SASRecModel(n_factors=64, epochs=100, loss="sampled_softmax")
model.fit(dataset)

# Make recommendations
@@ -105,22 +103,22 @@ pip install rectools[all]

## Recommender Models
The table below lists recommender models that are available in RecTools.
See the [recommender baselines extended tutorial](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/tutorials/baselines_extended_tutorial.ipynb) for a deep dive into the theory & practice of our supported models.

| Model | Type | Description (🎏 for user/item features, 🔆 for warm inference, ❄️ for cold inference support) | Tutorials & Benchmarks |
|----|----|---------|--------|
| SASRec | Neural Network | `rectools.models.SASRecModel` - Transformer-based sequential model with unidirectional attention mechanism and "Shifted Sequence" training objective <br>🎏| 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| BERT4Rec | Neural Network | `rectools.models.BERT4RecModel` - Transformer-based sequential model with bidirectional attention mechanism and "MLM" (masked item) training objective <br>🎏| 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| [implicit](https://github.com/benfred/implicit) ALS Wrapper | Matrix Factorization | `rectools.models.ImplicitALSWrapperModel` - Alternating Least Squares Matrix Factorizattion algorithm for implicit feedback. <br>🎏| 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Implicit-ALS)<br> 🚀 [50% boost to metrics with user & item features](examples/5_benchmark_iALS_with_features.ipynb) |
| [implicit](https://github.com/benfred/implicit) BPR-MF Wrapper | Matrix Factorization | `rectools.models.ImplicitBPRWrapperModel` - Bayesian Personalized Ranking Matrix Factorization algorithm. | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Bayesian-Personalized-Ranking-Matrix-Factorization-(BPR-MF)) |

| Model | Type | Description (🎏 for user/item features, 🔆 for warm inference, ❄️ for cold inference support) | Tutorials & Benchmarks |
|---------------------|----|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|
| HSTU | Neural Network | `rectools.models.HSTUModel` - Sequential model with unidirectional pointwise aggregated attention mechanism, incorporating relative attention bias from positional and temporal information, introduced in ["Actions speak louder than words..."](https://arxiv.org/pdf/2402.17152), combined with "Shifted Sequence" training objective as in the original public benchmarks<br>🎏 | 📓 [HSTU Theory & Practice](examples/tutorials/transformers_HSTU_tutorial.ipynb) <br> 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 🚀 [100% reproduces original public benchmark results](examples/tutorials/transformers_HSTU_tutorial.ipynb) |
| SASRec | Neural Network | `rectools.models.SASRecModel` - Transformer-based sequential model with unidirectional attention mechanism and "Shifted Sequence" training objective <br>🎏 | 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| BERT4Rec | Neural Network | `rectools.models.BERT4RecModel` - Transformer-based sequential model with bidirectional attention mechanism and "MLM" (masked item) training objective <br>🎏 | 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| [implicit](https://github.com/benfred/implicit) ALS Wrapper | Matrix Factorization | `rectools.models.ImplicitALSWrapperModel` - Alternating Least Squares Matrix Factorization algorithm for implicit feedback. <br>🎏 | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Implicit-ALS)<br> 🚀 [50% boost to metrics with user & item features](examples/5_benchmark_iALS_with_features.ipynb) |
| [implicit](https://github.com/benfred/implicit) BPR-MF Wrapper | Matrix Factorization | `rectools.models.ImplicitBPRWrapperModel` - Bayesian Personalized Ranking Matrix Factorization algorithm. | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Bayesian-Personalized-Ranking-Matrix-Factorization-(BPR-MF)) |
| [implicit](https://github.com/benfred/implicit) ItemKNN Wrapper | Nearest Neighbours | `rectools.models.ImplicitItemKNNWrapperModel` - Algorithm that calculates item-item similarity matrix using distances between item vectors in user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#ItemKNN) |
| [LightFM](https://github.com/lyst/lightfm) Wrapper | Matrix Factorization | `rectools.models.LightFMWrapperModel` - Hybrid matrix factorization algorithm which utilises user and item features and supports a variety of losses.<br>🎏 🔆 ❄️| 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#LightFM)<br>🚀 [10-25 times faster inference with RecTools](examples/6_benchmark_lightfm_inference.ipynb)|
| EASE | Linear Autoencoder | `rectools.models.EASEModel` - Embarassingly Shallow Autoencoders implementation that explicitly calculates dense item-item similarity matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#EASE) |
| PureSVD | Matrix Factorization | `rectools.models.PureSVDModel` - Truncated Singular Value Decomposition of user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#PureSVD) |
| DSSM | Neural Network | `rectools.models.DSSMModel` - Two-tower Neural model that learns user and item embeddings utilising their explicit features and learning on triplet loss.<br>🎏 🔆 | - |
| Popular | Heuristic | `rectools.models.PopularModel` - Classic baseline which computes popularity of items and also accepts params like time window and type of popularity computation.<br>❄️| - |
| Popular in Category | Heuristic | `rectools.models.PopularInCategoryModel` - Model that computes poularity within category and applies mixing strategy to increase Diversity.<br>❄️| - |
| Random | Heuristic | `rectools.models.RandomModel` - Simple random algorithm useful to benchmark Novelty, Coverage, etc.<br>❄️| - |
| [LightFM](https://github.com/lyst/lightfm) Wrapper | Matrix Factorization | `rectools.models.LightFMWrapperModel` - Hybrid matrix factorization algorithm which utilises user and item features and supports a variety of losses.<br>🎏 🔆 ❄️ | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#LightFM)<br>🚀 [10-25 times faster inference with RecTools](examples/6_benchmark_lightfm_inference.ipynb)|
| EASE | Linear Autoencoder | `rectools.models.EASEModel` - Embarrassingly Shallow Autoencoders implementation that explicitly calculates a dense item-item similarity matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#EASE) |
| PureSVD | Matrix Factorization | `rectools.models.PureSVDModel` - Truncated Singular Value Decomposition of user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#PureSVD) |
| DSSM | Neural Network | `rectools.models.DSSMModel` - Two-tower Neural model that learns user and item embeddings utilising their explicit features and learning on triplet loss.<br>🎏 🔆 | - |
| Popular | Heuristic | `rectools.models.PopularModel` - Classic baseline which computes popularity of items and also accepts params like time window and type of popularity computation.<br>❄️ | - |
| Popular in Category | Heuristic | `rectools.models.PopularInCategoryModel` - Model that computes popularity within a category and applies a mixing strategy to increase Diversity.<br>❄️ | - |
| Random | Heuristic | `rectools.models.RandomModel` - Simple random algorithm useful to benchmark Novelty, Coverage, etc.<br>❄️ | - |

- All of the models follow the same interface. **No exceptions**
- No need for manual creation of sparse matrices, torch dataloaders or mapping ids. Preparing data for models is as simple as `dataset = Dataset.construct(interactions_df)`
@@ -215,6 +213,7 @@ make clean
- [Grigoriy Gusarov](https://github.com/Gooogr)
- [Aki Ariga](https://github.com/chezou)
- [Nikolay Undalov](https://github.com/nsundalov)
- [Aleksey Kuzin](https://github.com/teodor-r)

Previous contributors: [Ildar Safilo](https://github.com/irsafilo) [ex-Maintainer], [Daniil Potapov](https://github.com/sharthZ23) [ex-Maintainer], [Alexander Butenko](https://github.com/iomallach), [Igor Belkov](https://github.com/OzmundSedler), [Artem Senin](https://github.com/artemseninhse), [Mikhail Khasykov](https://github.com/mkhasykov), [Julia Karamnova](https://github.com/JuliaKup), [Maxim Lukin](https://github.com/groundmax), [Yuri Ulianov](https://github.com/yukeeul), [Egor Kratkov](https://github.com/jegorus), [Azat Sibagatulin](https://github.com/azatnv), [Vadim Vetrov](https://github.com/Waujito)
