Pinned
- Scaling-Laws-for-Quantized-LLMs (Public, Python): Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
- ADMIRE-BayesOpt (Public, Python, 2 stars): The official codebase of ADMIRE-BayesOpt: Accelerated Data MIxture RE-weighting for Language Models with Bayesian Optimization
- efficient-NLP-multistage-training (Public): Source code of the paper "Efficient NLP Model Finetuning via Multistage Data Filtering" (IJCAI 2023)