
Commit 45d4bac

update readme

1 parent ebbf410

2 files changed: 28 additions, 8 deletions

README.md

Lines changed: 13 additions & 2 deletions
@@ -3,8 +3,13 @@
 This is the implementation for [Full Parameter Fine-Tuning for Large Language Models with Limited Resources](https://arxiv.org/pdf/2306.09782.pdf)
 and [AdaLomo: Low-memory Optimization with Adaptive Learning Rate](https://arxiv.org/pdf/2310.10195.pdf).
 
-LOMO and AdaLomo are integrated in [CoLLiE](https://github.com/OpenMOSS/collie) library, which supports Collaborative Training of Large Language Models in an Efficient Way.
-You can also install `lomo-optim` from PyPI using pip.
+# News
+- LOMO and AdaLomo were integrated in [`transformers`](https://huggingface.co/docs/transformers/main/en/trainer#lomo-optimizer) and [`accelerate`](https://huggingface.co/docs/accelerate/main/en/package_reference/accelerator#accelerate.Accelerator.lomo_backward).
+- The PyPI package `lomo-optim` was released.
+- LOMO and AdaLomo were integrated in the [`CoLLiE`](https://github.com/OpenMOSS/collie) library, which supports Collaborative Training of Large Language Models in an Efficient Way.
+
+# Usage
+You can install `lomo-optim` from PyPI using pip.
 
 ```bash
 pip install lomo-optim
@@ -54,4 +59,10 @@ The code for AdaLomo is in [adalomo](adalomo) folder.
 journal={arXiv preprint arXiv:2306.09782},
 year={2023}
 }
+@article{lv2023adalomo,
+title={AdaLomo: Low-memory Optimization with Adaptive Learning Rate},
+author={Lv, Kai and Yan, Hang and Guo, Qipeng and Lv, Haijun and Qiu, Xipeng},
+journal={arXiv preprint arXiv:2310.10195},
+year={2023}
+}
 ```
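The new News entry points to the `transformers` integration linked above. Below is a minimal sketch of what training with AdaLomo through the `transformers` Trainer could look like, assuming the `optim="adalomo"` value described in the linked trainer docs; it requires `lomo-optim` installed and a GPU, and the checkpoint and dataset names are illustrative placeholders rather than anything from this commit.

```python
# Hedged sketch: fine-tune a small causal LM with AdaLomo via the transformers Trainer.
# The optim="adalomo" string follows the linked trainer docs; model/dataset are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Small slice of a public corpus, tokenized for language modeling.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="adalomo-demo",
    optim="adalomo",              # assumed value from the linked docs; "lomo" selects the plain variant
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```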

README_ZH.md

Lines changed: 15 additions & 6 deletions
@@ -1,11 +1,14 @@
 [**English**](./README.md) | [**中文**](./README_ZH.md)
 
-# LOMO: LOw-Memory Optimization
-
 Implementation of the papers [Full Parameter Fine-Tuning for Large Language Models with Limited Resources](https://arxiv.org/pdf/2306.09782.pdf) and [AdaLomo: Low-memory Optimization with Adaptive Learning Rate](https://arxiv.org/pdf/2310.10195.pdf).
 
-LOMO and AdaLomo have been integrated into [CoLLiE](https://github.com/OpenLMLab/collie) (Collaborative Training of Large Language Models in an Efficient Way).
-You can also use pip to install the `lomo-optim` package from PyPI.
+# News
+- LOMO and AdaLomo were integrated into [`transformers`](https://huggingface.co/docs/transformers/main/en/trainer#lomo-optimizer) and [`accelerate`](https://huggingface.co/docs/accelerate/main/en/package_reference/accelerator#accelerate.Accelerator.lomo_backward).
+- The PyPI package `lomo-optim` was released.
+- LOMO and AdaLomo have been integrated into [`CoLLiE`](https://github.com/OpenLMLab/collie) (Collaborative Training of Large Language Models in an Efficient Way).
+
+# Usage
+You can use pip to install the `lomo-optim` package from PyPI.
 
 ```bash
 pip install lomo-optim
@@ -17,8 +20,8 @@ pip install lomo-optim
 from lomo_optim import Lomo
 from lomo_optim import AdaLomo
 ```
-`Lomo` and `AdaLomo` are used much like PyTorch optimizers, but not in exactly the same way ([example](https://github.com/OpenMOSS/CoLLiE/blob/726ec80d263c1e1c56344dfde5b3c24897daa94d/collie/controller/trainer.py#L469)).
-It is recommended to use `AdaLomo` without `gradnorm` to get better performance while keeping higher throughput.
+`Lomo` and `AdaLomo` are used much like PyTorch optimizers, but not in exactly the same way ([example](https://github.com/OpenMOSS/CoLLiE/blob/726ec80d263c1e1c56344dfde5b3c24897daa94d/collie/controller/trainer.py#L469)).
+It is recommended to use `AdaLomo` without `gradnorm` to get better performance while keeping higher throughput.
 
 # LOMO: LOw-Memory Optimization
 
@@ -52,4 +55,10 @@ The code for AdaLomo is in the [adalomo](adalomo) folder.
 journal={arXiv preprint arXiv:2306.09782},
 year={2023}
 }
+@article{lv2023adalomo,
+title={AdaLomo: Low-memory Optimization with Adaptive Learning Rate},
+author={Lv, Kai and Yan, Hang and Guo, Qipeng and Lv, Haijun and Qiu, Xipeng},
+journal={arXiv preprint arXiv:2310.10195},
+year={2023}
+}
 ```
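The usage note in this diff says `Lomo` and `AdaLomo` behave like PyTorch optimizers but not identically, and recommends `AdaLomo` without `gradnorm`. Below is a minimal sketch of direct use with `lomo-optim`, modeled loosely on the linked CoLLiE trainer example; the `AdaLomo(model, lr=...)` constructor and the `fused_backward(loss, lr)` call are assumptions drawn from that example, not a verified API reference, and the checkpoint and toy data are placeholders.

```python
from lomo_optim import AdaLomo
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the fused LOMO-style update targets GPU training
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

# Assumed constructor: the optimizer wraps the model; no grad-norm clipping is
# configured, matching the README's recommendation to use AdaLomo without gradnorm.
optimizer = AdaLomo(model, lr=1e-3)
lr = 1e-3

texts = ["LOMO fuses the backward pass with the parameter update."] * 4  # toy batch
batch = tokenizer(texts, return_tensors="pt", padding=True).to(device)

for step in range(10):
    loss = model(**batch, labels=batch["input_ids"]).loss
    # Assumed API: fused_backward replaces loss.backward() + optimizer.step(),
    # applying updates as gradients are computed so full gradients are never stored.
    optimizer.fused_backward(loss, lr)
```

The main departure from the standard PyTorch loop is that there is no separate `optimizer.step()`: the learning rate is passed into the fused backward call at each step, which is why the README describes the usage as similar to, but not the same as, a regular optimizer.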
