
Commit 914076e

Authored by Alexander Loftus (loftusa)

update setup and requirements (#241)

* update requirements
* update setup
* add sentencepiece back in
* black formatter
* change 'dev' to 'eval', add scipy back in
* move scipy
* separate out requirements, update environment.yml
* fix pycocotools
* update README
* moved tqdm to eval, wandb only in training
* update version
* black format

Co-authored-by: Alexander Loftus <alex@creyonbio.com>

1 parent 51dff49

File tree: 7 files changed (+60, −38 lines)


README.md

Lines changed: 14 additions & 0 deletions
````diff
@@ -36,6 +36,20 @@ or to create a conda environment for running OpenFlamingo, run
 conda env create -f environment.yml
 ```
 
+To install training or eval dependencies, run one of the first two commands. To install everything, run the third command.
+```
+pip install open-flamingo[training]
+pip install open-flamingo[eval]
+pip install open-flamingo[all]
+```
+
+There are three `requirements.txt` files:
+- `requirements.txt`
+- `requirements-training.txt`
+- `requirements-eval.txt`
+
+Depending on your use case, you can install any of these with `pip install -r <requirements-file.txt>`. The base file contains only the dependencies needed for running the model.
+
 # Approach
 OpenFlamingo is a multimodal language model that can be used for a variety of tasks. It is trained on a large multimodal dataset (e.g. Multimodal C4) and can be used to generate text conditioned on interleaved images/text. For example, OpenFlamingo can be used to generate a caption for an image, or to generate a question given an image and a text passage. The benefit of this approach is that we are able to rapidly adapt to new tasks using in-context learning.
 
````
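The pip commands added to the README above can also be produced programmatically. A minimal, hypothetical helper (the `install_command` function and `EXTRAS` set are illustrative names, not part of the package; only the extra names `training`, `eval`, and `all` come from the README):

```python
# Hypothetical helper: maps a use case to the pip command the README documents.
# Only the extra names mirror the README; everything else is illustrative.
EXTRAS = {"training", "eval", "all"}

def install_command(extra=None):
    """Return the documented pip command for a given optional-dependency set."""
    if extra is None:
        return "pip install open-flamingo"  # base dependencies only
    if extra not in EXTRAS:
        raise ValueError(f"unknown extra: {extra!r}")
    return f"pip install open-flamingo[{extra}]"

print(install_command("training"))  # pip install open-flamingo[training]
```

Note that in zsh the bracketed form must be quoted (`pip install "open-flamingo[training]"`), since square brackets are glob characters there.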

environment.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -7,4 +7,6 @@ dependencies:
   - pip
   - pip:
     - -r requirements.txt
+    - -r requirements-training.txt
+    - -r requirements-eval.txt
     - -e .
```

requirements-dev.txt

Lines changed: 0 additions & 5 deletions
This file was deleted.

requirements-eval.txt

Lines changed: 13 additions & 0 deletions
```diff
@@ -0,0 +1,13 @@
+scipy
+torchvision
+nltk
+inflection
+pycocoevalcap
+pycocotools
+tqdm
+
+black
+mypy
+pylint
+pytest
+requests
```

requirements-training.txt

Lines changed: 5 additions & 0 deletions
```diff
@@ -0,0 +1,5 @@
+torchvision
+braceexpand
+webdataset
+tqdm
+wandb
```

requirements.txt

Lines changed: 2 additions & 13 deletions
```diff
@@ -2,17 +2,6 @@ einops
 einops-exts
 transformers>=4.28.1
 torch==2.0.1
-torchvision
 pillow
-more-itertools
-datasets
-braceexpand
-webdataset
-wandb
-nltk
-scipy
-inflection
-sentencepiece==0.1.98
-pycocoevalcap
-pycocotools
-open_clip_torch>=2.16.0
+open_clip_torch>=2.16.0
+sentencepiece==0.1.98
```
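The old setup.py carried a commented-out `_read_reqs` helper for parsing files like these (this commit deletes it in favor of hand-maintained lists). A working sketch of that parsing approach, with `read_reqs` as an assumed name:

```python
from pathlib import Path

def read_reqs(path):
    """Parse a requirements file into a list of specifiers, skipping
    blank lines and comments. Working sketch of the commented-out
    _read_reqs helper this commit removes from setup.py."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [s.strip() for s in lines if s.strip() and not s.strip().startswith("#")]
```

For example, `read_reqs("requirements-training.txt")` would return the five package names listed above.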

setup.py

Lines changed: 24 additions & 20 deletions
```diff
@@ -6,47 +6,51 @@
 with Path(Path(__file__).parent, "README.md").open(encoding="utf-8") as file:
     long_description = file.read()
 
-# TODO: This is a hack to get around the fact that we can't read the requirements.txt file, we should fix this.
-# def _read_reqs(relpath):
-#     fullpath = os.path.join(Path(__file__).parent, relpath)
-#     with open(fullpath) as f:
-#         return [
-#             s.strip()
-#             for s in f.readlines()
-#             if (s.strip() and not s.startswith("#"))
-#         ]
-
 REQUIREMENTS = [
     "einops",
     "einops-exts",
     "transformers>=4.28.1",
     "torch==2.0.1",
-    "torchvision",
     "pillow",
-    "more-itertools",
-    "datasets",
-    "braceexpand",
-    "webdataset",
-    "wandb",
-    "nltk",
+    "open_clip_torch>=2.16.0",
+    "sentencepiece==0.1.98",
+]
+
+EVAL = [
     "scipy",
+    "torchvision",
+    "nltk",
     "inflection",
-    "sentencepiece==0.1.98",
-    "open_clip_torch>=2.16.0",
+    "pycocoevalcap",
+    "pycocotools",
+    "tqdm",
+]
+
+TRAINING = [
+    "wandb",
+    "torchvision",
+    "braceexpand",
+    "webdataset",
+    "tqdm",
 ]
 
 setup(
     name="open_flamingo",
     packages=find_packages(),
     include_package_data=True,
-    version="2.0.0",
+    version="2.0.1",
     license="MIT",
     description="An open-source framework for training large multimodal models",
     long_description=long_description,
     long_description_content_type="text/markdown",
     data_files=[(".", ["README.md"])],
     keywords=["machine learning"],
     install_requires=REQUIREMENTS,
+    extras_require={
+        "eval": EVAL,
+        "training": TRAINING,
+        "all": list(set(REQUIREMENTS + EVAL + TRAINING)),
+    },
     classifiers=[
         "Development Status :: 4 - Beta",
         "Intended Audience :: Developers",
```
