`docs/RELEASE.md`

# Release Process

The Invoke application is published as a python package on [PyPI]. This includes both a source distribution and built distribution (a wheel).

Most users install it with the [Launcher](https://github.com/invoke-ai/launcher/), others with `pip`.

The launcher uses GitHub as the source of truth for available releases.

## Broad Strokes

- Merge all changes and bump the version in the codebase.
- Tag the release commit.
- Wait for the release workflow to complete.
- Approve the PyPI publish jobs.
- Write GH release notes.

## General Prep

Make a developer call-out for PRs to merge. Merge and test things out. Bump the version by editing `invokeai/version/invokeai_version.py`.

## Release Workflow

The `release.yml` workflow runs a number of jobs to handle code checks, tests, and building and publishing on PyPI.

It is triggered on **tag push**, when the tag matches `v*`.

### Triggering the Workflow

Ensure all commits that should be in the release are merged, and you have pulled them locally.

Double-check that you have checked out the commit that will represent the release (typically the latest commit on `main`).

Run `make tag-release` to tag the current commit and kick off the workflow. You will be prompted to provide a message - use the version specifier.

If this version's tag already exists for some reason (maybe you had to make a last minute change), the script will overwrite it.
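The overwrite behavior can be sketched like this, in a throwaway repo so it is safe to try (an illustration only - the real Make target may differ, and it also pushes the tag):

```shell
# Hypothetical sketch of re-tagging a release; runs in a disposable repo.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "bot@example.com"
git config user.name "bot"
git commit -q --allow-empty -m "release prep"
git tag -a v3.5.0 -m "v3.5.0"            # tag the release commit
git commit -q --allow-empty -m "last minute fix"
git tag -f -a v3.5.0 -m "v3.5.0"         # -f overwrites the existing tag
[ "$(git rev-parse v3.5.0^{commit})" = "$(git rev-parse HEAD)" ] && echo "tag moved to HEAD"
```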

> In case you cannot use the Make target, the release may also be dispatched [manually] via GH.

### Workflow Jobs and Process

The workflow consists of a number of concurrently-run checks and tests, then two final publish jobs.

The publish jobs require manual approval and are only run if the other jobs succeed.

#### `check-version` Job

This job ensures that the `invokeai` python package version specifier matches the tag for the release. The version specifier is pulled from the `__version__` variable in `invokeai/version/invokeai_version.py`.
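As a rough sketch of what the check amounts to (not the action's actual implementation - the tag and the version file below are stand-ins):

```shell
# Sketch: the tag must equal "v" + __version__. The real job uses the
# samuelcolvin/check-python-version action; this file is a stand-in.
TAG="v3.5.0"                                    # example tag
VERSION_FILE="$(mktemp)"
echo '__version__ = "3.5.0"' > "$VERSION_FILE"
VERSION="$(sed -n 's/^__version__ = "\(.*\)"$/\1/p' "$VERSION_FILE")"
[ "$TAG" = "v$VERSION" ] && echo "version check passed"
```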

This job uses [samuelcolvin/check-python-version].

> Any valid [version specifier] works, so long as the tag matches the version. The release workflow works exactly the same for `RC`, `post`, `dev`, etc.

#### Check and Test Jobs

Next, these jobs run and must pass. They are the same jobs that are run for every PR.

- **`python-tests`**: runs `pytest` on a matrix of platforms
- **`python-checks`**: runs `ruff` (format and lint)
- **`frontend-tests`**: runs `vitest`
- **`frontend-checks`**: runs `prettier` (format), `eslint` (lint), `dpdm` (circular refs), `tsc` (static type check) and `knip` (unused imports)
- **`typegen-checks`**: ensures the frontend and backend types are synced

#### `build-installer` Job

This sets up both python and frontend dependencies and builds the python package. Internally, this runs `installer/create_installer.sh` and uploads two artifacts:

- **`dist`**: the python distribution, to be published on PyPI
- **`InvokeAI-installer-${VERSION}.zip`**: the legacy install scripts

You don't need to download either of these files.

> The legacy install scripts are no longer used, but we haven't updated the workflow to skip building them.

#### Sanity Check & Smoke Test

At this point, the release workflow pauses as the remaining publish jobs require approval.

It's possible to test the python package before it gets published to PyPI. We've never had problems with it, so it's not necessary to do this.

But, if you want to be extra-super careful, here's how to test it:

- Download the `dist.zip` build artifact from the `build-installer` job
- Unzip it and find the wheel file
- Create a fresh Invoke install by following the [manual install guide](https://invoke-ai.github.io/InvokeAI/installation/manual/) - but instead of installing from PyPI, install from the wheel
- Test the app
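A minimal sketch of that smoke test, assuming a throwaway virtual environment (the wheel filename and the `invokeai-web` entry point below are illustrative assumptions, not taken from this document):

```shell
# Install the wheel into a disposable venv rather than from PyPI.
# The wheel filename is hypothetical - use the one inside dist.zip.
python3 -m venv /tmp/invoke-smoke-test
# /tmp/invoke-smoke-test/bin/pip install ./invokeai-1.2.3-py3-none-any.whl
# /tmp/invoke-smoke-test/bin/invokeai-web    # then exercise the app in a browser
test -x /tmp/invoke-smoke-test/bin/pip && echo "venv ready"
```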

##### Something isn't right

If testing reveals any issues, no worries. Cancel the workflow, which will cancel the pending publish jobs (you didn't approve them prematurely, right?), and start over.

#### PyPI Publish Jobs

The publish jobs will not run if any of the previous jobs fail.

They use [GitHub environments], which are configured as [trusted publishers] on PyPI.

Both jobs require @hipsterusername or @psychedelicious to approve them from the workflow's **Summary** tab.

- Click the **Review deployments** button
- Select the environment (either `testpypi` or `pypi` - typically you select both)
- Click **Approve and deploy**

> **If the version already exists on PyPI, the publish jobs will fail.** PyPI only allows a given version to be published once - you cannot change it. If a version published on PyPI has a problem, you'll need to "fail forward" by bumping the app version and publishing a followup release.

If there are no incidents, contact @hipsterusername or @lstein, who have owner access.

#### `publish-testpypi` Job

Publishes the distribution on the [Test PyPI] index, using the `testpypi` GitHub environment.

This job is not required for the production PyPI publish, but included just in case you want to test the PyPI release for some reason:

- Approve this publish job without approving the prod publish
- Let it finish
- Create a fresh Invoke install by following the [manual install guide](https://invoke-ai.github.io/InvokeAI/installation/manual/), making sure to use the Test PyPI index URL: `https://test.pypi.org/simple/`
- Test the app
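The pip incantation for that install might look like the following (a sketch, echoed rather than executed; the `--extra-index-url` is an assumption, since the app's dependencies are only published on production PyPI):

```shell
# Build the pip arguments for a Test PyPI install and show the command.
PIP_ARGS="--index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ invokeai"
echo "pip install $PIP_ARGS"
```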

#### `publish-pypi` Job

Publishes the distribution on the production PyPI index, using the `pypi` GitHub environment.

It's a good idea to wait to approve and run this job until you have the release notes ready!

## Prep and publish the GitHub Release

1. [Draft a new release] on GitHub, choosing the tag that triggered the release.

2. The **Generate release notes** button automatically inserts the changelog and new contributors. Make sure to select the correct tags for this release and the last stable release. GH often selects the wrong tags - do this manually.

3. Write the release notes, describing important changes. Contributions from community members should be shouted out. Use the GH-generated changelog to see all contributors. If there are Weblate translation updates, open that PR and shout out every person who contributed a translation.

4. Check **Set as a pre-release** if it's a pre-release.

5. Approve and wait for the `publish-pypi` job to finish if you haven't already.

6. Publish the GH release.

7. Post the release in Discord in the [releases](https://discord.com/channels/1020123559063990373/1149260708098359327) channel with abbreviated notes. For example:

    > It's a pretty big one - Form Builder, Metadata Nodes (thanks @SkunkWorxDark!), and much more.

8. Right click the message in releases and copy the link to it. Then, post that link in the [new-release-discussion](https://discord.com/channels/1020123559063990373/1149506274971631688) channel. For example:

---

`docs/features/low-vram.md`

It is possible to fine-tune the settings for best performance or if you still get out-of-memory errors.

Low-VRAM mode involves 5 features, each of which can be configured or fine-tuned:

- Partial model loading (`enable_partial_loading`)
- PyTorch CUDA allocator config (`pytorch_cuda_alloc_conf`)
- Dynamic RAM and VRAM cache sizes (`max_cache_ram_gb`, `max_cache_vram_gb`)
- Working memory (`device_working_mem_gb`)
- Keeping a RAM weight copy (`keep_ram_copy_of_weights`)

As described above, you can enable partial model loading by adding this line to your `invokeai.yaml` file:

```yaml
enable_partial_loading: true
```

### PyTorch CUDA allocator config

The PyTorch CUDA allocator's behavior can be configured using the `pytorch_cuda_alloc_conf` config. Tuning the allocator configuration can help to reduce the peak reserved VRAM. The optimal configuration is dependent on many factors (e.g. device type, VRAM, CUDA driver version, etc.), but switching from PyTorch's native allocator to using CUDA's built-in allocator works well on many systems. To try this, add the following line to your `invokeai.yaml` file:
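As a sketch of such a line (the exact value is an assumption here; per the PyTorch allocator documentation, CUDA's built-in async allocator is selected like this):

```yaml
# Assumption: routes allocations to CUDA's built-in async allocator
pytorch_cuda_alloc_conf: "backend:cudaMallocAsync"
```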

A more complete explanation of the available configuration options is [here](https://pytorch.org/docs/stable/notes/cuda.html#optimizing-memory-usage-with-pytorch-cuda-alloc-conf).

### Dynamic RAM and VRAM cache sizes

Loading models from disk is slow and can be a major bottleneck for performance. Invoke uses two model caches - RAM and VRAM - to reduce loading from disk to a minimum.

But, if your GPU has enough VRAM to hold models fully, you might get a perf boost.

```yaml
# As an example, if your system has 32GB of RAM and no other heavy processes, setting the `max_cache_ram_gb` to 28GB
# might be a good value to achieve aggressive model caching.
max_cache_ram_gb: 28

# The default max cache VRAM size is adjusted dynamically based on the amount of available VRAM (taking into
# consideration the VRAM used by other processes).
# You can override the default value by setting `max_cache_vram_gb`.
# CAUTION: Most users should not manually set this value. See warning below.
max_cache_vram_gb: 16
```

!!! warning "Max safe value for `max_cache_vram_gb`"

    Most users should not manually configure the `max_cache_vram_gb`. This configuration value takes precedence over the `device_working_mem_gb` and any operations that explicitly reserve additional working memory (e.g. VAE decode). As such, manually configuring it increases the likelihood of encountering out-of-memory errors.

    For users who wish to configure `max_cache_vram_gb`, the max safe value can be determined by subtracting `device_working_mem_gb` from your GPU's VRAM. As described below, the default for `device_working_mem_gb` is 3GB.

    For example, if you have a 12GB GPU, the max safe value for `max_cache_vram_gb` is `12GB - 3GB = 9GB`.

    If you had increased `device_working_mem_gb` to 4GB, then the max safe value for `max_cache_vram_gb` is `12GB - 4GB = 8GB`.

    Most users who override `max_cache_vram_gb` are doing so because they wish to use significantly less VRAM, and should be setting `max_cache_vram_gb` to a value significantly less than the 'max safe value'.
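The arithmetic above can be sketched as follows (example numbers: a 12GB GPU and the default 3GB of device working memory):

```shell
# Max safe cache size = total VRAM - device working memory (values in GB).
TOTAL_VRAM_GB=12
DEVICE_WORKING_MEM_GB=3
MAX_SAFE=$((TOTAL_VRAM_GB - DEVICE_WORKING_MEM_GB))
echo "max safe max_cache_vram_gb: ${MAX_SAFE}GB"
```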

### Working memory

Invoke cannot use _all_ of your VRAM for model caching and loading. It requires some VRAM to use as working memory for various operations.