# Change Log

## Version 0.4.0 - 2021-03-06

- [feature] New `dwarfsextract` tool that allows extracting a file
  system image. It also allows conversion of the file system image
  directly into a standard archive format (e.g. `tar` or `cpio`).
  Extracting a DwarFS image can be significantly faster than
  extracting an equivalent compressed archive.

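  As an illustration (paths are placeholders; see `dwarfsextract --help`
  for the authoritative option list), extraction and conversion might
  look like this:

  ```shell
  # Extract the image contents into a directory:
  dwarfsextract -i image.dwarfs -o extracted/

  # Convert the image directly into a cpio archive on stdout
  # (-f selects a libarchive output format):
  dwarfsextract -i image.dwarfs -f cpio > image.cpio
  ```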
- [feature] The segmenting algorithm has been completely rewritten
  and is now much cleaner, uses much less memory, is significantly
  faster and detects a lot more duplicate segments. At the same time
  it's easier to configure (just a single window size instead of a
  list).

- [feature] There's a new option `--max-lookback-blocks` that
  allows duplicate segments to be detected across multiple blocks,
  which can result in significantly better compression when using
  small file system blocks.

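  For example (the values here are purely illustrative, and my reading
  is that `-S` gives the block size as a power of two):

  ```shell
  # Small 1 MiB blocks (-S 20) with duplicate-segment lookback
  # across up to 4 previous blocks:
  mkdwarfs -i /path/to/input -o output.dwarfs -S 20 --max-lookback-blocks 4
  ```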
- [compat] The `--blockhash-window-sizes` and
  `--blockhash-increment-shift` options were replaced by
  `--window-size` and `--window-step`, respectively. The new
  `--window-size` option takes only a single window size instead
  of a list.

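  A sketch of the new invocation (the window values are illustrative,
  not recommendations):

  ```shell
  # A single window size plus a window step, replacing the old
  # --blockhash-window-sizes list:
  mkdwarfs -i /path/to/input -o output.dwarfs --window-size 12 --window-step 2
  ```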
- [fix] The rewrite of the segmenting algorithm was triggered by
  a "bug" (github #35) that caused excessive memory consumption
  in `mkdwarfs`. It wasn't really a bug, though, more like a bad
  algorithm that used memory proportional to the file size. This
  issue has now been fully solved.

- [fix] Scanning of large files would excessively grow the `mkdwarfs`
  RSS. The memory would sooner or later have been reclaimed by the
  kernel, but the code now actively releases the memory while
  scanning.

- [perf] `mkdwarfs` speed has been significantly improved. The
  47 GiB worth of Perl installations can now be turned into a
  DwarFS image in less than 6 minutes, about 30% faster than
  with the 0.3.1 release. Using `lzma` compression, it actually
  takes less than 4 minutes now, almost twice as fast as 0.3.1.

- [perf] At the same time, compression ratio also significantly
  improved, mostly due to the new segmenting algorithm. With the
  0.3.1 release, using the default configuration, the 47 GiB of
  Perl installations compressed down to 471.6 MiB. With the 0.4.0
  release, this has dropped to 426.5 MiB, a 10% improvement.
  Using `lzma` compression (`-l9`), the size of the resulting
  image went from 319.5 MiB to 300.9 MiB, about 5% better. More
  importantly, though, the uncompressed file system size dropped
  from about 7 GiB to 4 GiB thanks to improved segmenting, which
  means *fewer* blocks need to be decompressed on average when
  using the file system.

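  The `lzma` numbers above correspond to an invocation along these
  lines (the input path is a placeholder):

  ```shell
  # -l9 selects the strongest compression preset, which uses lzma:
  mkdwarfs -i perl-install -o perl.dwarfs -l9
  ```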
- [build] The project can now be built to use the system-installed
  `zstd` and `xxHash` libraries. (fixes github #34)

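  A hypothetical CMake configuration for this (the option names below
  are assumptions, so check the project's `CMakeLists.txt` for the
  real ones):

  ```shell
  cmake .. -DPREFER_SYSTEM_ZSTD=ON -DPREFER_SYSTEM_XXHASH=ON
  make
  ```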
- [build] The project can now be built without the legacy FUSE
  driver. (fixes github #32)

- [other] Several small code cleanups.

## Version 0.3.1 - 2021-01-07

- [fix] Fix linking of Python libraries