
Commit 3ea7c9e

Merge pull request #148 from txkxgit/txkx/docu
Txkx/docu
2 parents 64114df + 3071f48

File tree: README.md, doc/mkdwarfs.md

2 files changed: 18 additions, 18 deletions
README.md

Lines changed: 8 additions & 8 deletions
@@ -563,7 +563,7 @@ resulting images are significantly smaller. Still, `mkdwarfs` is about
 **4 times faster** and produces an image that's **12 times smaller** than
 the SquashFS image. The DwarFS image is only 0.6% of the original file size.
 
-So why not use `lzma` instead of `zstd` by default? The reason is that `lzma`
+So, why not use `lzma` instead of `zstd` by default? The reason is that `lzma`
 is about an order of magnitude slower to decompress than `zstd`. If you're
 only accessing data on your compressed filesystem occasionally, this might
 not be a big deal, but if you use it extensively, `zstd` will result in
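
For the occasional-access case described here, the compression can be chosen at build time. A minimal sketch, assuming the default level uses `zstd` and that `-l9` (the preset referenced later in this diff) switches the block compression to LZMA:

```
# default preset: zstd block compression, fast random access
mkdwarfs -i /path/dir -o image-zstd.dwarfs

# assumed: -l9 trades decompression speed for a smaller LZMA image
mkdwarfs -i /path/dir -o image-lzma.dwarfs -l9
```
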
@@ -733,7 +733,7 @@ Summary
 5.85 ± 0.08 times faster than 'squashfs-zstd'
 ```
 
-So DwarFS is almost six times faster than SquashFS. But what's more,
+So, DwarFS is almost six times faster than SquashFS. But what's more,
 SquashFS also uses significantly more CPU power. However, the numbers
 shown above for DwarFS obviously don't include the time spent in the
 `dwarfs` process, so I repeated the test outside of hyperfine:
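
The summary above is the shape of output `hyperfine` prints when benchmarking one command per filesystem. A hedged sketch of such a comparison, with hypothetical mount points:

```
# hypothetical commands; /mnt/dwarfs and /mnt/squashfs are placeholders
hyperfine --warmup 2 \
  -n dwarfs        'tar cf /dev/null /mnt/dwarfs' \
  -n squashfs-zstd 'tar cf /dev/null /mnt/squashfs'
```
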
@@ -746,7 +746,7 @@ user 0m2.154s
 sys 0m1.846s
 ```
 
-So in total, DwarFS was using 5.7 seconds of CPU time, whereas
+So, in total, DwarFS was using 5.7 seconds of CPU time, whereas
 SquashFS was using 20.2 seconds, almost four times as much. Ignore
 the 'real' time, this is only how long it took me to unmount the
 file system again after mounting it.
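
Repeating the test "outside of hyperfine" amounts to timing the FUSE daemon itself. A sketch under the assumption that `dwarfs` accepts the standard FUSE `-f` foreground option, so that `time` covers the daemon's whole lifetime:

```
# mount in the foreground; `time` then includes all daemon CPU usage
time dwarfs image.dwarfs /path/to/mountpoint -f

# in a second terminal: exercise the filesystem, then unmount
find /path/to/mountpoint -type f -exec cat {} + > /dev/null
fusermount -u /path/to/mountpoint
```
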
@@ -988,7 +988,7 @@ user 0m13.234s
 sys 0m1.382s
 ```
 
-So `dwarfsextract` is almost 4 times faster thanks to using multiple
+So, `dwarfsextract` is almost 4 times faster thanks to using multiple
 worker threads for decompression. It's writing about 300 MiB/s in this
 example.
 
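A hypothetical `dwarfsextract` invocation matching this scenario, assuming it shares the `-i`/`-o` and `--num-workers` option spellings that `mkdwarfs` uses (the image name is a placeholder):

```
# extract with multiple decompression workers
time dwarfsextract -i image.dwarfs -o extracted --num-workers=8
```
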
@@ -1118,7 +1118,7 @@ user 714m44.286s
 sys 3m6.751s
 ```
 
-So it's an order of magnitude slower than `mkdwarfs` and uses 14 times
+So, it's an order of magnitude slower than `mkdwarfs` and uses 14 times
 as much CPU resources as `mkdwarfs -l9`. The resulting archive is pretty
 close in size to the default configuration DwarFS image, but it's more
 than 50% bigger than the image produced by `mkdwarfs -l9`.
@@ -1174,7 +1174,7 @@ $ ll perl-install.*
 -rw-r--r-- 1 mhx users 1016981520 Mar 6 21:12 perl-install.wim
 ```
 
-So wimlib is definitely much better than squashfs, in terms of both
+So, wimlib is definitely much better than squashfs, in terms of both
 compression ratio and speed. DwarFS is however about 3 times faster to
 create the file system and the DwarFS file system is less than half the size.
 When switching to LZMA compression, the DwarFS file system is more than
@@ -1332,7 +1332,7 @@ user 201m37.816s
 sys 2m15.005s
 ```
 
-So it processed 21 MiB out of 48 GiB in half an hour, using almost
+So, it processed 21 MiB out of 48 GiB in half an hour, using almost
 twice as much CPU resources as DwarFS for the *whole* file system.
 At this point I decided it's likely not worth waiting (presumably)
 another month (!) for `mkcromfs` to finish. I double checked that
@@ -1415,7 +1415,7 @@ user 3m43.324s
 sys 0m4.015s
 ```
 
-So `mkdwarfs` is about 50 times faster than `mkcromfs` and uses 75 times
+So, `mkdwarfs` is about 50 times faster than `mkcromfs` and uses 75 times
 less CPU resources. At the same time, the DwarFS file system is 30% smaller:
 
 ```

doc/mkdwarfs.md

Lines changed: 10 additions & 10 deletions
@@ -18,13 +18,13 @@ full contents of `/path/dir` with:
 
     mkdwarfs -i /path/dir -o image.dwarfs
 
-After that, you can mount it with dwarfs(1):
+After that, you can mount it using dwarfs(1):
 
     dwarfs image.dwarfs /path/to/mountpoint
 
 ## OPTIONS
 
-There two mandatory options for specifying the input and output:
+There are two mandatory options for specifying the input and output:
 
 - `-i`, `--input=`*path*|*file*:
   Path to the root directory containing the files from which you want to
@@ -78,9 +78,9 @@ Most other options are concerned with compression tuning:
   to the number of processors available on your system. Use this option if
   you want to limit the resources used by `mkdwarfs` or to optimize build
   speed. This option affects only the compression phase.
-  In the compression phase, the worker threads are used to compress the
+  During the compression phase, the worker threads are used to compress the
   individual filesystem blocks in the background. Ordering, segmenting
-  and block building are, again, single-threaded and run independently.
+  and block building are single-threaded and run independently.
 
 - `--compress-niceness=`*value*:
   Set the niceness of compression worker threads. Defaults to 5. This makes
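
A hypothetical invocation combining the two tuning options from this hunk (the values are placeholders; the defaults are the processor count and a niceness of 5):

```
# limit compression to 4 background workers and lower their priority
mkdwarfs -i /path/dir -o image.dwarfs --num-workers=4 --compress-niceness=10
```
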
@@ -112,7 +112,7 @@ Most other options are concerned with compression tuning:
   will completely disable duplicate segment search.
 
 - `-W`, `--window-size=`*value*:
-  Window size of cyclic hash used for segmenting. This is again an exponent
+  Window size of cyclic hash used for segmenting. This is an exponent
   to a base of two. Cyclic hashes are used by `mkdwarfs` for finding
   identical segments across multiple files. This is done on top of duplicate
   file detection. If a reasonable amount of duplicate segments is found,
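
Because the value is an exponent to a base of two, `-W 12` asks for a 2^12 = 4096-byte window. A minimal sketch with a placeholder value:

```
# use a 4 KiB (2^12) cyclic-hash window for segment matching
mkdwarfs -i /path/dir -o image.dwarfs -W 12
```
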
@@ -399,7 +399,7 @@ a library that allows serialization of structures defined in
 [Thrift IDL](https://github.com/facebook/fbthrift/) into an extremely
 compact representation that can be used in-place without the need for
 deserialization. It is very well suited for persistent, memory-mappable
-data. With Frozen, you essentially only pay for what you use: if fields
+data. With Frozen, you essentially only "pay for what you use": if fields
 are defined in the IDL, but they always hold the same value (or are not
 used at all), not a single bit will be allocated for this field even if
 you have a list of millions of items.
@@ -461,7 +461,7 @@ These options are controlled by the `--pack-metadata` option.
   of two. The entries can be decompressed individually, so no
   extra memory is used when accessing the filesystem (except for
   the symbol table, which is only a few hundred bytes). This is
-  turned on by default. For small filesystems, it's possible that
+  enabled by default. For small filesystems, it's possible that
   the compressed strings plus symbol table are actually larger
   than the uncompressed strings. If this is the case, the strings
   will be stored uncompressed, unless `force` is also specified.
@@ -497,10 +497,10 @@ the corresponding packing option.
 plain    | 6,430,275     | 121.30%   | 48.36%  | 41.37%
 ---------|---------------|-----------|---------|---------
 
-So the default (`auto`) is roughly 20% smaller than not using any
+So, the default (`auto`) is roughly 20% smaller than not using any
 packing (`none` or `plain`). Enabling `all` packing options doesn't
 reduce the size much more. However, it *does* help if you want to
-further compress the block. So if you're really desperately trying
+further compress the block. So, if you're really desperately trying
 to reduce the image size, enabling `all` packing would be an option
 at the cost of using a lot more memory when using the filesystem.
 
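The packing modes compared in the table are chosen at build time; a minimal sketch using the mode names from the table above:

```
# default behaviour (equivalent to omitting the option)
mkdwarfs -i /path/dir -o image.dwarfs --pack-metadata=auto

# smallest metadata, at the cost of more memory when the image is used
mkdwarfs -i /path/dir -o image-packed.dwarfs --pack-metadata=all
```
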
@@ -521,7 +521,7 @@ using `--input-list`.
 
 ## FILTER RULES
 
-The filter rules have been inspired by the `rsync` utility. They
+The filter rules have been inspired by the `rsync` utility. These
 look very similar, but there are differences. These rules are quite
 powerful, yet they're somewhat hard to get used to.
 
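A hedged sketch of what such rsync-inspired rules might look like; the `-F` option spelling and the exact rule grammar are assumptions here, not confirmed by this diff:

```
# hypothetical rules: first match wins, as in rsync
mkdwarfs -i / -o image.dwarfs \
  -F '- *.so' \
  -F '+ /usr/include/**' \
  -F '- /usr/**'
```
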