Replies: 3 comments 3 replies
-
There are three block sizes stored in the block pointer according to https://www.giis.co.in/Zfs_ondiskformat.pdf: the logical size (lsize), the physical size after compression (psize), and the allocated size on disk (asize).
I would guess that if compression is enabled, the second block will be stored more efficiently, since it is padded with compressible zeros: it will use only as many ashift-sized 'sectors' as needed. It seems that if compression is off, the last block will use the full 1MiB on disk, which is kinda strange IMHO.
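To make the sector accounting above concrete, here is a minimal sketch (the function name and the 0.1MiB compressed size are my own illustrative assumptions, not measured values) of how a block's allocated size follows from its post-compression size rounded up to whole ashift-sized sectors:

```python
def allocated_size(psize_bytes: int, ashift: int = 12) -> int:
    """Round the physical (post-compression) size up to a whole
    number of 2**ashift-byte sectors, roughly how ZFS allocates
    space on disk (ignoring embedded blocks, gang blocks, raidz)."""
    sector = 1 << ashift
    return -(-psize_bytes // sector) * sector  # ceiling division

# A 1MiB logical block whose tail is mostly zeros might compress
# down to ~0.1MiB of real data (illustrative number):
compressed = int(0.1 * 1024 * 1024)
print(allocated_size(compressed))    # → 106496 (26 x 4KiB sectors)
print(allocated_size(1024 * 1024))   # → 1048576 (full 1MiB, compression off)
```

With the default ashift=12 (4KiB sectors), the zero-padded tail block costs only as many sectors as its compressed payload needs, while an uncompressed block always costs the full logical size.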
-
For a note: #15215
-
It depends on how fast it's written. If you drip data into it slowly, it can end up stored in way more, smaller blocks: when a file is freshly created its block size starts small and only grows as the last block of the file is modified. So, depending on how fast you write the file, I would say it ends up in two 1MB blocks if the write is fast and the dataset can take 1MB blocks.
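The fast-write steady state described above can be sketched as follows (a simplification under my own assumptions: it covers only the case where the file is written quickly, and ignores compression and the slow-write growth behavior; the function name is hypothetical):

```python
def record_layout(file_size: int, recordsize: int = 1 << 20) -> list[int]:
    """Sketch of ZFS record layout for a quickly written file:
    a file no larger than recordsize fits in one block sized to
    the data; a larger file uses uniform blocks of exactly
    recordsize, with the last one logically zero-padded."""
    if file_size <= recordsize:
        return [file_size]                     # single (possibly small) block
    nblocks = -(-file_size // recordsize)      # ceiling division
    return [recordsize] * nblocks

print(record_layout(1100 * 1024))   # ~1.1MiB → two 1MiB logical blocks
print(record_layout(512 * 1024))    # 0.5MiB → one 0.5MiB block
```

This matches the answer above: an incompressible 1.1MiB file written in one go lands in two 1MiB logical blocks, the second mostly zeros.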
-
Hello! Basically:
If I have an incompressible 1.1MiB file on a dataset with recordsize set to 1MiB, will the file be stored as one 1MiB record plus a smaller tail record, or as two full 1MiB records?
I remember reading that all records of a file needed to be the same size, but I can't find that source now, and https://openzfs.github.io/openzfs-docs/man/v2.3/7/zfsprops.7.html#recordsize does not help in that regard (it's also in need of updating, but that's a different matter).
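One way to check this empirically is to compare the file's apparent size with the space actually allocated for it; this sketch is not ZFS-specific (it uses `os.stat`, where `st_blocks` is counted in 512-byte units per POSIX), so on ZFS you would point it at the 1.1MiB test file and see whether the zero-padded tail costs real disk space:

```python
import os
import tempfile

def apparent_vs_ondisk(path: str) -> tuple[int, int]:
    """Return (apparent size, allocated size) for a file.
    st_blocks is reported in 512-byte units."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Demo on a throwaway file filled with incompressible random data:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(1100 * 1024))   # ~1.1MiB
    name = f.name
print(apparent_vs_ondisk(name))
os.unlink(name)
```

On a compressed ZFS dataset you would expect the allocated size to stay close to the apparent size for incompressible data, rather than jumping to two full 1MiB records.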