Out-of-memory when mksquashfs'ing 200M files #238

@nh2

Description

Hi,

I'm having trouble finding concrete information on whether squashfs is designed to handle packing and unpacking very large numbers of files with low/constant RAM usage.

I ran mksquashfs on a directory with 200 million files, around 20 TB total size.

I used the flags -no-duplicates -no-hardlinks, with mksquashfs version 4.5.1 (2022/03/17) on Linux x86_64.
It ran out of memory (OOM) at 53 GB of resident memory usage.
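For reference, the invocation was roughly as follows (the source directory and output file names here are placeholders, not the actual paths I used):

    mksquashfs /path/to/source-dir output.squashfs -no-duplicates -no-hardlinks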

Should mksquashfs handle this? If so, I suppose the OOM should be considered a bug.

Otherwise, I'd file this as a feature request, since it would be very useful to have a tool that can handle this.
