File descriptor exhaustion when calling FilesystemWriter::write
#614
Replies: 3 comments 1 reply
-
I think what I would do to solve this would be to pre-read files that aren't that large, as otherwise every handle stays open until write. Something like:

use std::io::{BufReader, Cursor, Read};

// (file, entry, squash_fs, and header come from the surrounding directory walk.)
// Small files are read into memory up front, so their descriptors close
// immediately; only large files keep a handle open until write consumes them.
let len = file.metadata()?.len();
if len < 1024 {
    let mut buffer = Vec::new();
    file.read_to_end(&mut buffer)?;
    let reader = Cursor::new(buffer);
    squash_fs.push_file(reader, entry.path().file_name().unwrap(), header)?;
} else {
    let reader = BufReader::new(file);
    squash_fs.push_file(reader, entry.path().file_name().unwrap(), header)?;
}
-
I faced exactly the same issue: when I try to create a SquashFS image for a pretty big filesystem, I run into "Too many open files (os error 24)".

I solved it in probably the ugliest way possible: I write the image out to a file after handling every single source file, then re-write it again and again. This way I always have only one source file open at a time. However, this introduces an enormous performance penalty (even with compression disabled) and makes the approach useless. Your solution of pre-reading small files results in a huge memory increase, which is also not pretty.

I'm wondering, is it possible to extend the library with another input option: something like a push_file variant that takes a file path instead of an already-open reader? For now I'm thinking about another solution: implementing a specific Read wrapper that opens its file lazily.
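A rough sketch of what such an input option could look like - this is purely hypothetical, backhand has nothing like it today:

use std::io;
use std::path::{Path, PathBuf};

// Hypothetical extension trait, for illustration only (not part of
// backhand's API): hand the writer a path instead of an open reader, so
// each file would be opened only while its data is serialized in write().
trait PushFileByPath {
    fn push_file_by_path(
        &mut self,
        source: &Path,        // file on the local filesystem
        image_path: PathBuf,  // destination path inside the image
    ) -> io::Result<()>;
}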
-
Looks like I was able to survive with a lazy-open approach. It looks something like this:

use std::fs;
use std::io::{self, Read, Seek, SeekFrom};
use std::path::PathBuf;

/// A reader that holds no file descriptor between reads: each read() opens
/// the file, seeks to the remembered position, reads, and drops the handle.
struct LazyOpen {
    path: PathBuf,
    pos: SeekFrom,
}

impl io::Read for LazyOpen {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let mut file = fs::File::open(&self.path)?;
        file.seek(self.pos)?;
        let count = file.read(buf)?;
        // Remember where we stopped so the next read() resumes there;
        // the File is closed as it goes out of scope at the end of read().
        let current_pos = file.stream_position()?;
        self.pos = SeekFrom::Start(current_pos);
        Ok(count)
    }
}

fn write_squashfs() {
    // ...
    let reader = LazyOpen {
        path: local_path,
        pos: SeekFrom::Start(0),
    };
    fs_writer.push_file(reader, path_in_squashfs, header).unwrap();
}

I don't see any significant performance degradation with this.
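For what it's worth, here is a sketch of how such a wrapper could slot into a directory walk; read_dir, anyhow, and NodeHeader::default() are my own assumptions here, not something the post above prescribes:

use std::fs;
use std::io::SeekFrom;
use std::path::Path;

use backhand::{FilesystemWriter, NodeHeader};

fn push_dir(fs_writer: &mut FilesystemWriter, dir: &Path) -> anyhow::Result<()> {
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if !entry.file_type()?.is_file() {
            continue;
        }
        // No descriptor is opened here; LazyOpen defers that to read().
        let reader = LazyOpen {
            path: entry.path(),
            pos: SeekFrom::Start(0),
        };
        let name = entry.file_name().to_string_lossy().into_owned();
        // NodeHeader::default() is a placeholder for real permissions/uid/gid.
        fs_writer.push_file(reader, name, NodeHeader::default())?;
    }
    Ok(())
}

Since the writer only pulls from each reader while that node is being serialized, at most one file should be open at any moment; the extra open and seek per read() call is amortized as long as reads happen in reasonably large chunks.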
-
When creating a FilesystemWriter, calling push_file with many (over a thousand) files, and then calling write, the process crashes due to running out of file descriptors.

Is there a way around this, besides writing and re-opening the SquashFS filesystem after every few hundred calls to push_file?

I've created a small reproducer below - it depends on backhand and anyhow, and you'll need to replace the path with something that points to a folder with many files on your system.
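Roughly like this - a sketch, where the directory path is a placeholder and NodeHeader::default() stands in for real file metadata:

use std::fs::File;
use std::io::BufWriter;

use anyhow::Result;
use backhand::{FilesystemWriter, NodeHeader};

fn main() -> Result<()> {
    let mut fs_writer = FilesystemWriter::default();

    // Placeholder: point this at a folder containing thousands of files.
    let path = "/some/folder/with/many/files";

    for entry in std::fs::read_dir(path)? {
        let entry = entry?;
        if !entry.file_type()?.is_file() {
            continue;
        }
        // Every File opened here stays open until write() consumes it,
        // so the process holds one descriptor per pushed file.
        let file = File::open(entry.path())?;
        let name = entry.file_name().to_string_lossy().into_owned();
        fs_writer.push_file(file, name, NodeHeader::default())?;
    }

    // With enough files, an open() above (or the reads during write())
    // fails with "Too many open files" (os error 24).
    let mut out = BufWriter::new(File::create("out.squashfs")?);
    fs_writer.write(&mut out)?;
    Ok(())
}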