
Commit 9416006

ardbiesheuvel authored and kees committed
pstore: Base compression input buffer size on estimated compressed size
Commit 1756dde ("pstore: Remove worst-case compression size logic") removed
some clunky per-algorithm worst case size estimation routines on the basis
that we can always store pstore records uncompressed, and these worst case
estimations are about how much the size might inadvertently *increase* due to
encapsulation overhead when the input cannot be compressed at all. So if
compression results in a size increase, we just store the original data
instead.

However, it seems that the original code was misinterpreting these
calculations as an estimation of how much uncompressed data might fit into a
compressed buffer of a given size, and it was using the results to consume
the input data in larger chunks than the pstore record size, relying on the
compression to ensure that what ultimately gets stored fits into the
available space.

One result of this, as observed and reported by Linus, is that upgrading to a
newer kernel that includes the given commit may result in pstore
decompression errors reported in the kernel log. This is due to the fact that
the existing records may unexpectedly decompress to a size that is larger
than the pstore record size.

Another potential problem caused by this change is that we may underutilize
the fixed sized records on pstore backends such as ramoops. And on pstore
backends with variable sized records such as EFI, we will end up creating
many more entries than before to store the same amount of compressed data.

So let's fix both issues, by bringing back the typical case estimation of how
much ASCII text captured from the dmesg log might fit into a pstore record of
a given size after compression. The original implementation used the
computation given below for zlib:

	switch (size) {
	/* buffer range for efivars */
	case 1000 ... 2000:
		cmpr = 56;
		break;
	case 2001 ... 3000:
		cmpr = 54;
		break;
	case 3001 ... 3999:
		cmpr = 52;
		break;
	/* buffer range for nvram, erst */
	case 4000 ... 10000:
		cmpr = 45;
		break;
	default:
		cmpr = 60;
		break;
	}

	return (size * 100) / cmpr;

We will use the previous worst-case of 60% for compression. For decompression
go extra large (3x) so we make sure there's enough space for anything.

While at it, rate limit the error message so we don't flood the log
unnecessarily on systems that have accumulated a lot of pstore history.

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20230830212238.135900-1-ardb@kernel.org
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
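For illustration, here is a minimal user-space sketch of the sizing arithmetic
this patch introduces. The DMESG_COMP_PERCENT constant, the
(size * 100) / DMESG_COMP_PERCENT scaling for the compression input buffer,
and the 3x decompression workspace come from the diff below; the standalone
main() harness and the 8 KiB example record size are assumptions made only
for this example.

	#include <stdio.h>
	#include <stddef.h>

	/* Dmesg output is low-entropy ASCII, so assume worst-case 60% compression. */
	#define DMESG_COMP_PERCENT	60

	int main(void)
	{
		/* Hypothetical pstore record size, e.g. a ramoops dump slot. */
		size_t bufsize = 8192;

		/* How much uncompressed dmesg text to feed the compressor per record. */
		size_t compressed_size = (bufsize * 100) / DMESG_COMP_PERCENT;

		/* Worst-case workspace for decompressing pre-existing records. */
		size_t max_uncompressed_size = 3 * bufsize;

		printf("record size:              %zu bytes\n", bufsize);
		printf("compression input buffer: %zu bytes\n", compressed_size);
		printf("decompression workspace:  %zu bytes\n", max_uncompressed_size);
		return 0;
	}

With an 8 KiB record this yields a 13653-byte compression input buffer and a
24 KiB (3 x 8 KiB) decompression workspace.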
1 parent: af58740

1 file changed: +27 -7 lines changed


fs/pstore/platform.c

Lines changed: 27 additions & 7 deletions
@@ -98,7 +98,14 @@ MODULE_PARM_DESC(kmsg_bytes, "amount of kernel log to snapshot (in bytes)");
 
 static void *compress_workspace;
 
+/*
+ * Compression is only used for dmesg output, which consists of low-entropy
+ * ASCII text, and so we can assume worst-case 60%.
+ */
+#define DMESG_COMP_PERCENT	60
+
 static char *big_oops_buf;
+static size_t max_compressed_size;
 
 void pstore_set_kmsg_bytes(int bytes)
 {
@@ -196,6 +203,7 @@ static int pstore_compress(const void *in, void *out,
 
 static void allocate_buf_for_compression(void)
 {
+	size_t compressed_size;
 	char *buf;
 
 	/* Skip if not built-in or compression disabled. */
@@ -216,7 +224,8 @@ static void allocate_buf_for_compression(void)
 	 * uncompressed record size, since any record that would be expanded by
 	 * compression is just stored uncompressed.
 	 */
-	buf = kvzalloc(psinfo->bufsize, GFP_KERNEL);
+	compressed_size = (psinfo->bufsize * 100) / DMESG_COMP_PERCENT;
+	buf = kvzalloc(compressed_size, GFP_KERNEL);
 	if (!buf) {
 		pr_err("Failed %zu byte compression buffer allocation for: %s\n",
 		       psinfo->bufsize, compress);
@@ -233,6 +242,7 @@ static void allocate_buf_for_compression(void)
 
 	/* A non-NULL big_oops_buf indicates compression is available. */
 	big_oops_buf = buf;
+	max_compressed_size = compressed_size;
 
 	pr_info("Using crash dump compression: %s\n", compress);
 }
@@ -246,6 +256,7 @@ static void free_buf_for_compression(void)
 
 	kvfree(big_oops_buf);
 	big_oops_buf = NULL;
+	max_compressed_size = 0;
 }
 
 void pstore_record_init(struct pstore_record *record,
@@ -305,7 +316,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
 	record.buf = psinfo->buf;
 
 	dst = big_oops_buf ?: psinfo->buf;
-	dst_size = psinfo->bufsize;
+	dst_size = max_compressed_size ?: psinfo->bufsize;
 
 	/* Write dump header. */
 	header_size = snprintf(dst, dst_size, "%s#%d Part%u\n", why,
@@ -326,8 +337,15 @@ static void pstore_dump(struct kmsg_dumper *dumper,
 			record.compressed = true;
 			record.size = zipped_len;
 		} else {
-			record.size = header_size + dump_size;
-			memcpy(psinfo->buf, dst, record.size);
+			/*
+			 * Compression failed, so the buffer is most
+			 * likely filled with binary data that does not
+			 * compress as well as ASCII text. Copy as much
+			 * of the uncompressed data as possible into
+			 * the pstore record, and discard the rest.
+			 */
+			record.size = psinfo->bufsize;
+			memcpy(psinfo->buf, dst, psinfo->bufsize);
 		}
 	} else {
 		record.size = header_size + dump_size;
@@ -560,6 +578,7 @@ static void decompress_record(struct pstore_record *record,
 	int ret;
 	int unzipped_len;
 	char *unzipped, *workspace;
+	size_t max_uncompressed_size;
 
 	if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS) || !record->compressed)
 		return;
@@ -583,19 +602,20 @@ static void decompress_record(struct pstore_record *record,
 	}
 
 	/* Allocate enough space to hold max decompression and ECC. */
-	workspace = kvzalloc(psinfo->bufsize + record->ecc_notice_size,
+	max_uncompressed_size = 3 * psinfo->bufsize;
+	workspace = kvzalloc(max_uncompressed_size + record->ecc_notice_size,
 			     GFP_KERNEL);
 	if (!workspace)
 		return;
 
 	zstream->next_in	= record->buf;
 	zstream->avail_in	= record->size;
 	zstream->next_out	= workspace;
-	zstream->avail_out	= psinfo->bufsize;
+	zstream->avail_out	= max_uncompressed_size;
 
 	ret = zlib_inflate(zstream, Z_FINISH);
 	if (ret != Z_STREAM_END) {
-		pr_err("zlib_inflate() failed, ret = %d!\n", ret);
+		pr_err_ratelimited("zlib_inflate() failed, ret = %d!\n", ret);
 		kvfree(workspace);
 		return;
 	}
