
Allocation record processing fails while trying to calculate the high watermark of a 1.5+ GB bin output file #802

@bqback

Description

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When running memray flamegraph on a large file, I get Memray ERROR: Failed to process allocation record while the high watermark is being calculated. This is the same error as #43, but the output file is orders of magnitude smaller than the file that caused that error.

A perfectly browsable HTML file is still produced despite the error during calculation/processing.

Only some of the produced outputs cause this error. At first, slightly smaller files (900 MB to 1.1-1.3 GB) seemed to be processed fine, but no correlation with file size appears to exist (see the section at the end).

Expected Behavior

A lack of errors produced by the memray flamegraph command.

Steps To Reproduce

Run memray flamegraph --force --temporal or memray flamegraph --force --leaks on a large file

Memray Version

1.15.0 on the server that collects the info

1.17.2 on a local setup that processes the output

Python Version

3.8 on the server

3.13 on the local setup

Operating System

Linux

Anything else?

Output of the parse | awk | sort | uniq command used on one of the files:

396110692 ALLOCATION
     78 CONTEXT_SWITCH
  44210 FRAME_ID
12919326 FRAME_POP
24431599 FRAME_PUSH
      1 HEADER
 511384 MEMORY_RECORD
      1 THREAD
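The breakdown above was produced by piping the parsed records through awk | sort | uniq. As a rough sketch (not necessarily the exact pipeline used), the same tally can be reproduced in Python, assuming each parsed record line begins with its record type:

```python
from collections import Counter

def count_record_types(lines):
    """Count occurrences of each record type, taken to be the first
    whitespace-separated token on each line (mirrors awk | sort | uniq -c)."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if fields:  # skip blank lines
            counts[fields[0]] += 1
    return counts

# Toy record stream; a real run would read from `memray parse <file>`:
sample = ["ALLOCATION ...", "FRAME_PUSH ...", "ALLOCATION ..."]
print(count_record_types(sample))  # ALLOCATION counted twice, FRAME_PUSH once
```

This only summarizes record-type frequencies; it does not reproduce the high-watermark computation that fails.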

The data is collected using memray --follow-forks --trace-python-allocators --force from a gunicorn app with 2*cores+1 workers.

Sizes of processed files

No error: 1.1 GB, 1.2 GB, 1.3 GB (2x), 1.5 GB, 1.6 GB, 1.8 GB, 1.9 GB (2x)
Error: 906.6 MB, 1.1 GB, 1.5 GB, 1.6 GB, 1.8 GB (2x), 1.9 GB, 2.1 GB

Metadata

Labels: bug (Something isn't working)