Performance investigation #37

Open
@NHDaly

Description

Splitting out the benchmark discussion from #36.


Here's the benchmark in a gist:
https://gist.github.com/NHDaly/a8fae0d1d65ab1066c585c27e54146fa

And the results in a google spreadsheet:
https://docs.google.com/spreadsheets/d/1Lc3ughgwwK25cpwbnuLRxw19EtaMxdCsMmtnspT_4H8/edit?usp=sharing
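For context, each benchmark maps one operation element-wise over 1,000,000-element arrays. Here's a rough sketch of that shape (illustrative only: the helper names and input values are made up, and it assumes the FixedPointDecimals package; the gist above is the actual code):

```julia
using BenchmarkTools
using FixedPointDecimals

const N = 1_000_000  # 1,000,000 elements per array

# "identity": element-wise copy of src into dest.
function bench_identity!(dest, src)
    @inbounds for i in eachindex(dest, src)
        dest[i] = src[i]
    end
    return dest
end

# Element-wise binary operation (op is one of ÷, +, /, *).
function bench_op!(dest, a, b, op)
    @inbounds for i in eachindex(dest, a, b)
        dest[i] = op(a[i], b[i])
    end
    return dest
end

T = FixedDecimal{Int128,2}
a = T.(rand(1:1000, N))
b = T.(rand(1:1000, N))
dest = similar(a)

@btime bench_identity!($dest, $a)   # the "identity" column
@btime bench_op!($dest, $a, $b, *)  # the "*" column
```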


Here are the results (times in ms, allocation counts in parentheses):

| Category     | Type         | identity             | ÷         | +                    | /                    | *                    |
|--------------|--------------|----------------------|-----------|----------------------|----------------------|----------------------|
| Int          | Int32        | 1.35 (0)             | 5.16 (0)  | 1.47 (0)             | 2.35 (0)             | 1.61 (0)             |
| Int          | Int64        | 1.86 (0)             | 18.66 (0) | 1.89 (0)             | 2.46 (0)             | 1.95 (0)             |
| Int          | Int128       | 3.77 (0)             | 26.07 (0) | 3.85 (0)             | 16.74 (0)            | 3.95 (0)             |
| Float        | Float32      | 1.35 (0)             | 28.97 (0) | 1.47 (0)             | 1.75 (0)             | 1.47 (0)             |
| Float        | Float64      | 1.85 (0)             | 27.37 (0) | 1.88 (0)             | 2.45 (0)             | 1.89 (0)             |
| FixedDecimal | FD{Int32,2}  | 1.35 (0)             | 5.16 (0)  | 1.48 (0)             | 38.20 (0)            | 31.73 (0)            |
| FixedDecimal | FD{Int64,2}  | 1.86 (0)             | 18.75 (0) | 1.89 (0)             | 59.18 (0)            | 47.03 (0)            |
| FixedDecimal | FD{Int128,2} | 1320.01 (14,000,000) | 26.19 (0) | 1324.35 (14,000,000) | 7267.32 (72,879,639) | 6139.13 (62,000,000) |

Here are my current questions:

  • For some reason, even just element-wise copying an array of FixedDecimal{Int128, 2} into another array allocates like crazy (14,000,000 allocations / 1,000,000 elements).
    • Where do these allocations come from? (See the first sketch after this list.)
  • / and * of FixedDecimal{Int64, 2} are more expensive than for FixedDecimal{Int32, 2}, by a factor of around 1.5x each, whereas / and * for Int64 and Int32 are almost identical.
    • My current guess is that this might be related to promoting to Int128 during those operations (due to widemul), which seems to be slower than Int64 across the board. (See the second sketch after this list.)
  • / for Int128 is like 6x slower than for Int32! Where does that come from?
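To dig into the first question, this is the kind of isolated check I'd run (a sketch, not from the gist; `isbitstype`, `@btime`, and `@code_warntype` are standard, the helper name is made up):

```julia
using BenchmarkTools
using FixedPointDecimals

T = FixedDecimal{Int128,2}

# If the element type were not isbits, the Vector would hold boxed pointers and
# every store could allocate; this check rules that in or out.
@show isbitstype(T)

src  = T.(rand(1:1000, 1_000_000))
dest = similar(src)

function copy_elements!(dest, src)
    @inbounds for i in eachindex(dest, src)
        dest[i] = src[i]
    end
    return dest
end

# @btime prints the allocation count alongside the time.
@btime copy_elements!($dest, $src)

# Inspect inference for the copy loop; Any/Union results or runtime dispatch
# here would point at where the per-element allocations come from.
@code_warntype copy_elements!(dest, src)
```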
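And to sanity-check the widemul guess (and the raw cost of Int128 arithmetic behind the last question), the intermediate types and scalar costs can be checked directly (again just a sketch; `widemul` is Base, the input values are arbitrary):

```julia
using BenchmarkTools

# widemul(x, y) multiplies after widening both arguments, so its return type is
# the intermediate type the FixedDecimal arithmetic has to carry around.
@show typeof(widemul(Int32(1), Int32(1)))    # Int64
@show typeof(widemul(Int64(1), Int64(1)))    # Int128
@show typeof(widemul(Int128(1), Int128(1)))  # BigInt (Base.widen(Int128) == BigInt)

# Raw scalar costs; the Ref indirection keeps BenchmarkTools from constant-folding.
@btime widemul($(Ref(Int32(1234)))[],  $(Ref(Int32(5678)))[])
@btime widemul($(Ref(Int64(1234)))[],  $(Ref(Int64(5678)))[])
@btime $(Ref(Int32(123456789)))[]  / $(Ref(Int32(97)))[]
@btime $(Ref(Int128(123456789)))[] / $(Ref(Int128(97)))[]
```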
