Toeplitz-dot-Hankel has better complexity, so its plans should be the default when the arrays are large. E.g. for `cheb2leg`, even n = 1000 sees a significant speedup (a sketch of size-based dispatch follows the timings below):
```julia
julia> n = 100; x = randn(n); @time cheb2leg(x); @time th_cheb2leg(x);
  0.000267 seconds (2 allocations: 944 bytes)
  0.000654 seconds (98 allocations: 288.531 KiB)

julia> n = 1000; x = randn(n); @time cheb2leg(x); @time th_cheb2leg(x);
  0.028686 seconds (2 allocations: 7.984 KiB)
  0.006464 seconds (99 allocations: 10.559 MiB)

julia> n = 1000; x = randn(n); @time cheb2leg(x); @time th_cheb2leg(x);
  0.028856 seconds (2 allocations: 7.984 KiB)
  0.011597 seconds (99 allocations: 10.559 MiB)

julia> n = 10_000; x = randn(n); @time cheb2leg(x); @time th_cheb2leg(x);
  0.778423 seconds (3 allocations: 78.219 KiB)
  0.103821 seconds (108 allocations: 799.524 MiB)
```
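A minimal sketch of what "default for large arrays" could look like, assuming a plain length cutoff; the name `cheb2leg_auto` and the threshold of 512 are illustrative, not part of the package, and the real cutoff would need to be tuned from benchmarks like the ones above:

```julia
using FastTransforms

# Hypothetical wrapper: use the direct transform for short vectors and switch
# to the Toeplitz-dot-Hankel path once the input is long enough.
# `cheb2leg_auto` and the cutoff of 512 are illustrative only.
cheb2leg_auto(x::AbstractVector; cutoff::Integer = 512) =
    length(x) >= cutoff ? th_cheb2leg(x) : cheb2leg(x)
```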
This was prematurely changed once before but reverted because there were some regressions. The number of allocations in `th_*` is also exorbitant, probably because it dates back to a port of Matlab code.
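For reference, BenchmarkTools gives steadier allocation counts than a one-off `@time` (which includes compilation on the first call); this just reproduces the comparison above:

```julia
using FastTransforms, BenchmarkTools

x = randn(1000)
@btime cheb2leg($x);     # direct transform: a couple of allocations
@btime th_cheb2leg($x);  # Toeplitz-dot-Hankel: ~100 allocations, ~10 MiB per call
```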
For matrices I'm not seeing much improvement in `th_*`, even for a 40k x 10 transform, which is suspicious... but a profile shows all the time is in the FFTs.
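To check the "time is in the FFTs" observation, the standard-library profiler is enough; the 40_000 x 10 size is taken from the comment above, and this assumes `th_cheb2leg` transforms the columns of a matrix as the timing suggests:

```julia
using FastTransforms, Profile

X = randn(40_000, 10)
th_cheb2leg(X)               # warm up so compilation doesn't pollute the profile
Profile.clear()
@profile th_cheb2leg(X)
Profile.print(format = :flat, sortedby = :count)   # FFT frames should dominate
```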