Base: make TwicePrecision more accurate using standard algorithms #49569

Draft: nsajko wants to merge 1 commit into master from dw_div_fp

Conversation

@nsajko nsajko commented Apr 29, 2023

Update the arithmetic code to use algorithms published since
twiceprecision.jl was written.

Handle complex numbers.

Prevent some harmful promotions.

Some of the introduced extended precision arithmetic code will also be
used in base/div.jl to tackle issue #49450 with PR #49561.
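
For context, double-word arithmetic of the TwicePrecision kind is built out of error-free transformations. The following is a minimal sketch of two standard ones (Knuth's TwoSum and an fma-based TwoProd); the function names here are illustrative and are not the ones used in Base:

```julia
# TwoSum: returns (s, e) with s + e == a + b exactly,
# where s is the correctly rounded sum a + b.
function two_sum(a::Float64, b::Float64)
    s = a + b
    a′ = s - b
    b′ = s - a′
    e = (a - a′) + (b - b′)
    return s, e
end

# TwoProd: returns (p, e) with p + e == a * b exactly,
# using a fused multiply-add to recover the rounding error.
function two_prod(a::Float64, b::Float64)
    p = a * b
    e = fma(a, b, -p)
    return p, e
end
```

TwoSum works for any operand ordering; Dekker's Fast2Sum is cheaper (three operations) but requires the exponent of `a` to be at least that of `b`.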

Results:

The example in #33677 still gives the same result.

I wasn't able to evaluate #23497 because of how much Julia has changed in
the meantime.

Proof that #26553 is fixed:

```julia-repl
julia> const TwicePrecision = Base.TwicePrecision
Base.TwicePrecision

julia> a = TwicePrecision{Float64}(0.42857142857142855, 2.3790493384824782e-17)
Base.TwicePrecision{Float64}(0.42857142857142855, 2.3790493384824782e-17)

julia> b = TwicePrecision{Float64}(3.4, 8.881784197001253e-17)
Base.TwicePrecision{Float64}(3.4, 8.881784197001253e-17)

julia> a_minus_b_inexact = Tuple(a - b)
(-2.9714285714285715, 1.0150610510858574e-16)

julia> BigFloat(a) == sum(map(BigFloat, Tuple(a)))
true

julia> BigFloat(b) == sum(map(BigFloat, Tuple(b)))
true

julia> a_minus_b = BigFloat(a) - BigFloat(b)
-2.971428571428571428571428571428577679589762353999797347402693647078382455095635

julia> Float64(a_minus_b) == first(a_minus_b_inexact)
true

julia> Float64(a_minus_b - Float64(a_minus_b)) == a_minus_b_inexact[begin + 1]
true
```

Fixes #26553

Updates #49589

nsajko commented Apr 29, 2023

Note, in case this PR results in an unacceptable performance regression: it's possible to replace Algorithm 18 from the paper with Algorithm 17, which is less accurate but faster.
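
To illustrate the accuracy/speed trade-off in this family of algorithms, here is a hedged sketch of the classical scheme for dividing a double-word value by a single Float64: take a first quotient from the high word, recover the remainder exactly with fma, and fold the correction back in. This is illustrative only (the function name `dw_div_fp` is made up for this sketch); the PR itself follows the paper's algorithms for the full double-word by double-word case.

```julia
# Divide the double-word number (xh, xl) by a Float64 y,
# with one fma-based correction step.
function dw_div_fp(xh::Float64, xl::Float64, y::Float64)
    th = xh / y                 # first quotient estimate
    r  = fma(-th, y, xh) + xl   # remainder; the high part is exact via fma
    tl = r / y                  # correction term
    # renormalize with Fast2Sum (valid since |th| >= |tl| here)
    s = th + tl
    e = tl - (s - th)
    return s, e
end
```

Dropping the correction step (returning `th` alone) is the kind of accuracy-for-speed trade the comment above describes.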

@nsajko nsajko marked this pull request as draft April 29, 2023 18:04
@nsajko nsajko force-pushed the dw_div_fp branch 4 times, most recently from 5814884 to 40294ae Compare May 4, 2023 13:00
@nsajko nsajko marked this pull request as ready for review May 4, 2023 19:28
nsajko commented May 4, 2023

A thing I'm still not certain about is division: the Base.div12 code used the trick of normalizing the numerator and denominator with frexp to try to prevent overflow and underflow. I kept that trick and, furthermore, extended it to double-word floating-point. This required increasing the slopbits parameter for one test, though.
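
As an illustration of the normalization trick (a minimal sketch, not the actual div12 code): frexp scales both operands into [0.5, 1), the division happens on the scaled fractions where no intermediate can overflow or underflow, and ldexp restores the exponent afterwards.

```julia
# Sketch of frexp-based normalized division. The name scaled_div is
# made up for this example.
function scaled_div(a::Float64, b::Float64)
    fa, ea = frexp(a)   # a == fa * 2^ea with 0.5 <= abs(fa) < 1
    fb, eb = frexp(b)
    return ldexp(fa / fb, ea - eb)
end
```

For a single Float64 division the scaling changes nothing observable; it matters in the extended-precision setting because intermediate quantities (such as the exact product of the trial quotient and the denominator recovered via fma) can overflow or underflow even when the final quotient is representable.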

@KristofferC (Member)

Since this is an implementation detail for float ranges, it feels like PRs improving its accuracy need corresponding tests showing what fixes for these ranges they correspond to. Otherwise, it's hard to understand the point.

@nsajko nsajko marked this pull request as draft May 4, 2023 20:04
@LilithHafner LilithHafner added the needs nanosoldier run This PR should have benchmarks run on it label May 6, 2023
Successfully merging this pull request may close these issues:

- TwicePrecise subtraction is inexact
4 participants