implement FiniteDiffs submodule: gradient, divergence and laplacian operators #22
```diff
@@ -14,4 +14,11 @@
     @test maxabsfinite(A) == maximum_finite(abs, A)
     @test maxabsfinite(A, dims=1) == maximum_finite(abs, A, dims=1)
 end
+
+@testset "fdiff entrypoints" begin
+    A = rand(Float32, 5)
+    @test ImageBase.fdiff(A, rev=true) == ImageBase.FiniteDiff.fdiff(A, rev=true)
+    out = similar(A)
+    @test ImageBase.fdiff!(out, A, rev=false) == ImageBase.FiniteDiff.fdiff!(out, A, rev=false)
+end
 end
```

Review thread on the `ImageBase.fdiff` test:

Reviewer: You could even just check …

Author: Unfortunately, this doesn't hold if I simply do `@deprecate fdiff ImageBase.FiniteDiff.fdiff`. Without inspecting the deprecation details, it seems that:

```julia
julia> A = rand(Float32, 5);

julia> ImageBase.fdiff(A, rev=true)
ERROR: MethodError: no method matching fdiff(::Vector{Float32}; rev=true)
Closest candidates are:
  fdiff(::Any...) at deprecated.jl:45 got unsupported keyword argument "rev"
```
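The `MethodError` above arises because, on the Julia versions current at the time, `@deprecate old new` generated a positional-only forwarding method, so keyword arguments were not passed through. A minimal sketch of an explicit shim that does forward keywords; the names `newfdiff`/`oldfdiff` and the stand-in implementation are hypothetical, not the ImageBase definitions:

```julia
# Hypothetical stand-in for the new API: forward difference with a `rev` keyword.
newfdiff(A::AbstractVector; rev::Bool=false) = rev ? reverse(diff(A)) : diff(A)

# An explicit deprecation shim that forwards keyword arguments. A plain
# `@deprecate oldfdiff newfdiff` would only forward positional arguments
# here, reproducing the MethodError shown in the review comment above.
function oldfdiff(args...; kwargs...)
    Base.depwarn("`oldfdiff` is deprecated, use `newfdiff` instead.", :oldfdiff)
    return newfdiff(args...; kwargs...)
end

oldfdiff([1.0, 3.0, 6.0]; rev=true)  # keyword is forwarded: returns [3.0, 2.0]
```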
```diff
@@ -1,8 +1,12 @@
+using ImageBase.FiniteDiff
+# TODO(johnnychen94): remove this after we delete `ImageBase.fdiff` and `ImageBase.fdiff!` entrypoints
+using ImageBase.FiniteDiff: fdiff, fdiff!
+
 @testset "fdiff" begin
     # Base.diff doesn't promote integer to float
-    @test ImageBase.maybe_floattype(Int) == Int
-    @test ImageBase.maybe_floattype(N0f8) == Float32
-    @test ImageBase.maybe_floattype(RGB{N0f8}) == RGB{Float32}
+    @test ImageBase.FiniteDiff.maybe_floattype(Int) == Int
+    @test ImageBase.FiniteDiff.maybe_floattype(N0f8) == Float32
+    @test ImageBase.FiniteDiff.maybe_floattype(RGB{N0f8}) == RGB{Float32}

     @testset "API" begin
         # fdiff! works the same as fdiff
```

```diff
@@ -107,3 +111,77 @@
     @test fdiff(A, dims=1) == fdiff(float.(A), dims=1)
 end
 end
```
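The `maybe_floattype` tests above encode a promotion rule: integer element types stay as they are (matching `Base.diff`), while fixed-point storage types such as `N0f8` are promoted to `Float32` before differencing. A quick illustration of why the promotion matters, using `UInt8` (the raw storage type behind `N0f8`) so the snippet needs no packages:

```julia
# Subtraction on unsigned storage types wraps around instead of going negative,
# so differencing N0f8-like data without first promoting to float produces
# garbage values; Int subtraction has no such problem, so Int is left alone.
a, b = UInt8(1), UInt8(3)
wrapped  = a - b                     # 0xfe (254): modular arithmetic, not -2
promoted = Float32(a) - Float32(b)   # -2.0f0: the intended difference
```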
```julia
@testset "fgradient" begin
    for T in generate_test_types([N0f8, Float32], [Gray, RGB])
        for sz in [(7,), (7, 5), (7, 5, 3)]
            A = rand(T, sz...)

            ∇A = fgradient(A)
            for i in 1:length(sz)
                @test ∇A[i] == fdiff(A, dims=i)
            end
            ∇A = map(similar, ∇A)
            @test fgradient!(∇A, A) == fgradient(A)

            ∇tA = fgradient(A, adjoint=true)
            for i in 1:length(sz)
                @test ∇tA[i] == .-fdiff(A, dims=i, rev=true)
            end
            ∇tA = map(similar, ∇tA)
            @test fgradient!(∇tA, A, adjoint=true) == fgradient(A, adjoint=true)
        end
    end
end
```

Review thread on the `for T in generate_test_types(...)` line:

Reviewer: [suggested change not preserved in this export] (tiny bit easier on the compiler and speeds up the tests)

Author: I'll fix this and elsewhere in a separate PR. Edit: actually, this doesn't change anything when comparing first-run times; in both cases it takes about 0.07 s to compile the function, so I'll keep them unchanged for now.

Reviewer: That's fine. If you're fixing inference triggers in SnoopCompile these may show up without the …
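The behavior exercised above can be sketched in a few lines of plain Julia: with periodic boundaries, the forward-difference gradient is just a per-dimension forward difference, and the adjoint variant is the negated backward difference. The `myfdiff`/`myfgradient` names below are hypothetical stand-ins, not ImageBase's implementations:

```julia
# Forward difference along `dims` with periodic (circular) boundary:
#   out[i] = A[i+1] - A[i]        (rev=false)
#   out[i] = A[i]   - A[i-1]      (rev=true, backward difference)
myfdiff(A; dims, rev=false) =
    rev ? A .- circshift(A, ntuple(i -> i == dims ? 1 : 0, ndims(A))) :
          circshift(A, ntuple(i -> i == dims ? -1 : 0, ndims(A))) .- A

# The gradient is a tuple of per-dimension forward differences.
myfgradient(A) = ntuple(d -> myfdiff(A; dims=d), ndims(A))
```

With this sketch, the adjoint relation tested above reads `∇tA[d] == .-myfdiff(A; dims=d, rev=true)`.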
```julia
@testset "fdiv/flaplacian" begin
    ref_laplacian(X) = imfilter(X, Kernel.Laplacian(ntuple(x->true, ndims(X))), "circular")

    X = [
        5 3 8 1 2 2 3
        5 5 1 3 3 1 1
        1 8 2 9 1 2 7
        7 3 4 5 8 1 5
        1 4 1 8 8 9 7
    ]
    ΔX_ref = [
         -8  10 -26  17   6   7   3
         -8  -3  14   2  -5   4  12
         23 -21  14 -25  18   2 -19
        -18  11  -5   9 -17  20   2
         19  -8  20 -17  -5 -18 -10
    ]
    ΔX = ref_laplacian(X)
    # Base.diff doesn't promote Int to floats so we should probably do the same for laplacian
    @test eltype(ΔX) == Int
    @test ΔX_ref == ΔX

    for T in generate_test_types([N0f8, Float32], [Gray, RGB])
        for sz in [(7,), (7, 7), (7, 7, 7)]
            A = rand(T, sz...)
            ∇A = fgradient(A)
            out = fdiv(∇A)
            @test out ≈ ref_laplacian(A)

            fill!(out, zero(T))
            fdiv!(out, ∇A...)
            @test out == fdiv(∇A)
        end
    end

    for T in generate_test_types([N0f8, Float32], [Gray, RGB])
        for sz in [(7,), (7, 7), (7, 7, 7)]
            A = rand(T, sz...)
            out = flaplacian(A)
            @test out ≈ ref_laplacian(A)

            ∇A = fgradient(A)
            foreach((out, ∇A...)) do x
                fill!(x, zero(T))
            end
            flaplacian!(out, ∇A, A)
            @test out == flaplacian(A)
            @test ∇A == fgradient(A)
        end
    end
end
```

Review thread on `@test eltype(ΔX) == Int`:

Reviewer: Is this important to test here?

Author: Yep, to ensure that we don't promote the …

Reviewer: To clarify, my point is that if it doesn't pass, it's actually a bug in ImageFiltering, which isn't something that can be fixed here.

Author: You are right; I just keep it here as a detector for potential bug tracing when the test fails. If it turns out to be annoying, we can always remove it.
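The identity these tests rely on is that, with periodic boundaries, the divergence of the forward-difference gradient equals the (circular) Laplacian, i.e. `ΔA[i] = A[i+1] - 2A[i] + A[i-1]` along each dimension. A hedged sketch in plain Julia; `mylaplacian` is a hypothetical stand-in, not ImageBase's `flaplacian`:

```julia
# Circular Laplacian: for each dimension, sum the "next" and "previous"
# periodic neighbors and subtract twice the center value, then sum over dims.
function mylaplacian(A)
    sum(ntuple(ndims(A)) do d
        next = circshift(A, ntuple(i -> i == d ? -1 : 0, ndims(A)))  # A[i+1]
        prev = circshift(A, ntuple(i -> i == d ?  1 : 0, ndims(A)))  # A[i-1]
        next .+ prev .- 2 .* A
    end)
end
```

Note that, like `Base.diff`, this keeps `Int` inputs as `Int`, which is the promotion behavior the `eltype(ΔX) == Int` test above pins down.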
Review thread on the in-place definitions:

Reviewer: I'm guessing this should be `@inline`, otherwise I think you'll get a bit of splat penalty from callers like `fdiv(::Tuple)` and `flaplacian!`.

Author: I didn't observe a clear performance difference when trying the `flaplacian` benchmark, so I chose to keep it unchanged for now. However, I'm quite curious what made you think of this. Are there any references or other resources I can take a look at?

Reviewer: Not sure about references, other than maybe checking the `MethodInstance`s that get created via MethodAnalysis.jl. If you see abstract instances then it's possible that things will be better with forced inlining. However, varargs typically work out well when the types are homogeneous, and that seems likely to be true in this case, which may explain why you didn't see a difference.
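To make the concern concrete, here is a small illustration of the pattern under discussion: a varargs method plus a caller that splats a tuple into it, with and without forced inlining. All names here are hypothetical illustrations, not the ImageBase definitions; whether `@inline` actually helps in a given case can be checked with `@code_typed`, BenchmarkTools, or MethodAnalysis.jl as suggested above.

```julia
# A varargs reducer in the style of fdiv!'s trailing arguments. A caller that
# splats a tuple into varargs may pay a "splat penalty" if the callee is not
# inlined, particularly when the argument types are heterogeneous.
sumfields(vs::AbstractArray...) = reduce(+, vs)
@inline sumfields_inlined(vs::AbstractArray...) = reduce(+, vs)

# A caller in the style of fdiv(::Tuple), splatting into the varargs method.
# For homogeneous tuples (all elements the same array type), Julia usually
# specializes well even without @inline, matching the reviewer's last remark.
div_like(Vs::Tuple) = sumfields(Vs...)
```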