implement FiniteDiffs submodule: gradient, divergence and laplacian operators #22


Merged — 3 commits, Oct 18, 2021
8 changes: 8 additions & 0 deletions .github/workflows/UnitTest.yml
@@ -48,6 +48,14 @@ jobs:
${{ runner.os }}-test-
${{ runner.os }}-
- uses: julia-actions/julia-buildpkg@v1
- name: "Compat fix for Julia < v1.3.0"
if: ${{ matrix.version == '1.0' }}
run: |
using Pkg
Pkg.add([
PackageSpec(name="AbstractFFTs", version="0.5"),
])
shell: julia --project=. --startup=no --color=yes {0}
- uses: julia-actions/julia-runtest@v1
- uses: julia-actions/julia-processcoverage@v1
- uses: codecov/codecov-action@v1
3 changes: 2 additions & 1 deletion Project.toml
@@ -14,6 +14,7 @@ julia = "1"
[extras]
Aqua = "4c88cf16-eb10-579e-8560-4a9242c79595"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
ImageFiltering = "6a3955dd-da59-5b1f-98d4-e7296123deb5"
ImageIO = "82e4d734-157c-48bb-816b-45c225c6df19"
ImageMagick = "6218d12a-5da1-5696-b52f-db25d2ecc6d1"
OffsetArrays = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
@@ -23,4 +24,4 @@ Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
TestImages = "5e47fb64-e119-507b-a336-dd2b206d9990"

[targets]
test = ["Aqua", "Documenter", "Test", "ImageIO", "ImageMagick", "OffsetArrays", "Statistics", "StackViews", "TestImages"]
test = ["Aqua", "Documenter", "Test", "ImageFiltering", "ImageIO", "ImageMagick", "OffsetArrays", "Statistics", "StackViews", "TestImages"]
7 changes: 0 additions & 7 deletions src/ImageBase.jl
@@ -5,11 +5,6 @@ export
# originally from ImageTransformations.jl
restrict,

# finite difference on one-dimension
# originally from Images.jl
fdiff,
fdiff!,

# basic image statistics, from Images.jl
minimum_finite,
maximum_finite,
@@ -26,13 +21,11 @@ using Reexport
using Base.Cartesian: @nloops
@reexport using ImageCore
using ImageCore.OffsetArrays
using ImageCore.MappedArrays: of_eltype

include("diff.jl")
include("restrict.jl")
include("utils.jl")
include("statistics.jl")
include("compat.jl")
include("deprecated.jl")

if VERSION >= v"1.4.2" # work around https://github.com/JuliaLang/julia/issues/34121
9 changes: 0 additions & 9 deletions src/compat.jl

This file was deleted.

5 changes: 5 additions & 0 deletions src/deprecated.jl
@@ -8,4 +8,9 @@
@deprecate maxfinite(A; kwargs...) maximum_finite(A; kwargs...)
@deprecate maxabsfinite(A; kwargs...) maximum_finite(abs, A; kwargs...)

# These two symbols were exported by previous ImageBase versions and are now
# organized in the FiniteDiff submodule.
@deprecate fdiff(args...; kwargs...) ImageBase.FiniteDiff.fdiff(args...; kwargs...)
@deprecate fdiff!(args...; kwargs...) ImageBase.FiniteDiff.fdiff!(args...; kwargs...)

# END 0.1 deprecation
145 changes: 142 additions & 3 deletions src/diff.jl
@@ -1,5 +1,36 @@
# TODO: add keyword `shrink` to give a result consistent with `Base.diff`;
# when this is done, we can propose this change to upstream Base.
# Because a package named `FiniteDiff.jl` already exists with quite a different
# purpose, this module is not expected to be reexported.
module FiniteDiff

using ImageCore
using ImageCore.MappedArrays: of_eltype

"""
Although stored as an array, an image can also be viewed as a function from the
discrete grid space Zᴺ to a continuous space: to R for a grayscale image, to C
for a complex-valued image (e.g., MRI raw data), to Rᴺ for a colorant image, and
so on. This module provides discrete versions of gradient-related operators by
viewing image arrays as functions.

This module provides:

- forward/backward difference [`fdiff`](@ref), the Images flavor of `Base.diff`
- gradient operator [`fgradient`](@ref) and its adjoint via the keyword `adjoint=true`
- divergence operator [`fdiv`](@ref), which computes the sum of discrete derivatives of
  vector fields
- laplacian operator [`flaplacian`](@ref), which is the divergence of the gradient fields

Every function in this module has its in-place version.
"""
FiniteDiff

export
fdiff, fdiff!,
fdiv, fdiv!,
fgradient, fgradient!,
flaplacian, flaplacian!
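The relationships described above can be sketched with plain Base operations (an illustration using `circshift` for the periodic boundary; `fwd`/`bwd`/`lap` are stand-ins, not the actual implementations below):

```julia
# Forward difference with periodic boundary: (A[2]-A[1], ..., A[1]-A[end]),
# the behavior documented for fdiff(A; dims=1, boundary=:periodic).
fwd(A) = circshift(A, -1) .- A
# Backward difference (the rev=true flavor): (A[1]-A[end], A[2]-A[1], ...).
bwd(A) = A .- circshift(A, 1)

A = [2.0, 4.0, 8.0]
@assert fwd(A) == [2.0, 4.0, -6.0]

# The laplacian is the divergence of the gradient field; in one dimension
# that is the backward difference of the forward difference.
lap(A) = bwd(fwd(A))
@assert lap(A) == [8.0, 2.0, -10.0]
```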


"""
fdiff(A::AbstractArray; dims::Int, rev=false, boundary=:periodic)

@@ -19,7 +50,7 @@ Taking a vector as an example, it computes `(A[2]-A[1], A[3]-A[2], ..., A[1]-A[end])`

# Examples

```jldoctest; setup=:(using ImageBase: fdiff)
```jldoctest; setup=:(using ImageBase.FiniteDiff: fdiff)
julia> A = [2 4 8; 3 9 27; 4 16 64]
3×3 $(Matrix{Int}):
2 4 8
@@ -106,3 +137,111 @@ _fdiff_default_dims(A::AbstractVector) = 1
maybe_floattype(::Type{T}) where T = T
maybe_floattype(::Type{T}) where T<:FixedPoint = floattype(T)
maybe_floattype(::Type{CT}) where CT<:Color = base_color_type(CT){maybe_floattype(eltype(CT))}


"""
fdiv(Vs...)

Compute the divergence of vector fields `Vs`.

See also [`fdiv!`](@ref) for the in-place version.
"""
function fdiv(V₁::AbstractArray{T}, Vs::AbstractArray{T}...) where T
fdiv!(similar(V₁, maybe_floattype(T)), V₁, Vs...)
end
fdiv(Vs::Tuple) = fdiv(Vs...)

"""
fdiv!(out, Vs...)

The in-place version of divergence operator [`fdiv`](@ref).
"""
function fdiv!(out::AbstractArray, V₁::AbstractArray, Vs::AbstractArray...)
[Review comment — Member]
I'm guessing this should be `@inline`, otherwise I think you'll get a bit of splat penalty from callers like `fdiv(::Tuple)` and `flaplacian!`.

[Reply — @johnnychen94 (Member, Author), Oct 18, 2021]
I didn't observe a clear performance difference when trying the flaplacian benchmark, so I chose to keep it unchanged for now.

However, I'm quite curious what made you think of this. Are there any references or materials I can take a look at?

[Review comment — Member]
Not sure about references, other than maybe checking the MethodInstances that get created via MethodAnalysis.jl. If you see abstract instances then it's possible that things will be better with forced inlining. However, varargs typically work out well when the types are homogeneous, and that seems likely to be true in this case, which may explain why you didn't see this.
# This is the optimized version of `sum(v->fgradient(v; adjoint=true), (V₁, Vs...))`
# by removing unnecessary memory allocations.
all(v->axes(v) == axes(out), (V₁, Vs...)) || throw(ArgumentError("All axes of the vector fields Vs and `out` should be the same."))

# TODO(johnnychen94): for better performance, we can eliminate this `tmp` allocation by fusing multiple `fdiff` in the inner loop.
out .= fdiff(V₁; dims=1, rev=true, boundary=:periodic)
tmp = similar(out)
for i in 1:length(Vs)
out .+= fdiff!(tmp, Vs[i]; dims=i+1, rev=true, boundary=:periodic)
end
return out
end
fdiv!(out::AbstractArray, Vs::Tuple) = fdiv!(out, Vs...)
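For intuition, the accumulation in `fdiv!` above can be sketched in plain Julia (an illustration under the same periodic-boundary assumption, with `circshift` standing in for `fdiff!(...; rev=true, boundary=:periodic)`; `bdiff`/`div_sketch` are made-up names):

```julia
# Backward difference along dimension d with periodic wrapping, i.e. the
# equivalent of fdiff(V; dims=d, rev=true, boundary=:periodic).
bdiff(V, d) = V .- circshift(V, ntuple(i -> i == d ? 1 : 0, ndims(V)))

# Divergence of a vector field (V₁, ..., Vₙ): accumulate the per-dimension
# backward differences, mirroring the loop in fdiv!.
function div_sketch(Vs...)
    out = bdiff(Vs[1], 1)
    for i in 2:length(Vs)
        out .+= bdiff(Vs[i], i)
    end
    return out
end

V₁, V₂ = rand(4, 4), rand(4, 4)
# Every term wraps around, so the total divergence over the periodic grid
# telescopes to (numerically) zero.
@assert abs(sum(div_sketch(V₁, V₂))) < 1e-8
```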

"""
flaplacian(X::AbstractArray)

The discrete laplacian operator, i.e., the divergence of the gradient fields of `X`.

See also [`flaplacian!`](@ref) for the in-place version.
"""
flaplacian(X::AbstractArray) = flaplacian!(similar(X, maybe_floattype(eltype(X))), X)

"""
flaplacian!(out, X)
flaplacian!(out, ∇X::Tuple, X)

The in-place version of the laplacian operator [`flaplacian`](@ref).

!!! tip "Avoiding allocations"
The two-argument method will allocate memory to store the intermediate
gradient fields `∇X`. If you call this repeatedly with images of consistent size and type,
consider using the three-argument form with pre-allocated memory for `∇X`,
which will eliminate allocation by this function.
"""
flaplacian!(out, X::AbstractArray) = fdiv!(out, fgradient(X))
flaplacian!(out, ∇X::Tuple, X::AbstractArray) = fdiv!(out, fgradient!(∇X, X))


"""
fgradient(X::AbstractArray; adjoint=false) -> (∂₁X, ∂₂X, ..., ∂ₙX)

Computes the gradient fields of `X`. If `adjoint==true` then it computes the adjoint gradient
fields.

Each gradient vector is computed as the forward difference along a specific dimension,
e.g., [`∂ᵢX = fdiff(X, dims=i)`](@ref fdiff).

Mathematically, the adjoint operator ∂ᵢ' of ∂ᵢ is defined as `<∂ᵢu, v> := <u, ∂ᵢ'v>`.

See also the in-place version [`fgradient!`](@ref) to reuse preallocated memory.
"""
function fgradient(X::AbstractArray{T,N}; adjoint::Bool=false) where {T,N}
fgradient!(ntuple(i->similar(X, maybe_floattype(T)), N), X; adjoint=adjoint)
end

"""
fgradient!(∇X::Tuple, X::AbstractArray; adjoint=false)

The in-place version of (adjoint) gradient operator [`fgradient`](@ref).

The input `∇X = (∂₁X, ∂₂X, ..., ∂ₙX)` is a tuple of arrays that are similar to `X`, i.e.,
`eltype(∂ᵢX) == eltype(X)` and `axes(∂ᵢX) == axes(X)` for all `i`.
"""
function fgradient!(∇X::NTuple{N, <:AbstractArray}, X; adjoint::Bool=false) where N
all(v->axes(v) == axes(X), ∇X) || throw(ArgumentError("All axes of vector fields ∇X and X should be the same."))
for i in 1:N
if adjoint
# the negative adjoint of gradient operator for forward difference is the backward difference
# see also
# Getreuer, Pascal. "Rudin-Osher-Fatemi total variation denoising using split Bregman." _Image Processing On Line_ 2 (2012): 74-95.
fdiff!(∇X[i], X, dims=i, rev=true)
# TODO(johnnychen94): ideally we can avoid flipping the signs for better performance.
@. ∇X[i] = -∇X[i]
else
fdiff!(∇X[i], X, dims=i)
end
end
return ∇X
end
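The adjoint relation behind the `rev=true` branch above (cf. Getreuer 2012) can be verified numerically in a few lines (a sketch assuming periodic boundaries, with `fwd`/`bwd` standing in for `fdiff` with `rev=false`/`rev=true`):

```julia
using LinearAlgebra: dot

fwd(u) = circshift(u, -1) .- u   # forward difference, periodic boundary
bwd(u) = u .- circshift(u, 1)    # backward difference, periodic boundary

u, v = rand(16), rand(16)
# <∂u, v> == <u, ∂'v> with ∂' = -bwd: up to sign, the adjoint of the
# periodic forward difference is the backward difference.
@assert dot(fwd(u), v) ≈ dot(u, .-bwd(v))
```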



if VERSION < v"1.1"
isnothing(x) = x === nothing
end

end # module
7 changes: 7 additions & 0 deletions test/deprecated.jl
@@ -14,4 +14,11 @@
@test maxabsfinite(A) == maximum_finite(abs, A)
@test maxabsfinite(A, dims=1) == maximum_finite(abs, A, dims=1)
end

@testset "fdiff entrypoints" begin
A = rand(Float32, 5)
@test ImageBase.fdiff(A, rev=true) == ImageBase.FiniteDiff.fdiff(A, rev=true)
[Review comment — Member]
You could even just check `ImageBase.fdiff === ImageBase.FiniteDiff.fdiff` and so on.

[Reply — @johnnychen94 (Member, Author)]
Unfortunately, this doesn't hold true. If I do

@deprecate fdiff ImageBase.FiniteDiff.fdiff

then, without inspecting the deprecation details, it seems that `@deprecate` only passes positional arguments and not keyword arguments:

julia> A = rand(Float32, 5);

julia> ImageBase.fdiff(A, rev=true)
ERROR: MethodError: no method matching fdiff(::Vector{Float32}; rev=true)
Closest candidates are:
  fdiff(::Any...) at deprecated.jl:45 got unsupported keyword argument "rev"

out = similar(A)
@test ImageBase.fdiff!(out, A, rev=false) == ImageBase.FiniteDiff.fdiff!(out, A, rev=false)
end
end
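The keyword-forwarding pitfall discussed in the review thread above can be reproduced in plain Julia (`newf`, `oldf_bad`, and `oldf_good` are hypothetical names for illustration):

```julia
module DeprecationSketch
# Hypothetical functions, for illustration only.
newf(x; rev=false) = rev ? reverse(x) : x
# Signature-less form: the generated stub only forwards positional arguments.
Base.@deprecate oldf_bad newf false
# Spelling out the splats forwards keyword arguments as well.
Base.@deprecate oldf_good(args...; kwargs...) newf(args...; kwargs...) false
end

@assert DeprecationSketch.oldf_good([1, 2], rev=true) == [2, 1]
# DeprecationSketch.oldf_bad([1, 2], rev=true) throws a MethodError,
# matching the error reported in the thread.
```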
84 changes: 81 additions & 3 deletions test/diff.jl
@@ -1,8 +1,12 @@
using ImageBase.FiniteDiff
# TODO(johnnychen94): remove this after we delete the `ImageBase.fdiff` and `ImageBase.fdiff!` entrypoints
using ImageBase.FiniteDiff: fdiff, fdiff!

@testset "fdiff" begin
# Base.diff doesn't promote integer to float
@test ImageBase.maybe_floattype(Int) == Int
@test ImageBase.maybe_floattype(N0f8) == Float32
@test ImageBase.maybe_floattype(RGB{N0f8}) == RGB{Float32}
@test ImageBase.FiniteDiff.maybe_floattype(Int) == Int
@test ImageBase.FiniteDiff.maybe_floattype(N0f8) == Float32
@test ImageBase.FiniteDiff.maybe_floattype(RGB{N0f8}) == RGB{Float32}

@testset "API" begin
# fdiff! works the same as fdiff
@@ -107,3 +111,77 @@
@test fdiff(A, dims=1) == fdiff(float.(A), dims=1)
end
end

@testset "fgradient" begin
for T in generate_test_types([N0f8, Float32], [Gray, RGB])
[Review comment — Member] Suggested change:

- for T in generate_test_types([N0f8, Float32], [Gray, RGB])
+ for T in generate_test_types(Any[N0f8, Float32], Any[Gray, RGB])

(tiny bit easier on the compiler and speeds up the tests)

[Reply — @johnnychen94 (Member, Author), Oct 18, 2021]
I'll fix this and elsewhere in a separate PR.

Edit: Actually, this doesn't change anything when comparing the first-run time; in both cases it takes about 0.07s to compile the function. I'll keep them unchanged for now.

[Review comment — Member]
That's fine. If you're fixing inference triggers in SnoopCompile these may show up without the `Any`s, but that's just "analysis noise" and not anything concerning.
for sz in [(7, ), (7, 5), (7, 5, 3)]
A = rand(T, sz...)

∇A = fgradient(A)
for i in 1:length(sz)
@test ∇A[i] == fdiff(A, dims=i)
end
∇A = map(similar, ∇A)
@test fgradient!(∇A, A) == fgradient(A)

∇tA = fgradient(A, adjoint=true)
for i in 1:length(sz)
@test ∇tA[i] == .-fdiff(A, dims=i, rev=true)
end
∇tA = map(similar, ∇tA)
@test fgradient!(∇tA, A, adjoint=true) == fgradient(A, adjoint=true)
end
end
end

@testset "fdiv/flaplacian" begin
ref_laplacian(X) = imfilter(X, Kernel.Laplacian(ntuple(x->true, ndims(X))), "circular")

X = [
5 3 8 1 2 2 3
5 5 1 3 3 1 1
1 8 2 9 1 2 7
7 3 4 5 8 1 5
1 4 1 8 8 9 7
]
ΔX_ref = [
-8 10 -26 17 6 7 3
-8 -3 14 2 -5 4 12
23 -21 14 -25 18 2 -19
-18 11 -5 9 -17 20 2
19 -8 20 -17 -5 -18 -10
]
ΔX = ref_laplacian(X)
# Base.diff doesn't promote Int to floats so we should probably do the same for laplacian
@test eltype(ΔX) == Int
[Review comment — Member]
Is this important to test here?

[Reply — @johnnychen94 (Member, Author)]
Yep, to ensure that we don't promote the Int to float types. This gives a result consistent with Base's `diff`.

[Review comment — Member]
To clarify, my point is that if it doesn't pass it's actually a bug in ImageFiltering, but that isn't something that can be fixed here.

[Reply — @johnnychen94 (Member, Author)]
You are right; I just keep it here as a detector for potential bug tracing when the test fails. If it turns out to be annoying we can always remove it.
@test ΔX_ref == ΔX

for T in generate_test_types([N0f8, Float32], [Gray, RGB])
for sz in [(7,), (7, 7), (7, 7, 7)]
A = rand(T, sz...)
∇A = fgradient(A)
out = fdiv(∇A)
@test out ≈ ref_laplacian(A)

fill!(out, zero(T))
fdiv!(out, ∇A...)
@test out == fdiv(∇A)
end
end

for T in generate_test_types([N0f8, Float32], [Gray, RGB])
for sz in [(7,), (7, 7), (7, 7, 7)]
A = rand(T, sz...)
out = flaplacian(A)
@test out ≈ ref_laplacian(A)

∇A = fgradient(A)
foreach((out, ∇A...)) do x
fill!(x, zero(T))
end
flaplacian!(out, ∇A, A)
@test out == flaplacian(A)
@test ∇A == fgradient(A)
end
end
end
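The reference values above can be cross-checked without ImageFiltering: with the "circular" boundary, `Kernel.Laplacian` reduces to the classic 5-point stencil with periodic wrapping (a plain-Julia sketch; `lap_circular` is a made-up helper):

```julia
# 5-point laplacian with a circular (periodic) boundary:
#   ΔX[i, j] = X[i-1, j] + X[i+1, j] + X[i, j-1] + X[i, j+1] - 4X[i, j]
lap_circular(X) = circshift(X, (1, 0)) .+ circshift(X, (-1, 0)) .+
                  circshift(X, (0, 1)) .+ circshift(X, (0, -1)) .- 4 .* X

X = [5 3 8 1 2 2 3
     5 5 1 3 3 1 1
     1 8 2 9 1 2 7
     7 3 4 5 8 1 5
     1 4 1 8 8 9 7]
ΔX = lap_circular(X)
@assert ΔX[1, 1] == -8   # agrees with ΔX_ref in the testset above
@assert sum(ΔX) == 0     # a periodic laplacian always sums to zero
```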
24 changes: 12 additions & 12 deletions test/runtests.jl
@@ -1,23 +1,23 @@
using ImageBase, OffsetArrays, StackViews
using ImageFiltering
using Test, TestImages, Aqua, Documenter

using OffsetArrays: IdentityUnitRange
include("testutils.jl")

@testset "ImageBase.jl" begin

@testset "Project meta quality checks" begin
# Not checking compat section for test-only dependencies
Aqua.test_ambiguities(ImageBase)
Aqua.test_all(ImageBase;
ambiguities=false,
project_extras=true,
deps_compat=true,
stale_deps=true,
project_toml_formatting=true
)
if VERSION >= v"1.2"
doctest(ImageBase,manual = false)
if VERSION >= v"1.3"
# Not checking compat section for test-only dependencies
Aqua.test_ambiguities(ImageBase)
Aqua.test_all(ImageBase;
ambiguities=false,
project_extras=true,
deps_compat=true,
stale_deps=true,
project_toml_formatting=true
)
doctest(ImageBase, manual = false)
end
end
