-
Hello, I looked a bit into your question this afternoon.
A straightforward way to tackle your problem with InferOpt, with no additional assumptions, is to differentiate directly through the black-box cost function using a `PerturbedAdditive` layer combined with a `Pushforward`:

```julia
using InferOpt
using LinearAlgebra
using Optimisers
using Plots
using Zygote
# Synthetic data: K candidate datapoints with M features each, in two feature spaces
M = 123
K = 10
datapoints_feature1 = randn(M, K)
datapoints_feature2 = randn(M, K)

# Selection rule: indices of the k datapoints with the lowest scores
select_samples(scores; k=3) = sortperm(scores)[1:k]

# Same selection expressed as a Boolean mask over the K datapoints
function top_k(scores; k=3)
    res = falses(length(scores))
    res[select_samples(scores; k=k)] .= true
    return res
end

# Black-box cost of a selection: inner product between the Gram matrices
# of the selected columns in the two feature spaces
function cost(y)
    m1 = datapoints_feature1[:, y]' * datapoints_feature1[:, y]
    m2 = datapoints_feature2[:, y]' * datapoints_feature2[:, y]
    return dot(m1, m2)
end

# Perturbed layer: smooths top_k by averaging over Gaussian perturbations of the scores
perturbed = PerturbedAdditive(top_k; ε=0.5, nb_samples=1000)
# Differentiable surrogate objective: expected cost of the perturbed solutions
loss = Pushforward(perturbed, cost)
# Non-differentiable objective we actually care about
pipeline = cost ∘ top_k

# Training loop
θ_start = randn(K)
perturbed(θ_start)
θ = copy(θ_start)
optimizer = Optimisers.setup(Descent(1e-3), θ)
N = 10
cost_history = Float64[pipeline(θ_start)]
loss_history = Float64[]
for i in 1:N
    grad = gradient(loss, θ)[1]
    Optimisers.update!(optimizer, θ, grad)  # Descent step; θ is updated in place
    @info "$i: $(loss(θ)) $(norm(grad))"
    push!(cost_history, pipeline(θ))
    push!(loss_history, loss(θ))
end
plot(loss_history)
plot(cost_history)

# True objective before and after training
pipeline(θ_start)
pipeline(θ)
```
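
Once the loop has run, the learned scores can be mapped back to a concrete choice of datapoints with the same helpers defined above, for instance (just a usage sketch of the code above, not part of InferOpt):

```julia
best_indices = select_samples(θ; k=3)  # indices of the selected datapoints
best_mask = top_k(θ; k=3)              # the same selection as a Boolean mask
cost(best_mask)                        # black-box cost of that selection
```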

I have some questions about your setting:
-
Hey, I've found your package and I think I should be able to apply it to my problem. However, I'm not sure how.
I want to optimize a function whose input is a set of samples, or rather the indices with which I can retrieve them from a larger dataset.
Example:
Currently, I'm using a Genetic Algorithm from Metaheuristics.jl. For this, every datapoint has a score by which the datapoints are sorted, and the top k samples are then selected.
For the above example:
I've checked InferOpt.jl and found `soft_rank` and `soft_sort`. Checking the code, it seems that I should be able to get something like a `soft_perm` from them (by not applying the indexing with `invperm`). However, this will be (as the name says) a `soft_perm`, which cannot directly be used for indexing. Maybe something like a straight-through estimator could work there (discrete solution in the forward pass, soft version in the reverse pass); a rough sketch of what I mean follows below.

Do you have any recommendation on how to implement the optimizer for my problem? I want to try gradient-based optimization, because most of the computation of the optimized function is differentiable and I think using this information might help.
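
To make that idea a bit more concrete, here is roughly what I have in mind (just my own sketch, not an InferOpt API; `hard_top_k`, `soft_top_k` and the temperature `τ` are names/parameters I made up for illustration):

```julia
using ChainRulesCore  # for ignore_derivatives
using Zygote

softmax_(x) = (e = exp.(x .- maximum(x)); e ./ sum(e))

# Hard selection: 0/1 indicator of the k largest scores
function hard_top_k(θ; k=3)
    y = zeros(length(θ))
    y[partialsortperm(θ, 1:k; rev=true)] .= 1
    return y
end

# Smooth surrogate: nonnegative weights summing to k
soft_top_k(θ; k=3, τ=0.1) = k .* softmax_(θ ./ τ)

# Straight-through: forward value is the hard selection,
# but the gradient only sees the smooth surrogate
function st_top_k(θ; k=3, τ=0.1)
    soft = soft_top_k(θ; k=k, τ=τ)
    # treat (hard - soft) as a constant: value == hard, gradient == ∂soft
    correction = ChainRulesCore.ignore_derivatives() do
        hard_top_k(θ; k=k) .- soft
    end
    return soft .+ correction
end

θ0 = randn(10)
st_top_k(θ0)                      # hard 0/1 mask of the 3 largest entries
Zygote.jacobian(st_top_k, θ0)[1]  # nonzero, comes from the soft surrogate
```

The forward output would still be a hard 0/1 selection, so the rest of the (differentiable) computation would stay unchanged.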
Thanks and best regards,
Heiner