-
Hey, thanks for using RxInfer! Have you tried using the limit stack depth option, as suggested in the error message? Documentation can be found here: https://docs.rxinfer.com/stable/manuals/sharpbits/stack-overflow-inference/
-
I tried a couple of different `limit_stack_depth` values, for example:
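```julia
# Illustrative values only — limit_stack_depth is passed via the `options` keyword of `infer`
result = infer(
    model = gblup_model(),
    data = (y = y,),
    options = (limit_stack_depth = 100,),
)
```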
Now I get a new error that I don't understand.
-
There are a few issues with this model (from the RxInfer perspective). First, can you please share the dimensionality of `G` and `y`? A minimum working example, perhaps not exactly what you were looking for, is:

```julia
using RxInfer, Distributions

n = 1000
G = diageye(n) # need to think through this
y = randn(n)

# Model
@model function gblup_model(y)
    var_g ~ GammaShapeRate(1.0, 1.0)
    var_e ~ GammaShapeRate(1.0, 1.0)
    mu ~ Normal(mean = 0.0, var = 100.0)
    g ~ MvNormalMeanScalePrecision(zeros(n), var_g)
    y ~ MvNormalMeanScalePrecision(ones(n) * mu + g, var_e)
end

# Initialization
init = @initialization begin
    q(var_g) = vague(GammaShapeRate)
    q(var_e) = vague(GammaShapeRate)
    q(mu) = NormalMeanVariance(0.0, 100.0)
    q(g) = MvNormalMeanCovariance(ones(n), 10 * diageye(n))
end

# Perform inference on the model
result = infer(
    model = gblup_model(),
    data = (y = y,),
    initialization = init,
    returnvars = (mu = KeepLast(), g = KeepLast(), var_g = KeepLast(), var_e = KeepLast()),
    free_energy = true,
    iterations = 50,
    options = (limit_stack_depth = 10,),
    constraints = MeanField()
)

# Access the inferred parameters from the result object
posterior_mean_g = mean(result.posteriors[:g])
posterior_mean_var_g = mean(result.posteriors[:var_g])
posterior_mean_var_e = mean(result.posteriors[:var_e])
```

I am not sure how to incorporate `G`.
-
G is a dense covariance matrix and y is just a vector of length n. Here is some code to generate data:
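Roughly like this (a sketch with illustrative constants; it mirrors the simulation in the reply below):

```julia
using LinearAlgebra, Statistics

n = 3226
Z = randn(n, n)                 # hypothetical marker matrix
G = Z * Z' / n                  # dense relationship (covariance) matrix
G = G / mean(diag(G))           # normalize the average diagonal to 1
σ_u_true = 2.0                  # illustrative true variances
σ_e_true = 1.0
u_true = cholesky(Symmetric(G * σ_u_true)).L * randn(n)
y = 3.0 .+ u_true .+ sqrt(σ_e_true) .* randn(n)   # observations around a mean of 3.0
```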
I have a working Gibbs sampler if you want to compare.
-
@patwa67, thanks for providing the data. I can see what you're trying to achieve now. First, the error you didn't understand was suggesting that you need to initialize messages and marginals. Since you're using MeanField constraints throughout the model, you need to ensure that all marginals are initialized; in your code, one of the marginals was left uninitialized. Now, the bad news is that the vanilla and fast approaches won't work for your model specification. The problem is the prior on g, whose covariance is the fixed matrix G scaled by the random variance var_g (sketched below; not necessarily your exact code): we don't have a rule that allows you to multiply a fixed matrix by a random variable in that position.
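```julia
# A prior whose covariance is a fixed dense matrix G scaled by the random
# variance var_g — RxInfer has no message-passing rule for this product:
g ~ MvNormalMeanCovariance(zeros(n), var_g * G)
```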
Meanwhile, here is the code for option 1:

```julia
using RxInfer, Distributions
using LinearAlgebra

# ---- Data Simulation ----
n = 3000
Z_matrix = randn(n, n)
G = Z_matrix * Z_matrix' / n
G = G / mean(diag(G)) # normalize so the average diagonal element is 1
σ_u_true = 2.0
σ_e_true = 1.0
u_true = cholesky(Symmetric(G * σ_u_true)).L * randn(n)
e_true = randn(n) * sqrt(σ_e_true)
β_true = [3.0] # set to zero if you want to exclude the mean
X = ones(n, 1)
y = X * β_true + u_true + e_true

# Model
@model function gblup_model(y)
    var_g ~ GammaShapeRate(1.0, 1.0)
    var_e ~ GammaShapeRate(1.0, 1.0)
    mu ~ Normal(mean = 0.0, var = 100.0)
    g ~ MvNormalMeanScalePrecision(zeros(n), var_g)
    y ~ MvNormalMeanScalePrecision(ones(n) * mu + g, var_e)
end

# Initialization
init = @initialization begin
    q(var_g) = vague(GammaShapeRate)
    q(var_e) = vague(GammaShapeRate)
    q(g) = MvNormalMeanCovariance(ones(n), 10 * diageye(n))
end

# Perform inference on the model
result = infer(
    model = gblup_model(),
    data = (y = y,),
    initialization = init,
    returnvars = (mu = KeepLast(), g = KeepLast(), var_g = KeepLast(), var_e = KeepLast()),
    free_energy = true,
    iterations = 50,
    options = (limit_stack_depth = 10,),
    showprogress = true,
    constraints = MeanField()
)

# Access the inferred parameters from the result object
posterior_mean_g = mean(result.posteriors[:g])
posterior_mean_var_g = mean(result.posteriors[:var_g])
posterior_mean_var_e = mean(result.posteriors[:var_e])
```

Additionally, as this is not inherently an issue with RxInfer, I suggest moving this thread to Discussions.
-
I'm afraid that neither 1 nor 2 is a viable option, since the full G matrix needs to be accounted for. These kinds of models are common in spatial statistics, but there the covariance matrix is replaced with a geographic distance matrix. It is possible to interpret the G matrix as a weighted graph, and in theory it should be possible to set up a model that performs some form of weighted message passing. Whether this would fit into options 3 or 4 is beyond my knowledge.
-
I have a linear regression model with an MVN distribution that has a (relatively) large fixed covariance matrix G, but it runs into a stack overflow error. The size of G is 3226×3226, i.e. n = 3226. The model code is:
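In essence, the model looks like this (a sketch; the priors shown are illustrative):

```julia
using RxInfer

n = size(G, 1)  # G is the fixed 3226×3226 dense covariance matrix

@model function gblup_model(y, G)
    var_g ~ GammaShapeRate(1.0, 1.0)   # illustrative prior on the genetic variance
    var_e ~ GammaShapeRate(1.0, 1.0)   # illustrative prior on the residual variance
    mu ~ Normal(mean = 0.0, var = 100.0)
    g ~ MvNormalMeanCovariance(zeros(n), var_g * G)          # random effects with covariance var_g * G
    y ~ MvNormalMeanScalePrecision(ones(n) * mu + g, var_e)  # observation model
end
```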
I get a StackOverflowError; the error message suggests using the limit stack depth option.