Description
In the current implementation of the RC loss function, we use scikit-learn to perform kernel density estimation (with k-fold cross-validation to optimize the bandwidth), which returns a NumPy array instead of a PyTorch tensor. Because the result carries no gradient information, the Boltzmann generator cannot backpropagate through the RC loss, so the RC loss currently has no influence on training. Since PyTorch has no built-in kernel density estimation or k-fold cross-validation, one way to resolve this would be to implement the RC loss directly in PyTorch instead of relying on NumPy-based packages; a rough sketch is given below.
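As a starting point, here is a minimal sketch of what a differentiable Gaussian KDE plus a simple k-fold bandwidth search could look like in pure PyTorch. The function names (`gaussian_kde_log_prob`, `select_bandwidth`) are hypothetical and do not exist in the current code base; the exact kernel, bandwidth grid, and how this plugs into the RC loss would still need to be worked out.

```python
# Hypothetical sketch: a Gaussian KDE written with torch ops only, so that
# evaluating the density at generated samples stays inside the autograd graph.
import math
import torch


def gaussian_kde_log_prob(x: torch.Tensor, data: torch.Tensor, bandwidth: float) -> torch.Tensor:
    """Log-density of a Gaussian KDE fitted on `data`, evaluated at `x`.

    x:    (M, d) query points (e.g. RC values of generated samples)
    data: (N, d) reference points the KDE is fitted on
    Gradients flow back to `x` because only torch operations are used.
    """
    n, d = data.shape
    # Squared distances between every query point and every data point: (M, N)
    sq_dists = torch.cdist(x, data) ** 2
    log_kernel = -sq_dists / (2.0 * bandwidth ** 2)
    log_norm = math.log(n) + 0.5 * d * math.log(2.0 * math.pi * bandwidth ** 2)
    return torch.logsumexp(log_kernel, dim=1) - log_norm


def select_bandwidth(data: torch.Tensor, candidates, k: int = 5) -> float:
    """Pick the bandwidth with the highest mean held-out log-likelihood over
    k folds (a simple stand-in for scikit-learn's grid search). The selection
    itself does not need to be differentiable; only the final KDE evaluation
    has to remain in the autograd graph."""
    folds = torch.chunk(torch.randperm(data.shape[0]), k)
    best_h, best_score = None, -float("inf")
    for h in candidates:
        fold_scores = []
        for i in range(k):
            val_idx = folds[i]
            train_idx = torch.cat([folds[j] for j in range(k) if j != i])
            with torch.no_grad():
                fold_scores.append(
                    gaussian_kde_log_prob(data[val_idx], data[train_idx], h).mean()
                )
        score = torch.stack(fold_scores).mean().item()
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```

With something along these lines, the RC loss could be computed as, e.g., the negative mean of `gaussian_kde_log_prob` over generated samples, and the gradient would propagate back to the generator parameters.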
Development of the project is currently paused, so this issue might not be addressed in the short term. We therefore file it here as a record/reminder of the problem.