Commit 200cb2a

kit1980 authored and facebook-github-bot committed
Use log1p(x) instead of log(1+x) (#2539)
Summary: torch.log1p() is more accurate than torch.log() for small input values (see https://pytorch.org/docs/stable/generated/torch.log1p.html). Found with https://github.com/pytorch-labs/torchfix/

Pull Request resolved: #2539
Reviewed By: saitcakmak
Differential Revision: D62785177
Pulled By: kit1980
fbshipit-source-id: 7857fccc6816b9362cf357210db102200bc7f3c8
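The accuracy claim is easy to see in double precision: when x is smaller than machine epsilon, 1 + x rounds to exactly 1.0, so log(1 + x) returns 0.0 and the input is lost entirely, while log1p(x) preserves the leading term. A minimal sketch using Python's standard math module (the same floating-point behavior torch.log1p addresses; torch itself is not needed for the illustration):

```python
import math

x = 1e-16  # smaller than double-precision machine epsilon (~2.2e-16)

naive = math.log(1 + x)   # 1 + 1e-16 rounds to 1.0, so this is exactly 0.0
stable = math.log1p(x)    # computes log(1 + x) without ever forming 1 + x

print(naive)   # 0.0 -- all information about x is lost
print(stable)  # ~1e-16 -- the leading term of log(1 + x) survives
```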
1 parent e9ce11f commit 200cb2a

File tree

1 file changed

+1
-1
lines changed


botorch/models/transforms/utils.py

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ def lognorm_to_norm(mu: Tensor, Cov: Tensor) -> tuple[Tensor, Tensor]:
         - The `batch_shape x n` mean vector of the Normal distribution
         - The `batch_shape x n x n` covariance matrix of the Normal distribution
     """
-    Cov_n = torch.log(1 + Cov / (mu.unsqueeze(-1) * mu.unsqueeze(-2)))
+    Cov_n = torch.log1p(Cov / (mu.unsqueeze(-1) * mu.unsqueeze(-2)))
     mu_n = torch.log(mu) - 0.5 * torch.diagonal(Cov_n, dim1=-1, dim2=-2)
     return mu_n, Cov_n
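The patched line computes the standard moment-matching identity between a lognormal distribution and its latent normal: for a lognormal with mean m and variance s², the underlying normal has variance σ² = log(1 + s²/m²) and mean μ = log m − σ²/2, which is exactly where log1p applies (s²/m² can be tiny). A scalar sketch of that identity, with a hypothetical helper name (the real lognorm_to_norm operates on batched torch tensors and a full covariance matrix):

```python
import math

def lognorm_to_norm_scalar(m: float, s2: float) -> tuple[float, float]:
    """Moments of the latent Normal for a LogNormal with mean m and variance s2."""
    var_n = math.log1p(s2 / (m * m))   # log(1 + s2/m^2), stable when s2/m^2 is small
    mu_n = math.log(m) - 0.5 * var_n
    return mu_n, var_n

# Round trip: the lognormal mean/variance implied by (mu_n, var_n)
# should recover the inputs.
mu_n, var_n = lognorm_to_norm_scalar(2.0, 0.5)
m_back = math.exp(mu_n + 0.5 * var_n)                           # recovers 2.0
s2_back = (math.exp(var_n) - 1.0) * math.exp(2 * mu_n + var_n)  # recovers 0.5
```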

0 commit comments
