Commit 8536468

esantorella authored and facebook-github-bot committed
CHANGELOG for 0.12.0 release (#2522)
Summary:
Pull Request resolved: #2522

changelog

Reviewed By: saitcakmak

Differential Revision: D62405336

fbshipit-source-id: 44c0fa153fcb5ccc4c6c88c7172731861237517a
1 parent 509bccc commit 8536468

1 file changed: CHANGELOG.md (+66 −0 lines)
@@ -2,6 +2,72 @@

The release log for BoTorch.

## [0.12.0] -- Sep 17, 2024

#### Major changes
* Update most models to use dimension-scaled log-normal hyperparameter priors by default, which makes performance much more robust to dimensionality. See discussion #2451 for details. The only models that are _not_ changed are the fully Bayesian models and `PairwiseGP`; for models that utilize a composite kernel, such as multi-fidelity/task/context, this change only affects the base kernel (#2449, #2450, #2507).
* Use `Standardize` by default in all the models using the upgraded priors. In addition to reducing the amount of boilerplate needed to initialize a model, this change was motivated by the change to default priors, because the new priors will work less well when data is not standardized. Users who do not want to use transforms should explicitly pass in `None` (#2458, #2532); see the sketch after this list.

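A minimal sketch of the new defaults, assuming BoTorch >= 0.12.0 and using `SingleTaskGP` as a representative model (the shapes and data here are illustrative only):

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 3, dtype=torch.float64)
train_Y = torch.randn(20, 1, dtype=torch.float64)

# Default: the model standardizes outcomes internally (a `Standardize`
# outcome transform) and uses the dimension-scaled log-normal priors.
model = SingleTaskGP(train_X, train_Y)

# Opting out: pass None explicitly to skip the outcome transform,
# e.g. if the data has already been standardized upstream.
model_raw = SingleTaskGP(train_X, train_Y, outcome_transform=None)
```
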
#### Compatibility
* Unpin NumPy (#2459).
* Require PyTorch>=2.0.1, GPyTorch==1.13, and linear_operator==0.5.3 (#2511).

#### New features
* Introduce `PathwiseThompsonSampling` acquisition function (#2443).
* Enable `qBayesianActiveLearningByDisagreement` to accept a posterior transform, and improve its implementation (#2457).
* Enable `SaasPyroModel` to sample via NUTS when training data is empty (#2465).
* Add multi-objective `qBayesianActiveLearningByDisagreement` (#2475).
* Add input constructor for `qNegIntegratedPosteriorVariance` (#2477).
* Introduce `qLowerConfidenceBound` (#2517); see the sketch after this list.
* Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (#2524).
* Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (#2531).

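A hedged sketch of the new `qLowerConfidenceBound` (#2517), assuming it lives alongside `qUpperConfidenceBound` in `botorch.acquisition.monte_carlo` and mirrors its constructor; the data and `beta` value are illustrative:

```python
import torch
from botorch.acquisition.monte_carlo import qLowerConfidenceBound
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.float64)
train_Y = torch.randn(10, 1, dtype=torch.float64)
model = SingleTaskGP(train_X, train_Y)

# A lower-confidence-bound counterpart to qUpperConfidenceBound,
# giving a pessimistic (risk-averse) acquisition value.
acqf = qLowerConfidenceBound(model=model, beta=0.2)

X = torch.rand(5, 1, 2, dtype=torch.float64)  # 5 candidate batches, q=1
acq_values = acqf(X)  # one acquisition value per candidate batch
```
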
#### Bug fixes
* Fix `batch_shape` default in `OrthogonalAdditiveKernel` (#2473).
* Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (#2502).
* Fix duplicate logging in `generation/gen.py` (#2504).
* Raise an exception if `X_pending` is set on the underlying `AcquisitionFunction` in prior-guided `AcquisitionFunction` (#2505).
* Make affine input transforms error with data of incorrect dimension, even in eval mode (#2510); see the sketch after this list.
* Use fidelity-aware `current_value` in the input constructor for `qMultiFidelityKnowledgeGradient` (#2519).
* Apply input transforms when computing MLL in model closures (#2527).
* Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks (#2529).

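A minimal sketch of the stricter shape validation (#2510; see also #2518 under "Other changes"), using `Normalize` as a representative affine input transform; the exact exception type is an assumption:

```python
import torch
from botorch.models.transforms.input import Normalize

tf = Normalize(d=3)
tf.eval()  # eval mode previously skipped this validation

X_bad = torch.rand(3, dtype=torch.float64)  # 1-D input: fewer than 2 dims
try:
    tf(X_bad)
except Exception as err:  # now raises instead of silently transforming
    print(f"rejected: {err}")
```
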
#### Documentation
* Clarify incompatibility of inter-point constraints with `get_polytope_samples` (#2469).
* Update tutorials to use the log variants of EI-family acquisition functions, don't make tutorials pass `Standardize` unnecessarily, and other simplifications and cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).
* Remove deprecated `FixedNoiseGP` (#2536).

#### Other changes
* More informative warnings about failure to standardize or normalize data (#2489).
* Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers (#2486).
* Cleaner `botorch/acquisition/multi_objective` directory structure (#2485).
* With `AffineInputTransform`, always require data to have at least two dimensions (#2518).
* Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and deprecated model `FixedNoiseMultiFidelityGP` (#2532).
* Raise an `OptimizationGradientError` when optimization produces NaN gradients (#2537).
* Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)` and `torch.exp(x) - 1` with `torch.special.expm1(x)` (#2539, #2540, #2541); see the sketch after this list.

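For intuition on the `log1p`/`expm1` items, a quick self-contained demonstration of the cancellation they avoid (outputs shown are for float32):

```python
import torch

x = torch.tensor(1e-10, dtype=torch.float32)

# Naive forms lose x entirely: 1 + 1e-10 rounds to 1.0 in float32.
print(torch.log(1 + x))        # tensor(0.)
print(torch.exp(x) - 1)        # tensor(0.)

# The fused forms remain accurate for small x.
print(torch.log1p(x))          # tensor(1.0000e-10)
print(torch.special.expm1(x))  # tensor(1.0000e-10)
```
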
## [0.11.3] -- Jul 22, 2024

#### Compatibility
