Commit 6d6fc5d

saitcakmak authored and facebook-github-bot committed
Changelog for 0.13.0 (#2714)
Summary: 4.5 months between releases leads to a long changelog...

Pull Request resolved: #2714
Reviewed By: esantorella
Differential Revision: D68958765
Pulled By: saitcakmak
fbshipit-source-id: 4bae846119b90f244e2692a18c6a73c5a897c923
1 parent 16fbe2c commit 6d6fc5d

1 file changed: CHANGELOG.md (+80, -0 lines)
@@ -2,6 +2,86 @@
The release log for BoTorch.

## [0.13.0] -- Feb 3, 2025

#### Highlights
* BoTorch website has been upgraded to utilize Docusaurus v3, with the API
  reference being hosted by ReadTheDocs. The tutorials now expose an option to
  open with Colab, for easy access to a runtime with modifiable tutorials.
  The old versions of the website can be found at archive.botorch.org (#2653).
* `RobustRelevancePursuitSingleTaskGP`, a robust Gaussian process model that adaptively identifies
  outliers and leverages Bayesian model selection ([paper](https://arxiv.org/pdf/2410.24222)) (#2608, #2690, #2707).
* `LatentKroneckerGP`, a scalable model for data on partially observed grids, like the joint modeling
  of hyper-parameters and partially completed learning curves in AutoML ([paper](https://arxiv.org/pdf/2410.09239)) (#2647).
* Add MAP-SAAS model, which utilizes the sparse axis-aligned subspace priors
  ([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf)) with MAP model fitting (#2694).
#### Compatibility
* Require GPyTorch==1.14 and linear_operator==0.6 (#2710).
* Remove support for anaconda (official package) (#2617).
* Remove `mpmath` dependency pin (#2640).
* Updates to optimization routines to support SciPy>1.15:
  * Use `threadpoolctl` in `minimize_with_timeout` to prevent CPU oversubscription (#2712).
  * Update optimizer output parsing to make model fitting compatible with SciPy>1.15 (#2667).
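The `threadpoolctl` change above follows a common pattern: capping native BLAS/OpenMP thread pools for the duration of a call so that parallel workers do not oversubscribe the CPU. A minimal sketch of that pattern, assuming `threadpoolctl` is installed (with a no-op fallback otherwise); `run_single_threaded` is a hypothetical helper name, not a BoTorch API:

```python
from contextlib import nullcontext

try:
    # threadpool_limits caps the thread count of loaded BLAS/OpenMP libraries.
    from threadpoolctl import threadpool_limits
except ImportError:
    # Fallback: a no-op context manager if threadpoolctl is not installed.
    def threadpool_limits(limits=None):
        return nullcontext()


def run_single_threaded(fn, *args):
    # Restrict native thread pools to one thread while fn runs, so that
    # many such calls running in parallel don't oversubscribe the CPU.
    with threadpool_limits(limits=1):
        return fn(*args)
```

The limit is scoped to the `with` block, so thread-pool settings are restored as soon as the wrapped call returns.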
#### New Features
* Add support for priors in OAK Kernel (#2535).
* Add `BatchBroadcastedTransformList`, which broadcasts a list of `InputTransform`s over batch shapes (#2558).
* `InteractionFeatures` input transform (#2560).
* Implement `percentile_of_score`, which takes inputs `data` and `score`, and returns the percentile of
  values in `data` that are below `score` (#2568).
* Add `optimize_acqf_mixed_alternating`, which supports optimization over mixed discrete & continuous spaces (#2573).
* Add support for `PosteriorTransform` to `get_optimal_samples` and `optimize_posterior_samples` (#2576).
* Support inequality constraints & `X_avoid` in `optimize_acqf_discrete` (#2593).
* Add ability to mix batch initial conditions and internal IC generation (#2610).
* Add `qPosteriorStandardDeviation` acquisition function (#2634).
* TopK downselection for initial batch generation (#2636).
* Support optimization over mixed spaces in `optimize_acqf_homotopy` (#2639).
* Add `InfeasibilityError` exception class (#2652).
* Support `InputTransform`s in `SparseOutlierLikelihood` and `get_posterior_over_support` (#2659).
* `StratifiedStandardize` outcome transform (#2671).
* Add `center` argument to `Normalize` (#2680).
* Add input normalization step in `Warp` input transform (#2692).
* Support mixing fully Bayesian & `SingleTaskGP` models in `ModelListGP` (#2693).
* Add abstract fully Bayesian GP class and fully Bayesian linear GP model (#2696, #2697).
* Tutorial on BO constrained by probability of classification model (#2700).
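The behavior described for `percentile_of_score` can be illustrated with a plain-Python sketch. BoTorch's implementation operates on tensors, and its exact tie-handling is not specified in the changelog entry, so treating "below" as strictly less-than here is an assumption:

```python
def percentile_of_score(data, score):
    """Percentage of values in `data` that fall below `score`.

    Illustrative stdlib sketch of the changelog description (#2568);
    strict `<` comparison for ties is an assumption, not BoTorch's spec.
    """
    below = sum(1 for x in data if x < score)
    return 100.0 * below / len(data)
```

For example, with `data = [1, 2, 3, 4]` and `score = 3`, two of four values lie below the score, giving a percentile of 50.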
#### Bug Fixes
* Fix error in decoupled_mobo tutorial due to torch/numpy issues (#2550).
* Raise error for MTGP in `batch_cross_validation` (#2554).
* Fix `posterior` method in `BatchedMultiOutputGPyTorchModel` for tracing JIT (#2592).
* Replace hard-coded double precision in test_functions with default dtype (#2597).
* Remove `as_tensor` argument of `set_tensors_from_ndarray_1d` (#2615).
* Skip fixed feature enumerations in `optimize_acqf_mixed` that can't satisfy the parameter constraints (#2614).
* Fix `get_default_partitioning_alpha` for >7 objectives (#2646).
* Fix random seed handling in `sample_hypersphere` (#2688).
* Fix bug in `optimize_objective` with fixed features (#2691).
* `FullyBayesianSingleTaskGP.train` should not return `None` (#2702).
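The `sample_hypersphere` seed fix (#2688) concerns threading a seed through so repeated calls are reproducible. A stdlib sketch of seeded uniform sampling on the unit sphere via normalized Gaussian draws; the function name echoes BoTorch's, but this signature and implementation are illustrative only:

```python
import math
import random


def sample_hypersphere(d, n, seed=None):
    # Draw n points uniformly on the unit (d-1)-sphere by normalizing
    # i.i.d. Gaussian draws. Using a local Random(seed) instance (rather
    # than ignoring the seed) is what makes results reproducible.
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in g))
        points.append([x / norm for x in g])
    return points
```

With the same seed, two calls return identical samples; with `seed=None` the draws vary run to run.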
#### Other Changes
* More efficient sampling from `KroneckerMultiTaskGP` (#2460).
* Update `HigherOrderGP` to use new priors & standardize outcome transform by default (#2555).
* Update `initialize_q_batch` methods to return both candidates and the corresponding acquisition values (#2571).
* Update optimization documentation with LogEI insights (#2587).
* Make all arguments in `optimize_acqf_homotopy` explicit (#2588).
* Introduce `trial_indices` argument to `SupervisedDataset` (#2595).
* Make optimizers raise an error when provided negative indices for fixed features (#2603).
* Make input transforms `Module`s by default (#2607).
* Reduce memory usage in `ConstrainedMaxPosteriorSampling` (#2622).
* Add `clone` method to datasets (#2625).
* Add support for continuous relaxation within `optimize_acqf_mixed_alternating` (#2635).
* Update indexing in `qLogNEI._get_samples_and_objectives` to support multiple input batches (#2649).
* Pass `X` to `OutcomeTransform`s (#2663).
* Use mini-batches when evaluating candidates within `optimize_acqf_discrete_local_search` (#2682).
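The mini-batch evaluation change (#2682) bounds peak memory by scoring candidates in fixed-size chunks rather than all at once. A generic sketch of the pattern; `evaluate_in_minibatches` is a hypothetical helper, not a BoTorch API:

```python
def evaluate_in_minibatches(f, candidates, batch_size=128):
    """Evaluate f over candidates in chunks of at most batch_size.

    Peak memory scales with batch_size instead of len(candidates),
    which is the point of the mini-batch change described above.
    """
    results = []
    for i in range(0, len(candidates), batch_size):
        # f accepts a list of candidates and returns one score per candidate.
        results.extend(f(candidates[i:i + batch_size]))
    return results
```

The chunked loop produces exactly the same scores as a single full-batch call, just with smaller intermediate allocations.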
#### Deprecations
* Remove `HeteroskedasticSingleTaskGP` (#2616).
* Remove `FixedNoiseDataset` (#2626).
* Remove support for legacy format non-linear constraints (#2627).
* Remove `maximize` option from information theoretic acquisition functions (#2590).
## [0.12.0] -- Sep 17, 2024
#### Major changes
