Commit b197bf1

CristianLara authored and facebook-github-bot committed
Fix broken links (#2720)
Summary:

## Motivation

The recent website upgrade moved the location of the tutorials and the API reference, breaking existing links to them. Here we fix those links and also configure Docusaurus to raise an error on broken links in the future.

Pull Request resolved: #2720

Test Plan:

Docusaurus checks for broken links when creating a production build. Running `./scripts/build_docs.sh -b` now results in a clean build with no broken links reported.

## Related PRs

- #2715

Reviewed By: saitcakmak, Balandat

Differential Revision: D69034874

Pulled By: CristianLara

fbshipit-source-id: 3a3c9488a6bb1c0a21d0cbb854972e84432eb467
1 parent 37f3f7d commit b197bf1
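
The summary mentions configuring Docusaurus to raise an error on broken links. As context for readers unfamiliar with that setting, here is a minimal sketch of what such a configuration can look like; the file name, title, and surrounding fields are illustrative assumptions, not this repo's actual config:

```ts
// docusaurus.config.ts — hypothetical minimal sketch, not BoTorch's actual config.
import type {Config} from '@docusaurus/types';

const config: Config = {
  title: 'BoTorch',            // assumed site title
  url: 'https://botorch.org',
  baseUrl: '/',
  // 'throw' makes `docusaurus build` fail whenever an internal link cannot be
  // resolved, so a production build (e.g. one run by a script such as
  // ./scripts/build_docs.sh -b) surfaces broken links instead of shipping them.
  onBrokenLinks: 'throw',
};

export default config;
```

With `onBrokenLinks: 'throw'`, link regressions like the ones fixed here fail the build immediately; Docusaurus also accepts the laxer `'warn'`, `'log'`, and `'ignore'` values.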

36 files changed (+118 additions, -118 deletions)


README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -130,7 +130,7 @@ pip install -e ".[dev, tutorials]"
 
 Here's a quick run down of the main components of a Bayesian optimization loop.
 For more details see our [Documentation](https://botorch.org/docs/introduction) and the
-[Tutorials](https://botorch.org/tutorials).
+[Tutorials](https://botorch.org/docs/tutorials).
 
 1. Fit a Gaussian Process model to data
 ```python
````

botorch/models/cost.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -9,7 +9,7 @@
 
 Cost are useful for defining known cost functions when the cost of an evaluation
 is heterogeneous in fidelity. For a full worked example, see the
-`tutorial <https://botorch.org/tutorials/multi_fidelity_bo>`_ on continuous
+`tutorial <https://botorch.org/docs/tutorials/multi_fidelity_bo>`_ on continuous
 multi-fidelity Bayesian Optimization.
 """
 
@@ -29,7 +29,7 @@ class AffineFidelityCostModel(DeterministicModel):
 cost = fixed_cost + sum_j weights[j] * X[fidelity_dims[j]]
 
 For a full worked example, see the
-`tutorial <https://botorch.org/tutorials/multi_fidelity_bo>`_ on continuous
+`tutorial <https://botorch.org/docs/tutorials/multi_fidelity_bo>`_ on continuous
 multi-fidelity Bayesian Optimization.
 
 Example:
```

botorch/models/gp_regression_fidelity.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -8,7 +8,7 @@
 Multi-Fidelity Gaussian Process Regression models based on GPyTorch models.
 
 For more on Multi-Fidelity BO, see the
-`tutorial <https://botorch.org/tutorials/discrete_multi_fidelity_bo>`__.
+`tutorial <https://botorch.org/docs/tutorials/discrete_multi_fidelity_bo>`__.
 
 A common use case of multi-fidelity regression modeling is optimizing a
 "high-fidelity" function that is expensive to simulate when you have access to
```

botorch/models/transforms/input.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -1242,7 +1242,7 @@ class AppendFeatures(InputTransform):
 `RiskMeasureMCObjective` to optimize risk measures as described in
 [Cakmak2020risk]_. A tutorial notebook implementing the rhoKG acquisition
 function introduced in [Cakmak2020risk]_ can be found at
-https://botorch.org/tutorials/risk_averse_bo_with_environmental_variables.
+https://botorch.org/docs/tutorials/risk_averse_bo_with_environmental_variables.
 
 The steps for using this to obtain samples of a risk measure are as follows:
 
@@ -1505,7 +1505,7 @@ class InputPerturbation(InputTransform):
 on optimizing risk measures.
 
 A tutorial notebook using this with `qNoisyExpectedImprovement` can be found at
-https://botorch.org/tutorials/risk_averse_bo_with_input_perturbations.
+https://botorch.org/docs/tutorials/risk_averse_bo_with_input_perturbations.
 """
 
 is_one_to_many: bool = True
```

docs/acquisition.md

Lines changed: 6 additions & 6 deletions

```diff
@@ -9,7 +9,7 @@ black box function.
 
 BoTorch supports both analytic as well as (quasi-) Monte-Carlo based acquisition
 functions. It provides a generic
-[`AcquisitionFunction`](../api/acquisition.html#acquisitionfunction) API that
+[`AcquisitionFunction`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.acquisition.AcquisitionFunction) API that
 abstracts away from the particular type, so that optimization can be performed
 on the same objects.
 
@@ -64,7 +64,7 @@ where $\mu(X)$ is the posterior mean of $f$ at $X$, and $L(X)L(X)^T = \Sigma(X)$
 is a root decomposition of the posterior covariance matrix.
 
 All MC-based acquisition functions in BoTorch are derived from
-[`MCAcquisitionFunction`](../api/acquisition.html#mcacquisitionfunction).
+[`MCAcquisitionFunction`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.monte_carlo.MCAcquisitionFunction).
 
 Acquisition functions expect input tensors $X$ of shape
 $\textit{batch\_shape} \times q \times d$, where $d$ is the dimension of the
@@ -122,15 +122,15 @@ above.
 
 BoTorch also provides implementations of analytic acquisition functions that
 do not depend on MC sampling. These acquisition functions are subclasses of
-[`AnalyticAcquisitionFunction`](../api/acquisition.html#analyticacquisitionfunction)
+[`AnalyticAcquisitionFunction`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.analytic.AnalyticAcquisitionFunction)
 and only exist for the case of a single candidate point ($q = 1$). These
 include classical acquisition functions such as Expected Improvement (EI),
 Upper Confidence Bound (UCB), and Probability of Improvement (PI). An example
-comparing [`ExpectedImprovement`](../api/acquisition.html#expectedimprovement),
+comparing [`ExpectedImprovement`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.analytic.ExpectedImprovement),
 the analytic version of EI, to its MC counterpart
-[`qExpectedImprovement`](../api/acquisition.html#qexpectedimprovement)
+[`qExpectedImprovement`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.monte_carlo.qExpectedImprovement)
 can be found in
-[this tutorial](../tutorials/compare_mc_analytic_acquisition).
+[this tutorial](tutorials/compare_mc_analytic_acquisition).
 
 Analytic acquisition functions allow for an explicit expression in terms of the
 summary statistics of the posterior distribution at the evaluated point(s).
```

docs/batching.md

Lines changed: 6 additions & 6 deletions

```diff
@@ -19,7 +19,7 @@ referred to as q-Acquisition Functions. For instance, BoTorch ships with support
 for q-EI, q-UCB, and a few others.
 
 As discussed in the
-[design philosophy](design_philosophy#batching-batching-batching),
+[design philosophy](/docs/design_philosophy#parallelism-through-batched-computations),
 BoTorch has adopted the convention of referring to batches in the
 batch-acquisition sense as "q-batches", and to batches in the torch
 batch-evaluation sense as "t-batches".
@@ -35,9 +35,9 @@ with samples from the posterior in a consistent fashion.
 
 #### Batch-Mode Decorator
 
-In order to simplify the user-facing API for evaluating acquisition functions, 
+In order to simplify the user-facing API for evaluating acquisition functions,
 BoTorch implements the
-[`@t_batch_mode_transform`](../api/utils.html#botorch.utils.transforms.t_batch_mode_transform)
+[`@t_batch_mode_transform`](https://botorch.readthedocs.io/en/latest/utils.html#botorch.utils.transforms.t_batch_mode_transform)
 decorator, which allows the use of non-batch mode inputs. If applied to an
 instance method with a single `Tensor` argument, an input tensor to that method
 without a t-batch dimension (i.e. tensors of shape $q \times d$) will automatically
@@ -66,7 +66,7 @@ distribution:
 of $b_1 \times \cdots \times b_k$, with $n$ data points of $d$-dimensions each in every batch)
 yields a posterior with `event_shape` being $b_1 \times \cdots \times b_k \times n \times 1$.
 In most cases, the t-batch-shape will be single-dimensional (i.e., $k=1$).
-- Evaluating a multi-output model with $o$ outputs at a $b_1 \times \cdots \times b_k 
+- Evaluating a multi-output model with $o$ outputs at a $b_1 \times \cdots \times b_k
 \times n \times d$ tensor yields a posterior with `event_shape` equal to
 $b_1 \times \cdots \times b_k \times n \times o$.
 - Recall from the previous section that internally, with the help of the
@@ -123,7 +123,7 @@ The shape of the test points must support broadcasting to the $\textit{batch_sha
 necessary over $\textit{batch_shape}$)
 
 #### Batched Multi-Output Models
-The [`BatchedMultiOutputGPyTorchModel`](../api/models.html#batchedmultioutputgpytorchmodel)
+The [`BatchedMultiOutputGPyTorchModel`](https://botorch.readthedocs.io/en/latest/models.html#botorch.models.gpytorch.BatchedMultiOutputGPyTorchModel)
 class implements a fast multi-output model (assuming conditional independence of
 the outputs given the input) by batching over the outputs.
 
@@ -157,5 +157,5 @@ back-propagating.
 
 #### Batched Cross Validation
 See the
-[Using batch evaluation for fast cross validation](../tutorials/batch_mode_cross_validation)
+[Using batch evaluation for fast cross validation](tutorials/batch_mode_cross_validation)
 tutorial for details on using batching for fast cross validation.
```

docs/botorch_and_ax.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -18,7 +18,7 @@ it easy to drive the car.
 
 
 Ax provides a
-[`BotorchModel`](https://ax.dev/api/models.html#ax.models.torch.botorch.BotorchModel)
+[`BotorchModel`](https://ax.readthedocs.io/en/latest/models.html#ax.models.torch.botorch.BotorchModel)
 that is a sensible default for modeling and optimization which can be customized
 by specifying and passing in bespoke model constructors, acquisition functions,
 and optimization strategies.
@@ -43,7 +43,7 @@ the Bayesian Optimization loop untouched. It is then straightforward to plug
 your custom BoTorch model or acquisition function into Ax to take advantage of
 Ax's various loop control APIs, as well as its powerful automated metadata
 management, data storage, etc. See the
-[Using a custom BoTorch model in Ax](../tutorials/custom_botorch_model_in_ax)
+[Using a custom BoTorch model in Ax](tutorials/custom_botorch_model_in_ax)
 tutorial for more on how to do this.
 
 
@@ -53,8 +53,8 @@ If you're working in a non-standard setting, such as structured feature or
 design spaces, or where the model fitting process requires interactive work,
 then using Ax may not be the best solution for you. In such a situation, you
 might be better off writing your own full Bayesian Optimization loop in BoTorch.
-The [q-Noisy Constrained EI](../tutorials/closed_loop_botorch_only) tutorial and
-[variational auto-encoder](../tutorials/vae_mnist) tutorial give examples of how
+The [q-Noisy Constrained EI](tutorials/closed_loop_botorch_only) tutorial and
+[variational auto-encoder](tutorials/vae_mnist) tutorial give examples of how
 this can be done.
 
 You may also consider working purely in BoTorch if you want to be able to
```

docs/constraints.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -41,7 +41,7 @@ the constrained expected improvement variant is mathematically equivalent to the
 unconstrained expected improvement of the objective, multiplied by the probability of
 feasibility under the modeled outcome constraint.
 
-See the [Closed-Loop Optimization](../tutorials/closed_loop_botorch_only)
+See the [Closed-Loop Optimization](tutorials/closed_loop_botorch_only)
 tutorial for an example of using outcome constraints in BoTorch.
 
 
```
docs/design_philosophy.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -69,7 +69,7 @@ all data available. In typical machine learning model training, a stochastic
 version of the empirical loss, obtained by "mini-batching" the data, is
 optimized using stochastic optimization algorithms.
 
-In BoTorch, [`AcquisitionFunction`](../api/acquisition.html#acquisitionfunction)
+In BoTorch, [`AcquisitionFunction`](https://botorch.readthedocs.io/en/latest/acquisition.html#botorch.acquisition.acquisition.AcquisitionFunction)
 modules map an input design $X$ to the acquisition function value. Optimizing
 the acquisition function means optimizing the output over the possible values of
 $X$. If the acquisition function is deterministic, then so is the optimization
```

docs/getting_started.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -89,13 +89,13 @@ Here's a quick run down of the main components of a Bayesian Optimization loop.
 ## Tutorials
 
 Our Jupyter notebook tutorials help you get off the ground with BoTorch.
-View and download them [here](../tutorials).
+View and download them [here](tutorials).
 
 
 ## API Reference
 
 For an in-depth reference of the various BoTorch internals, see our
-[API Reference](../api).
+[API Reference](https://botorch.readthedocs.io/).
 
 
 ## Contributing
```
