
Commit d56dc5d

DOC Adding dropdown for module 2.2 Manifold Learning (scikit-learn#26720)
1 parent 5c57694 commit d56dc5d

1 file changed

doc/modules/manifold.rst

Lines changed: 50 additions & 20 deletions
@@ -130,8 +130,10 @@ distances between all points. Isomap can be performed with the object
    :align: center
    :scale: 50
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
+
 The Isomap algorithm comprises three stages:
 
 1. **Nearest neighbor search.** Isomap uses
@@ -162,6 +164,8 @@ The overall complexity of Isomap is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * `"A global geometric framework for nonlinear dimensionality reduction"
@@ -187,8 +191,9 @@ Locally linear embedding can be performed with function
    :align: center
    :scale: 50
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
 
 The standard LLE algorithm comprises three stages:
 
@@ -209,6 +214,8 @@ The overall complexity of standard LLE is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * `"Nonlinear dimensionality reduction by locally linear embedding"
@@ -241,8 +248,9 @@ It requires ``n_neighbors > n_components``.
    :align: center
    :scale: 50
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
 
 The MLLE algorithm comprises three stages:
 
@@ -265,6 +273,8 @@ The overall complexity of MLLE is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * `"MLLE: Modified Locally Linear Embedding Using Multiple Weights"
@@ -291,8 +301,9 @@ It requires ``n_neighbors > n_components * (n_components + 3) / 2``.
    :align: center
    :scale: 50
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
 
 The HLLE algorithm comprises three stages:
 
@@ -313,6 +324,8 @@ The overall complexity of standard HLLE is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * `"Hessian Eigenmaps: Locally linear embedding techniques for
@@ -335,8 +348,9 @@ preserving local distances. Spectral embedding can be performed with the
 function :func:`spectral_embedding` or its object-oriented counterpart
 :class:`SpectralEmbedding`.
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
 
 The Spectral Embedding (Laplacian Eigenmaps) algorithm comprises three stages:
 
@@ -358,6 +372,8 @@ The overall complexity of spectral embedding is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * `"Laplacian Eigenmaps for Dimensionality Reduction
@@ -383,8 +399,9 @@ tangent spaces to learn the embedding. LTSA can be performed with function
    :align: center
    :scale: 50
 
-Complexity
-----------
+|details-start|
+**Complexity**
+|details-split|
 
 The LTSA algorithm comprises three stages:
 
@@ -404,6 +421,8 @@ The overall complexity of standard LTSA is
 * :math:`k` : number of nearest neighbors
 * :math:`d` : output dimension
 
+|details-end|
+
 .. topic:: References:
 
   * :arxiv:`"Principal manifolds and nonlinear dimensionality reduction via
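
LTSA is selected with method="ltsa" on the same LocallyLinearEmbedding estimator (data assumed):

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import LocallyLinearEmbedding

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)
    ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
    X_2d = ltsa.fit_transform(X)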
@@ -448,8 +467,9 @@ the similarities chosen in some optimal ways. The objective, called the
 stress, is then defined by :math:`\sum_{i < j} d_{ij}(X) - \hat{d}_{ij}(X)`
 
 
-Metric MDS
-----------
+|details-start|
+**Metric MDS**
+|details-split|
 
 In the simplest metric :class:`MDS` model, called *absolute MDS*, disparities are defined by
 :math:`\hat{d}_{ij} = S_{ij}`. With absolute MDS, the value :math:`S_{ij}`
@@ -458,8 +478,11 @@ should then correspond exactly to the distance between point :math:`i` and
 
 Most commonly, disparities are set to :math:`\hat{d}_{ij} = b S_{ij}`.
 
-Nonmetric MDS
--------------
+|details-end|
+
+|details-start|
+**Nonmetric MDS**
+|details-split|
 
 Nonmetric :class:`MDS` focuses on the ordination of the data. If
 :math:`S_{ij} > S_{jk}`, then the embedding should enforce :math:`d_{ij} <
@@ -490,6 +513,7 @@ in the metric case.
    :align: center
    :scale: 60
 
+|details-end|
 
 .. topic:: References:
 
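The metric/nonmetric distinction above maps to the metric flag of the :class:`MDS` estimator; a sketch on an assumed digits subsample:

    from sklearn.datasets import load_digits
    from sklearn.manifold import MDS

    X, _ = load_digits(return_X_y=True)
    X = X[:100]  # MDS stores the full dissimilarity matrix, so keep it small
    # metric=True fits the disparities themselves; metric=False preserves
    # only their rank order, as the Nonmetric MDS section describes.
    X_metric = MDS(n_components=2, metric=True, random_state=0).fit_transform(X)
    X_nonmetric = MDS(n_components=2, metric=False, random_state=0).fit_transform(X)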
@@ -551,8 +575,10 @@ The disadvantages to using t-SNE are roughly:
    :align: center
    :scale: 50
 
-Optimizing t-SNE
-----------------
+|details-start|
+**Optimizing t-SNE**
+|details-split|
+
 The main purpose of t-SNE is visualization of high-dimensional data. Hence,
 it works best when the data will be embedded on two or three dimensions.
 
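The knobs this section goes on to discuss are all constructor parameters of :class:`TSNE`; a sketch on assumed digits data:

    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X, _ = load_digits(return_X_y=True)
    tsne = TSNE(
        n_components=2,           # visualization target: two or three dimensions
        perplexity=30.0,          # roughly the effective neighborhood size
        early_exaggeration=12.0,  # cluster spacing early in the optimization
        learning_rate=200.0,
        random_state=0,
    )
    X_2d = tsne.fit_transform(X)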
@@ -601,8 +627,11 @@ but less accurate results.
 provides a good discussion of the effects of the various parameters, as well
 as interactive plots to explore the effects of different parameters.
 
-Barnes-Hut t-SNE
-----------------
+|details-end|
+
+|details-start|
+**Barnes-Hut t-SNE**
+|details-split|
 
 The Barnes-Hut t-SNE that has been implemented here is usually much slower than
 other manifold learning algorithms. The optimization is quite difficult
@@ -638,6 +667,7 @@ imply that the data cannot be correctly classified by a supervised model. It
 might be the case that 2 dimensions are not high enough to accurately represent
 the internal structure of the data.
 
+|details-end|
 
 .. topic:: References:
 
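The Barnes-Hut approximation is chosen through the method parameter of :class:`TSNE`; angle trades speed against accuracy (X is assumed from the earlier digits sketch):

    from sklearn.manifold import TSNE

    # method="barnes_hut" (the default) is the approximate variant this
    # section describes; method="exact" is the quadratic-cost alternative.
    # barnes_hut requires n_components < 4.
    tsne_bh = TSNE(n_components=2, method="barnes_hut", angle=0.5, random_state=0)
    X_2d = tsne_bh.fit_transform(X)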