
Commit 0f43434

DOC fix typos found by codespell (scikit-learn#27457)
1 parent 011e209 commit 0f43434

15 files changed, +22 -22 lines changed


asv_benchmarks/asv.conf.json

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@
 // followed by the pip installed packages).
 //
 // The versions of the dependencies should be bumped in a dedicated commit
-// to easily identify regressions/imrovements due to code changes from
+// to easily identify regressions/improvements due to code changes from
 // those due to dependency changes.
 //
 "matrix": {

doc/modules/clustering.rst

Lines changed: 1 addition & 1 deletion
@@ -1780,7 +1780,7 @@ mean of homogeneity and completeness**:
    measure <https://aclweb.org/anthology/D/D07/D07-1043.pdf>`_
    Andrew Rosenberg and Julia Hirschberg, 2007

-  .. [B2011] `Identication and Characterization of Events in Social Media
+  .. [B2011] `Identification and Characterization of Events in Social Media
     <http://www.cs.columbia.edu/~hila/hila-thesis-distributed.pdf>`_, Hila
     Becker, PhD Thesis.
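
As an aside, the hunk header above refers to the V-measure, the harmonic mean of homogeneity and completeness. A minimal sketch checking that relationship with scikit-learn's metrics (the labels below are made up for illustration):

from sklearn.metrics import completeness_score, homogeneity_score, v_measure_score

labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [0, 0, 1, 2, 2, 2]

h = homogeneity_score(labels_true, labels_pred)
c = completeness_score(labels_true, labels_pred)
v = v_measure_score(labels_true, labels_pred)

# V-measure is the harmonic mean of homogeneity and completeness.
assert abs(v - 2 * h * c / (h + c)) < 1e-12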

doc/whats_new/v0.23.rst

Lines changed: 1 addition & 1 deletion
@@ -784,7 +784,7 @@ Miscellaneous
 .............

 - |MajorFeature| Adds a HTML representation of estimators to be shown in
-  a jupyter notebook or lab. This visualization is acitivated by setting the
+  a jupyter notebook or lab. This visualization is activated by setting the
   `display` option in :func:`sklearn.set_config`. :pr:`14180` by
   `Thomas Fan`_.
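
A minimal sketch of the feature this entry describes, using the documented `display` option of `sklearn.set_config` (the pipeline below is an arbitrary example):

from sklearn import set_config
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Activate the HTML representation rendered in Jupyter notebook/lab.
set_config(display="diagram")

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe  # in a notebook, the last expression now renders as an HTML diagram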

doc/whats_new/v1.3.rst

Lines changed: 3 additions & 3 deletions
@@ -432,7 +432,7 @@ Changelog
   the pandas parser. The parameter `read_csv_kwargs` allows to overwrite this behaviour.
   :pr:`26551` by :user:`Guillaume Lemaitre <glemaitre>`.

-- |Fix| :func:`datasets.fetch_openml` will consistenly use `np.nan` as missing marker
+- |Fix| :func:`datasets.fetch_openml` will consistently use `np.nan` as missing marker
   with both parsers `"pandas"` and `"liac-arff"`.
   :pr:`26579` by :user:`Guillaume Lemaitre <glemaitre>`.

@@ -724,7 +724,7 @@ Changelog

 - |API| The parameter `log_scale` in the class
   :class:`model_selection.LearningCurveDisplay` has been deprecated in 1.3 and
-  will be removed in 1.5. The default scale can be overriden by setting it
+  will be removed in 1.5. The default scale can be overridden by setting it
   directly on the `ax` object and will be set automatically from the spacing
   of the data points otherwise.
   :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.

@@ -837,7 +837,7 @@ Changelog
 - |Fix| :class:`preprocessing.PowerTransformer` now correctly preserves the Pandas
   Index when the `set_config(transform_output="pandas")`. :pr:`26454` by `Thomas Fan`_.

-- |Fix| :class:`preprocessing.PowerTransformer` now correcly raises error when
+- |Fix| :class:`preprocessing.PowerTransformer` now correctly raises error when
   using `method="box-cox"` on data with a constant `np.nan` column.
   :pr:`26400` by :user:`Yao Xiao <Charlie-XIAO>`.
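
A hedged sketch of the pandas-output behaviour mentioned in the first PowerTransformer entry above (the column name, values, and index here are made up):

import pandas as pd

from sklearn import set_config
from sklearn.preprocessing import PowerTransformer

set_config(transform_output="pandas")

X = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0]}, index=[10, 20, 30, 40])
Xt = PowerTransformer().fit_transform(X)

# The transformed output is a DataFrame and keeps the original index.
print(type(Xt), list(Xt.index))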

examples/applications/plot_cyclical_feature_engineering.py

Lines changed: 1 addition & 1 deletion
@@ -631,7 +631,7 @@ def periodic_spline_transformer(period, n_splines=None, degree=3):

 # %%
 # Those features are then combined with the ones already computed in the
-# previous spline-base pipeline. We can observe a nice performance improvemnt
+# previous spline-base pipeline. We can observe a nice performance improvement
 # by modeling this pairwise interaction explicitly:

 cyclic_spline_interactions_pipeline = make_pipeline(
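
The `periodic_spline_transformer` helper named in the hunk header is defined earlier in that example; a rough sketch of how such a transformer can be built with `SplineTransformer` (the parameter choices below are illustrative, not necessarily the example's exact ones):

import numpy as np

from sklearn.preprocessing import SplineTransformer

def periodic_spline_transformer(period, n_splines=None, degree=3):
    # Periodic B-spline features: knots span one period and extrapolation
    # wraps around, so e.g. hour 23 ends up close to hour 0.
    if n_splines is None:
        n_splines = period
    n_knots = n_splines + 1
    return SplineTransformer(
        degree=degree,
        n_knots=n_knots,
        knots=np.linspace(0, period, n_knots).reshape(-1, 1),
        extrapolation="periodic",
        include_bias=True,
    )

hours = np.arange(24).reshape(-1, 1)
print(periodic_spline_transformer(24, n_splines=12).fit_transform(hours).shape)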

examples/ensemble/plot_voting_probas.py

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 three different classifiers and averaged by the
 :class:`~ensemble.VotingClassifier`.

-First, three examplary classifiers are initialized
+First, three exemplary classifiers are initialized
 (:class:`~linear_model.LogisticRegression`, :class:`~naive_bayes.GaussianNB`,
 and :class:`~ensemble.RandomForestClassifier`) and used to initialize a
 soft-voting :class:`~ensemble.VotingClassifier` with weights `[1, 1, 5]`, which
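
A minimal, hedged sketch of the setup this docstring describes (toy data stands in for the example's own):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, random_state=0)

clf1 = LogisticRegression(max_iter=1000, random_state=0)
clf2 = GaussianNB()
clf3 = RandomForestClassifier(n_estimators=100, random_state=0)

# Soft voting averages predicted probabilities; the weights give the
# third classifier five times the influence of the other two.
eclf = VotingClassifier(
    estimators=[("lr", clf1), ("gnb", clf2), ("rf", clf3)],
    voting="soft",
    weights=[1, 1, 5],
)
eclf.fit(X, y)
print(eclf.predict_proba(X[:3]))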

examples/linear_model/plot_poisson_regression_non_normal_loss.py

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ def score_estimator(estimator, df_test):
 # ---------------------------
 #
 # We start by modeling the target variable with the (l2 penalized) least
-# squares linear regression model, more comonly known as Ridge regression. We
+# squares linear regression model, more commonly known as Ridge regression. We
 # use a low penalization `alpha`, as we expect such a linear model to under-fit
 # on such a large dataset.
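
A hedged sketch of the baseline this comment describes (the synthetic data and the `alpha` value below are placeholders, not the example's):

import numpy as np

from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 5))
# A Poisson-distributed count target, modeled here with plain least squares.
y = rng.poisson(lam=np.exp(X[:, 0]))

# Low penalization, as the surrounding text suggests.
ridge = make_pipeline(StandardScaler(), Ridge(alpha=1e-6))
ridge.fit(X, y)
print(ridge.score(X, y))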

examples/manifold/plot_lle_digits.py

Lines changed: 2 additions & 2 deletions
@@ -145,7 +145,7 @@ def plot_embedding(X, title):
     "Spectral embedding": SpectralEmbedding(
         n_components=2, random_state=0, eigen_solver="arpack"
     ),
-    "t-SNE embeedding": TSNE(
+    "t-SNE embedding": TSNE(
        n_components=2,
        n_iter=500,
        n_iter_without_progress=150,
@@ -158,7 +158,7 @@ def plot_embedding(X, title):
 }

 # %%
-# Once we declared all the methodes of interest, we can run and perform the projection
+# Once we declared all the methods of interest, we can run and perform the projection
 # of the original data. We will store the projected data as well as the computational
 # time needed to perform each projection.
 from time import time
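
A hedged, stripped-down sketch of the loop this comment describes: a dict of embedding methods, then a timed projection of each (the method set and parameters are abbreviated relative to the example):

from time import time

from sklearn.datasets import load_digits
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

embeddings = {
    "Truncated SVD embedding": TruncatedSVD(n_components=2),
    "t-SNE embedding": TSNE(n_components=2, n_iter=500, random_state=0),
}

projections, timings = {}, {}
for name, transformer in embeddings.items():
    start = time()
    projections[name] = transformer.fit_transform(X)
    timings[name] = time() - start
    print(f"{name}: {timings[name]:.2f}s")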

examples/svm/plot_svm_kernels.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def plot_training_data_with_decision_boundary(kernel):
 # (`gamma`) controls the influence of each individual training sample on the
 # decision boundary and :math:`{r}` is the bias term (`coef0`) that shifts the
 # data up or down. Here, we use the default value for the degree of the
-# polynomal in the kernel funcion (`degree=3`). When `coef0=0` (the default),
+# polynomial in the kernel function (`degree=3`). When `coef0=0` (the default),
 # the data is only transformed, but no additional dimension is added. Using a
 # polynomial kernel is equivalent to creating
 # :class:`~sklearn.preprocessing.PolynomialFeatures` and then fitting a
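
A hedged sketch of fitting an SVC with the polynomial kernel and the parameters named above (the data here is synthetic, not the example's):

from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=100, noise=0.2, random_state=0)

# degree=3 is the default polynomial degree; coef0 is the bias term r and
# gamma scales the influence of each individual training sample.
clf = SVC(kernel="poly", degree=3, gamma="scale", coef0=0.0)
clf.fit(X, y)
print(clf.score(X, y))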

sklearn/ensemble/_gb.py

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ def _init_raw_predictions(X, estimator, loss, use_predict_proba):
     estimator : object
         The estimator to use to compute the predictions.
     loss : BaseLoss
-        An instace of a loss function class.
+        An instance of a loss function class.
     use_predict_proba : bool
         Whether estimator.predict_proba is used instead of estimator.predict.
