Commit ccad485

Merge pull request scikit-learn#7316 from totalgood/doc-clf-reg-patch
[MRG + 1] fix regressor var name
2 parents: 6297815 + b4661b1 · commit ccad485

File tree: 1 file changed (+32, −25 lines)
doc/modules/linear_model.rst

Lines changed: 32 additions & 25 deletions
@@ -43,10 +43,10 @@ and will store the coefficients :math:`w` of the linear model in its
 ``coef_`` member::

     >>> from sklearn import linear_model
-    >>> clf = linear_model.LinearRegression()
-    >>> clf.fit ([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
+    >>> reg = linear_model.LinearRegression()
+    >>> reg.fit ([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
     LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
-    >>> clf.coef_
+    >>> reg.coef_
     array([ 0.5, 0.5])

 However, coefficient estimates for Ordinary Least Squares rely on the
@@ -101,13 +101,13 @@ arrays X, y and will store the coefficients :math:`w` of the linear model in
 its ``coef_`` member::

     >>> from sklearn import linear_model
-    >>> clf = linear_model.Ridge (alpha = .5)
-    >>> clf.fit ([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) # doctest: +NORMALIZE_WHITESPACE
+    >>> reg = linear_model.Ridge (alpha = .5)
+    >>> reg.fit ([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) # doctest: +NORMALIZE_WHITESPACE
     Ridge(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=None,
           normalize=False, random_state=None, solver='auto', tol=0.001)
-    >>> clf.coef_
+    >>> reg.coef_
     array([ 0.34545455, 0.34545455])
-    >>> clf.intercept_ #doctest: +ELLIPSIS
+    >>> reg.intercept_ #doctest: +ELLIPSIS
     0.13636...

@@ -138,11 +138,11 @@ as GridSearchCV except that it defaults to Generalized Cross-Validation
 (GCV), an efficient form of leave-one-out cross-validation::

     >>> from sklearn import linear_model
-    >>> clf = linear_model.RidgeCV(alphas=[0.1, 1.0, 10.0])
-    >>> clf.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) # doctest: +SKIP
+    >>> reg = linear_model.RidgeCV(alphas=[0.1, 1.0, 10.0])
+    >>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) # doctest: +SKIP
     RidgeCV(alphas=[0.1, 1.0, 10.0], cv=None, fit_intercept=True, scoring=None,
             normalize=False)
-    >>> clf.alpha_ # doctest: +SKIP
+    >>> reg.alpha_ # doctest: +SKIP
     0.1

 .. topic:: References
@@ -182,12 +182,12 @@ the algorithm to fit the coefficients. See :ref:`least_angle_regression`
 for another implementation::

     >>> from sklearn import linear_model
-    >>> clf = linear_model.Lasso(alpha = 0.1)
-    >>> clf.fit([[0, 0], [1, 1]], [0, 1])
+    >>> reg = linear_model.Lasso(alpha = 0.1)
+    >>> reg.fit([[0, 0], [1, 1]], [0, 1])
     Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
           normalize=False, positive=False, precompute=False, random_state=None,
           selection='cyclic', tol=0.0001, warm_start=False)
-    >>> clf.predict([[1, 1]])
+    >>> reg.predict([[1, 1]])
     array([ 0.8])

 Also useful for lower-level tasks is the function :func:`lasso_path` that
@@ -441,12 +441,12 @@ function of the norm of its coefficients.
 ::

     >>> from sklearn import linear_model
-    >>> clf = linear_model.LassoLars(alpha=.1)
-    >>> clf.fit([[0, 0], [1, 1]], [0, 1]) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
+    >>> reg = linear_model.LassoLars(alpha=.1)
+    >>> reg.fit([[0, 0], [1, 1]], [0, 1]) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
     LassoLars(alpha=0.1, copy_X=True, eps=..., fit_intercept=True,
               fit_path=True, max_iter=500, normalize=True, positive=False,
               precompute='auto', verbose=False)
-    >>> clf.coef_ # doctest: +ELLIPSIS
+    >>> reg.coef_ # doctest: +ELLIPSIS
     array([ 0.717157..., 0. ])

 .. topic:: Examples:
@@ -604,21 +604,21 @@ Bayesian Ridge Regression is used for regression::
     >>> from sklearn import linear_model
     >>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
     >>> Y = [0., 1., 2., 3.]
-    >>> clf = linear_model.BayesianRidge()
-    >>> clf.fit(X, Y)
+    >>> reg = linear_model.BayesianRidge()
+    >>> reg.fit(X, Y)
     BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False, copy_X=True,
                   fit_intercept=True, lambda_1=1e-06, lambda_2=1e-06, n_iter=300,
                   normalize=False, tol=0.001, verbose=False)

 After being fitted, the model can then be used to predict new values::

-    >>> clf.predict ([[1, 0.]])
+    >>> reg.predict ([[1, 0.]])
     array([ 0.50000013])


 The weights :math:`w` of the model can be access::

-    >>> clf.coef_
+    >>> reg.coef_
     array([ 0.49999993, 0.49999993])

 Due to the Bayesian framework, the weights found are slightly different to the
@@ -1233,12 +1233,19 @@ This way, we can solve the XOR problem with a linear classifier::
     >>> import numpy as np
     >>> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
     >>> y = X[:, 0] ^ X[:, 1]
-    >>> X = PolynomialFeatures(interaction_only=True).fit_transform(X)
+    >>> y
+    array([0, 1, 1, 0])
+    >>> X = PolynomialFeatures(interaction_only=True).fit_transform(X).astype(int)
     >>> X
-    array([[ 1., 0., 0., 0.],
-           [ 1., 0., 1., 0.],
-           [ 1., 1., 0., 0.],
-           [ 1., 1., 1., 1.]])
+    array([[1, 0, 0, 0],
+           [1, 0, 1, 0],
+           [1, 1, 0, 0],
+           [1, 1, 1, 1]])
     >>> clf = Perceptron(fit_intercept=False, n_iter=10, shuffle=False).fit(X, y)
+
+And the classifier "predictions" are perfect::
+
+    >>> clf.predict(X)
+    array([0, 1, 1, 0])
     >>> clf.score(X, y)
     1.0
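
For reference, a minimal sketch (not part of the commit) that exercises the renamed regressor snippets as a plain script rather than as doctests. It assumes only a working scikit-learn installation; the values in the comments are the approximate results quoted in the patched documentation, and estimator reprs and defaults vary across scikit-learn versions.

# Minimal sketch: the regressor examples from the patched doc, with the
# estimators bound to ``reg`` instead of ``clf``.
# Assumes scikit-learn is installed; exact outputs may differ slightly by version.
from sklearn import linear_model

# Ordinary Least Squares
reg = linear_model.LinearRegression()
reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
print(reg.coef_)                  # approximately [0.5, 0.5]

# Ridge regression with a small amount of regularization
reg = linear_model.Ridge(alpha=0.5)
reg.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1])
print(reg.coef_, reg.intercept_)  # approximately [0.345..., 0.345...], 0.136...

# Lasso
reg = linear_model.Lasso(alpha=0.1)
reg.fit([[0, 0], [1, 1]], [0, 1])
print(reg.predict([[1, 1]]))      # approximately [0.8]

Note that the classifier example (the Perceptron in the XOR snippet) keeps the ``clf`` name, which is the point of the rename: ``reg`` for regressors, ``clf`` for classifiers.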
