Commit 5e9302b

DOC Add info about influence of sample_weights to User Guide, LogisticRegression (scikit-learn#27534)
Co-authored-by: Christian Lorentzen <lorentzen.ch@gmail.com>
1 parent e6978b7 commit 5e9302b

1 file changed (+9, -1 lines)

doc/modules/linear_model.rst

Lines changed: 9 additions & 1 deletion
@@ -897,8 +897,11 @@ following cost function:
 .. math::
    :name: regularized-logistic-loss

-   \min_{w} C \sum_{i=1}^n \left(-y_i \log(\hat{p}(X_i)) - (1 - y_i) \log(1 - \hat{p}(X_i))\right) + r(w).
+   \min_{w} C \sum_{i=1}^n s_i \left(-y_i \log(\hat{p}(X_i)) - (1 - y_i) \log(1 - \hat{p}(X_i))\right) + r(w),

+where :math:`s_i` corresponds to the weight assigned by the user to a
+specific training sample (the vector :math:`s` is formed by element-wise
+multiplication of the class weights and sample weights).

 We currently provide four choices for the regularization term :math:`r(w)` via
 the `penalty` argument:
@@ -920,6 +923,11 @@ controls the strength of :math:`\ell_1` regularization vs. :math:`\ell_2`
 regularization. Elastic-Net is equivalent to :math:`\ell_1` when
 :math:`\rho = 1` and equivalent to :math:`\ell_2` when :math:`\rho=0`.

+Note that the scale of the class weights and the sample weights will influence
+the optimization problem. For instance, multiplying the sample weights by a
+constant :math:`b>0` is equivalent to multiplying the (inverse) regularization
+strength `C` by :math:`b`.
+
 Multinomial Case
 ----------------

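Below is a minimal sketch (not part of the commit) illustrating the two statements added to the User Guide, using only the public scikit-learn API. The synthetic dataset, weight values, variable names, and tolerances are arbitrary choices for the example:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
rng = np.random.RandomState(0)
sample_weight = rng.uniform(0.5, 2.0, size=y.shape[0])

# Tight convergence settings so the coefficient comparisons below are meaningful.
common = dict(tol=1e-8, max_iter=10_000)

# 1) The effective per-sample weight s_i is the element-wise product of the
#    class weight and the sample weight, so passing class_weight is equivalent
#    to folding it into sample_weight.
class_weight = {0: 2.0, 1: 3.0}
clf_cw = LogisticRegression(C=1.0, class_weight=class_weight, **common).fit(
    X, y, sample_weight=sample_weight
)
cw_per_sample = np.where(y == 0, class_weight[0], class_weight[1])
clf_sw = LogisticRegression(C=1.0, **common).fit(
    X, y, sample_weight=sample_weight * cw_per_sample
)
print(np.allclose(clf_cw.coef_, clf_sw.coef_, atol=1e-6))  # expected: True

# 2) Multiplying all sample weights by a constant b > 0 is equivalent to
#    multiplying the inverse regularization strength C by b; the fitted
#    coefficients agree up to solver tolerance.
b = 10.0
clf_scaled_w = LogisticRegression(C=1.0, **common).fit(
    X, y, sample_weight=b * sample_weight
)
clf_scaled_C = LogisticRegression(C=b, **common).fit(X, y, sample_weight=sample_weight)
print(np.allclose(clf_scaled_w.coef_, clf_scaled_C.coef_, atol=1e-4))  # expected: True

One practical consequence of the second point: rescaling sample weights (for example, normalizing them to sum to one) implicitly changes the effective amount of regularization unless `C` is adjusted accordingly.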