Releases: dmlc/xgboost
Release candidate of version 1.5.0
1.4.2 Patch Release
This is a patch release for the Python package with the following fixes:
- Handle the latest version of `cupy.ndarray` in `inplace_predict`. (#6933)
- Ensure the output array from `predict_leaf` is `(n_samples, )` when there's only 1 tree. 1.4.0 outputs `(n_samples, 1)`. (#6889)
- Fix empty dataset handling with multi-class AUC. (#6947)
- Handle object type from pandas in `inplace_predict`. (#6927)
You can verify the downloaded source code `xgboost.tar.gz` by running this command in your Unix shell:
echo "3ffd4a90cd03efde596e51cadf7f344c8b6c91aefd06cc92db349cd47056c05a *xgboost.tar.gz" | shasum -a 256 --check
1.4.1 Patch Release
This is a bug fix release.
- Fix GPU implementation of AUC on some large datasets. (#6866)
You can verify the downloaded source code `xgboost.tar.gz` by running this command in your Unix shell:
echo "f3a37e5ddac10786e46423db874b29af413eed49fd9baed85035bbfee6fc6635 *xgboost.tar.gz" | shasum -a 256 --check
Release 1.4.0 stable
Introduction of pre-built binary package for R, with GPU support
Starting with release 1.4.0, users now have the option of installing {xgboost} without having to build it from source. This is particularly advantageous for users who want to take advantage of the GPU algorithm (`gpu_hist`), as previously they'd have to build {xgboost} from source using CMake and NVCC. Now installing {xgboost} with GPU support is as easy as: `R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz`. (#6827)
See the instructions at https://xgboost.readthedocs.io/en/latest/build.html
Improvements on prediction functions
XGBoost has many prediction types, including SHAP value computation and inplace prediction. In 1.4 we overhauled the underlying prediction functions for the C API and Python API with a unified interface. (#6777, #6693, #6653, #6662, #6648, #6668, #6804)
- Starting with 1.4, sklearn interface prediction will use inplace predict by default when the input data is supported.
- Users can use inplace predict with the `dart` booster and enable GPU acceleration just like `gbtree`.
- Also, all prediction functions with tree models are now thread-safe. Inplace predict is improved with `base_margin` support.
- A new set of C predict functions is exposed in the public interface.
- A user-visible change is a newly added parameter called `strict_shape`. See https://xgboost.readthedocs.io/en/latest/prediction.html for more details.
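Below is a minimal Python sketch of the two user-facing pieces above: thread-safe inplace prediction and the new `strict_shape` parameter. The data here is synthetic and purely illustrative.

```python
import numpy as np
import xgboost as xgb

# Train a small model on synthetic data for illustration.
X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)
booster = xgb.train({"objective": "binary:logistic"},
                    xgb.DMatrix(X, label=y), num_boost_round=10)

# Thread-safe in-place prediction directly on the numpy array,
# avoiding the extra copy made during DMatrix construction.
preds = booster.inplace_predict(X)

# strict_shape=True forces a fully specified output shape instead of
# squeezing trailing dimensions; see the prediction doc linked above.
leaves = booster.predict(xgb.DMatrix(X), pred_leaf=True, strict_shape=True)
print(preds.shape, leaves.shape)
```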
Improvement on Dask interface
- Starting with 1.4, the Dask interface is considered to be feature-complete, which means all of the models found in the single-node Python interface are now supported in Dask, including but not limited to ranking and random forest. Also, the prediction function is significantly faster and supports SHAP value computation.
  - Most of the parameters found in the single-node sklearn interface are supported by the Dask interface. (#6471, #6591)
  - Implements learning to rank. On the Dask interface, we use the newly added support of query ID to enable group structure (a short sketch follows this list). (#6576)
  - The Dask interface has Python type hints support. (#6519)
  - All models can be safely pickled. (#6651)
  - Random forest estimators are now supported. (#6602)
  - SHAP value computation is now supported. (#6575, #6645, #6614)
  - Evaluation results are printed on the scheduler process. (#6609)
  - `DaskDMatrix` (and the device quantile dmatrix) now accepts all meta-information. (#6601)
- Prediction optimization. We enhanced and sped up the prediction function for the Dask interface. See the latest Dask tutorial page in our documentation for an overview of how you can optimize it even further. (#6650, #6645, #6648, #6668)
- Bug fixes
- Other improvements on documents, blogs, tutorials, and demos. (#6389, #6366, #6687, #6699, #6532, #6501)
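As referenced in the ranking bullet above, here is a minimal sketch of Dask training with query IDs. It assumes a local cluster, synthetic data, and that `DaskDMatrix` accepts `qid` as described; names and sizes are illustrative.

```python
import dask.array as da
import xgboost as xgb
from dask.distributed import Client, LocalCluster

with Client(LocalCluster(n_workers=2)) as client:
    # Synthetic ranking data: 20 query groups of 50 rows each.
    X = da.random.random((1000, 10), chunks=(250, 10))
    y = da.random.randint(0, 5, size=1000, chunks=250)   # relevance labels
    qid = da.arange(1000, chunks=250) // 50               # sorted query IDs

    dtrain = xgb.dask.DaskDMatrix(client, X, y, qid=qid)
    output = xgb.dask.train(client, {"objective": "rank:ndcg"},
                            dtrain, num_boost_round=10)
    booster = output["booster"]
```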
Python package
With changes from Dask and general improvements on prediction, we have made some enhancements to the general Python interface and IO for booster information. Starting from 1.4, booster feature names and types can be saved into the JSON model. Also, some model attributes like `best_iteration` and `best_score` are restored upon model load (a short sketch follows the lists below). On the sklearn interface, some attributes are now implemented as Python object properties with better documentation.
- Breaking change: All `data` parameters in prediction functions are renamed to `X` for better compliance with the sklearn estimator interface guidelines.
- Breaking change: XGBoost used to generate some pseudo feature names with `DMatrix` when inputs like `np.ndarray` don't have column names. The procedure is removed to avoid conflicts with other inputs. (#6605)
- Early stopping with training continuation is now supported. (#6506)
- Optional imports for Dask and cuDF are now lazy. (#6522)
- As mentioned in the prediction improvement summary, the sklearn interface uses inplace prediction whenever possible. (#6718)
- Booster information like feature names and feature types are now saved into the JSON model file. (#6605)
- All `DMatrix` interfaces, including `DeviceQuantileDMatrix` and counterparts in the Dask interface (as mentioned in the Dask changes summary), now accept all the meta-information like `group` and `qid` in their constructors for better consistency. (#6601)
- Booster attributes are restored upon model load so users don't have to call `attr` manually. (#6593)
- On the sklearn interface, all models accept `base_margin` for evaluation datasets. (#6591)
- Improvements to the setup script, including smaller sdist size and faster installation if the C++ library is already built. (#6611, #6694, #6565)
- Bug fixes for the Python package:
  - Don't validate feature when number of rows is 0. (#6472)
  - Move metric configuration into booster. (#6504)
  - Calling `XGBModel.fit()` should clear the Booster by default. (#6562)
  - Support `_estimator_type`. (#6582)
  - [dask, sklearn] Fix predict proba. (#6566, #6817)
  - Restore unknown data support. (#6595)
  - Fix learning rate scheduler with cv. (#6720)
  - Fix small typo in sklearn documentation. (#6717)
  - [python-package] Fix `Booster.feature_types = None`. (#6705)
  - Fix divide by 0 in feature importance when no split is found. (#6676)
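As a sketch of the JSON IO improvements mentioned above: feature names/types and attributes such as `best_iteration` now survive a save/load round trip. The file name and data here are illustrative.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

X = pd.DataFrame(np.random.rand(50, 3), columns=["a", "b", "c"])
y = np.random.randint(2, size=50)

clf = xgb.XGBClassifier(n_estimators=5, use_label_encoder=False)
clf.fit(X, y)
clf.save_model("model.json")   # feature names/types stored in the JSON model

restored = xgb.XGBClassifier()
restored.load_model("model.json")
print(restored.get_booster().feature_names)  # ['a', 'b', 'c']
```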
JVM package
- [jvm-packages] Fix early stopping not working even without a `custom_eval` setting (#6738)
- Fix potential case where TaskFailedListener's callback won't be called (#6612)
- [jvm] Add ability to load booster direct from byte array (#6655)
- [jvm-packages] JVM library loader extensions (#6630)
R package
- R documentation: Make construction of DMatrix consistent.
- Fix R documentation for xgb.train. (#6764)
ROC-AUC
We re-implemented the ROC-AUC metric in XGBoost. The new implementation supports
multi-class classification and has better support for learning to rank tasks that are not
binary. Also, it has a better-defined average on distributed environments with additional
handling for invalid datasets. (#6749, #6747, #6797)
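A minimal sketch of the new multi-class support: requesting `auc` as the evaluation metric with a multi-class objective, on synthetic data.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 5)
y = np.random.randint(3, size=200)
dtrain = xgb.DMatrix(X, label=y)

# The re-implemented AUC can now be used as an eval metric for multi-class.
xgb.train({"objective": "multi:softprob", "num_class": 3, "eval_metric": "auc"},
          dtrain, num_boost_round=5, evals=[(dtrain, "train")])
```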
Global configuration.
Starting from 1.4, XGBoost's Python, R and C interfaces support a new global configuration model where users can specify some global parameters. Currently, the supported parameters are `verbosity` and `use_rmm`. The latter is experimental; see the rmm plugin demo and related README file for details. (#6414, #6656)
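A minimal sketch of the Python side of the global configuration API:

```python
import xgboost as xgb

xgb.set_config(verbosity=2)   # set a global parameter
print(xgb.get_config())       # e.g. {'verbosity': 2, 'use_rmm': False}

# Temporarily override a parameter within a scope.
with xgb.config_context(verbosity=0):
    print(xgb.get_config()["verbosity"])  # 0
```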
Other New features.
- Better handling for input data types that support `__array_interface__`. For some data types, including GPU inputs and `scipy.sparse.csr_matrix`, XGBoost employs `__array_interface__` for processing the underlying data. Starting from 1.4, XGBoost can accept arbitrary array strides (which means column-major is supported) without making data copies, potentially reducing a significant amount of memory consumption. Also, version 3 of `__cuda_array_interface__` is now supported. (#6776, #6765, #6459, #6675)
- Improved parameter validation: feeding XGBoost parameters that contain whitespace will now trigger an error. (#6769)
- For Python and R packages, file paths containing the home indicator `~` are supported.
- As mentioned in the Python changes summary, the JSON model can now save feature information of the trained booster. The JSON schema is updated accordingly. (#6605)
- Development of categorical data support continues, with newly added weighted data support and `dart` booster support. (#6508, #6693)
- As mentioned in the Dask change summary, ranking now supports the `qid` parameter for query groups. (#6576)
- `DMatrix.slice` can now consume a numpy array. (#6368)
Other breaking changes
Aside from the feature name generation, there are 2 breaking changes:
CPU Optimization
- Aside from the general changes on the predict function, some optimizations are applied to the CPU implementation. (#6683, #6550, #6696, #6700)
- Also, performance for sampling initialization in `hist` is improved. (#6410)
Notable fixes in the core library
These fixes do not reside in particular language bindings:
- Fixes for gamma regression. This includes checking for invalid input values, fixes for the gamma deviance metric, and a better floating point guard for the gamma negative log-likelihood metric. (#6778, #6537, #6761)
- Random forest with `gpu_hist` might generate low accuracy in previous versions. (#6755)
- Fix a bug in GPU sketching when the data size exceeds the limit of a 32-bit integer. (#6826)
- Memory consumption fix for row-major adapters (#6779)
- Don't estimate sketch batch size when rmm is used. (#6807) (#6830)
- Fix in-place predict with missing value. (#6787)
- Re-introduce double buffer in UpdatePosition, to fix perf regression in gpu_hist (#6757)
- Pass correct split_type to GPU predictor (#6491)
- Fix DMatrix feature names/types IO. (#6507)
- Use view for `SparsePage` exclusively to avoid some data access races. (#6590)
- Check for invalid data. (#6742)
- Fix relocatable include in CMakeLists (#6734) (#6737)
- Fix DMatrix slice with feature types. (#6689)
Other deprecation notices:
- This release will be the last release to support CUDA 10.0. (#6642)
- Starting in the next release, the Python package will require Pip 19.3+ due to the use of the manylinux2014 tag. Also, CentOS 6, RHEL 6 and other old distributions will not be supported.
Known issue:
MacOS build of the JVM packages doesn't support multi-threading out of the box. To enable mul...
1.3.3 Patch Release
- Fix regression on `best_ntree_limit`. (#6616)
1.3.2 Patch Release
- Fix compatibility with newer scikit-learn. (#6555)
- Fix wrong `best_ntree_limit` in multi-class. (#6569)
- Ensure that Rabit can be compiled on Solaris. (#6578)
- Fix `best_ntree_limit` for linear and dart. (#6579)
- Remove duplicated DMatrix creation in scikit-learn interface. (#6592)
- Fix `evals_result` in XGBRanker. (#6594)
1.3.1 Patch Release
- Enable loading models from <1.0.0 trained with `objective='binary:logitraw'`. (#6517)
- Fix handling of print period in `EvaluationMonitor`. (#6499)
- Fix a bug in metric configuration after loading model. (#6504)
- Fix `save_best` early stopping option. (#6523)
- Remove `cupy.array_equal`, since it's not compatible with cuPy 7.8. (#6528)
You can verify the downloaded source code `xgboost.tar.gz` by running this command in your Unix shell:
echo "fd51e844dd0291fd9e7129407be85aaeeda2309381a6e3fc104938b27fb09279 *xgboost.tar.gz" | shasum -a 256 --check
Release 1.3.0 stable
XGBoost4J-Spark: Exceptions should cancel jobs gracefully instead of killing SparkContext (#6019).
- By default, exceptions in XGBoost4J-Spark cause the whole SparkContext to shut down, necessitating a restart of the Spark cluster. This behavior is often a major inconvenience.
- Starting from the 1.3.0 release, XGBoost adds a new parameter `killSparkContextOnWorkerFailure` to optionally prevent killing the SparkContext. If this parameter is set, exceptions will gracefully cancel training jobs instead of killing the SparkContext.
GPUTreeSHAP: GPU acceleration of the TreeSHAP algorithm (#6038, #6064, #6087, #6099, #6163, #6281, #6332)
- SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain predictions of machine learning models. It computes feature importance scores for individual examples, establishing how each feature influences a particular prediction. TreeSHAP is an optimized SHAP algorithm specifically designed for decision tree ensembles.
- Starting with 1.3.0 release, it is now possible to leverage CUDA-capable GPUs to accelerate the TreeSHAP algorithm. Check out the demo notebook.
- The CUDA implementation of the TreeSHAP algorithm is hosted at rapidsai/GPUTreeSHAP. XGBoost imports it as a Git submodule.
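A minimal sketch of GPU-accelerated TreeSHAP; it assumes a CUDA-capable GPU and an XGBoost build with GPU support, and the data is synthetic.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 10)
y = np.random.rand(1000)
booster = xgb.train({"tree_method": "gpu_hist"},
                    xgb.DMatrix(X, label=y), num_boost_round=10)

# Route SHAP value computation (pred_contribs) through the GPU predictor.
booster.set_param({"predictor": "gpu_predictor"})
shap_values = booster.predict(xgb.DMatrix(X), pred_contribs=True)
print(shap_values.shape)  # (1000, 11): one column per feature plus the bias term
```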
New style Python callback API (#6199, #6270, #6320, #6348, #6376, #6399, #6441)
- The XGBoost Python package now offers a re-designed callback API. The new callback API lets you design various extensions of training in idiomatic Python. In addition, the new callback API allows you to use early stopping with the native Dask API (`xgboost.dask`). Check out the tutorial and the demo.
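A minimal sketch of the redesigned callback API: a custom callback subclassing `TrainingCallback`, combined with the built-in `EarlyStopping` callback. Data and names are illustrative.

```python
import numpy as np
import xgboost as xgb

class PrintEval(xgb.callback.TrainingCallback):
    """Print the latest training metric after each boosting round."""
    def after_iteration(self, model, epoch, evals_log):
        print(epoch, evals_log["train"]["logloss"][-1])
        return False  # returning True would stop training

X = np.random.rand(200, 5)
y = np.random.randint(2, size=200)
dtrain = xgb.DMatrix(X, label=y)

xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=50,
          evals=[(dtrain, "train")],
          callbacks=[PrintEval(), xgb.callback.EarlyStopping(rounds=5)])
```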
Enable the use of `DeviceQuantileDMatrix` / `DaskDeviceQuantileDMatrix` with large data (#6201, #6229, #6234).
- `DeviceQuantileDMatrix` can achieve memory savings by avoiding extra copies of the training data, and the savings are bigger for large data. Unfortunately, large data with more than 2^31 elements was triggering integer overflow bugs in CUB and Thrust. Tracking issue: #6228.
- This release contains a series of work-arounds to allow the use of `DeviceQuantileDMatrix` with large data.
Support slicing of tree models (#6302)
- Accessing the best iteration of a model after the application of early stopping used to be error-prone, as one needed to manually pass the `ntree_limit` argument to the `predict()` function.
- Now we provide a simple interface to slice tree models by specifying a range of boosting rounds. The tree ensemble can be split into multiple sub-ensembles via the slicing interface. Check out an example.
- In addition, the early stopping callback now supports the `save_best` option. When enabled, XGBoost will save (persist) the model at the best boosting round and discard the trees that were fit subsequent to the best round.
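A minimal sketch of the slicing interface: take the first five boosting rounds as a standalone sub-ensemble. Data is synthetic.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.rand(100)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=10)

sub = booster[0:5]                    # trees from rounds [0, 5)
preds = sub.predict(xgb.DMatrix(X))   # no ntree_limit juggling needed
```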
Weighted subsampling of features (columns) (#5962)
- It is now possible to sample features (columns) via weighted subsampling, in which features with higher weights are more likely to be selected in the sample. Weighted subsampling allows you to encode domain knowledge by emphasizing a particular set of features in the choice of tree splits. In addition, you can prevent particular features from being used in any splits, by assigning them zero weights.
- Check out the demo.
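A minimal sketch of weighted column subsampling on synthetic data: the third feature gets zero weight and is therefore never considered for splits.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 5)
y = np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)

# Higher weight => more likely to be sampled; zero excludes the feature.
dtrain.set_info(feature_weights=np.array([1.0, 2.0, 0.0, 1.0, 1.0]))

# Weighted sampling kicks in when a colsample_* parameter is below 1.
xgb.train({"colsample_bynode": 0.5}, dtrain, num_boost_round=10)
```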
Improved integration with Dask
- Support reverse-proxy environment such as Google Kubernetes Engine (#6343, #6475)
- An XGBoost training job will no longer use all available workers. Instead, it will only use the workers that contain input data (#6343).
- The new callback API works well with the Dask training API.
- The `predict()` and `fit()` functions of `DaskXGBClassifier` and `DaskXGBRegressor` now accept a base margin (#6155).
- Support more meta data in the Dask API (#6130, #6132, #6333).
- Allow passing extra keyword arguments as `kwargs` in `predict()` (#6117)
- Fix typo in dask interface: `sample_weights` -> `sample_weight` (#6240)
- Allow empty data matrix in AFT survival, as Dask may produce empty partitions (#6379)
- Speed up prediction by overlapping prediction jobs in all workers (#6412)
Experimental support for direct splits with categorical features (#6028, #6128, #6137, #6140, #6164, #6165, #6166, #6179, #6194, #6219)
- Currently, XGBoost requires users to one-hot-encode categorical variables. This has adverse performance implications, as the creation of many dummy variables results in higher memory consumption and may require fitting deeper trees to achieve equivalent model accuracy.
- The 1.3.0 release of XGBoost contains experimental support for direct handling of categorical variables in test nodes. Each test node will have a condition of the form `feature_value \in match_set`, where the `match_set` on the right-hand side contains one or more matching categories. The matching categories in `match_set` represent the condition for traversing to the right child node. Currently, XGBoost will only generate categorical splits with a single matching category ("one-vs-rest split"). In a future release, we plan to remove this restriction and produce splits with multiple matching categories in `match_set`.
- The categorical split requires the use of JSON model serialization. The legacy binary serialization method cannot be used to save (persist) models with categorical splits.
- Note: This feature is currently highly experimental. Use it at your own risk. See the detailed list of limitations at #5949.
Experimental plugin for RAPIDS Memory Manager (#5873, #6131, #6146, #6150, #6182)
- RAPIDS Memory Manager library (rapidsai/rmm) provides a collection of efficient memory allocators for NVIDIA GPUs. It is now possible to use XGBoost with memory allocators provided by RMM, by enabling the RMM integration plugin. With this plugin, XGBoost is now able to share a common GPU memory pool with other applications using RMM, such as the RAPIDS data science packages.
- See the demo for a working example, as well as directions for building XGBoost with the RMM plugin.
- The plugin will be soon considered non-experimental, once #6297 is resolved.
Experimental plugin for oneAPI programming model (#5825)
- oneAPI is a programming interface developed by Intel aimed at providing one programming model for many types of hardware such as CPU, GPU, FPGA and other hardware accelerators.
- XGBoost now includes an experimental plugin for using oneAPI for the predictor and objective functions. The plugin is hosted in the directory `plugin/updater_oneapi`.
- Roadmap: #5442
Pickling the XGBoost model will now trigger JSON serialization (#6027)
- The pickle will now contain the JSON string representation of the XGBoost model, as well as related configuration.
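A minimal sketch of the behavior: the pickle payload of a trained booster now embeds the JSON representation together with the configuration.

```python
import pickle
import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 4), np.random.rand(100)
booster = xgb.train({}, xgb.DMatrix(X, label=y), num_boost_round=5)

blob = pickle.dumps(booster)     # internally a JSON string plus configuration
restored = pickle.loads(blob)
```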
Performance improvements
- Various performance improvement on multi-core CPUs
- Optimize DMatrix build time by up to 3.7x. (#5877)
- CPU predict performance improvement, by up to 3.6x. (#6127)
- Optimize CPU sketch allreduce for sparse data (#6009)
- Thread local memory allocation for BuildHist, leading to speedup up to 1.7x. (#6358)
- Disable hyperthreading for DMatrix creation (#6386). This speeds up DMatrix creation by up to 2x.
- Simple fix for static schedule in predict (#6357)
- Unify thread configuration, to make it easy to utilize all CPU cores (#6186)
- [jvm-packages] Clean the way deterministic partitioning is computed (#6033)
- Speed up JSON serialization by implementing an intrusive pointer class (#6129). It leads to 1.5x-2x performance boost.
API additions
- [R] Add SHAP summary plot using ggplot2 (#5882)
- Modin DataFrame can now be used as input (#6055)
- [jvm-packages] Add `getNumFeature` method (#6075)
- Add MAPE metric (#6119)
- Implement GPU predict leaf. (#6187)
- Enable cuDF/cuPy inputs in `XGBClassifier` (#6269)
- Document tree method for feature weights. (#6312)
- Add `fail_on_invalid_gpu_id` parameter, which will cause XGBoost to terminate upon seeing an invalid value of `gpu_id` (#6342)
Breaking: the default evaluation metric for classification is changed to `logloss` / `mlogloss` (#6183)
- The default metric used to be accuracy, and it is not statistically consistent to perform early stopping with the accuracy metric when we are really optimizing the log loss for the `binary:logistic` objective.
- For statistical consistency, the default metric for classification has been changed to `logloss`. Users may choose to preserve the old behavior by explicitly specifying `eval_metric`.
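A minimal sketch of preserving the old behavior with the 1.3-era sklearn API, where `eval_metric` is a `fit()` parameter; the data is synthetic.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 4)
y = np.random.randint(2, size=100)

clf = xgb.XGBClassifier(n_estimators=10, use_label_encoder=False)
# Explicitly request classification error instead of the new logloss default.
clf.fit(X, y, eval_set=[(X, y)], eval_metric="error")
```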
Breaking: `skmaker` is now removed (#5971)
- The `skmaker` updater has not been documented nor tested.
Breaking: the JSON model format no longer stores the leaf child count (#6094).
- The leaf child count field has been deprecated and is not used anywhere in the XGBoost codebase.
Breaking: XGBoost now requires MacOS 10.14 (Mojave) and later.
- Homebrew has dropped support for MacOS 10.13 (High Sierra), so we are not able to install the OpenMP runtime (`libomp`) from Homebrew on MacOS 10.13. Please use MacOS 10.14 (Mojave) or later.
Deprecation notices
- The use of `LabelEncoder` in `XGBClassifier` is now deprecated and will be re...
Release Candidate of version 1.3.0
R package: xgboost_1.3.0.1.tar.gz
1.2.1 Patch Release
This patch release applies the following patches to the 1.2.0 release:
- Hide C++ symbols from dmlc-core (#6188)