How does the multi-alpha quantile regression training work? #11314
-
Hi! I am using the xgboost package (in Python, if that matters) to train quantile regression models for multiple alphas. One can pass a list of alphas to the xgboost train function when using the pinball loss, and this returns a single model that predicts the quantiles at that list of alphas. I wonder how this differs from looping over the list of alphas and training a separate xgboost model for each alpha (see the sketch below). Could you clarify whether, in the single-model approach (passing a list of alphas to the train function and training one model), the prediction heads for the different alphas are concatenated onto a shared tree structure? If so, how large is the shared tree and how can I tune it? Is there any document or paper you could share on this approach? Thank you!
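For reference, here is a minimal sketch of the two approaches I am comparing, assuming XGBoost >= 2.0 with the `reg:quantileerror` objective and its `quantile_alpha` parameter; the data is synthetic and only for illustration:

```python
import numpy as np
import xgboost as xgb

# Synthetic data, just to make the example self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + rng.normal(scale=0.5, size=1000)

dtrain = xgb.QuantileDMatrix(X, y)
dtest = xgb.DMatrix(X)
alphas = [0.05, 0.5, 0.95]

# Approach 1: one model trained on all quantiles at once.
multi = xgb.train(
    {"objective": "reg:quantileerror", "tree_method": "hist", "quantile_alpha": alphas},
    dtrain,
    num_boost_round=100,
)
# One column per alpha, shape (n_samples, len(alphas)).
pred_multi = multi.predict(dtest)

# Approach 2: loop over the alphas, training a separate model per quantile.
pred_loop = np.column_stack([
    xgb.train(
        {"objective": "reg:quantileerror", "tree_method": "hist", "quantile_alpha": a},
        dtrain,
        num_boost_round=100,
    ).predict(dtest)
    for a in alphas
])
```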
-
Other than the initialization step, there is no difference from training one model per alpha.
As for a document or paper: no, but we are working on it. See #9043
Could you please elaborate on that? Are you suggesting that
xgb.train({"objective": "reg:quantileerror", "quantile_alpha": [0.95, 0.5, 0.05]}, dtrain)
has lower accuracy than training a separate model for each alpha? If so, could you please try #11286?
There's no difference for training aside from the initialization step. However, the metric calculation (pinball loss) had a bug related to multi-quantile targets, which is fixed in 3.0. After the fix, the metric uses the average of the loss across the quantile targets, which may contribute to the observed difference.
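For intuition, a short sketch of what "average of the loss across quantile targets" means: compute the pinball loss per alpha and then average over the alphas. This is only an illustration of the aggregation, not the library's exact implementation:

```python
import numpy as np

def pinball_loss(y_true, y_pred, alpha):
    """Mean pinball (quantile) loss for a single alpha."""
    diff = y_true - y_pred
    return np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff))

def mean_pinball_multi(y_true, y_pred, alphas):
    """Average the per-quantile pinball loss over all quantile targets.

    y_pred is expected to have shape (n_samples, len(alphas)),
    one column per alpha, as returned by a multi-quantile model.
    """
    return np.mean([
        pinball_loss(y_true, y_pred[:, i], a) for i, a in enumerate(alphas)
    ])
```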