Gaussian constraints on normfactor modifiers #2588
In the current version of pyhf it is not possible to add a Gaussian constraint on a normfactor modifier directly. Let me know if I am wrong, but currently the only thing one can do, e.g. given c = 10 +/- 2, is to build a normfactor in the samples field:
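For example (the sample name and nominal yield here are placeholders):

```json
{
  "name": "sample_with_c",
  "data": [1.0],
  "modifiers": [
    { "name": "c", "type": "normfactor", "data": null }
  ]
}
```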
And to use a normsys modifier:
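For c = 10 +/- 2 the relative uncertainty is 20%, so the modifiers list would carry, next to the normfactor, something like:

```json
{ "name": "c_uncertainty", "type": "normsys", "data": { "hi": 1.2, "lo": 0.8 } }
```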
And then to fix the normfactor in the parameters field:
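For instance, in the parameters block of the measurement config (with the bounds widened so that the fixed value 10 sits inside them):

```json
{ "name": "c", "inits": [10.0], "bounds": [[0.0, 20.0]], "fixed": true }
```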
The problem with this approach arises when e.g. c = 0 +/- 2, in which case the relative error in the normsys modifier becomes ill defined. Is there a reason why a normfactor cannot have a Gaussian constraint term in the likelihood, similar to what the lumi modifier has? Then we could define "auxdata": [0] and "sigmas": [2] in the corresponding parameters field, which remains well defined when the central value is 0. In that case we would have in the samples field:
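i.e. the same sample as above, but with only the unconstrained normfactor:

```json
{
  "name": "sample_with_c",
  "data": [1.0],
  "modifiers": [ { "name": "c", "type": "normfactor", "data": null } ]
}
```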
And in the parameters field:
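To be explicit, this would be the proposed syntax, mirroring what the lumi modifier already accepts in its parameter configuration; it is not something the current pyhf schema supports for normfactor:

```json
{
  "name": "c",
  "inits": [0.0],
  "bounds": [[-10.0, 10.0]],
  "auxdata": [0.0],
  "sigmas": [2.0]
}
```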
Hi @marfari, I think we might need a bit more information about the setup you have in mind. Once you fix the normfactor it no longer participates in your fits, so the first setup you describe sounds to me the same as just using a normsys.

One way to achieve something like a contribution of 0 +/- 2 events is to define a sample that provides a constant offset in each bin (e.g. by a value of 3), assign a constant, negative normalization factor to it, and then re-define the sample which predicts 0 +/- 2 to instead predict e.g. 3 +/- 2. When summed together with the constant negative sample, your total prediction from those two parts becomes 0 +/- 2. You still need to ensure that your total prediction from summing all samples together does not become negative though, since you will otherwise run into other issues evaluating the Poisson term for a negative predicted rate.
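A rough sketch of how that could look in the samples field, assuming a single bin and placeholder names (the fixed normfactor minus_one holds the offset sample at -1, and the re-defined sample carries the 3 +/- 2 uncertainty as a relative normsys of roughly 2/3):

```json
"samples": [
  {
    "name": "constant_offset",
    "data": [3.0],
    "modifiers": [
      { "name": "minus_one", "type": "normfactor", "data": null }
    ]
  },
  {
    "name": "shifted_sample",
    "data": [3.0],
    "modifiers": [
      { "name": "shifted_unc", "type": "normsys", "data": { "hi": 1.667, "lo": 0.333 } }
    ]
  }
]
```

with the normalization factor pinned to -1 in the measurement config:

```json
{ "name": "minus_one", "inits": [-1.0], "bounds": [[-2.0, 0.0]], "fixed": true }
```

The combined nominal prediction of the two samples is then 3 - 3 = 0, while the uncertainty comes entirely from the normsys on the shifted sample.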
We probably need to distinguish conceptual physics points here from the question of how to implement something like you have in mind. Let's start with the physics: generally speaking it does not make sense for a sample to predict a negative amount of events in a bin (there can be some exceptions in practice, e.g. for interference measurements, depending on how they're set up). If you have a sample normalization estimated to be $0 \pm 2$, what does that mean? Generally I would read that as having an estimate consistent with 0 and an idea of how much larger than 0 it could be. What would make most sense to me for such a case is to implement an asymmetric uncertainty but enforce the prediction to stay non-negative.
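For illustration only, an asymmetric uncertainty on a positive nominal can be expressed in the current schema with a normsys whose hi and lo factors differ, e.g. a prediction that can grow but not shrink (the name here is a placeholder, not something from this thread):

```json
{ "name": "one_sided_unc", "type": "normsys", "data": { "hi": 3.0, "lo": 1.0 } }
```

Since normsys acts multiplicatively and its interpolation is designed to return positive factors, the modified prediction cannot become negative.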