Cls error masking #67
base: development
Changes from 3 commits (1aafb55, d8ccd6f, a9525d3, cf36c6e)
```diff
@@ -132,12 +132,12 @@ def main( args ):
         # cost function and accumulate errors
         props, cerr, grdts = pars['cost_fn']( props, lbl_outs, msks )
         err += cerr
-        cls += cost_fn.get_cls(props, lbl_outs)
+        cls += cost_fn.get_cls(props, lbl_outs, msks)
         # compute rand error
         if pars['is_debug']:
             assert not np.all(lbl_outs.values()[0]==0)
         re += pyznn.get_rand_error( props.values()[0], lbl_outs.values()[0] )
-        num_mask_voxels += utils.sum_over_dict(msks)
+        num_mask_voxels += utils.get_total_num_mask(msks)

         # check whether there is a NaN here!
         if pars['is_debug']:
```
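For context, a minimal sketch of what a mask-aware `get_cls` could compute; the real implementation lives in the `cost_fn` module, and the 0.5 threshold and the empty-mask convention here are assumptions (chosen to mirror `get_total_num_mask` below), not the repository's code:

```python
import numpy as np

# Illustrative sketch only, not the repository's implementation: a
# classification error that ignores masked-out voxels. Assumes props and
# lbl_outs are dicts of float arrays and msks is a dict of 0/1 masks.
def get_cls(props, lbl_outs, msks):
    '''count misclassified voxels, restricted to active (nonzero-mask) voxels'''
    c = 0.0
    for name, prop in props.iteritems():
        # binarize the network output at 0.5 and compare against the label
        wrong = (prop > 0.5).astype('float32') != lbl_outs[name]
        mask = msks[name]
        if mask.size == 0:
            c += wrong.sum()            # empty mask: count every voxel
        else:
            c += (wrong * mask).sum()   # masked-out voxels contribute zero
    return c
```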
```diff
@@ -182,8 +182,8 @@ def main( args ):
             err = err / vn / pars['Num_iter_per_show']
             cls = cls / vn / pars['Num_iter_per_show']
         else:
-            err = err / num_mask_voxels / pars['Num_iter_per_show']
-            cls = cls / num_mask_voxels / pars['Num_iter_per_show']
+            err = err / num_mask_voxels
+            cls = cls / num_mask_voxels
         re = re / pars['Num_iter_per_show']
         lc.append_train(i, err, cls, re)
```

Reviewer: @nicholasturner1 could you explain this change a little bit?

nicholasturner1: Sure. The num_mask_voxels variable accumulates over the number of iterations anyway, so we don't need to further normalize it by the number of iterations. For example, if there were 10 rounds of 10 mask voxels each, the num_mask_voxels value would be 100, and dividing by another 10 at the end obscures the resulting error value.

Reviewer: This makes sense, thanks.
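To make the arithmetic concrete, a small illustration of the point above (the numbers are made up):

```python
# Illustrative numbers only: 10 iterations, 10 active voxels each, with a
# true per-voxel error of 0.2. Both err and num_mask_voxels accumulate
# across iterations, so a single division already gives a per-voxel average.
num_iter, voxels_per_iter, per_voxel_err = 10, 10, 0.2
err = num_iter * voxels_per_iter * per_voxel_err   # accumulated error: 20.0
num_mask_voxels = num_iter * voxels_per_iter       # accumulated count: 100

print(err / num_mask_voxels)             # 0.2  -> correct per-voxel error
print(err / num_mask_voxels / num_iter)  # 0.02 -> understated by 10x
```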
```diff
@@ -211,6 +211,19 @@ def get_total_num(outputs):
         n = n + np.prod(sz)
     return n
 
 
+def get_total_num_mask(masks, props=None):
+    '''Returns the total number of active voxels in a forward pass'''
+    s = 0
+    for name, mask in masks.iteritems():
+        # a full mask can correspond to an empty array
+        if mask.size == 0 and props is not None:
+            s += props[name].size
+        else:
+            s += mask.sum()
+    return s
+
+
 def sum_over_dict(dict_vol):
     s = 0
     for name, vol in dict_vol.iteritems():
```

Reviewer: @nicholasturner1 have we binarized the mask beforehand to make it 0/1?

nicholasturner1: We do that already within the data provider, but it may be a bit safer here as well in case we end up changing the mask preprocessing. I'll commit a change for this.
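The concern is that `mask.sum()` only counts voxels correctly when the mask is strictly 0/1. A minimal sketch of the safeguard being discussed, assuming the mask is a numpy array (the helper name is hypothetical, not from the repository):

```python
import numpy as np

# Hypothetical helper illustrating the proposed safeguard: force a mask to
# 0/1 before summing, so mask.sum() counts active voxels even if the mask
# arrives with other nonzero values (e.g. 255 from an image file).
def binarize_mask(mask):
    return (np.asarray(mask) > 0).astype('float32')

# binarize_mask(np.array([0, 255, 3])).sum() -> 2.0, i.e. two active voxels
```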
(On a change to the default cost function setting in config.cfg:)

Reviewer: @nicholasturner1 why did you change it to square loss? I think that auto is better as the default setting.

nicholasturner1: The original thought was that it would be easier for new people to interpret and troubleshoot square error, instead of having to take an extra step to figure out which error they're even using. Not a huge deal for me either way, though. If you greatly prefer auto, I'll swap it back.

Reviewer: Yes, I prefer auto. Square loss corresponds to linear output. To help new users, we should explain these options in the comments in the config.cfg file, though.
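For context, "auto" here means the loss is chosen to match the output-layer nonlinearity, which is why square loss specifically corresponds to a linear output. A hedged sketch of that pairing; the function and option names are illustrative, not the repository's actual API:

```python
# Illustrative mapping only; the real selection logic lives elsewhere in the
# codebase. 'auto' pairs each output nonlinearity with its natural loss.
def resolve_cost_fn(setting, output_type):
    if setting != 'auto':
        return setting
    pairing = {
        'linear':   'square_loss',               # L2 / square loss
        'logistic': 'binomial_cross_entropy',    # sigmoid output
        'softmax':  'multinomial_cross_entropy', # mutually exclusive classes
    }
    return pairing[output_type]
```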