Add condition in fct get_metrics #42
Conversation
I feel like SonarQube sometimes misses something, such as the high complexity of the code, and is then suddenly triggered for an unknown reason and notices new things.
I would propose the following code, which is not much simpler, but that's the best I can do:
with
Later, I will create the columns
```python
        precision = sum(pw_k.values()) / len(id_classes)
        recall = sum(rw_k.values()) / len(id_classes)
    elif method == 'micro-average':
        precision = sum(tp_k.values()) / (sum(tp_k.values()) + sum(fp_k.values()))
        recall = sum(tp_k.values()) / (sum(tp_k.values()) + sum(fn_k.values()))
        if sum(tp_k.values()) == 0 and sum(fp_k.values()) == 0:
```
Does this really happen, or is it only for a hypothetical case?
Yes, it has occurred.
Actually, I found a better version. It would be simpler if the dict values were set to 0 instead of
Sorry for all the emails...
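As an illustration of that suggestion (a hedged sketch, not the repository's code — the dict names mirror `tp_k`/`fp_k`/`fn_k` from the diff above, and the values are made up): initializing the per-class counters to 0, e.g. with `dict.fromkeys`, lets later sums and divisions run without extra key or `None` checks.

```python
# Hypothetical sketch: initialize per-class counters to 0 so that
# sums run without missing-key or None checks (names assumed).
id_classes = ['a', 'b', 'c']

tp_k = dict.fromkeys(id_classes, 0)
fp_k = dict.fromkeys(id_classes, 0)
fn_k = dict.fromkeys(id_classes, 0)

tp_k['a'] += 3  # pretend we counted 3 true positives for class 'a'
fp_k['b'] += 1

total_tp = sum(tp_k.values())
total_fp = sum(fp_k.values())
print(total_tp, total_fp)  # 3 1
```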
Thank you. I just kept the previous version of that part of the code, because otherwise it doesn't work properly: sum(count_k.values()) must only be used once the dict has gone through all the id_classes.
```python
if tp_count > 0:
    p_k[id_cl] = tp_count / (tp_count + fp_count)
    r_k[id_cl] = tp_count / (tp_count + fn_count)
count_k[id_cl] = tp_count + fn_count
if method == 'macro-weighted-average':
    pw_k[id_cl] = (count_k[id_cl] / sum(count_k.values())) * p_k[id_cl]
    rw_k[id_cl] = (count_k[id_cl] / sum(count_k.values())) * r_k[id_cl]
if method == 'macro-average':
    precision = sum(p_k.values()) / len(id_classes)
    recall = sum(r_k.values()) / len(id_classes)
elif method == 'macro-weighted-average':
    precision = sum(pw_k.values()) / len(id_classes)
    recall = sum(rw_k.values()) / len(id_classes)
elif method == 'micro-average':
    if sum(tp_k.values()) == 0 and sum(fp_k.values()) == 0:
        precision = 0
        recall = 0
    else:
        precision = sum(tp_k.values()) / (sum(tp_k.values()) + sum(fp_k.values()))
        recall = sum(tp_k.values()) / (sum(tp_k.values()) + sum(fn_k.values()))
```
We can figure out later how to make SonarQube happy...
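One possible way to reduce the branching that SonarQube flags (just a sketch, not part of this PR — `safe_div` is a hypothetical helper, and the counter values are made up) is to centralize the zero-denominator guard in one small function so every averaging branch can share it:

```python
def safe_div(num, den):
    """Return num / den, or 0 when the denominator is 0."""
    return num / den if den else 0

# Hypothetical per-class counts, mirroring tp_k / fp_k / fn_k above.
tp_k = {'a': 3, 'b': 0}
fp_k = {'a': 1, 'b': 0}
fn_k = {'a': 0, 'b': 2}

# Micro-average without an explicit all-zero special case.
precision = safe_div(sum(tp_k.values()), sum(tp_k.values()) + sum(fp_k.values()))
recall = safe_div(sum(tp_k.values()), sum(tp_k.values()) + sum(fn_k.values()))
print(precision, recall)  # 0.75 0.6
```

With this helper, the `if sum(tp_k.values()) == 0 and sum(fp_k.values()) == 0` branch disappears, which should lower the cyclomatic complexity score.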
I had to add conditions to the get_metrics function to avoid errors when calculating metrics with the different methods. However, the complexity is now rated B by SonarQube... I quickly tried some simplification, but was unable to resolve it. Any improvement is welcome!