Error in calculate_metric_plain method? #88
-
Hello,

I'm trying to use the `calculate_metric_plain` method, but it raises an error. Upon taking a look at the code, it makes sense to me why:

```python
ranked_queries = len(ranking)
ap_per_candidate_depth = np.zeros(ranked_queries)
rr_per_candidate_depth = np.zeros(len(global_metric_config["MRR+Recall@"]), ranked_queries)
rank_per_candidate_depth = np.zeros(len(global_metric_config["MRR+Recall@"]), ranked_queries)
recall_per_candidate_depth = np.zeros(len(global_metric_config["MRR+Recall@"]), ranked_queries)
ndcg_per_candidate_depth = np.zeros(len(global_metric_config["nDCG@"]), ranked_queries)
```

The multi-dimensional `np.zeros` calls pass the shape as two positional arguments instead of a single tuple, so the second argument gets interpreted as `dtype`. Wrapping the shape in a tuple fixes it:

```python
ranked_queries = len(ranking)
ap_per_candidate_depth = np.zeros(ranked_queries)
rr_per_candidate_depth = np.zeros((len(global_metric_config["MRR+Recall@"]), ranked_queries))
rank_per_candidate_depth = np.zeros((len(global_metric_config["MRR+Recall@"]), ranked_queries))
recall_per_candidate_depth = np.zeros((len(global_metric_config["MRR+Recall@"]), ranked_queries))
ndcg_per_candidate_depth = np.zeros((len(global_metric_config["nDCG@"]), ranked_queries))
```

However, I also checked the assignment template from last year and it had the same piece of code, so I think such an issue wouldn't have gone unnoticed. Is there maybe something else I'm doing wrong? Do I need to use a lower version of numpy (although there was none in the …)?

Kind regards,
Laurenz
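For reference, the difference between the two call forms can be reproduced in isolation (a minimal sketch with made-up dimensions, independent of the assignment code): `np.zeros` takes the shape as a single argument, either an int or a tuple of ints, and its second positional parameter is `dtype`, so passing a second integer raises a `TypeError`.

```python
import numpy as np

n_depths, ranked_queries = 4, 10  # hypothetical sizes for illustration

# Correct: the shape is one tuple argument
rr = np.zeros((n_depths, ranked_queries))
print(rr.shape)  # → (4, 10)

# Broken: the second positional argument is treated as dtype,
# and an int like 10 is not a valid data type
try:
    np.zeros(n_depths, ranked_queries)
except TypeError as e:
    print("TypeError:", e)
```

This behaves the same across recent numpy versions, so downgrading would not help here.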
-
Dear Laurenz, thanks for bringing up this issue: this is definitely a mistake in the published code, and your proposed solution is the right one! Best, Sophia
-
Hello, which version have you used? Best, Darius