
Evaluation-Measures-based-on-Preference-Graphs

Code to evaluate preference judgments for documents and images, as defined in my master's thesis.

Abstract

This thesis proposes a preference-based evaluation measure that computes the maximum similarity between the actual ranking produced by a system and an ideal ranking derived from users' preferences. Specifically, the measure constructs a directed multigraph from the preference judgments and computes the ordering of its vertices, which we call the ideal ranking, that has maximum similarity to the actual ranking under a rank-similarity measure. The measure accepts any arbitrary collection of preferences, which may contain conflicts, redundancies, and incompleteness, and may cover results of diverse types (documents or images). Our results show that Greedy PGC matches or exceeds the performance of evaluation measures proposed in previous research.
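To give a concrete sense of the graph construction and greedy ordering, below is a minimal Python sketch. It is not the thesis's Greedy PGC implementation: it simply builds a directed preference multigraph from (preferred, other) judgment pairs and repeatedly emits the vertex that the remaining preferences most favor. The function name, data shapes, and tie-breaking behavior are all illustrative assumptions.

from collections import defaultdict

def greedy_order(preferences):
    # Illustrative sketch only -- not the thesis's Greedy PGC algorithm.
    # preferences: iterable of (preferred, other) pairs; repeated pairs
    # add edge weight, which is what makes the graph a multigraph.
    wins = defaultdict(lambda: defaultdict(int))
    vertices = set()
    for preferred, other in preferences:
        wins[preferred][other] += 1
        vertices.update((preferred, other))

    order, remaining = [], set(vertices)
    while remaining:
        # Net support for v among the remaining vertices:
        # outgoing preference weight minus incoming preference weight.
        def net(v):
            out_w = sum(w for u, w in wins[v].items() if u in remaining)
            in_w = sum(wins[u].get(v, 0) for u in remaining)
            return out_w - in_w
        best = max(remaining, key=net)  # ties broken arbitrarily here
        order.append(best)
        remaining.remove(best)
    return order

# Example: three documents with one conflicting preference (d2 vs d3).
prefs = [("d1", "d2"), ("d1", "d3"), ("d2", "d3"), ("d3", "d2")]
print(greedy_order(prefs))  # e.g. ['d1', 'd2', 'd3']

Finding an ordering that maximally agrees with an arbitrary set of pairwise preferences is NP-hard in general (it subsumes minimum feedback arc set), which is the usual motivation for greedy heuristics of this kind.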

Usage

python greedy_pgc.py prefs <preference-judgments> run <actual-ranking>
python greedy_pgc_image.py prefs <preference-judgments> run <image-position>
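
For example, assuming preference judgments in prefs.txt and a system run in run.txt (hypothetical file names; the expected input formats are defined in the thesis):

python greedy_pgc.py prefs prefs.txt run run.txt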

About

Code for a preference-based evaluation measure that assesses the performance of information retrieval systems on documents and images.
