
Action items for benchmarking UCCA #1

@omriabnd

Description

  • Build a webpage similar to https://nlpprogress.com/english/semantic_parsing.html#ucca-parsing containing: (1) a detailed description of the official evaluation protocol (per corpus?), including the evaluation scripts and their versions, normalization, dataset versions, etc.; (2) a leaderboard of parser outputs, sorted by the official UCCA score, with an additional column giving their scores under the MRP metric; (3) at the bottom of the page, links to other (unofficial or legacy) experimental setups and their corresponding leaderboards.
  • Post the description from (1) as a file in the ucca code repo (Evaluation documentation, huji-nlp/ucca#92).
  • Improve the UCCA score to handle unary expansions and multiple categories over the same edge more sensibly. This will become the new official score. Ask participants of the SemEval and CoNLL shared tasks whether they would like to re-evaluate their systems and post their scores (Evaluation treats multiple categories too leniently, huji-nlp/ucca#91).
  • Run the new script on the UCCA parses submitted to MRP 2019 and 2020, after converting them from JSON to XML.
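The leniency issue behind huji-nlp/ucca#91 can be illustrated with a toy labeled-F1 computation. This is a hedged, self-contained sketch (not the actual ucca evaluation code, and the spans and category labels are made up): when every (edge, category) pair counts as a separate matchable unit, an edge carrying several categories contributes several true positives, inflating the score relative to treating each edge as a single unit labeled with its full category set.

```python
from fractions import Fraction

def f1(gold, pred):
    """Labeled F1 over two sets of matchable units (exact set intersection)."""
    if not gold or not pred:
        return Fraction(0)
    tp = len(gold & pred)
    if tp == 0:
        return Fraction(0)
    p = Fraction(tp, len(pred))
    r = Fraction(tp, len(gold))
    return 2 * p * r / (p + r)

# Toy example: one edge (span 0-3) carries two categories, another (span 3-5) one.
gold_pairs = {((0, 3), "P"), ((0, 3), "S"), ((3, 5), "A")}
pred_pairs = {((0, 3), "P"), ((0, 3), "S"), ((3, 5), "D")}  # wrong label on 3-5

# Lenient scoring: every (edge, category) pair is its own unit, so the
# doubly-labeled edge alone yields two true positives.
lenient = f1(gold_pairs, pred_pairs)   # tp=2 of 3 -> F1 = 2/3

# Stricter alternative: one unit per edge, labeled with its full category
# set, so a multi-category edge is a single hit or miss.
gold_edges = {((0, 3), frozenset({"P", "S"})), ((3, 5), frozenset({"A"}))}
pred_edges = {((0, 3), frozenset({"P", "S"})), ((3, 5), frozenset({"D"}))}
strict = f1(gold_edges, pred_edges)    # tp=1 of 2 -> F1 = 1/2

print(lenient, strict)
```

Under the lenient convention a parser is rewarded twice for the same multi-category edge, which is the asymmetry the new official score is meant to remove.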
