
RACL for token-level #3

@HariWu1995


Pardon me,

In the beginning, I trained RACL on English comments in the hotel domain, and the results were all above 80% for word-level classification.
But now I want to train RACL for multiple languages, so I chose mBERT with its WordPiece tokenizer. During training, my results are only good for sentiment accuracy and F1 score (>60%); the opinion and aspect F1 scores are disappointing (~5%).

In the very best case, my results are:

aspect_f1 = 0.13
sentiment_acc = 0.76
sentiment_f1 = 0.75
opinion_f1 = 0.00
ABSA_f1 = 0.1

I still want to continue with this approach, so I am wondering: have you tried this model on a token-level task?
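
For context, here is a minimal sketch of the token-level setup I mean, i.e. mapping one tag per word onto mBERT's WordPiece sub-tokens. It assumes Hugging Face's `BertTokenizerFast`; the helper name and label ids are illustrative, not from the RACL code:

```python
# Minimal sketch: align word-level BIO tags to mBERT WordPiece sub-tokens.
# Assumes Hugging Face transformers; align_labels and the label ids below
# are illustrative, not part of the RACL codebase.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

def align_labels(words, word_labels, ignore_label=-100):
    """Expand one label per word into one label per sub-token.
    Pieces after the first in each word get ignore_label, so the loss
    and F1 are computed once per word rather than once per piece."""
    encoding = tokenizer(words, is_split_into_words=True, truncation=True)
    aligned, previous_word = [], None
    for word_id in encoding.word_ids():
        if word_id is None:                # [CLS] / [SEP] / padding
            aligned.append(ignore_label)
        elif word_id != previous_word:     # first piece of a word
            aligned.append(word_labels[word_id])
        else:                              # continuation piece ("##...")
            aligned.append(ignore_label)
        previous_word = word_id
    return encoding, aligned

# "uncomfortable" splits into several pieces; only the first keeps its tag.
words = ["the", "bed", "was", "uncomfortable"]
labels = [0, 1, 0, 2]                      # e.g. O, B-ASP, O, B-OPI as ints
_, aligned = align_labels(words, labels)
print(aligned)
```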
