Pardon me,
In the beginning, I trained RACL on English comments in the hotel domain, and the results were all above 80% for word-level classification.
But now I want to train RACL for multiple languages, so I chose mBERT with the WordPiece tokenizer. During training, my results are only good for sentiment accuracy and F1 score (>60%); the opinion and aspect F1 scores are disappointing (~5%).
In the very best case, my results are:
aspect_f1 = 0.13
sentiment_acc = 0.76
sentiment_f1 = 0.75
opinion_f1 = 0.00
ABSA_f1 = 0.1
I still want to continue with this approach, so I wonder if you have tried this model on token-level tasks?
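In case it helps pinpoint the problem, here is a minimal sketch of how I expand the word-level labels onto the WordPiece subtokens (using HuggingFace's `BertTokenizerFast`; the label ids and the `-100` ignore index are just from my setup, not from the RACL code):

```python
# Minimal sketch (my setup, not the RACL repo): aligning word-level labels
# to mBERT WordPiece subtokens. Only the first subtoken of each word keeps
# the real label; continuation pieces and special tokens get -100, the usual
# ignore index for a PyTorch loss (with TF/RACL you would use a mask instead).
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

def align_labels(words, word_labels):
    """Tokenize pre-split words and expand word-level labels to subtokens."""
    encoding = tokenizer(words, is_split_into_words=True)
    aligned = []
    prev_word_id = None
    for word_id in encoding.word_ids():
        if word_id is None:                # [CLS] / [SEP]
            aligned.append(-100)
        elif word_id != prev_word_id:      # first subtoken of a word
            aligned.append(word_labels[word_id])
        else:                              # continuation subtoken
            aligned.append(-100)
        prev_word_id = word_id
    return encoding["input_ids"], aligned

# Example: "unbelievable" splits into several WordPiece pieces, but only
# the first piece should carry the aspect/opinion tag.
ids, labels = align_labels(["the", "unbelievable", "service"], [0, 2, 1])
print(tokenizer.convert_ids_to_tokens(ids))
print(labels)
```

If the word-level tags are instead copied to every subtoken (or the evaluation is run on subtokens rather than words), the token-level F1 for aspect and opinion collapses, which looks like what I am seeing.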