
Text parser: Japanese tokenization issues #3

@tristcoil

Description

We are using the MeCab tokenizer to split Japanese sentences into individual words.

Issue: the word

食べてしまいます

gets split into

  • 食べて
  • しまい
  • ます

This makes the text harder for readers to understand.

Ideally, we should use a better parser that understands Japanese conjugation at a higher level. A possible workaround is sketched below.
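
As a stopgap, one option is to keep MeCab but merge auxiliaries back onto the verb they conjugate in a post-processing step. The sketch below is not the project's actual code: it assumes the fugashi MeCab wrapper with the unidic-lite dictionary (`pip install fugashi unidic-lite`), and the part-of-speech labels used for merging are UniDic conventions that may need adjusting for other dictionaries.

```python
# Minimal sketch: post-process MeCab output so that auxiliaries stay attached
# to the verb they conjugate. Assumes fugashi + unidic-lite; POS labels below
# follow UniDic and are an assumption, not the project's current setup.
from fugashi import Tagger

tagger = Tagger()


def tokenize_merged(text: str) -> list[str]:
    tokens: list[str] = []
    for word in tagger(text):
        pos1 = word.feature.pos1  # e.g. 動詞, 助動詞, 助詞
        pos2 = word.feature.pos2  # e.g. 接続助詞, 非自立可能
        # Attach auxiliary verbs (ます), conjunctive particles (て), and
        # non-independent verbs (しまう after て) to the preceding token.
        attach = (
            pos1 == "助動詞"
            or (pos1 == "助詞" and pos2 == "接続助詞")
            or (pos1 == "動詞" and pos2 == "非自立可能")
        )
        if tokens and attach:
            tokens[-1] += word.surface
        else:
            tokens.append(word.surface)
    return tokens


if __name__ == "__main__":
    print(tokenize_merged("食べてしまいます"))
    # With UniDic this should print something like ['食べてしまいます']
    # instead of the current 食べて / しまい / ます split.
```

An alternative would be switching to a tokenizer that supports longer split units (for example SudachiPy's split mode C), though it may still separate verb conjugation endings, so some merging step like the above would likely still be needed.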
