
Theoretical question - Domain accuracy #22

@jecaJeca


Hello,

Thanks for sharing this code! I have a couple of questions, more theoretical than practical :)

When I ran the MNIST example for 86000 steps (10 times more than the original), I got very poor results (around 20% accuracy on both the source and target sets), so I am wondering what the problem is. I ran it a couple more times and got OK results, but I am wondering what leads to poor results in some executions.

First, I am not sure what I should expect for the domain accuracy: should it be around 50%, or does it not matter? I think it is better to look at the domain predictions themselves, because I would expect probabilities around 50% (the domain discriminator is not sure which examples are source and which are target), so maybe the accuracy itself is not the important quantity. I suspect the poor results happen when the domain discriminator predicts only one class with probability 1: the domain loss becomes very large, and in that case it ruins the feature extractor and the classification accuracy.
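To make the collapse I mean concrete, this is roughly how I inspect the domain head's outputs (just a NumPy sketch; `domain_logits` / `domain_labels` are my own names and I assume a single sigmoid output for source = 0 / target = 1):

```python
import numpy as np

def inspect_domain_head(domain_logits, domain_labels):
    # Sigmoid turns the raw logits into P(domain = target).
    p_target = 1.0 / (1.0 + np.exp(-np.asarray(domain_logits, dtype=float)))
    preds = (p_target > 0.5).astype(int)

    acc = float(np.mean(preds == np.asarray(domain_labels)))  # "domain accuracy"
    mean_p = float(np.mean(p_target))                          # ~0.5 when the head is confused
    frac_target = float(np.mean(preds))                        # ~0.0 or ~1.0 when it collapses to one class
    confidence = float(np.mean(np.abs(p_target - 0.5) * 2))    # 0 = completely unsure, 1 = completely certain

    return {"domain_acc": acc,
            "mean_p_target": mean_p,
            "frac_predicted_target": frac_target,
            "mean_confidence": confidence}

# Sanity check with random logits: accuracy hovers near 0.5 and confidence stays low.
rng = np.random.default_rng(0)
print(inspect_domain_head(rng.normal(size=256), rng.integers(0, 2, size=256)))
```

With random logits the accuracy stays near 0.5 and the confidence is low, which is roughly what I would expect from a well-confused discriminator; in the bad runs the confidence goes to ~1 and almost all predictions fall on a single class.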

Another thing that confuses me is whether DANN can get good results even if the domain accuracy is 1, i.e. the domain discriminator can tell which examples are source and which are target (with very high probabilities), yet the results on the target data are still good.

More generally, I am wondering how I can track whether my DANN is working correctly, and which metrics to use.
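For reference, this is the kind of logging I currently have in mind every few hundred steps (just a sketch; `source_acc`, `target_acc`, `domain_acc` and `mean_p_target` are placeholders for values computed by whatever evaluation the repo already provides, and the thresholds are my own heuristics, not anything from the paper):

```python
def log_dann_metrics(step, source_acc, target_acc, domain_acc, mean_p_target):
    # One line per evaluation so trends over training are easy to see.
    print(f"step {step:6d} | src acc {source_acc:.3f} | tgt acc {target_acc:.3f} "
          f"| dom acc {domain_acc:.3f} | mean P(target) {mean_p_target:.3f}")
    # Warning signs I look for (heuristics, not hard rules):
    if domain_acc > 0.95 or domain_acc < 0.05:
        print("  -> domain head separates the domains almost perfectly")
    if target_acc < source_acc - 0.3:
        print("  -> large gap between source and target accuracy")

# Example call with made-up numbers:
log_dann_metrics(8600, source_acc=0.98, target_acc=0.71,
                 domain_acc=0.55, mean_p_target=0.52)
```

Is something like this a reasonable set of metrics, or is there a better signal to watch?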

I hope you can help me; maybe I have totally misunderstood the concept :)

Thanks a lot!
