Looking forward to the 11 layer model with 93.2% accuracy. #3

Open
ruc98 opened this issue Feb 5, 2020 · 5 comments

ruc98 commented Feb 5, 2020

.

sheshap commented Feb 21, 2020

I am able to reproduce Acc: 0.931929 which is 93.2 as claimed in the paper.

ruc98 commented Mar 1, 2020

That's great! Did you change any training configurations in config_cls.yaml? I tried to implement the exact 11-layer architecture described in the paper with the same config settings, but could not reach the reported accuracy.
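For anyone comparing runs: it helps to post the exact training settings alongside the accuracy. A hypothetical fragment of what a config_cls.yaml might contain (field names and values here are illustrative only, not taken from the repository):

```yaml
# Illustrative classification-training config (not the actual config_cls.yaml)
model:
  num_layers: 6        # L in the paper; the 93.2% discussion concerns L=6 vs L=11
train:
  batch_size: 32
  epochs: 250
  lr: 0.001
eval:
  voting: true         # average predictions over augmented copies at test time
  num_votes: 10
```

Diffing two such files is usually the quickest way to spot why one run reproduces a number and another does not.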

sheshap commented Mar 2, 2020

@ruc98 I didn't change anything. Where did you see 11 layers? The classification network has 14 layers and the segmentation network has 23 layers, right?

ruc98 commented Mar 4, 2020

@sheshappanavar Yes, I mean L=11 excluding the 3 fc layers. So were you able to reproduce 93.2% using the given L=6 code, or did you implement the L=11 model described in the paper? Also, without the voting evaluation, what accuracy do you get?

Thanks

sheshap commented Mar 6, 2020

@ruc98 The 93.2% I got is with the given L=6 code, with voting.
I have not checked without voting. Keep us posted if you try L=11, with or without voting.
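For context on what "with voting" means here: test-time voting in point-cloud classification typically runs the classifier on several randomly perturbed copies of each cloud (e.g., random z-rotation and scaling) and averages the scores, which usually adds a few tenths of a percent of accuracy. A minimal sketch, assuming a `model` callable that maps an (N, 3) cloud to per-class scores (the function and parameter names are illustrative, not from this repository):

```python
import numpy as np

def rotate_z(points: np.ndarray, angle: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud about the z-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def vote_predict(model, points: np.ndarray, num_votes: int = 10,
                 seed: int = 0) -> int:
    """Average class scores over `num_votes` perturbed copies of the cloud."""
    rng = np.random.default_rng(seed)
    scores = np.zeros_like(model(points))
    for _ in range(num_votes):
        angle = rng.uniform(0.0, 2.0 * np.pi)   # random rotation about z
        scale = rng.uniform(0.8, 1.25)          # random uniform scaling
        scores += model(rotate_z(points, angle) * scale)
    return int(np.argmax(scores / num_votes))
```

Comparing `vote_predict` against a single plain forward pass is the quickest way to isolate how much of the 93.2% comes from voting versus the network itself.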
