Conversation

waleeattia

Reference issue

#426

Type of change

Implementing supervised contrastive loss
Adding plotting script to compare accuracies and transfer efficiencies

What does this implement/fix?

The supervised contrastive loss trains the progressive learning network's transformer explicitly, penalizing samples of different classes that lie close to one another in the learned representation. The new script compares two DNN algorithms by plotting the difference between their accuracies and transfer efficiencies. Accuracy with the supervised contrastive loss improves by 6 percent over the PL network trained with categorical cross-entropy.
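For reference, here is a minimal NumPy sketch of the supervised contrastive loss in the style of Khosla et al. (2020): an illustration of the idea being implemented, not the exact code in this PR (the function name and temperature default are hypothetical):

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Batch supervised contrastive loss over L2-normalized embeddings.

    embeddings: (n, d) array; labels: (n,) integer class labels.
    Pulls same-class samples together and pushes different classes apart.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(labels)
    sim = z @ z.T / temperature            # pairwise cosine similarities, scaled
    logits = sim - 1e9 * np.eye(n)         # mask out self-similarity on the diagonal
    # positives: pairs with the same label, excluding each anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # log-softmax of each anchor's similarities over all other samples
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # average log-probability assigned to positives, per anchor with >= 1 positive
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

A batch where same-class embeddings cluster tightly yields a lower loss than one where classes are interleaved, which is exactly the gradient signal that shapes the transformer's representation.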

Additional information

NDD 2021

@codecov

codecov bot commented Dec 9, 2021

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 90.09%. Comparing base (634d4d1) to head (43b05f7).
⚠️ Report is 112 commits behind head on staging.

Additional details and impacted files
@@           Coverage Diff            @@
##           staging     #518   +/-   ##
========================================
  Coverage    90.09%   90.09%           
========================================
  Files            7        7           
  Lines          404      404           
========================================
  Hits           364      364           
  Misses          40       40           

@PSSF23 PSSF23 requested a review from jdey4 December 9, 2021 03:50
@jdey4
Collaborator

jdey4 commented Dec 10, 2021

@rflperry Does this PR help your query about contrastive loss?

@rflperry
Collaborator

Yeah, it seems to match my results here: the transfer ability goes down, which I find interesting, though I'm still intrigued by the reason why. Is it really worth adding if it's always worse? I forget why I got multiple different results with different labels.

@rflperry
Collaborator

rflperry commented Dec 11, 2021

My takeaways/summary:

  • Since the decider is k-Nearest Neighbors, we want the learned (penultimate) representation to place samples of the same class close together.
  • Contrastive loss explicitly learns representations in which same-class samples are close together, and this is validated by the higher accuracy we see from our kNN classifier. Softmax worked, but wasn't explicitly tuned to learn what we wanted (see embedding results for various losses here). In a way, the best loss would be a function of the network and decider together.
  • One slightly odd thing is that the difference in accuracy is non-monotonic (i.e. goes down then up). Maybe just a result of not running enough simulations?
  • Despite the accuracy going up, the transfer efficiencies are slightly worse. I'm a bit fuzzy on the details of the transfer efficiency metric, but potentially the learned embeddings are not good for OOD performance (this has been observed in various learned-embedding algorithms such as t-SNE, I believe).
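To make the first bullet concrete: with a kNN decider over the penultimate representation, accuracy depends directly on same-class samples being nearest neighbors. A toy sketch with synthetic embeddings (not the ProgLearn API; names and cluster parameters are made up for illustration):

```python
import numpy as np

def knn_predict(train_emb, train_labels, query_emb, k=3):
    """Classify each query by majority vote among its k nearest training embeddings.

    Note: when queries come from the training set, each point counts itself as
    a neighbor (distance 0) -- acceptable for this toy illustration.
    """
    d = np.linalg.norm(query_emb[:, None, :] - train_emb[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]          # indices of k closest points
    votes = train_labels[nearest]                   # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(0)
# "contrastive-like" embedding: each class tightly clustered
tight = np.concatenate([rng.normal(0, 0.1, (20, 2)),
                        rng.normal(3, 0.1, (20, 2))])
# "loose" embedding: the same classes, but heavily overlapping
loose = np.concatenate([rng.normal(0, 2.0, (20, 2)),
                        rng.normal(3, 2.0, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

for name, emb in [("tight", tight), ("loose", loose)]:
    acc = (knn_predict(emb, labels, emb, k=3) == labels).mean()
    print(name, acc)
```

The tightly clustered embedding gives the kNN decider higher accuracy, which is the mechanism behind the accuracy gain reported in this PR; it says nothing about transfer efficiency, consistent with the last bullet.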

Member

@PSSF23 PSSF23 left a comment

@waleeattia Put figure in correct folder & save in pdf format.

@waleeattia
Author

@PSSF23 fixed!

@waleeattia waleeattia requested a review from PSSF23 December 13, 2021 17:14
Member

@PSSF23 PSSF23 left a comment

Remove commented code & unnecessary prints. After these LGTM.

@waleeattia
Author

@PSSF23 Perfect, just made those changes. Thank you!

@waleeattia waleeattia requested a review from PSSF23 December 20, 2021 03:32
Member

@PSSF23 PSSF23 left a comment

Still some commented code remaining in benchmarks/cifar_exp/plot_compare_two_algos.py @waleeattia

@waleeattia
Author

@PSSF23 Sorry I missed that, it should be good now.

@waleeattia waleeattia requested a review from PSSF23 December 20, 2021 03:51