
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations

This repository contains a PyTorch implementation of SimCLR, a contrastive self-supervised learning method that learns useful representations by pulling augmented views of the same image together in embedding space while pushing views of different images apart.

How SimCLR Works

  1. Data Augmentation → Random transformations create two views of the same image.
  2. Feature Extraction → A CNN encoder embeds each augmented view.
  3. Projection Head → A small MLP maps the features into a latent contrastive space.
  4. Contrastive Loss (NT-Xent) → Encourages positive pairs to be close and negative pairs to be far apart (see the sketch after this list).
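The following is a minimal sketch of these four steps in PyTorch. The augmentation parameters, the ResNet-18 backbone, and the 128-dimensional projection head are illustrative assumptions, not necessarily this repository's exact choices (the repo uses a custom CNN):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.models import resnet18


# Step 1: random transformations that create two views of each image.
# (Crop size and jitter strengths here are illustrative assumptions.)
simclr_transform = T.Compose([
    T.RandomResizedCrop(96),                                 # STL-10 is 96x96
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])


class SimCLRModel(nn.Module):
    """Steps 2-3: CNN encoder followed by a small MLP projection head."""

    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = resnet18(weights=None)       # stand-in for the custom CNN
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()             # keep only the pooled features
        self.encoder = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        return self.projector(self.encoder(x))


def nt_xent_loss(z1, z2, temperature=0.5):
    """Step 4: NT-Xent loss for projections z1, z2 of shape (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, D)
    sim = z @ z.t() / temperature                            # cosine similarities
    sim.fill_diagonal_(float('-inf'))                        # exclude self-pairs
    # The positive for row i is row i+N (and vice versa); every other
    # sample in the batch acts as a negative.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Usage: feed two augmented views of the same batch through the model.
model = SimCLRModel()
x1, x2 = torch.randn(8, 3, 96, 96), torch.randn(8, 3, 96, 96)  # dummy views
loss = nt_xent_loss(model(x1), model(x2))
```

The temperature of 0.5 is a common default from the SimCLR paper; the value used in this repository may differ.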

Experiment

  1. Construct a custom CNN model.
  2. Pre-train it with SimCLR on 3000 unlabeled images from the STL-10 dataset.
  3. Fine-tune the pretrained encoder using 1, 5, 10, 20, 50, and 100 labeled images per class from the STL-10 training set (see the data-loading sketch after this list).
  4. Evaluate on the entire test set.
  5. Compare against a baseline trained from scratch without any pretraining.
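Below is a sketch of how these data splits might be built with torchvision. Taking the first 3000 unlabeled images and the first k labeled images per class is an assumption about the sampling strategy, not necessarily what this repository does:

```python
from torch.utils.data import Subset
from torchvision.datasets import STL10

# Pretraining data: 3000 images from STL-10's 100k-image unlabeled split.
unlabeled = STL10(root='data', split='unlabeled', download=True,
                  transform=simclr_transform)   # transform from the sketch above
pretrain_set = Subset(unlabeled, range(3000))


def k_per_class_indices(dataset, k, num_classes=10):
    """Indices of the first k samples of each class in a labeled STL-10 split."""
    counts = [0] * num_classes
    indices = []
    for i, label in enumerate(dataset.labels):
        if counts[label] < k:
            counts[label] += 1
            indices.append(i)
    return indices


# Fine-tuning data: k labeled images per class, k in {1, 5, 10, 20, 50, 100}.
train = STL10(root='data', split='train', download=True)
finetune_set = Subset(train, k_per_class_indices(train, k=10))

# Evaluation uses the entire test split.
test_set = STL10(root='data', split='test', download=True)
```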

Results

Accuracy on the full STL-10 test set, over three independent runs:

❌ Without Pretraining

| Samples Per Class | Run 1  | Run 2  | Run 3  |
|-------------------|--------|--------|--------|
| 1                 | 0.1000 | 0.1000 | 0.1000 |
| 5                 | 0.1300 | 0.1558 | 0.1607 |
| 10                | 0.2115 | 0.2189 | 0.2356 |
| 20                | 0.2914 | 0.2654 | 0.2526 |
| 50                | 0.3148 | 0.3301 | 0.3216 |
| 100               | 0.3806 | 0.3654 | 0.3919 |

✅ With 3k Pretraining

| Samples Per Class | Run 1  | Run 2  | Run 3  |
|-------------------|--------|--------|--------|
| 1                 | 0.2091 | 0.2084 | 0.2091 |
| 5                 | 0.3516 | 0.3464 | 0.3400 |
| 10                | 0.3009 | 0.3382 | 0.3391 |
| 20                | 0.3686 | 0.4363 | 0.3739 |
| 50                | 0.5210 | 0.5184 | 0.5305 |
| 100               | 0.5521 | 0.5642 | 0.5573 |
