
prototype-based-learning-w-SSDA-pseudo-labeling

This approach is not a pure Prototypical Network: it goes beyond few-shot learning by integrating domain adaptation with semi-supervised techniques. Instead, it is a hybrid that combines prototype-based learning with domain adaptation and pseudo-labeling.

Introduction

In semi-supervised domain adaptation (SSDA), prototypes can play a crucial role in aligning features across domains. A prototype is essentially a representative feature vector for each class, often computed as the mean of feature embeddings for samples belonging to that class. Here's how prototypes are used in SSDA:
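For illustration, a prototype can be computed as the per-class mean of feature embeddings, roughly as in the PyTorch sketch below (the function name and signature are illustrative, not taken from this repository):

```python
import torch

def compute_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Per-class mean of the feature embeddings: one prototype per class."""
    prototypes = torch.zeros(num_classes, embeddings.size(1), device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            prototypes[c] = embeddings[mask].mean(dim=0)
    return prototypes
```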

When to use prototypes?

Domain Adaptation:

Prototypes align features between source and target domains at the class level.

They reduce domain shift by ensuring features cluster around their respective prototypes.
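A minimal way to express this class-level alignment is to penalize the distance between corresponding source and target prototypes; the sketch below is a generic example, not necessarily the exact loss used here:

```python
import torch

def prototype_alignment_loss(source_protos: torch.Tensor, target_protos: torch.Tensor) -> torch.Tensor:
    """Mean Euclidean distance between matching source/target class prototypes."""
    return torch.norm(source_protos - target_protos, dim=1).mean()
```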

Semi-Supervised Learning:

Prototypes assist in generating pseudo-labels for unlabeled data by assigning each unlabeled sample to its nearest prototype.

They improve feature alignment and classification accuracy.
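One common way to realize nearest-prototype pseudo-labeling is sketched below, under the assumption that pseudo-labels are kept only above a confidence threshold (the threshold value is purely illustrative):

```python
import torch

def assign_pseudo_labels(embeddings: torch.Tensor, prototypes: torch.Tensor, threshold: float = 0.8):
    """Label each unlabeled embedding with its nearest prototype; keep only confident assignments."""
    dists = torch.cdist(embeddings, prototypes)    # [num_samples, num_classes]
    probs = torch.softmax(-dists, dim=1)           # closer prototype -> higher probability
    confidence, pseudo_labels = probs.max(dim=1)
    keep = confidence >= threshold                 # confidence filter (threshold is illustrative)
    return pseudo_labels, keep
```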

Training loop

The training loop computes prototypes for each class, calculates distances between embeddings and their corresponding prototypes, and then uses a loss function (criterion) to measure how well these distances align with a target (in this case, zeros).
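A simplified sketch of one such training step is shown below; `model`, `optimizer`, the three data loaders, `num_epochs`, and `num_classes` are placeholders, and the sketch reuses the `compute_prototypes` and `assign_pseudo_labels` helpers from above rather than the repository's actual code:

```python
import torch

criterion = torch.nn.MSELoss()  # measures how far embedding-to-prototype distances are from zero

for epoch in range(num_epochs):
    for (x_src, y_src), (x_tl, y_tl), (x_tu, _) in zip(src_loader, tgt_lab_loader, tgt_unlab_loader):
        emb_src, emb_tl, emb_tu = model(x_src), model(x_tl), model(x_tu)

        # Prototypes from all labeled embeddings (source + labeled target).
        protos = compute_prototypes(torch.cat([emb_src, emb_tl]),
                                    torch.cat([y_src, y_tl]), num_classes)

        # Pseudo-labels for the unlabeled target batch via nearest prototype.
        y_tu, keep = assign_pseudo_labels(emb_tu, protos)

        # Distance of each embedding to its class prototype, regressed toward zero.
        # (A guard for an empty `keep` mask is omitted for brevity.)
        d_src = torch.norm(emb_src - protos[y_src], dim=1)
        d_tl = torch.norm(emb_tl - protos[y_tl], dim=1)
        d_tu = torch.norm(emb_tu[keep] - protos[y_tu[keep]], dim=1)

        loss = (criterion(d_src, torch.zeros_like(d_src))
                + criterion(d_tl, torch.zeros_like(d_tl))
                + criterion(d_tu, torch.zeros_like(d_tu)))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```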


Training Results

| Epoch | Source Loss | Target Labeled Loss | Target Unlabeled Loss | Total Loss |
|-------|-------------|---------------------|-----------------------|------------|
| 1     | 3.5728      | 3.5793              | 3.6018                | 10.7538    |
| 2     | 3.5743      | 3.5719              | 3.5978                | 10.7440    |
| 3     | 3.5714      | 3.5784              | 3.5938                | 10.7436    |
| 4     | 3.5719      | 3.5587              | 3.5926                | 10.7232    |
| 5     | 3.5673      | 3.5499              | 3.5860                | 10.7031    |
| 6     | 3.5599      | 3.5534              | 3.5815                | 10.6948    |
| 7     | 3.5603      | 3.5583              | 3.5917                | 10.7103    |
| 8     | 3.5635      | 3.5471              | 3.5942                | 10.7048    |
| 9     | 3.5521      | 3.5388              | 3.5911                | 10.6820    |
| 10    | 3.5571      | 3.5559              | 3.5868                | 10.6998    |

Consistent Loss Across Domains

The source loss remains stable, suggesting that the model has fit the labeled source domain well.

The target labeled loss decreases slightly, which indicates the model is learning to align the labeled target domain with the source.

The target unlabeled loss also trends downward, though very slowly, meaning pseudo-labeling and prototype refinement are making progress but leave room for improvement.

Total Loss Stabilization

The total loss remains around 10.70 after several epochs, showing that adaptation is happening but at a gradual rate.

Since an unsupervised loss (from pseudo-labeling) is involved, adaptation may take longer than purely supervised learning.
