
Autoencoders for vision and NLP tasks. The vision autoencoders use fully connected and convolutional architectures with layer-inverse constraints; the NLP autoencoder is an LSTM-based sequence-to-sequence model for text denoising.


yehonatanke/vision_nlp_autoencoders


Vision NLP Autoencoders

Autoencoders for vision and NLP.

  • Vision Autoencoders: Fully connected and convolutional autoencoders with layer-inverse constraints.
  • Text Denoising Autoencoder: Sequence-to-sequence LSTM-based model for reconstructing clean text from noisy input.
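One common reading of a "layer-inverse constraint" is weight tying, where the decoder reuses the transpose of the encoder's weight matrix. The sketch below assumes that interpretation; the class name, dimensions, and activation are illustrative and not taken from the repository's code.

```python
import numpy as np

class TiedAutoencoder:
    """Fully connected autoencoder whose decoder weights are the
    transpose ("inverse") of the encoder weights (an assumption
    about what the layer-inverse constraint means here)."""

    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix shared by encoder and decoder.
        self.W = rng.normal(0.0, 0.1, size=(hidden_dim, in_dim))
        self.b_enc = np.zeros(hidden_dim)
        self.b_dec = np.zeros(in_dim)

    def encode(self, x):
        # h = ReLU(W x + b_enc)
        return np.maximum(0.0, x @ self.W.T + self.b_enc)

    def decode(self, h):
        # Reconstruction uses W.T of the encode step, enforcing the tie.
        return h @ self.W + self.b_dec

    def forward(self, x):
        return self.decode(self.encode(x))

ae = TiedAutoencoder(in_dim=784, hidden_dim=64)
x = np.random.default_rng(1).normal(size=(8, 784))
x_hat = ae.forward(x)
print(x_hat.shape)  # (8, 784)
```

Tying the decoder to `W.T` halves the parameter count of the untied equivalent and acts as a regularizer, which is one reason such constraints are used in autoencoders.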

Structure

vision/
    models.py       # Vision autoencoder architectures
    train.py        # Training loops for vision models
    eval.py         # Evaluation and plotting for vision
    utils.py        # Device and transform helpers

nlp/
    models.py       # NLP autoencoder and classifier architectures
    data.py         # Dataset classes, noise, vocabulary
    train.py        # Training loops for NLP models
    eval.py         # Evaluation for NLP
    utils.py        # NLP dataset stats and printing

common/
    plotting.py     # General plotting utilities
    metrics.py      # Accuracy, parameter counting
    config.py       # Centralized hyperparameters

scripts/
    run_vision.py   # Entrypoint for vision experiments
    run_nlp.py      # Entrypoint for NLP experiments
    run_transfer.py # Entrypoint for transfer learning
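The denoising setup implied by `nlp/data.py` ("Dataset classes, noise, vocabulary") pairs clean sequences with corrupted copies for the seq2seq model to reconstruct. A minimal, hypothetical noise function is sketched below; the function name, and the drop/swap probabilities, are assumptions, not details from the repository.

```python
import random

def add_noise(tokens, p_drop=0.1, p_swap=0.1, seed=None):
    """Corrupt a token sequence for denoising-autoencoder training:
    randomly delete tokens, then randomly swap adjacent pairs."""
    rng = random.Random(seed)
    # Randomly delete tokens (keep at least one so the input is non-empty).
    noisy = [t for t in tokens if rng.random() >= p_drop] or tokens[:1]
    # Randomly swap adjacent tokens.
    i = 0
    while i < len(noisy) - 1:
        if rng.random() < p_swap:
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return noisy

clean = "the quick brown fox jumps over the lazy dog".split()
noisy = add_noise(clean, seed=0)
```

During training, `noisy` would be fed to the encoder and `clean` used as the decoder target, so the model learns to undo deletions and transpositions.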
