This project implements a DANN (Domain-Adversarial Neural Network) for robust fault classification under domain shift, using public bearing fault datasets.
```
dann_pipeline/
├── main.py      # Entry point (load, train, evaluate)
├── config.py    # Argument parser
├── dataset.py   # Source/target dataloaders
├── model.py     # DANN architecture (feature extractor + classifiers)
├── train.py     # Adversarial training routine
├── test.py      # Final evaluation on target domain
├── utils.py     # Optional tools (e.g. plotting)
├── data/        # .npy input files
└── result/      # model.pt, logs, loss curves
```
This project uses two publicly available bearing fault datasets adapted for domain adaptation experiments:
- CWRU Bearing Dataset – provided by Case Western Reserve University
  🔗 https://engineering.case.edu/bearingdatacenter/download-data-file
- IMS Bearing Dataset – provided via Data.gov
  🔗 https://catalog.data.gov/dataset/ims-bearings
For signal preprocessing and conversion into `.npy` format, refer to the preprocessing code in this repository:
🔧 https://github.com/97yong/signal-fault-classification

Preprocessed data is saved as `.npy` files in the following layout:
```
data/
├── X_source.npy
├── Y_source.npy
├── X_target.npy
├── Y_target.npy
├── X_target_test.npy
└── Y_target_test.npy
```
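For reference, here is a minimal sketch of how these arrays could be wrapped into PyTorch dataloaders, in the spirit of `dataset.py`. The assumed shapes, `(N, 1, L)` signals and integer labels of shape `(N,)`, are illustrative, not guaranteed by the repo:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def get_loaders(data_dir="data", batch_size=64):
    """Load the .npy arrays and wrap them into PyTorch dataloaders.

    Assumes X arrays have shape (N, 1, L) for 1D CNN input and
    Y arrays hold integer class labels of shape (N,).
    """
    def load(name):
        return np.load(f"{data_dir}/{name}.npy")

    def dataset(x_name, y_name):
        return TensorDataset(
            torch.as_tensor(load(x_name), dtype=torch.float32),
            torch.as_tensor(load(y_name), dtype=torch.long),
        )

    source = dataset("X_source", "Y_source")
    target = dataset("X_target", "Y_target")
    target_test = dataset("X_target_test", "Y_target_test")
    return (
        DataLoader(source, batch_size=batch_size, shuffle=True, drop_last=True),
        DataLoader(target, batch_size=batch_size, shuffle=True, drop_last=True),
        DataLoader(target_test, batch_size=batch_size),
    )
```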
You can configure training parameters via `config.py` or pass them through `opt`.
| Argument | Description | Default |
|---|---|---|
| `--epochs` | Number of training epochs | 10 |
| `--lr` | Learning rate | 1e-4 |
| `--batch_size` | Mini-batch size | 64 |
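A `config.py` along these lines would produce the `opt` namespace used above; the exact parser in this repository may differ:

```python
import argparse

def get_args():
    """Parse the training options listed above into an `opt` namespace."""
    parser = argparse.ArgumentParser(description="DANN training options")
    parser.add_argument("--epochs", type=int, default=10,
                        help="number of training epochs")
    parser.add_argument("--lr", type=float, default=1e-4,
                        help="learning rate")
    parser.add_argument("--batch_size", type=int, default=64,
                        help="mini-batch size")
    return parser.parse_args()
```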
DANN consists of:
- Feature Extractor – 1D CNN layers
- Label Classifier – predicts class labels for the source domain
- Domain Classifier – predicts the domain (source/target) through a gradient reversal layer (GRL)
Training uses an adversarial loss to align the source and target feature distributions: the GRL flips gradients from the domain classifier, so the feature extractor is pushed toward domain-invariant features.
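As a concrete reference, here is a minimal sketch of the gradient reversal layer and the three-part architecture from Ganin & Lempitsky (2015). The layer sizes and `n_classes` are illustrative assumptions, not the exact contents of `model.py`:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, n_classes=4):  # n_classes = 4 is an assumption
        super().__init__()
        # Feature extractor: 1D CNN over raw vibration signals of shape (N, 1, L)
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=16), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3), nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
        )
        # Label classifier: fault-class prediction, trained on source labels
        self.label_clf = nn.Sequential(
            nn.Linear(32 * 4, 64), nn.ReLU(), nn.Linear(64, n_classes))
        # Domain classifier: source-vs-target prediction, fed through the GRL
        self.domain_clf = nn.Sequential(
            nn.Linear(32 * 4, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        return self.label_clf(f), self.domain_clf(GradReverse.apply(f, lambd))
```

Because of the GRL, the feature extractor maximizes the very domain loss the domain classifier minimizes; in practice the GRL coefficient λ is ramped from 0 to 1 so the domain loss does not destabilize early feature learning.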
```bash
pip install numpy torch scikit-learn tqdm matplotlib
```
```bash
python main.py
```
This will:
- Load source/target domain data
- Train a domain-adversarial model
- Evaluate performance on target test data
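One adversarial training step then typically looks like the sketch below. This is a paraphrase rather than the repository's exact `train.py`; it reuses the hypothetical `DANN` model above, and the λ schedule is the one from the original paper:

```python
import numpy as np
import torch
import torch.nn.functional as F

def train_step(model, optimizer, xs, ys, xt, p):
    """One DANN update. p in [0, 1] is the overall training progress."""
    # GRL coefficient schedule from Ganin & Lempitsky (2015)
    lambd = 2.0 / (1.0 + np.exp(-10.0 * p)) - 1.0

    x = torch.cat([xs, xt])  # joint source + target batch
    domain = torch.cat([torch.zeros(len(xs), dtype=torch.long),
                        torch.ones(len(xt), dtype=torch.long)])

    class_logits, domain_logits = model(x, lambd)
    # Label loss on source samples only; domain loss on both domains.
    loss = F.cross_entropy(class_logits[: len(xs)], ys) \
         + F.cross_entropy(domain_logits, domain)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `p` runs from 0 to 1 over the whole run, e.g. `p = (epoch * steps_per_epoch + step) / (epochs * steps_per_epoch)`.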
Ganin, Y., & Lempitsky, V. (2015). "Unsupervised Domain Adaptation by Backpropagation." In *Proceedings of the 32nd International Conference on Machine Learning (ICML)*, 1180–1189.
🔗 arXiv:1409.7495