Paper Link: https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/cps2.70013
Smart advanced metering infrastructure and edge devices offer promising solutions for digitalising distributed energy systems. Energy disaggregation of household load consumption provides a better understanding of consumers' appliance-level usage patterns. Machine learning (ML) approaches can enhance the power system's efficiency, but accurate prediction depends on sufficient training samples. In a centralised setup, transferring such a high volume of data to the cloud server creates a communication bottleneck. Although high-computing edge devices help address this problem, data scarcity and heterogeneity among clients remain open challenges. Federated learning (FL) offers a compelling solution in this scenario by training the ML model at the edge devices and aggregating the clients' updates at a cloud server. However, FL still faces significant security issues, including potential eavesdropping by a malicious actor intent on stealing clients' information while they communicate with an honest-but-curious server. This study aims to secure the sensitive information of energy users participating in a non-intrusive load monitoring (NILM) program by integrating differential privacy with a personalised federated learning approach. The Fisher information method is adapted to extract the global model information for common features, while personalised updates for client-specific features are not shared with the server. Adaptive differential privacy is applied only to the shared local updates (DP-PFL) communicated to the server. Experimental results on the Pecan Street and REFIT datasets show that DP-PFL achieves more favourable performance on both the energy prediction and status classification tasks compared with other state-of-the-art DP approaches in federated NILM.
This repository implements the paper and supports multiple federated learning algorithms and neural network architectures, enabling privacy-preserving load disaggregation across distributed clients.
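At a glance, DP-PFL uses Fisher information to decide which parameters carry common (shareable) knowledge and which stay personal, and applies differential privacy only to the shared part before it is uploaded. The sketch below is purely illustrative and not the paper's exact algorithm: the threshold direction, clipping, and noise calibration are assumptions; in the repository the corresponding knob is --fisher_threshold.

```python
# Illustrative sketch only (not the paper's exact algorithm): Fisher-guided
# split of shared vs. personal parameters, with clipping and Gaussian noise
# applied to the shared part before it is uploaded.
import torch

def diagonal_fisher(model, loss_fn, batch):
    """Approximate the diagonal Fisher information with squared gradients."""
    model.zero_grad()
    x, y = batch
    loss_fn(model(x), y).backward()
    return {name: p.grad.detach() ** 2
            for name, p in model.named_parameters() if p.grad is not None}

def split_and_privatize(model, fisher, threshold=1e-5, clip=1.0, sigma=1.0):
    """Assumption: low-Fisher tensors are treated as shared; high-Fisher tensors stay local."""
    shared, personal = {}, {}
    for name, p in model.named_parameters():
        score = fisher.get(name, torch.zeros_like(p)).mean()
        if score < threshold:
            update = p.detach().clone()
            update *= min(1.0, clip / update.norm().clamp(min=1e-12))   # bound L2 sensitivity
            update += torch.randn_like(update) * sigma * clip           # Gaussian noise
            shared[name] = update
        else:
            personal[name] = p.detach().clone()
    return shared, personal  # only `shared` would be sent to the server
```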
- Multiple Federated Learning Approaches:
  - PFL: Personalized Federated Learning
  - FLDP: Federated Learning with Differential Privacy
  - SAM: Sharpness-Aware Minimization in Federated Learning
  - DPPFL: Differential Privacy Personalized Federated Learning
- Neural Network Models:
  - GRU: Gated Recurrent Unit
  - LSTM: Long Short-Term Memory
  - CNN: Convolutional Neural Network
  - CNN_LSTM: Hybrid CNN-LSTM Architecture
- Privacy-Preserving Features (a conceptual client-side sketch follows this list):
  - Differential privacy mechanisms
  - Secure aggregation protocols
  - Client-side data protection
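As one way to picture the "secure aggregation" bullet, the following pairwise-masking sketch adds masks that cancel when the server sums all client updates, so the server never sees an individual update in the clear. It is not the protocol implemented in this repository, and the per-pair seed derivation is a toy assumption.

```python
# Conceptual pairwise-masking sketch (secure aggregation); NOT the repository's
# actual protocol. Seeds derived from client-id pairs are for illustration only;
# a real protocol would establish shared keys between clients.
import torch

def masked_update(update, client_id, all_ids):
    masked = update.clone()
    for peer in all_ids:
        if peer == client_id:
            continue
        lo, hi = min(client_id, peer), max(client_id, peer)
        g = torch.Generator().manual_seed(lo * 1_000_003 + hi)   # shared per-pair seed
        mask = torch.randn(update.shape, generator=g)
        masked += mask if client_id == lo else -mask             # opposite signs cancel
    return masked

# The server only sees masked updates, yet their sum equals the true sum:
clients = {i: torch.ones(4) for i in range(3)}
total = sum(masked_update(u, i, list(clients)) for i, u in clients.items())
print(total)  # ~[3., 3., 3., 3.] once the pairwise masks cancel
```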
- Python 3.8+
- PyTorch 1.9+
- CUDA (for GPU acceleration)
- Other dependencies listed in requirements.txt
- Clone the repository:
git clone https://github.com/MazharAly/PPFL-for-NILM.git
cd federated-nilm
- Create a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
The code supports various NILM datasets. Place your dataset files in the data/ directory:
- refit.csv: REFIT dataset
- ukdale.csv: UK-DALE dataset
- Custom datasets in CSV format
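The exact CSV schema consumed by utils/data_utils.py is not documented here. As a rough guide, a NILM CSV typically holds an aggregate-power column plus one column per appliance, from which fixed-length input windows of --sequence_length samples are built. The column names ("aggregate", "fridge") and the windowing scheme in this sketch are assumptions, not the repository's loader.

```python
# Hypothetical preprocessing sketch; column names and windowing are assumptions.
import numpy as np
import pandas as pd

def make_windows(csv_path, appliance="fridge", sequence_length=32):
    df = pd.read_csv(csv_path)
    agg = df["aggregate"].to_numpy(dtype=np.float32)
    target = df[appliance].to_numpy(dtype=np.float32)
    X = np.stack([agg[i:i + sequence_length]
                  for i in range(len(agg) - sequence_length)])
    y = target[sequence_length - 1:-1]   # predict the last sample of each window
    return X, y
```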
Run a simple experiment with PFL approach and GRU model:
python main.py --approach PFL --model GRU --data_path data/refit.csv --gpu 0
- Personalized Federated Learning:
python main.py \
--approach PFL \
--model GRU \
--data_path data/refit.csv \
--gpu 0 \
--FL_epochs 200 \
--local_ep 1 \
--num_users 10 \
--verbose
- Federated Learning with Differential Privacy:
python main.py \
--approach FLDP \
--model LSTM \
--data_path data/ukdale.csv \
--gpu 0 \
--FL_epochs 300 \
--local_ep 1 \
--epsilon 1.0 \
--delta 0.1 \
--verbose
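How --epsilon and --delta translate into the amount of injected noise depends on the privacy accountant used in the code. As a common reference point only, the classical Gaussian mechanism calibrates the noise standard deviation as sigma = S * sqrt(2 ln(1.25/delta)) / epsilon for 0 < epsilon < 1, where S is the L2 sensitivity of the clipped update:

```python
# Classical Gaussian-mechanism calibration (valid for 0 < epsilon < 1); the
# repository may use a different accountant, so treat this as a reference point.
import math

def gaussian_sigma(epsilon, delta, l2_sensitivity=1.0):
    return l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

print(gaussian_sigma(epsilon=0.8, delta=0.1))  # ~2.81 for a clipped update of norm <= 1
```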
- Sharpness-Aware Minimization:
python main.py \
--approach SAM \
--model CNN \
--data_path data/ukdale.csv \
--gpu 0 \
--FL_epochs 200 \
--local_ep 1 \
--frac 1.0 \
--sam_epsilon 0.01 \
--verbose
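SAM's update is a two-step procedure: first perturb the weights in the direction of the gradient, within an L2 ball whose radius corresponds to --sam_epsilon, then take the optimiser step using the gradient computed at the perturbed weights. A minimal, self-contained sketch of that two-step update (not the repository's implementation):

```python
# Minimal two-step SAM update; --sam_epsilon plays the role of the perturbation
# radius rho. Illustrative only.
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.01):
    x, y = batch
    # 1) ascent step: move weights towards higher loss within an L2 ball of radius rho
    loss_fn(model(x), y).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])).clamp(min=1e-12)
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = p.grad * (rho / grad_norm)
            p.add_(e)
            perturbations.append((p, e))
    optimizer.zero_grad()
    # 2) descent step: gradient at the perturbed point, then undo the perturbation
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```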
- Differential Privacy Personalized Federated Learning:
python main.py \
--approach DPPFL \
--model CNN_LSTM \
--data_path data/refit.csv \
--gpu 0 \
--FL_epochs 200 \
--local_ep 1 \
--epsilon 0.8 \
--delta 0.1 \
--fisher_threshold 1e-5 \
--verbose
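On the server side, a personalised setup only aggregates the tensors that clients actually share; personal tensors never leave the clients. A minimal FedAvg-style sketch over the shared tensors (key selection and sample-size weighting are assumptions, not the repository's aggregation code):

```python
# Illustrative weighted averaging over only the commonly shared tensors.
import torch

def aggregate_shared(client_updates, client_sizes):
    """client_updates: list of dicts {name: tensor} holding each client's shared tensors."""
    total = float(sum(client_sizes))
    shared_keys = set.intersection(*(set(u) for u in client_updates))
    return {k: sum(u[k] * (n / total) for u, n in zip(client_updates, client_sizes))
            for k in shared_keys}
```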
| Argument | Description | Default | Options |
|---|---|---|---|
| --approach | Federated learning approach | PFL | PFL, FLDP, SAM, DPPFL |
| --model | Neural network model | GRU | GRU, LSTM, CNN, CNN_LSTM |
| --data_path | Path to dataset file | data/refit.csv | Any CSV file |
| --gpu | GPU device ID (-1 for CPU) | -1 | Integer |
| --FL_epochs | Number of global federated rounds | 500 | Integer |
| --local_ep | Number of local epochs per round | 1 | Integer |
| --num_users | Number of federated clients | 10 | Integer |
| --frac | Fraction of clients selected per round | 1.0 | Float (0-1) |
| --batch_size | Batch size for training | 32 | Integer |
| --lr | Learning rate | 0.001 | Float |
| --sequence_length | Input sequence length | 32 | Integer |
| --hidden_size | Hidden layer size | 6 | Integer |
| --epsilon | Privacy budget (for DP methods) | 0.8 | Float |
| --delta | Privacy parameter (for DP methods) | 0.1 | Float |
| --fisher_threshold | Fisher information threshold (DPPFL) | 1e-5 | Float |
| --sam_epsilon | SAM epsilon parameter | 0.01 | Float |
| --verbose | Enable verbose output | False | Flag |
federated-nilm/
├── main.py              # Main entry point
├── config.py            # Configuration management
├── requirements.txt     # Python dependencies
├── README.md            # This file
├── data/                # Dataset directory
│   ├── refit.csv
│   └── ukdale.csv
├── models/              # Neural network models
│   ├── __init__.py
│   └── nilm_models.py
├── approaches/          # Federated learning approaches
│   ├── __init__.py
│   ├── pfl.py
│   ├── fldp.py
│   ├── dppfl.py
│   └── sam.py
├── utils/               # Utility functions
│   ├── __init__.py
│   ├── data_utils.py
│   └── training_utils.py
└── results/             # Experiment results
    └── .gitkeep
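models/nilm_models.py is expected to define the GRU, LSTM, CNN and CNN_LSTM architectures. As a rough picture of what a GRU sequence-to-point regressor could look like with the default --sequence_length 32 and --hidden_size 6 (the layer layout and output head are assumptions, not the repository's model):

```python
# Hypothetical GRU regressor for seq2point NILM; not the repository's exact model.
import torch
import torch.nn as nn

class GRUSeq2Point(nn.Module):
    def __init__(self, hidden_size=6, num_layers=1):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # predicted appliance power for the window

    def forward(self, x):                        # x: (batch, sequence_length, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :]).squeeze(-1)

model = GRUSeq2Point()
dummy = torch.randn(8, 32, 1)                    # batch of 8 windows of length 32
print(model(dummy).shape)                        # torch.Size([8])
```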
Compare different models with PFL approach:
# Test all models with PFL
for model in GRU LSTM CNN CNN_LSTM; do
python main.py --approach PFL --model $model --data_path data/refit.csv --gpu 0 --FL_epochs 5 --verbose
done
Compare different federated learning approaches:
# Test all approaches with GRU model
for approach in PFL FLDP DPPFL SAM; do
python main.py --approach $approach --model GRU --data_path data/refit.csv --gpu 0 --FL_epochs 5 --verbose
done
Analyze the impact of privacy parameters:
# Test different privacy budgets
for epsilon in 0.5 1.0 2.0 5.0; do
python main.py --approach FLDP --model GRU --data_path data/refit.csv --gpu 0 --epsilon $epsilon --verbose
done
The experiments generate detailed results including:
- Training and test losses
- Privacy guarantees (for DP methods)
- Training time statistics
- Model performance metrics
Results are displayed in the console and can be saved to files for further analysis.
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you use this code in your research, please cite:
@article{ali2025privacy,
title={Privacy Preserving Federated Learning for Energy Disaggregation of Smart Homes},
author={Ali, Mazhar and Kumar, Ajit and Choi, Bong Jun},
journal={IET Cyber-Physical Systems: Theory \& Applications},
volume={10},
number={1},
pages={e70013},
year={2025},
publisher={Wiley Online Library}
}
For questions and support, please open an issue on GitHub or contact the maintainers.
- v1.0.0: Initial release with PFL, FLDP, SAM and DPPFL approaches
- Support for GRU, LSTM, CNN, and CNN_LSTM models
- GPU acceleration support
- Comprehensive privacy mechanisms