Privacy Preserving Federated Learning for Energy Disaggregation of Smart Homes

Paper Link: https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/cps2.70013

Abstract

Smart advanced metering infrastructure and edge devices offer promising solutions for digitalising distributed energy systems. Energy disaggregation of household load consumption provides a better understanding of consumers' appliance-level usage patterns. Machine learning (ML) approaches enhance the power system's efficiency, but this is contingent on sufficient training samples for efficient and accurate prediction. In a centralised setup, transferring such a high volume of information to the cloud server creates a communication bottleneck. Although high-computing edge devices seek to address this problem, data scarcity and heterogeneity among clients remain challenges to be addressed. Federated learning (FL) offers a compelling solution in such a scenario by training the ML model at the edge devices and aggregating the clients' updates at a cloud server. However, FL still faces significant security issues, including potential eavesdropping by a malicious actor intent on stealing clients' information while they communicate with an honest-but-curious server. This study aims to secure the sensitive information of energy users participating in the nonintrusive load monitoring (NILM) program by integrating differential privacy with a personalised federated learning approach. The Fisher information method is adapted to extract the global model information based on common features, while personalised updates for client-specific features are not shared with the server. Likewise, adaptive differential privacy is applied only to the shared local updates (DP-PFL) when communicating with the server. Experimental results on the Pecan Street and REFIT datasets show that DP-PFL performs more favourably on both energy prediction and status classification tasks than other state-of-the-art DP approaches in federated NILM.


This repository implements the paper. It supports multiple federated learning algorithms and neural network architectures, enabling privacy-preserving load disaggregation across distributed clients.
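
At the heart of DPPFL is the Fisher-information split described in the abstract: parameters that carry client-specific information stay local, while the rest are shared with the server. The sketch below is a minimal illustration of that idea, not the repository's actual code; the helper name, data loader, and the choice that low-Fisher parameters are the shared ones are all assumptions, and only the 1e-5 default mirrors the --fisher_threshold flag.

import torch
import torch.nn as nn

def partition_by_fisher(model: nn.Module, loader, loss_fn, threshold: float = 1e-5):
    """Split parameters into shared (low Fisher score) and personal masks.

    The diagonal Fisher information is approximated by the mean squared
    gradient of the local loss over one pass of the client's data.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    shared, personal = {}, {}
    for n, score in fisher.items():
        mask = (score / max(len(loader), 1)) < threshold
        shared[n] = mask      # low score: little client-specific signal, share
        personal[n] = ~mask   # high score: keep local, never sent to the server
    return shared, personal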

🚀 Features

  • Multiple Federated Learning Approaches:

    • PFL: Personalized Federated Learning
    • FLDP: Federated Learning with Differential Privacy
    • SAM: Sharpness-Aware Minimization in Federated Learning
    • DPPFL: Differential Privacy Personalized Federated Learning
  • Neural Network Models:

    • GRU: Gated Recurrent Unit
    • LSTM: Long Short-Term Memory
    • CNN: Convolutional Neural Network
    • CNN_LSTM: Hybrid CNN-LSTM Architecture
  • Privacy-Preserving Features:

    • Differential privacy mechanisms (see the sketch after this list)
    • Secure aggregation protocols
    • Client-side data protection
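
The sketch below illustrates one standard way the differential-privacy mechanism above can be realised: clip each shared local update to a fixed L2 norm, then add Gaussian noise calibrated to (epsilon, delta). The clip norm and the classic Gaussian-mechanism noise formula are assumptions for illustration; the repository's adaptive variant may calibrate noise differently.

import math
import torch

def privatize_update(update: dict, clip_norm: float = 1.0,
                     epsilon: float = 0.8, delta: float = 0.1) -> dict:
    """Clip a local update to L2 norm clip_norm, then add Gaussian noise.

    Uses the standard Gaussian-mechanism calibration
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon.
    """
    total_norm = torch.cat([v.flatten() for v in update.values()]).norm().item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return {name: v * scale + torch.randn_like(v) * sigma
            for name, v in update.items()}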

📋 Requirements

  • Python 3.8+
  • PyTorch 1.9+
  • CUDA (for GPU acceleration)
  • Other dependencies listed in requirements.txt

πŸ› οΈ Installation

  1. Clone the repository:
git clone https://github.com/MazharAly/PPFL-for-NILM.git
cd PPFL-for-NILM
  2. Create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt

📊 Datasets

The code supports various NILM datasets. Place your dataset files in the data/ directory:

  • refit.csv: REFIT dataset
  • ukdale.csv: UK-DALE dataset
  • Custom datasets in CSV format (see the windowing sketch below)
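
Whatever the source, the mains signal has to be cut into fixed-length windows before it reaches the models. Below is a plausible windowing step, assuming a CSV with an aggregate column for the mains reading and one column per appliance; the actual loader in utils/data_utils.py may expect a different schema.

import numpy as np
import pandas as pd

def make_windows(csv_path: str, appliance: str, sequence_length: int = 32):
    """Slide a fixed-length window over the mains signal.

    Returns (X, y): X holds windows of the aggregate reading, y holds the
    appliance reading aligned with the step right after each window.
    """
    df = pd.read_csv(csv_path)
    mains = df["aggregate"].to_numpy(dtype=np.float32)   # assumed column name
    target = df[appliance].to_numpy(dtype=np.float32)
    X = np.stack([mains[i:i + sequence_length]
                  for i in range(len(mains) - sequence_length)])
    y = target[sequence_length:]
    return X, y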

🎯 Quick Start

Basic Usage

Run a simple experiment with PFL approach and GRU model:

python main.py --approach PFL --model GRU --data_path data/refit.csv --gpu 0

Advanced Examples

  1. Personalized Federated Learning:
python main.py \
    --approach PFL \
    --model GRU \
    --data_path data/refit.csv \
    --gpu 0 \
    --FL_epochs 200 \
    --local_ep 1 \
    --num_users 10 \
    --verbose
  2. Federated Learning with Differential Privacy:
python main.py \
    --approach FLDP \
    --model LSTM \
    --data_path data/ukdale.csv \
    --gpu 0 \
    --FL_epochs 300 \
    --local_ep 1 \
    --epsilon 1.0 \
    --delta 0.1 \
    --verbose
  3. Sharpness-Aware Minimization:
python main.py \
    --approach SAM \
    --model CNN \
    --data_path data/ukdale.csv \
    --gpu 0 \
    --FL_epochs 200 \
    --local_ep 1 \
    --frac 1.0 \
    --sam_epsilon 0.01 \
    --verbose
  4. Differential Privacy Personalized Federated Learning:
python main.py \
    --approach DPPFL \
    --model CNN_LSTM \
    --data_path data/refit.csv \
    --gpu 0 \
    --FL_epochs 200 \
    --local_ep 1 \
    --epsilon 0.8 \
    --delta 0.1 \
    --fisher_threshold 1e-5 \
    --verbose

📖 Command Line Arguments

| Argument | Description | Default | Options |
|----------|-------------|---------|---------|
| --approach | Federated learning approach | PFL | PFL, FLDP, SAM, DPPFL |
| --model | Neural network model | GRU | GRU, LSTM, CNN, CNN_LSTM |
| --data_path | Path to dataset file | data/refit.csv | Any CSV file |
| --gpu | GPU device ID (-1 for CPU) | -1 | Integer |
| --FL_epochs | Number of global federated rounds | 500 | Integer |
| --local_ep | Number of local epochs per round | 1 | Integer |
| --num_users | Number of federated clients | 10 | Integer |
| --frac | Fraction of clients to select per round | 1.0 | Float (0-1) |
| --batch_size | Batch size for training | 32 | Integer |
| --lr | Learning rate | 0.001 | Float |
| --sequence_length | Input sequence length | 32 | Integer |
| --hidden_size | Hidden layer size | 6 | Integer |
| --epsilon | Privacy budget (for DP methods) | 0.8 | Float |
| --delta | Privacy parameter (for DP methods) | 0.1 | Float |
| --fisher_threshold | Fisher threshold (DPPFL) | 1e-5 | Float |
| --sam_epsilon | SAM epsilon parameter | 0.01 | Float |
| --verbose | Enable verbose output | False | Flag |
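
For orientation, the table above corresponds to a straightforward argparse definition along the following lines. This is a reconstruction from the table, not a copy of config.py, which may organise the options differently.

import argparse

def get_args():
    p = argparse.ArgumentParser(description="Federated NILM experiments")
    p.add_argument("--approach", default="PFL",
                   choices=["PFL", "FLDP", "SAM", "DPPFL"])
    p.add_argument("--model", default="GRU",
                   choices=["GRU", "LSTM", "CNN", "CNN_LSTM"])
    p.add_argument("--data_path", default="data/refit.csv")
    p.add_argument("--gpu", type=int, default=-1)        # -1 runs on CPU
    p.add_argument("--FL_epochs", type=int, default=500)
    p.add_argument("--local_ep", type=int, default=1)
    p.add_argument("--num_users", type=int, default=10)
    p.add_argument("--frac", type=float, default=1.0)
    p.add_argument("--batch_size", type=int, default=32)
    p.add_argument("--lr", type=float, default=0.001)
    p.add_argument("--sequence_length", type=int, default=32)
    p.add_argument("--hidden_size", type=int, default=6)
    p.add_argument("--epsilon", type=float, default=0.8)
    p.add_argument("--delta", type=float, default=0.1)
    p.add_argument("--fisher_threshold", type=float, default=1e-5)
    p.add_argument("--sam_epsilon", type=float, default=0.01)
    p.add_argument("--verbose", action="store_true")
    return p.parse_args()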

πŸ—οΈ Project Structure

PPFL-for-NILM/
├── main.py                 # Main entry point
├── config.py               # Configuration management
├── requirements.txt        # Python dependencies
├── README.md               # This file
├── data/                   # Dataset directory
│   ├── refit.csv
│   └── ukdale.csv
├── models/                 # Neural network models
│   ├── __init__.py
│   └── nilm_models.py
├── approaches/             # Federated learning approaches
│   ├── __init__.py
│   ├── pfl.py
│   ├── fldp.py
│   ├── dppfl.py
│   └── sam.py
├── utils/                  # Utility functions
│   ├── __init__.py
│   ├── data_utils.py
│   └── training_utils.py
└── results/                # Experiment results
    └── .gitkeep
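
Given this layout, a natural reading is that main.py selects a training routine from approaches/ based on --approach. The mapping below is a hypothetical sketch of that dispatch, including the assumed train(args) interface; the real entry point may be organised differently.

from approaches import dppfl, fldp, pfl, sam

# Hypothetical dispatch table from the --approach flag to a module in
# approaches/; each module is assumed to expose a train(args) routine.
APPROACHES = {"PFL": pfl, "FLDP": fldp, "DPPFL": dppfl, "SAM": sam}

def run(args):
    return APPROACHES[args.approach].train(args)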

🔬 Experiments

Model Comparison

Compare different models with PFL approach:

# Test all models with PFL
for model in GRU LSTM CNN CNN_LSTM; do
    python main.py --approach PFL --model $model --data_path data/refit.csv --gpu 0 --FL_epochs 5 --verbose
done

Approach Comparison

Compare different federated learning approaches:

# Test all approaches with GRU model
for approach in PFL FLDP DPPFL SAM; do
    python main.py --approach $approach --model GRU --data_path data/refit.csv --gpu 0 --FL_epochs 5 --verbose
done

Privacy Analysis

Analyze the impact of privacy parameters:

# Test different privacy budgets
for epsilon in 0.5 1.0 2.0 5.0; do
    python main.py --approach FLDP --model GRU --data_path data/refit.csv --gpu 0 --epsilon $epsilon --verbose
done

📈 Results

The experiments generate detailed results including:

  • Training and test losses
  • Privacy guarantees (for DP methods)
  • Training time statistics
  • Model performance metrics

Results are displayed in the console and can be saved to files for further analysis.
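
If you prefer the metrics on disk, a small logging helper like the one below (an illustrative addition, not part of the repository) appends each round's losses to a CSV under results/:

import csv
import os

def log_round(path: str, round_idx: int, train_loss: float, test_loss: float):
    """Append one federated round's losses to a CSV, writing a header once."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["round", "train_loss", "test_loss"])
        writer.writerow([round_idx, train_loss, test_loss])

# e.g. log_round("results/pfl_gru.csv", 1, 0.42, 0.47)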

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📚 Citation

If you use this code in your research, please cite:

@article{ali2025privacy,
  title={Privacy Preserving Federated Learning for Energy Disaggregation of Smart Homes},
  author={Ali, Mazhar and Kumar, Ajit and Choi, Bong Jun},
  journal={IET Cyber-Physical Systems: Theory \& Applications},
  volume={10},
  number={1},
  pages={e70013},
  year={2025},
  publisher={Wiley Online Library}
}

🆘 Support

For questions and support, please open an issue on GitHub or contact the maintainers.

🔄 Updates

  • v1.0.0: Initial release with PFL, FLDP, SAM, and DPPFL approaches
  • Support for GRU, LSTM, CNN, and CNN_LSTM models
  • GPU acceleration support
  • Comprehensive privacy mechanisms
