Hamza A. Abushahla, Dara Varam, and Dr. Mohamed I. AlHajri
This repository contains code and resources for the paper: "Cognitive Radio Spectrum Sensing on the Edge: A Quantization-Aware Deep Learning Approach".
As wireless communications evolve with 5G and emerging 6G technologies, the dramatic growth in mobile and IoT devices has strained the available radio spectrum, exposing the limitations of traditional static spectrum allocation. Cognitive radio (CR) allows secondary users to access spectrum holes without interfering with licensed primary users. Recent advances in deep learning, particularly convolutional neural networks (CNNs), have enabled robust feature extraction from raw in-phase/quadrature (I/Q) data, even under complex channel conditions and noise uncertainty. However, while state-of-the-art (SOTA) architectures like DeepSense and ParallelCNN can quickly detect multiple spectrum holes, their high computational complexity poses a significant challenge for real-time applications on devices with limited resources. We introduce a quantization-aware training (QAT) based approach to optimizing these SOTA CNN models for wideband spectrum sensing, tailored for deployment on resource-constrained edge devices. In particular, our approach addresses these problems as follows:
- We modify existing architectures to optimize them for quantization and hardware deployment.
- We provide a comprehensive evaluation of our quantized models across various wireless technologies and under different signal-to-noise ratio (SNR) conditions.
- The optimized models have been successfully deployed on the Sony Spresense platform, achieving up to 72% better memory efficiency, 51% lower latency, and 7% lower power consumption.
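At the core of this approach is quantization-aware training: fake-quantization ops are inserted during training so the network learns weights that survive INT8 inference. Below is a minimal sketch using the TensorFlow Model Optimization Toolkit; the CNN is a hypothetical stand-in, not the actual DeepSense/ParallelCNN definition (see `/training_scripts` for those):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical stand-in CNN (the real DeepSense / ParallelCNN definitions
# live in /training_scripts); the input is one 128-sample I/Q window.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 2, 1)),
    tf.keras.layers.Conv2D(32, (7, 2), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="sigmoid"),  # one output per sub-band
])

# Wrap the model with fake-quantization nodes so training adapts the
# weights to INT8 precision.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # placeholder value
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```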
To get started, please ensure version compatibility with the packages listed in `requirements.txt`. We encourage creating a fresh environment on your machine to avoid dependency conflicts.
We evaluate our models using the publicly available SDR and LTE datasets obtained from here.
For SDR, in `bin2hdf5.py`, we set `nsamples_per_file = 50000` (to match the reported number of occurrences per channel) and `buf = 32` and `128` (controls the window size). We use a test split of 0.1 (90% for training + validation, 10% for testing) and keep `stride = 12` (controls the overlap between consecutive I/Q windows).
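For intuition, the windowing above amounts to slicing the I/Q stream into overlapping fixed-length windows. A minimal sketch (function and variable names are ours; `bin2hdf5.py` may organize this differently):

```python
import numpy as np

def make_windows(iq, buf=128, stride=12):
    """Slice a complex I/Q stream into overlapping windows of length `buf`,
    advancing `stride` samples each time (illustrative only)."""
    n = (len(iq) - buf) // stride + 1
    out = np.empty((n, buf, 2), dtype=np.float32)
    for k in range(n):
        chunk = iq[k * stride : k * stride + buf]
        out[k, :, 0] = chunk.real  # in-phase component
        out[k, :, 1] = chunk.imag  # quadrature component
    return out

# Example on synthetic data: 50,000 complex samples -> overlapping windows.
windows = make_windows(np.random.randn(50000) + 1j * np.random.randn(50000))
```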
For LTE, in `generateLTEDataset.m`, we set `niq = 32` and `128` (controls the window size) and vary `snr_db` between -20 dB and 20 dB. We keep the same cross-validation split (train: 90%, test: 10%) of the generated data; the rest of the simulation settings remain as provided by the original authors.
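For illustration, a 90/10 split like the one above can be reproduced in Python as follows (a minimal sketch with dummy stand-ins for the generated windows and labels):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the generated I/Q windows and per-sub-band labels.
X = np.random.randn(1000, 128, 2).astype(np.float32)
y = np.random.randint(0, 2, size=(1000, 16))

# 90% train(+validation) / 10% test, matching the split described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=42)
```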
Models are trained (with modifications) according to the original DeepSense [1] and ParallelCNN [2] architectures. To train the standard and QAT versions of a model, navigate to `/training_scripts` and look at the different architectures and datasets available. Note that the full model training details (including training parameters such as `batch_size`, `epochs`, `learning_rate`, etc.) can be found in the respective `.py` files corresponding to each configuration.
For example:
Spectrum-Sensing/
├──training_scripts/
│ ├──DeepSense/
│ │ ├──LTE/
│ │ └──SDR/
│ │ ├──Deepsense128QAT_SDR.py
│ │ └──Deepsense128_SDR.py
This structure shows how to access the DeepSense architecture trained on the SDR dataset with a window size of 128, in both the standard and QAT variants.
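For orientation, the training step inside those files can be sketched as follows, continuing the QAT example above and assuming `x_train`/`y_train`/`x_val`/`y_val` are already loaded (the hyperparameter values here are placeholders, not the paper's settings):

```python
# Placeholder hyperparameters; the actual values live in the corresponding
# .py file for each configuration (e.g. Deepsense128QAT_SDR.py).
BATCH_SIZE = 64
EPOCHS = 50

history = qat_model.fit(
    x_train, y_train,                  # I/Q windows and occupancy labels
    validation_data=(x_val, y_val),
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
)
```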
A sample of trained models in `.tflite` and `.h` formats is available in the `/trained_models` directory. The folder structure is as follows:
Spectrum-Sensing/
├──trained_models/
│ ├──DeepSense/
│ │ ├──LTE/
│ │ └──SDR/
│ │ ├──DeepSense_128_normal_SDR_best_overall_model.tflite
│ │ ├──DeepSense_128_normal_SDR_best_overall_model.h
│ │ ├──DeepSense_128_QAT_SDR_best_overall_model_INT8.tflite
│ │ └──DeepSense_128_QAT_SDR_best_overall_model_INT8.h
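For reference, producing such an INT8 `.tflite` file from the QAT model sketched earlier typically looks like this (an illustrative sketch, not the repository's exact export code):

```python
import tensorflow as tf

# Convert the QAT-trained Keras model to a TFLite flatbuffer. Because the
# model carries fake-quantization information, Optimize.DEFAULT yields an
# INT8 quantized model.
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("DeepSense_128_QAT_SDR_best_overall_model_INT8.tflite", "wb") as f:
    f.write(tflite_model)
```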
The best-performing models from each configuration, selected based on validation F1-score, were deployed on the Sony Spresense using TensorFlow Lite for Microcontrollers (TFLM).
Deployment on the Sony Spresense involves converting the `.tflite` model into a byte array and integrating it into embedded C code. The steps are outlined in our Sony Spresense TFLite Guide. Specifically:
- Model Conversion: The trained `.tflite` models were converted into `.h` header files (a sketch of this step follows the list).
- Integration & Flashing: The models were integrated into the Arduino sketches (`.ino` files) located in `inference_scripts` and flashed onto the device using the Arduino IDE.
- Inference Testing: Each script runs the model for 1,000 inferences, reporting the mean and standard deviation of inference times in milliseconds (ms).
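The byte-array conversion in the first step is often done with `xxd -i`; a rough Python equivalent (function and variable names are ours, not from the guide) is:

```python
# Wrap a .tflite flatbuffer in a C header so it can be compiled into the
# Arduino sketch (minimal equivalent of `xxd -i`; names are illustrative).
def tflite_to_header(tflite_path, header_path, var_name="g_model"):
    data = open(tflite_path, "rb").read()
    with open(header_path, "w") as f:
        f.write(f"alignas(8) const unsigned char {var_name}[] = {{\n")
        for i in range(0, len(data), 12):
            row = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
            f.write(f"  {row},\n")
        f.write("};\n")
        f.write(f"const unsigned int {var_name}_len = {len(data)};\n")

tflite_to_header("DeepSense_128_QAT_SDR_best_overall_model_INT8.tflite",
                 "model_data.h")
```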
Datasets were converted into CSV format and uploaded to an SD card, which was read by the Sony Spresense through the extension board.
Power consumption was measured using the Yocto-Amp current sensor, connected in series with an external 5V source.
If you use our work for your own research, please cite us using the BibTeX entry below:
@Article{abushahla2025cognitive,
AUTHOR = {Abushahla, Hamza A. and Varam, Dara and AlHajri, Mohamed I.},
TITLE = {Cognitive Radio Spectrum Sensing on the Edge: A Quantization-Aware Deep Learning Approach},
JOURNAL = { },
VOLUME = {},
YEAR = {},
NUMBER = {},
ARTICLE-NUMBER = {},
URL = {},
ISSN = {},
ABSTRACT = {},
DOI = {}
}
You can also reach out through email to:
- Hamza Abushahla - b00090279@alumni.aus.edu
- Dara Varam - b00081313@alumni.aus.edu
- Dr. Mohamed AlHajri - mialhajri@aus.edu