This project addresses a critical challenge in planetary seismology: reducing the power and bandwidth required to transmit continuous seismic data from space missions back to Earth. By distinguishing seismic signals from noise before transmission, only the relevant seismic events need to be sent, conserving both power and bandwidth.
We leverage machine learning to build an efficient model that identifies seismic events in noisy planetary data, such as the recordings collected by the Apollo missions and the Mars InSight Lander. The goal is to minimize the amount of irrelevant data transmitted by detecting the onset of seismic events and discarding the noise.
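For context, the kind of onset detection described above is classically done with a short-term-average / long-term-average (STA/LTA) trigger (ObsPy ships implementations of this). The sketch below is a minimal NumPy version with illustrative window lengths and threshold, shown only as a baseline for comparison; it is not part of this project's pipeline.

```python
import numpy as np

def sta_lta_trigger(signal, sta_len=50, lta_len=500, threshold=3.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold.

    STA/LTA compares short-term signal energy to the long-term background
    level; a sharp rise in the ratio marks a candidate event onset.
    Window lengths and threshold here are illustrative, not mission values.
    """
    energy = signal ** 2
    # Moving averages via cumulative sums (simple, not optimized).
    csum = np.cumsum(np.concatenate(([0.0], energy)))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len
    # Align the two series so both windows end on the same sample.
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    # Map back to absolute sample indices (window-end positions).
    return lta_len - 1 + np.nonzero(ratio > threshold)[0]

# Synthetic trace: Gaussian noise with a strong burst injected at sample 3000.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 6000)
trace[3000:3200] += 10 * rng.normal(0.0, 1.0, 200)
picks = sta_lta_trigger(trace)
```

The ML models in this project aim to outperform this kind of fixed-threshold trigger on noisy planetary records, where a static threshold produces many false detections.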
Our approach combines several machine learning techniques to create a robust and accurate detection system:
- **CNN + RNN Architecture:**
- A convolutional network extracts local features from the raw waveform, and a recurrent network models their temporal evolution, allowing each window to be classified as noise or a seismic event.
- **Vision-Based Model (First-Break Picking):**
- A U-Net segmentation model treats the recording as an image and picks the first break, i.e., the onset of a seismic event.
- **Pre-Trained Model (PhaseNet):**
- We fine-tune the pre-trained PhaseNet model, which is specifically designed for seismic phase picking. PhaseNet is already well-suited to recognizing seismic phases and complements the other models.
- **Ensemble Voting Mechanism:**
- The seismic data is passed to the CNN + RNN model, the U-Net model, and the fine-tuned PhaseNet model.
- Each model processes the data independently and outputs a decision: whether the input is noise or a valid seismic event.
- The decisions are then combined using a majority voting system. If the majority of the models detect a seismic event, the data is flagged as relevant; otherwise, it is classified as noise.
This improves robustness: the ensemble of models increases the accuracy of detecting seismic events within noisy data.
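The voting step above can be sketched as follows. The three decision values are stand-ins for the per-model outputs described in the list, and `majority_vote` is a hypothetical helper name, not a function from the project's code.

```python
def majority_vote(decisions):
    """Flag a window as a seismic event if a strict majority of models agree.

    `decisions` is a sequence of 0/1 outputs, one per model
    (e.g. CNN + RNN, U-Net, fine-tuned PhaseNet).
    """
    return sum(decisions) > len(decisions) / 2

# Example: two of the three models detect an event in this window.
cnn_rnn_out, unet_out, phasenet_out = 1, 1, 0
is_event = majority_vote([cnn_rnn_out, unet_out, phasenet_out])
# is_event → True: the window is flagged as relevant for transmission.
```

With an odd number of models a strict majority always breaks ties, which is why three models is a convenient ensemble size here.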
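One minimal way to realize the CNN + RNN item above in PyTorch is sketched below. All layer sizes, window lengths, and the class name are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class CnnRnnDetector(nn.Module):
    """1-D CNN front end for local waveform features, a GRU for temporal
    context, and a linear head producing a per-window noise/event logit.
    All sizes are illustrative, not this project's actual configuration."""

    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, samples)
        f = self.cnn(x.unsqueeze(1))  # (batch, 32, samples / 16)
        f = f.transpose(1, 2)         # (batch, time, features) for the GRU
        _, h = self.rnn(f)            # h: (1, batch, hidden) final state
        return self.head(h[-1])       # (batch, 1) event logit

model = CnnRnnDetector()
logit = model(torch.randn(2, 1024))  # two windows of 1024 samples each
```

A sigmoid over the logit gives the event probability, which can then be thresholded into the 0/1 decision consumed by the voting mechanism.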
- **Apollo Missions:** Seismic data recorded by the Apollo lunar missions, with a particular focus on moonquakes.
- **Mars InSight Lander:** Seismic data collected from Mars, including marsquakes and other planetary phenomena.
- We used the dataset provided by the NASA Space Apps Challenge.
- Python 3
- PyTorch
- NumPy
- Pandas
- Matplotlib
- ObsPy (for reading seismic data)
- SciPy
- Pre-trained PhaseNet and U-Net models
This project is licensed under the MIT License. See the LICENSE file for details.