This repository hosts all benchmarking, evaluation, and fine-tuning experiments on large pre-trained time-series models (LPTMs) under Dr. Prakash’s supervision.
I evaluated Foundational Time-Series Models on eight standard datasets:
- ETT1
- ETT2
- Flu-US
- PEMS-Bay
- NY-Bike Demand
- NY-Taxi Demand
- Nasdaq
- M4
I compared and fine-tuned several foundational time-series models, systematically fine-tuning each one to assess its adaptability across datasets.
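At a high level, a fine-tuning run of this kind adapts a forecaster to a new dataset by minimizing forecast error over (context window, next value) pairs. The sketch below is purely illustrative, assuming a plain linear head trained with gradient descent in NumPy as a stand-in for updating a pretrained model's output head; the function name, window length, and learning rate are not from the repo's actual code.

```python
import numpy as np

def finetune_head(series, context_len=8, lr=0.01, epochs=200):
    """Illustrative sketch only: fit a linear forecasting head on a 1-D
    series via gradient descent, standing in for fine-tuning a
    pretrained model's head on a new dataset."""
    # Build (context window, next value) training pairs from the series.
    X = np.array([series[i:i + context_len]
                  for i in range(len(series) - context_len)])
    y = np.asarray(series[context_len:])
    w = np.zeros(context_len)                   # head weights
    for _ in range(epochs):
        residual = X @ w - y                    # forecast errors
        w -= lr * 2 * X.T @ residual / len(y)   # MSE gradient step
    return w, float(np.mean((X @ w - y) ** 2))

# Toy run on a clean sine wave; the fitted head should track it closely.
series = np.sin(np.linspace(0.0, 10.0, 60))
weights, train_mse = finetune_head(series)
```

The same loop shape carries over to the real setting: swap the linear head for the model's trainable parameters and the MSE step for an optimizer update.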
In `backend/`, you'll find a RESTful Flask API deployed on a private NVIDIA DGX server behind a reverse proxy, with continuous testing via Postman and Ngrok. It supports:
- Model loading & versioning
- Dataset uploads
- On-the-fly fine-tuning
- Inference endpoints
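As a rough sketch of what one of the inference endpoints might look like, the snippet below shows a minimal Flask route. The route path, JSON payload shape, and the `predict_next` stub are assumptions for illustration only, not the API's actual surface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_next(history):
    # Placeholder forecaster: a real deployment would invoke the loaded
    # foundation model here. This stub returns a naive last-value forecast.
    return history[-1]

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    history = payload.get("history", [])
    if not history:
        return jsonify({"error": "history must be a non-empty list"}), 400
    return jsonify({"forecast": predict_next(history)})

# To serve locally: app.run(host="0.0.0.0", port=5000)
```

In a deployment like the one described, the Flask app would sit behind the reverse proxy, with Ngrok exposing it for Postman test runs.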
- Data: Raw benchmark datasets live in `data/`.
- Visualizations: All performance charts and plots are saved under `benchmark_visualizations/`.
Detailed notebooks and logs of each experimental run are in `experiments/`. These include:
- Training/fine-tuning configurations
- Metrics tracking (MAE, RMSE, etc.)
- Ablation studies and hyperparameter sweeps
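For reference, the two headline metrics tracked above can be computed as follows (a NumPy sketch; the array names are illustrative, not tied to the notebooks' variables):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of forecast errors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root Mean Squared Error: penalizes large errors more than MAE."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Example: one large error inflates RMSE relative to MAE.
errors_mae = mae([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])    # 2/3
errors_rmse = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3)
```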