Hardware Specs:
System RAM: 13.61 GB
GPU: Tesla T4, GPU RAM: 15.36 GB
CPU: Intel(R) Xeon(R) CPU @ 2.00GHz
Software Specs:
OS: Linux-6.1.58+-x86_64-with-glibc2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
CUDA version: 12.1
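To sanity-check your own environment against these specs, a minimal sketch (assuming PyTorch is installed, as in the requirements file) is:

```python
import sys
import platform
import torch

# Print the host OS, Python build, and (if available) the GPU and CUDA version.
print("OS:", platform.platform())
print("Python version:", sys.version)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("CUDA version:", torch.version.cuda)
```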
-
Open the Notebook:
- Open the notebook file (_notebooks/Final_Training_Pipeline.ipynb) in your Jupyter Notebook or Google Colab environment.
-
Setup Environment:
- Ensure that the notebook environment has all necessary dependencies installed (listed in _notebooks/requirements.txt). You may need to install additional libraries if they are not already present.
- The setup includes configuring Weights & Biases (wandb), which makes experimental results available online in graphical form; see the sketch below.
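The following is one way to install the requirements and log in to Weights & Biases from inside the notebook; the requirements path follows the repository layout above, and the project name is a placeholder you should replace with your own.

```python
# Install the pinned dependencies and the Weights & Biases client
# (the leading "!" is the Jupyter/Colab shell escape).
!pip install -r _notebooks/requirements.txt
!pip install wandb

import wandb

# Log in with your API key (prompts interactively if no key is cached) and
# open a run so that training metrics are streamed to the online dashboard.
wandb.login()
wandb.init(project="final-training-pipeline")  # placeholder project name
```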
-
Configure Dataset and Parameters:
- Locate the cell labeled "Configuration".
- Set the following parameters (an example configuration is sketched after this list):
- Dataset Name: Choose the name of the dataset you want to work with. The available options are:
- 'cola'
- 'mrpc'
- 'sst2'
- Batch Size: Set the batch size for training and evaluation. The recommended batch size used in our experiments is 16.
- Seed: Specify the seed value for reproducibility if required (we averaged our experiments over the seeds 40, 50, and 60).
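As an illustration, a Configuration cell along these lines would cover the parameters above; the variable names are assumptions for this sketch, not necessarily the identifiers used in the notebook.

```python
import random
import numpy as np
import torch

# Illustrative configuration values (names are assumptions, not the notebook's).
dataset_name = "sst2"   # one of: "cola", "mrpc", "sst2"
batch_size = 16         # recommended batch size for training and evaluation
seed = 40               # results were averaged over seeds 40, 50, and 60

# Seed all relevant RNGs for reproducibility.
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
```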
-
Set Experimental Combinations/Hyperparameters:
- Locate the "Experiments" cell in the notebook.
- The hyperparameters in each experimental combination (experiment_combs) can take the following values (an example combination is sketched after this list):
- layers_to_prune = None, "attention", "ffnn", "attention and ffnn"
- pruning_method = None, "l1", "fisher", "l1_and_fisher", "fisher_and_l1"
- pruning_ratio = any value between 0 and 1
- pruning_rate_decay = False or "linear"
- warm_up = the fraction of the epoch to skip as warm-up
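To make the structure concrete, one set of experiment combinations could look like the sketch below; the list-of-dictionaries layout and key names mirror the hyperparameters above but are an assumption about how experiment_combs is organised, so adapt it to the actual cell.

```python
# Illustrative experiment combinations; each entry pairs the hyperparameters
# listed above. The exact container used in the "Experiments" cell may differ.
experiment_combs = [
    # Baseline: no pruning.
    {
        "layers_to_prune": None,
        "pruning_method": None,
        "pruning_ratio": 0.0,
        "pruning_rate_decay": False,
        "warm_up": 0.0,
    },
    # Prune attention and FFNN layers with L1-then-Fisher scoring at a 30%
    # ratio, linear decay of the pruning rate, and 10% of the epoch as warm-up.
    {
        "layers_to_prune": "attention and ffnn",
        "pruning_method": "l1_and_fisher",
        "pruning_ratio": 0.3,
        "pruning_rate_decay": "linear",
        "warm_up": 0.1,
    },
]
```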
-
Run Cells Sequentially:
- After configuring the parameters, run the cells in the notebook sequentially. This ensures that dependencies are loaded, data is prepared, models are trained, and results are evaluated in the correct order.
-
Save Predictions to a Text File in Google Drive:
- After completing the notebook execution, you can save the model predictions to a text file in your Google Drive for further analysis or reporting.
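In Google Colab this can be done by mounting Drive and writing the predictions with standard Python I/O; the predictions variable and output file name below are placeholders.

```python
from google.colab import drive

# Mount Google Drive into the Colab filesystem (prompts for authorization).
drive.mount("/content/drive")

# Placeholder: replace with the prediction object the notebook produces,
# e.g. a list of predicted labels.
predictions = ["positive", "negative", "positive"]

# Write one prediction per line to a text file in Drive.
output_path = "/content/drive/MyDrive/predictions.txt"
with open(output_path, "w") as f:
    for pred in predictions:
        f.write(f"{pred}\n")

print(f"Saved {len(predictions)} predictions to {output_path}")
```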