The distribution of the different classes in the SST dataset is not equal.
To balance the QQP and SST training sets we add weights to our cross-entropy loss function such that a training sample from a small class is assigned a higher weight. This resulted in the following performance:
| Model name | SST accuracy | QQP accuracy | STS correlation |
| --- | --- | --- | --- |
Use the same command as in the previous section and add the arguments ```--para_sep True --weights True``` to reproduce the results.
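
A minimal sketch of how such class weighting can look in PyTorch; the class counts below are made-up placeholders, not the actual SST statistics:

```python
import torch
import torch.nn as nn

# Hypothetical counts for the 5 SST sentiment classes (illustrative only).
class_counts = torch.tensor([1092, 2218, 1624, 2322, 1288], dtype=torch.float)

# Inverse-frequency weights: samples from small classes get a higher weight.
weights = class_counts.sum() / (len(class_counts) * class_counts)

# Weighted cross-entropy: each sample's loss is scaled by its class weight.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)          # batch of 8 predictions over 5 classes
labels = torch.randint(0, 5, (8,))  # ground-truth class indices
loss = criterion(logits, labels)
```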
With this approach we could improve the performance on the SST dataset compared to the baseline.
...
#### Additional layers
Another problem we observed earlier was that the tasks contradict each other, i.e. with separate QQP training the paraphrasing accuracy increased but the other two accuracies decreased. We try to resolve these conflicts by adding a simple neural network with one hidden layer as the classifier for each task, instead of only a linear classifier. The idea is that each task gets more parameters to adjust that are not influenced by the other tasks. As activation function in the neural network we tested ReLU and tanh layers between the hidden layer and the output; the ReLU activation function performed better. Furthermore, we tried to freeze the BERT parameters in the last training epochs and only train the classifier parameters. This improved the performance, especially on the SST dataset.
| Model name | SST accuracy | QQP accuracy | STS correlation |
| --- | --- | --- | --- |
| Adam extra classifier training | 51.6% | 88.5% | 84.3% |
Use the same command as in the previous section and add the arguments ```--para_sep True --weights True --add_layers True``` to reproduce the results.
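
A sketch of what such a per-task classifier head and the late-epoch BERT freeze could look like; the layer sizes and names are assumptions, not the exact implementation behind ```--add_layers```:

```python
import torch.nn as nn

class TaskHead(nn.Module):
    """One hidden layer per task instead of a single linear classifier."""

    def __init__(self, hidden_size: int, num_labels: int, mid: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, mid),
            nn.ReLU(),                  # ReLU worked better than tanh here
            nn.Linear(mid, num_labels),
        )

    def forward(self, pooled):          # pooled: [batch, hidden_size]
        return self.net(pooled)

# Separate heads so each task gets parameters the other tasks cannot disturb.
sst_head = TaskHead(768, 5)   # 5 sentiment classes
qqp_head = TaskHead(768, 1)   # paraphrase logit
sts_head = TaskHead(768, 1)   # similarity score

def freeze_bert(bert_model: nn.Module) -> None:
    """In the last epochs, train only the classifier heads."""
    for p in bert_model.parameters():
        p.requires_grad = False
```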
### Generalisations on Custom Attention

For ideas 1 and 3 we recover the original self-attention with specific parameter values. We also found a paper that proposed the second idea. The goal is that the model keeps the original parameters but gains more freedom to manipulate them by adding a few extra parameters inside all the BERT layers. We later realized that all 3 ideas can be combined, resulting in 8 different models (1 baseline + 7 extra):
| Model name | SST accuracy | QQP accuracy | STS correlation |
| --- | --- | --- | --- |
Our baseline here differs because we used other starting parameters (larger batch size, fewer parameters) to reduce the training time for this experiment; see also ``submit_custom_attention.sh``.
Except for the SparsemaxSelfAttention STS correlation, all values declined. This is most likely due to overfitting: making the model even more complex makes the overfitting worse, so we get worse performance.
- In this step we tried three generalisations of the BERT self-attention via hyperparameters (see https://gitlab.gwdg.de/lukas.niegsch/language-ninjas/-/issues/54).
- Although involving more hyperparameters should improve the results, overfitting left us with slightly lower accuracy.
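
The three generalisations are not spelled out in this excerpt. As one hypothetical illustration of "recovering the original self-attention with specific parameter values", here is a sketch of attention with a learnable temperature; this is an assumed example, not necessarily one of the three ideas:

```python
import math
import torch
import torch.nn as nn

class TemperatureSelfAttention(nn.Module):
    """Scaled dot-product self-attention with one extra learnable parameter.

    For tau == 1.0 this is exactly the original self-attention, so the
    extra parameter only generalises the standard formulation.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.tau = nn.Parameter(torch.ones(1))  # the added hyperparameter

    def forward(self, x):  # x: [batch, seq_len, hidden_size]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (self.tau * math.sqrt(self.hidden_size))
        return torch.softmax(scores, dim=-1) @ v
```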
### [Split and reordered batches](https://gitlab.gwdg.de/lukas.niegsch/language-ninjas/-/milestones/12#tab-issues)
The para dataset is much larger than the other two. Originally, we trained para last and then evaluated all 3 tasks independently of each other. This has the effect that the model is optimized towards para but forgets information from sst and sts. We therefore moved para first and trained the other two last.
Furthermore, all 3 datasets are learned one after another. This means that the gradients may point in 3 different directions, which we follow one after another. However, our goal is to move in the general direction for all 3 tasks together. We tried splitting the datasets into 6 different chunks: (large para), (tiny sst, tiny para), (sts_size sts, sts_size para, sts_size sst). The important point is that the last 3 chunks have the same size, so we can train all tasks without para dominating the others.
Lastly, we tried training the batches for the last 3 steps in a round-robin fashion (sts, para, sst, sts, para, sst, ...).
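
A rough sketch of the chunking and round-robin interleaving described above, under the assumption that each dataset is already a list of batches; the exact split sizes are placeholders, not the training script's real logic:

```python
from itertools import chain

def make_schedule(para, sst, sts, cyclic=False):
    """Build the 6 chunks: most of para first, then tiny sst/para chunks,
    then three equally sized (sts-sized) chunks so para cannot dominate.
    Assumes sst has at least tiny + n batches."""
    n = len(sts)        # sts is the smallest dataset
    tiny = n // 2       # assumed size of the 'tiny' chunks
    schedule = [
        para[: -(tiny + n)],      # (large para) first
        sst[:tiny],               # (tiny sst)
        para[-(tiny + n):-n],     # (tiny para)
        sts,                      # (sts_size sts)
        para[-n:],                # (sts_size para)
        sst[tiny:tiny + n],       # (sts_size sst)
    ]
    if cyclic:
        # Round robin over the last 3 chunks: sts, para, sst, sts, para, ...
        last = schedule[-3:]
        schedule = schedule[:-3] + [list(chain.from_iterable(zip(*last)))]
    return schedule
```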
| Model name | SST accuracy | QQP accuracy | STS correlation |
| --- | --- | --- | --- |
We used the same script as for the custom attention, but only with the original self-attention. The reordered training is enabled by default because it gave the best performance. The round-robin training can be enabled using the ``--cyclic_finetuning`` flag.
The reordering improved the performance, most likely because para now comes first. The round-robin training did not improve it further; perhaps switching after every batch is too frequent.
- In this step we split the datasets and trained the resulting chunks in a specific order (see https://gitlab.gwdg.de/lukas.niegsch/language-ninjas/-/issues/59).
- The idea works: we gain at least 1% more accuracy on each task.
### Combined Loss
This could work as a kind of regularization: instead of training on a single task and overfitting to it, the model uses all losses to optimize.
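
A minimal sketch of the combined-loss idea, with dummy tensors standing in for the three task heads (the loss choices per task are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Dummy predictions/targets standing in for the three task heads.
sst_logits, sst_labels  = torch.randn(8, 5, requires_grad=True), torch.randint(0, 5, (8,))
qqp_logits, qqp_labels  = torch.randn(8, 1, requires_grad=True), torch.rand(8, 1)
sts_preds,  sts_targets = torch.randn(8, 1, requires_grad=True), torch.rand(8, 1) * 5

# One loss per task, summed into a single objective: the gradient step then
# serves all three tasks at once instead of overfitting to the current one.
loss = (
    nn.functional.cross_entropy(sst_logits, sst_labels)
    + nn.functional.binary_cross_entropy_with_logits(qqp_logits, qqp_labels)
    + nn.functional.mse_loss(sts_preds, sts_targets)
)
loss.backward()  # a single backward pass over the combined loss
```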
This could be achieved by generating more (true) data from the sst and sts datasets.
- Dropout and weight decay tuning for BERT (AdamW and Sophia)
## Member Contributions
Dawor, Moataz: Generalisations on Custom Attention, Split and reordered batches, analysis_dataset
Lübbers, Christopher L.: Part 1 complete; Part 2: sBERT, Tensorboard (metrics + profiler), sBERT baseline, SOPHIA, SMART, Optuna, sBERT-Optuna for the optimizer, Optuna for sBERT and BERT-SMART, Optuna for sBERT regularization, sBERT with combined losses, sBERT with gradient surgery, README experiments for those tasks, README methodology, final model, AI usage card
Niegsch, Lukas*: Generalisations on Custom Attention, Split and reordered batches, repository maintenance (merging, lfs, some code refactoring)
Schmidt, Finn Paul: sBERT multitask training, Sophia dropout layers, Sophia separated paraphrasing training, Sophia weighted loss, Optuna study on the dropout and hyperparameters, BERT baseline with Adam, BERT additional layers, error_analysis
## Submit commands
To train the sophia base model with optimised parameters, run: