I built a wrapper in which the DL algorithms train at initialisation (a rough sketch of this pattern follows the list below):
- If self-supervised, training data can be passed to the algorithm to learn from.
- If supervised, or if no data is given, data is simulated.
- For IVIM-NET this happens within the wrapper.
- For Super IVIM DC this happens in the package (as I did not see an option to provide training data).
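A minimal sketch of that pattern, assuming a hypothetical wrapper class (the class name, method names and simulation ranges below are placeholders, not the actual wrapper API): training happens in `__init__`, using the supplied data for self-supervised algorithms and simulated signals otherwise.

```python
import numpy as np

class DLFitWrapper:
    """Illustrative sketch only: a DL IVIM fitter that trains at initialisation."""

    def __init__(self, bvalues, training_data=None, supervised=False):
        self.bvalues = np.asarray(bvalues, dtype=float)
        if supervised or training_data is None:
            # supervised fitting, or no data supplied: fall back to simulated signals
            training_data = self._simulate_signals(n=100000)
        self.model = self._train(training_data)  # training happens here, at init

    def _simulate_signals(self, n, snr=20):
        """Simulate IVIM signals S(b) = f*exp(-b*D*) + (1-f)*exp(-b*D) plus noise."""
        rng = np.random.default_rng(42)
        f = rng.uniform(0.0, 0.4, (n, 1))
        Dt = rng.uniform(0.0005, 0.003, (n, 1))
        Dp = rng.uniform(0.01, 0.1, (n, 1))
        signal = f * np.exp(-self.bvalues * Dp) + (1 - f) * np.exp(-self.bvalues * Dt)
        return signal + rng.normal(0, 1 / snr, signal.shape)

    def _train(self, data):
        # in the real wrapper this step delegates to the DL package (e.g. IVIM-NET)
        ...

    def fit_batch(self, signals):
        """Evaluate all test signals in one batched call."""
        return self.model(signals)
```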
Then testing occurs. For speed, I pass all testing data in one go. Also, since deep learning is known to outperform LSQ predominantly on noisy data, and to actually do a poor job on noise-free data, I made a second, DL-specific dataset that contains much more noise (sketched below), as well as DL-specific boundaries for passing the unit tests.
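The noisier, DL-specific test data could be generated along these lines (a hedged sketch; the helper name and the SNR value are assumptions, not necessarily what the test suite uses):

```python
import numpy as np

def add_rician_noise(signal, snr=5, rng=None):
    """Hypothetical helper: make a noisier, DL-specific copy of the test signals
    by adding Rician noise at a low SNR (the SNR value is only illustrative)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    real = signal + rng.normal(0, 1 / snr, signal.shape)
    imag = rng.normal(0, 1 / snr, signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

# all noisy test signals are then passed to the trained network in a single batched call, e.g.
# noisy_signals = add_rician_noise(test_signals)
# f, Dp, Dt = wrapper.fit_batch(noisy_signals)
```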
self.optim = 'adam'  # optimiser used; implemented choices are 'adam', 'sgd', 'sgdr' and 'adagrad'
self.lr = 0.00003  # learning rate
self.patience = 10  # number of epochs without improvement that the network waits before deciding it has found its optimum
self.batch_size = 128  # number of signal curves taken along per iteration
self.maxit = 500  # max iterations per epoch
self.split = 0.9  # split of training and validation data
self.load_nn = False  # load a stored neural network instead of retraining
self.loss_fun = 'rms'  # loss function used for the model; 'rms' is root mean square (linear-regression-like), 'L1' is the L1 norm (less focus on outliers)
self.skip_net = False  # skip the network training and evaluation
self.scheduler = False  # as discussed in the article, the LR is important; this option reduces the LR iteratively when there is no improvement over 5 consecutive epochs
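As an illustration of how these training settings could map onto PyTorch objects (a sketch under assumptions: the `hp` argument, the helper name and the `ReduceLROnPlateau` choice are mine, not necessarily what the underlying packages do):

```python
import torch

def build_optimizer(net, hp):
    """Hypothetical helper mapping the hyperparameters above onto PyTorch objects."""
    if hp.optim == 'adam':
        optimizer = torch.optim.Adam(net.parameters(), lr=hp.lr)
    elif hp.optim == 'adagrad':
        optimizer = torch.optim.Adagrad(net.parameters(), lr=hp.lr)
    else:
        # plain 'sgd'; the 'sgdr' warm-restart variant is omitted from this sketch
        optimizer = torch.optim.SGD(net.parameters(), lr=hp.lr)
    scheduler = None
    if hp.scheduler:
        # reduce the LR when the validation loss has not improved for 5 consecutive epochs
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=5)
    return optimizer, scheduler
```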
self.dropout = 0.1  # amount of dropout; 0 = no dropout. Roughly 20% (0.2) is commonly recommended, although smaller networks may want less dropout
self.batch_norm = True  # True/False turns batch normalisation on or off
self.parallel = 'parallel'  # defines whether each parameter is estimated by its own network or whether one shared network is used instead
self.con = 'sigmoid'  # constraint function; 'sigmoid' maps the output between the min/max bounds, 'abs' takes the absolute value of the output, 'none' leaves the output unconstrained
self.fitS0 = True  # whether to fit S0 (True) or fix it to 1 (for normalised signals); I prefer fitting S0 as it takes along the potential error in S0
self.depth = 2  # number of layers
self.width = 0  # new option that determines the network width; setting it to 0 makes it as wide as the number of b-values
boundsrange = 0.3 * (np.array(self.cons_max) - np.array(self.cons_min))  # ensure that we are on the most linear part of the sigmoid function
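To illustrate the reasoning behind `boundsrange`, here is a hedged sketch assuming the 'sigmoid' constraint maps raw network outputs onto the parameter bounds (the function and argument names are placeholders): widening the interval by 30% of its range on each side keeps values near the original limits on the quasi-linear part of the sigmoid, away from the saturated tails where gradients vanish.

```python
import numpy as np

def constrain(raw_output, cons_min, cons_max):
    """Illustrative sketch: map an unconstrained network output onto the
    (widened) parameter bounds with a sigmoid."""
    cons_min = np.asarray(cons_min, dtype=float)
    cons_max = np.asarray(cons_max, dtype=float)
    # widen the bounds by 30% of their range so that values near the original
    # limits still fall on the quasi-linear part of the sigmoid
    boundsrange = 0.3 * (cons_max - cons_min)
    lo, hi = cons_min - boundsrange, cons_max + boundsrange
    return lo + 1 / (1 + np.exp(-raw_output)) * (hi - lo)
```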