**Documents/Training-PPO.md**
## Explanation of fields in the inspector
We use parameters similar to those in Unity ML-Agents. If something is confusing, see their [documentation](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-PPO.md) for more details.

#### TrainerPPO.cs

* `isTraining`: Toggle this to switch between training and inference mode. Note that if `isTraining` is false when the game starts, the training part of the PPO model will not be initialized and you won't be able to train it in this run.
* `parameters`: You need to assign this field with a TrainerParamsPPO scriptable object.
* `continueFromCheckpoint`: If true, when the game starts, the trainer will try to load the saved checkpoint file to resume the previous training.
* `checkpointPath`: The path of the checkpoint, including the file name.
* `steps`: Shows the current step of the training.
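
As a minimal sketch, these fields can also be set from another script instead of the inspector. This assumes `TrainerPPO` is a `MonoBehaviour` with the fields above exposed as public members, which matches how they appear in the inspector but is not verified against the source:

```csharp
using UnityEngine;

public class TrainerSetup : MonoBehaviour
{
    public TrainerPPO trainer;  // drag the GameObject holding the TrainerPPO here

    void Awake()
    {
        trainer.isTraining = true;              // train in this run, don't just infer
        trainer.continueFromCheckpoint = true;  // resume from the checkpoint below if it exists
        trainer.checkpointPath = "Checkpoints/ppo.bytes";  // hypothetical example path
    }
}
```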

#### TrainerParamsPPO

* `learningRate`: Learning rate used to train the neural network.
* `maxTotalSteps`: Max number of steps the trainer will train for.
* `saveModelInterval`: The trained model will be saved every this many steps.
* `numEpochPerTrain`: For each training, the data in the buffer will be used repeatedly this many times.
* `useHeuristicChance`: See [Training with Heuristics](#training-with-heuristics).
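
If you prefer to create and fill the parameter asset from an editor script rather than the Create menu, a hedged sketch follows. The field names come from the list above; their exact types and the values shown are assumptions:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class CreatePPOParams
{
    [MenuItem("Tools/Create TrainerParamsPPO Asset")]
    static void Create()
    {
        var p = ScriptableObject.CreateInstance<TrainerParamsPPO>();
        p.learningRate = 3e-4f;       // a common PPO starting point
        p.maxTotalSteps = 1000000;    // stop training after 1M steps
        p.saveModelInterval = 10000;  // checkpoint every 10k steps
        p.numEpochPerTrain = 3;       // reuse each collected buffer 3 times
        p.useHeuristicChance = 0.2f;  // use scripted decisions 20% of the time
        AssetDatabase.CreateAsset(p, "Assets/TrainerParamsPPO.asset");
        AssetDatabase.SaveAssets();
    }
}
#endif
```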

#### RLModelPPO.cs

* `checkpointToLoad`: If you assign a model's saved checkpoint file to it, this will be loaded when the model is initialized, regardless of the trainer's loading. Might be used when you are not using a trainer.
* `Network`: You need to assign this field with a scriptable object that implements RLNetworkAC.cs (e.g., RLNetworkSimpleAC).
* `optimizer`: The type of optimizer to use for this model when training. You can also set its parameters here.

#### RLNetworkSimpleAC

This is a simple implementation of RLNetworkAC that you can create and plug in as a neural network definition for any RLModelPPO. PPO uses an actor/critic structure (see the PPO algorithm).
- `actorHiddenLayers`/`criticHiddenLayers`: Hidden layers of the network. The array size is the number of hidden layers. In each element, there are four parameters that define each layer. Those do not have default values, so you have to fill them.
  - size: Size of this hidden layer.

## Training with Heuristics

If you already know some policy that is better than a random policy, you might give the agent hints during training.
Note that your AgentDependentDecision is only used in training mode. The chance of using it in each step, for an agent with the script attached, depends on `useHeuristicChance`.
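
For illustration, here is a hedged sketch of what such a scripted decision might look like in the Pong example. `AgentDependentDecision` is the real base class named above, but the override shown (`Decide` and its parameter list) is a guessed signature and the observation layout is made up, so check the actual class before copying:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical heuristic: move the paddle toward the ball.
public class PongHeuristicDecision : AgentDependentDecision
{
    // Assumed signature -- verify against the AgentDependentDecision source.
    public override List<float> Decide(List<float> vectorObs, List<Texture2D> visualObs)
    {
        float ballY = vectorObs[0];    // assumes obs[0] is the ball's y position
        float paddleY = vectorObs[1];  // assumes obs[1] is the paddle's y position
        return new List<float> { ballY > paddleY ? 1f : -1f };  // +1 = up, -1 = down
    }
}
```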
## Create your own neural network architecture
If you want to have your own neural network architecture instead of the one provided by [`RLNetworkSimpleAC`](#rlnetworksimpleac), you can inherit the `RLNetworkAC` class to build your own neural network. See the [source code](https://github.com/tcmxx/UnityTensorflowKeras/blob/tcmxx/docs/Assets/UnityTensorflow/Learning/PPO/TrainerPPO.cs) of `RLNetworkAC.cs` for documentation.
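
A skeletal sketch of that inheritance is below. The abstract members you must override are deliberately not reproduced, since their names and parameters are version-specific; copy the real signatures from `RLNetworkAC.cs`, and treat the menu path and field here as placeholders:

```csharp
using UnityEngine;

[CreateAssetMenu(menuName = "NeuralNetworks/CustomACNetwork")]  // hypothetical menu path
public class CustomACNetwork : RLNetworkAC
{
    public int hiddenSize = 128;  // example tunable exposed in the inspector

    // Implement the abstract build method(s) declared in RLNetworkAC here:
    // construct the actor (policy) output and the critic (value) output from
    // the observation inputs the base class hands you.
}
```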

**Documents/Training-SL.md**
The example scene `UnityTensorflow/Examples/Pong/PongSL` shows how to use supervised learning.
   3. Assign the Trainer to the `Trainer` field of your Brain.
3. Create a Model
   1. Attach a `SupervisedLearningModel.cs` to any GameObject.
   2. Create a `SupervisedLearningNetworkSimple` scriptable object in your project and assign it to the `Network` field in `SupervisedLearningModel.cs`.
   3. Assign the created Model to the `modelRef` field of `TrainerMimic.cs`.
4. Create a Decision
* The `isCollectingData` field in the trainer needs to be true to collect training data.
## Explanation of fields in the inspector

#### TrainerMimic.cs

* `isTraining`: Toggle this to switch between training and inference mode. Note that if `isTraining` is false when the game starts, the training part of the model will not be initialized and you won't be able to train it in this run.
* `parameters`: You need to assign this field with a TrainerParamsMimic scriptable object.
* `continueFromCheckpoint`: If true, when the game starts, the trainer will try to load the saved checkpoint file to resume the previous training.
* `isCollectingData`: Whether the trainer is collecting training data from Agents with a Decision attached.
* `dataBufferCount`: Current count of collected data.
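
As a small hedged sketch, these two runtime fields can be polled to report collection progress. It assumes they are public members of `TrainerMimic`, as their presence in the inspector suggests, but this is not verified against the source:

```csharp
using UnityEngine;

public class CollectionProgress : MonoBehaviour
{
    public TrainerMimic trainer;     // the trainer whose data buffer we watch
    public int targetCount = 10000;  // e.g. your requiredDataBeforeTraining value

    void Update()
    {
        if (trainer.isCollectingData && trainer.dataBufferCount < targetCount)
            Debug.Log($"Collected {trainer.dataBufferCount}/{targetCount} samples");
    }
}
```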

#### TrainerParamsMimic

* `learningRate`: Learning rate used to train the neural network.
* `maxTotalSteps`: Max number of steps the trainer will train for.
* `saveModelInterval`: The trained model will be saved every this many steps.
* `requiredDataBeforeTraining`: How much collected data is needed before the trainer starts to train the neural network.
* `maxBufferSize`: Max buffer size of collected data. If the data buffer count exceeds this number, old data will be overwritten. Set this to 0 to remove the limit.
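
To make the overwrite rule concrete, here is a plain C# illustration of the described behavior (not the trainer's actual buffer code): once the buffer is full, each new sample replaces the oldest one.

```csharp
using System;

class OverwritingBuffer
{
    readonly float[] data;
    int next, count;

    public OverwritingBuffer(int maxBufferSize) => data = new float[maxBufferSize];

    public void Add(float sample)
    {
        data[next] = sample;               // overwrites the oldest slot once full
        next = (next + 1) % data.Length;
        count = Math.Min(count + 1, data.Length);
    }

    public int Count => count;
}
```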

#### SupervisedLearningModel.cs

* `checkpointToLoad`: If you assign a model's saved checkpoint file to it, this will be loaded when the model is initialized, regardless of the trainer's loading. Might be used when you are not using a trainer.
* `Network`: You need to assign this field with a scriptable object that implements SupervisedLearningNetwork.cs (e.g., SupervisedLearningNetworkSimple).
* `optimizer`: The optimizer to use for this model when training. You can also set its parameters here.

#### SupervisedLearningNetworkSimple

This is a simple implementation of SupervisedLearningNetwork that you can create and plug in as a neural network definition for any SupervisedLearningModel.
- `hiddenLayers`: Hidden layers of the network. The array size is the number of hidden layers. In each element, there are four parameters that define each layer. Those do not have default values, so you have to fill them.
  - size: Size of this hidden layer.

You can also use a [conditional GAN](https://arxiv.org/abs/1411.1784) model instead of the plain supervised learning model.
Note that currently the GAN network we made does not support visual observation.

#### Steps

Follow mostly the same steps as for regular supervised learning (see [Overall Steps](#overall-steps)), but change step 3 to create a GAN model instead, and change the `TrainerParamsMimic` in step 2-2 to `TrainerParamsGAN`.
- Create a GAN model:
  1. Attach a `GANModel.cs` to any GameObject.
  2. Create a `GANNetworkDense` scriptable object in your project and assign it to the `Network` field in `GANModel.cs`.
  3. Assign the created Model to the `modelRef` field of `TrainerMimic.cs`.

#### GANModel.cs

* `checkpointToLoad`: If you assign a model's saved checkpoint file to it, this will be loaded when the model is initialized, regardless of the trainer's loading. Might be used when you are not using a trainer.
* `Network`: You need to assign this field with a scriptable object that defines the GAN network (e.g., GANNetworkDense).
* `generatorL2LossWeight`: L2 loss weight of the generator. Usually 0 is fine.
* `discriminatorOptimizer`: The optimizer to use when training the discriminator.
* `initializeOnAwake`: Whether to initialize the GAN model on awake based on the shapes defined above. For an ML-Agents environment, set this to false.

#### TrainerParamsGAN

See [TrainerParamsMimic](#trainerparamsmimic) for the parameters not listed below.
* `discriminatorTrainCount`: How many times the discriminator will be trained in each training step.
* `generatorTrainCount`: How many times the generator will be trained in each training step.
* `usePrediction`: Whether to use the [prediction method](https://www.semanticscholar.org/paper/Stabilizing-Adversarial-Nets-With-Prediction-Yadav-Shah/ec25504486d8751e00e613ca6fa64b256e3581c8) to stabilize the training.
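
These two counts control an alternating update schedule. The real loop lives in the GAN trainer; the sketch below only illustrates the schedule the two parameters describe, with placeholder update functions:

```csharp
class GanSchedule
{
    // Illustrative only: one training step under these parameters.
    public void TrainOneStep(int discriminatorTrainCount, int generatorTrainCount)
    {
        for (int i = 0; i < discriminatorTrainCount; i++)
            UpdateDiscriminatorOnce();  // placeholder: one gradient step on D
        for (int i = 0; i < generatorTrainCount; i++)
            UpdateGeneratorOnce();      // placeholder: one gradient step on G
    }

    void UpdateDiscriminatorOnce() { /* one discriminator update */ }
    void UpdateGeneratorOnce() { /* one generator update */ }
}
```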
## Create your own neural network architecture
If you want to have your own neural network architecture instead of the one provided by [`SupervisedLearningNetworkSimple`](#supervisedlearningnetworksimple), you can inherit the `SupervisedLearningNetwork` class to build your own neural network. See the [source code](https://github.com/tcmxx/UnityTensorflowKeras/blob/tcmxx/docs/Assets/UnityTensorflow/Learning/Mimic/SupervisedLearningNetwork.cs) of `SupervisedLearningNetwork.cs` for documentation.
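
The pattern mirrors the PPO case above: a hedged skeleton only, with the actual abstract members to be copied from `SupervisedLearningNetwork.cs` rather than from this sketch:

```csharp
using UnityEngine;

[CreateAssetMenu(menuName = "NeuralNetworks/CustomSLNetwork")]  // hypothetical menu path
public class CustomSLNetwork : SupervisedLearningNetwork
{
    // Implement the abstract network-building method(s) declared in
    // SupervisedLearningNetwork here, mapping observation inputs to the
    // action output the model should imitate.
}
```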