User Tutorial 0. Quick walkthrough
- Download the latest release, and install the Herd Agent on your own machine (or any other on your same subnet).
Badger has three tabs:
- Editor
- Monitor
- Reports
In this tutorial, we will take a quick look at each of them.
In the Editor tab, we can open, edit, and save experiments, and even run them locally. We will load an existing experiment:
experiments/examples/uv-cacla-pid.simion.proj
(there are several other examples in this folder)
We can now view and edit the parameters of the experiment. The selected world is Underwater-vehicle, and a (9) appears beside the name of the project (uv-cacla-pid). This number is the number of different parameter combinations: running this example will require 9 experimental units.
If we scroll down a bit we can see three parameters that have been forked (given more than one value):
These three parameters have been given three different values each, thus resulting in 3*3=9 parameter combinations, or 9 experimental units.
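Conceptually, the experimental units are the Cartesian product of the forked parameter values. A minimal sketch (the parameter names and values below are illustrative, not taken from the actual uv-cacla-pid project):

```python
from itertools import product

# Hypothetical forked parameters: two forks with three values each,
# giving 3 * 3 = 9 combinations (as in the tutorial's batch).
forked = {
    "Param-A": [0.1, 0.2, 0.3],
    "Param-B": [1, 2, 3],
}

# Each combination of values is one experimental unit (no forks left).
units = [dict(zip(forked, combo)) for combo in product(*forked.values())]
print(len(units))  # 9
```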
We can add more values to any of these forked parameters, remove some, fork other parameters (Right-click->Fork), or unfork them (Right-click->Unfork).
Once we are happy with our settings, we can either:
- run the complete experiment using the available Herd Agents (Launch)
- run one experimental unit locally and visualize it (Play beside the name of the experiment)
In this tutorial, we will do the former. First, the experiment (with its forked parameters) will be saved, wherever we choose, as an experiment batch: a file referencing the 9 experimental units, none of which has forked parameters. All these experimental units are stored in a folder with the same name as the experiment batch. Next, Badger will switch to the Monitor tab.
In the Monitor tab, we can view the Herd Agents detected on our subnet, view their capabilities (OS, number of cores, CUDA support…), and select those we want to use. If we reached this tab by pressing Launch in the Editor tab, the experiment batch will already be selected and loaded, but we can also load an existing experiment batch.
Once an experiment batch has been loaded, it can be run by pressing the Play button. Experimental units will be grouped into jobs and sent to the Herd Agents according to their number of cores. Badger will send the appropriate binaries to each Herd Agent depending on its OS: Linux-64, Win-64 or Win-32. On the right, we can monitor the evolution of the experiment:
- On the top, we can see the average rewards obtained in the evaluation episodes.
- On the bottom, for each job we can see the IP address of the Herd Agent running it and the state of every experimental unit (sending files, running, …); clicking on the experiment icon shows the complete log generated.
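The grouping of experimental units into jobs can be pictured as chunking the unit list by each agent's core count. A simplified sketch (not Badger's actual scheduler; agent addresses and core counts are made up):

```python
def group_into_jobs(unit_ids, cores_per_agent):
    """Assign units to agents in chunks no larger than each agent's core count."""
    jobs, i = [], 0
    for agent, cores in cores_per_agent.items():
        if i >= len(unit_ids):
            break  # all units assigned
        jobs.append((agent, unit_ids[i:i + cores]))
        i += cores
    return jobs

# 9 experimental units, two hypothetical Herd Agents with 4 and 8 cores
jobs = group_into_jobs(list(range(9)), {"192.168.1.10": 4, "192.168.1.11": 8})
# The first agent receives 4 units, the second the remaining 5
```

A real scheduler would also handle the case where the batch has more units than total cores, dispatching new jobs as agents finish.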
Once all the experimental units have been executed, we can click the Generate Reports button and Badger will switch to the Reports tab.
In this last tab, we can generate plots and statistics from the log files generated by the experimental units we just executed (the log files are automatically sent back to Badger).
On the left, we can see the list of experiments (we only ran one, but we could have run several at once) and the fork hierarchy with their values. We can group experimental units by the value of a forked parameter (Right-click->Group by this fork). This selects only one experimental unit for each value given to the forked parameter. In this example, grouping by the fork Actor-Gain selects one experimental unit with value 0.001, one with value 0.002, and one with value 0.003. How the units are selected can be configured in Track groups below (shown after selecting Group by this fork).
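Grouping by a fork amounts to keeping one experimental unit per distinct value of that fork. A minimal sketch (the selection criterion here is simply "first seen"; Badger lets us configure it via Track groups):

```python
def group_by_fork(units, fork):
    """Keep one unit per distinct value of the given forked parameter."""
    chosen = {}
    for unit in units:
        chosen.setdefault(unit[fork], unit)  # first unit seen for each value wins
    return list(chosen.values())

# 9 hypothetical units cycling through the three Actor-Gain values
units = [{"Actor-Gain": g, "id": i}
         for i, g in enumerate([0.001, 0.002, 0.003] * 3)]
selected = group_by_fork(units, "Actor-Gain")
print(len(selected))  # 3: one unit per Actor-Gain value
```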
Below, we have a list of the variables saved in the log files. We select those we want to analyze, select whether we want the actual values or their absolute values (e.g., for absolute errors), and choose the source:
- Last evaluation: only consider the values logged in the last evaluation episode
- Evaluation averages: consider the averages of each evaluation episode (one value per evaluation episode)
- All training episodes: all the values logged in the training episodes. Each training episode will generate a different series of values.
- All evaluation episodes: all the values logged in the evaluation episodes. Each evaluation episode will generate a different series of values.
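The four source options can be summarized as follows, assuming a log structured as one list of values per episode (the option names as string keys and this data layout are illustrative, not Badger's file format):

```python
def select_source(eval_episodes, train_episodes, source):
    """Return the values (or series of values) for the chosen data source."""
    if source == "last-evaluation":
        return eval_episodes[-1]                            # values of the last evaluation episode
    if source == "evaluation-averages":
        return [sum(ep) / len(ep) for ep in eval_episodes]  # one average per evaluation episode
    if source == "all-training-episodes":
        return train_episodes                               # one series per training episode
    if source == "all-evaluation-episodes":
        return eval_episodes                                # one series per evaluation episode
    raise ValueError(f"unknown source: {source}")

evals = [[1.0, 2.0], [3.0, 5.0]]
averages = select_source(evals, [], "evaluation-averages")
print(averages)  # [1.5, 4.0]
```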
Below the list of variables, we have some additional parameters:
- Group by experiment: If the batch has several experiments, check this to group tracks by experiment.
- Limit tracks: If checked, we can limit the number of tracks (experimental units). The selection is configured in the same way as the Group by fork.
- Use fork selection: If checked, a check-box will appear beside every fork-value in the fork hierarchical view so that we can select only some of them.
- Time offset: If not zero, only values logged after Time offset seconds will be considered. This is useful with the FAST world, where it is recommended to ignore the first 30 simulated seconds so that the system has time to stabilize.
- Resample data: We can use this, for example, to reduce the number of values in a track and improve the readability of the plot.
- Min. Length: If not zero, only tracks with at least Min. Length seconds recorded will be considered.
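Time offset, Min. Length and Resample data are essentially post-processing filters on each track's (time, value) samples. An illustrative sketch (the real implementation in Badger may differ; the naive every-k-th resampling here is only one possible strategy):

```python
def filter_track(samples, time_offset=0.0, min_length=0.0, max_points=0):
    """Apply time-offset, minimum-length and resampling filters to one track."""
    # Discard values logged before the time offset
    # (e.g. the first 30 simulated seconds of the FAST world)
    samples = [(t, v) for t, v in samples if t >= time_offset]
    # Skip tracks with fewer than min_length seconds recorded
    if min_length and (not samples or samples[-1][0] - samples[0][0] < min_length):
        return []
    # Naive resampling: keep at most max_points evenly spaced samples
    if max_points and len(samples) > max_points:
        step = len(samples) / max_points
        samples = [samples[int(i * step)] for i in range(max_points)]
    return samples

# A hypothetical 100-second track sampled once per second
track = [(float(t), t * t) for t in range(100)]
trimmed = filter_track(track, time_offset=30, max_points=10)
print(len(trimmed))  # 10 samples, all at t >= 30
```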
Once all the parameters are set, we can run the query (button above the parameters). The log query result will be shown on the right as a set of plots (one for each selected variable) and statistics. On the right of the plots, the Settings button allows us to modify the font of the text, the placement/visibility of the legend, which tracks to show, and so on. In the statistics below, we can right-click on one of the tracks (experimental units) to visualize the experiment offline (Right-click->View experiment) or the value functions learned by the agents (Right-click->View functions).
Now, you can go to the next tutorial.