
Easy Java Reinforcement Learning Library (EJRLL)

A simple neural network library for training Deep Q-Networks (DQNs) in a variety of environments.



Environments

  • 2D Perlin noise
  • Maze (generated using recursive backtracking)
  • Pseudorandom noise
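The maze environment above is generated with recursive backtracking. As an illustration of that algorithm (a sketch only, not the library's actual MazeGridEnvironment code), here is an iterative, stack-based version:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Random;

// Sketch of maze generation by recursive backtracking (iterative stack form).
public class MazeSketch {
    // true = wall, false = passage; cells live at odd coordinates,
    // so width and height should be odd.
    public static boolean[][] generate(int width, int height, long seed) {
        boolean[][] wall = new boolean[height][width];
        for (boolean[] row : wall) Arrays.fill(row, true);
        Random rng = new Random(seed);
        Deque<int[]> stack = new ArrayDeque<>();
        wall[1][1] = false;
        stack.push(new int[]{1, 1});
        int[][] dirs = {{0, -2}, {0, 2}, {-2, 0}, {2, 0}};
        while (!stack.isEmpty()) {
            int[] cur = stack.peek();
            List<int[]> unvisited = new ArrayList<>();
            for (int[] d : dirs) {
                int ny = cur[0] + d[0], nx = cur[1] + d[1];
                if (ny > 0 && ny < height && nx > 0 && nx < width && wall[ny][nx])
                    unvisited.add(new int[]{ny, nx});
            }
            if (unvisited.isEmpty()) { stack.pop(); continue; } // dead end: backtrack
            int[] next = unvisited.get(rng.nextInt(unvisited.size()));
            // knock down the wall between the current cell and the chosen neighbour
            wall[(cur[0] + next[0]) / 2][(cur[1] + next[1]) / 2] = false;
            wall[next[0]][next[1]] = false;
            stack.push(next);
        }
        return wall;
    }
}
```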

RL Algorithms

  • DQN
  • Double DQN
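Double DQN differs from vanilla DQN only in how the bootstrap target is formed: the online network picks the greedy next action, and the target network evaluates it, which reduces overestimation bias. A minimal sketch of that target computation (method and parameter names are illustrative, not this library's API):

```java
// Sketch of the Double DQN target:
//   y = r + gamma * Q_target(s', argmax_a Q_online(s', a))
public class DoubleDqnTarget {
    public static double target(double reward, boolean done, double gamma,
                                double[] qOnlineNext, double[] qTargetNext) {
        if (done) return reward; // no bootstrap on terminal transitions
        int best = 0; // action selected by the online network
        for (int a = 1; a < qOnlineNext.length; a++)
            if (qOnlineNext[a] > qOnlineNext[best]) best = a;
        return reward + gamma * qTargetNext[best]; // evaluated by the target network
    }
}
```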

Replay

  • Replay Buffer
  • Prioritized Experience Replay
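Prioritized experience replay samples transitions with probability proportional to their TD error raised to a power alpha, and corrects the resulting bias with importance-sampling weights. The sketch below shows the idea with a simple linear scan (a real buffer, including presumably this library's, would use a sum-tree for O(log n) sampling); all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of proportional prioritized experience replay:
//   P(i) = p_i^alpha / sum_j p_j^alpha,  weight_i = (N * P(i))^-beta
public class PrioritizedReplaySketch<T> {
    private final List<T> items = new ArrayList<>();
    private final List<Double> priorities = new ArrayList<>();
    private final double alpha;
    private final Random rng = new Random(0);

    public PrioritizedReplaySketch(double alpha) { this.alpha = alpha; }

    public void add(T item, double tdError) {
        items.add(item);
        // small epsilon keeps zero-error transitions sampleable
        priorities.add(Math.pow(Math.abs(tdError) + 1e-6, alpha));
    }

    // Samples one index; writes its importance-sampling weight into weightOut[0].
    public int sample(double beta, double[] weightOut) {
        double total = 0;
        for (double p : priorities) total += p;
        double r = rng.nextDouble() * total, acc = 0;
        int idx = priorities.size() - 1;
        for (int i = 0; i < priorities.size(); i++) {
            acc += priorities.get(i);
            if (r <= acc) { idx = i; break; }
        }
        double prob = priorities.get(idx) / total;
        weightOut[0] = Math.pow(items.size() * prob, -beta);
        return idx;
    }
}
```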

Optimizers

  • Adam
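For reference, a single Adam step keeps exponential moving averages of the gradient and its square, bias-corrects both, and scales the update elementwise. A minimal sketch of one update for a flat parameter vector (not this library's Adam class; hyperparameter defaults follow the original paper):

```java
// Sketch of one Adam optimizer step:
//   m <- b1*m + (1-b1)*g;  v <- b2*v + (1-b2)*g^2
//   theta <- theta - lr * mHat / (sqrt(vHat) + eps)
public class AdamSketch {
    private final double lr, beta1 = 0.9, beta2 = 0.999, eps = 1e-8;
    private final double[] m, v; // first and second moment estimates
    private int t = 0;           // timestep, for bias correction

    public AdamSketch(int n, double lr) {
        this.lr = lr;
        this.m = new double[n];
        this.v = new double[n];
    }

    public void step(double[] params, double[] grads) {
        t++;
        for (int i = 0; i < params.length; i++) {
            m[i] = beta1 * m[i] + (1 - beta1) * grads[i];
            v[i] = beta2 * v[i] + (1 - beta2) * grads[i] * grads[i];
            double mHat = m[i] / (1 - Math.pow(beta1, t)); // bias-corrected moments
            double vHat = v[i] / (1 - Math.pow(beta2, t));
            params[i] -= lr * mHat / (Math.sqrt(vHat) + eps);
        }
    }
}
```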

Example usage:

public class Main {
    public static void main(String[] args) {
        Environment.setStateType(Environment.StateType.PositionVectorOnly);
        Environment.setDimensions(10, 10);
        Environment.setActionSpace(4);

        DDQNAgentTrainer trainer;
        try {
            trainer = new DDQNAgentTrainer(Set.of(EmptyGridEnvironment.class, RandomGridEnvironment.class, PerlinGridEnvironment.class, MazeGridEnvironment.class));
        } catch (InvalidTypeException e) {
            e.printStackTrace();
            return;
        }

        List<Layer> layers = new ArrayList<>();
        LeakyReLU leakyRelu = new LeakyReLU(0.1f);
        float lambda = 0.0001f;

        // input size matches the state space; output size matches the action space configured above
        layers.add(new MLPLayer(Environment.getStateSpace(), 64, leakyRelu, 0, lambda));
        layers.add(new MLPLayer(64, 64, leakyRelu, 0, lambda));
        layers.add(new MLPLayer(64, Environment.getActionSpace(), new Linear(), 0, lambda));

        DDQNAgent ddqnAgent = new DDQNAgent(
                Environment.getActionSpace(),  // action space
                layers,                        // layers
                1,                             // initial epsilon
                0.9999,                        // epsilon decay
                0.01,                          // epsilon min
                0.999,                         // gamma
                0.0001,                        // learning rate
                0.99995,                       // learning rate decay
                0.000001f,                     // learning rate minimum
                0.005                          // tau
        );

        trainer.trainAgent(
                ddqnAgent,                     // agent
                600000,                        // num episodes
                500,                           // save period
                1,                             // visualiser update period
                "plot", "ease", "axis_ticks", "show_path", "verbose" // varargs
        );
    }
}

Papers & Resources Used

This list is incomplete; I will add the remaining sources over time.

Papers

Videos
