G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning

This is the code base for the conference paper of the same name, published in the proceedings of the IEEE Conference on Games 2024.

TL;DR

  • G-PCGRL is a controllable approach that uses PCGRL to generate graph data by manipulating a graph’s adjacency matrix.
    • To this end, we introduce the graph-narrow and graph-wide representations (sketched below).
  • Valid graphs are defined by sets of constraints; each model is trained on such a set of constraints.
  • Models are controllable in terms of the size of the graph and the types of nodes in it.
  • Since it is less dependent on randomness than other methods (e.g., hill climbing, evolutionary algorithms), G-PCGRL generates content fast and robustly.
[Figure: Mage Economy example]
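
The following is a minimal, illustrative sketch of this idea: the graph is held as an adjacency matrix, a positive-list constraint set defines which node types may be connected, and a graph-narrow style edit toggles one matrix entry at a time. All names (node types, constraint set, helper functions) are assumptions for illustration and do not mirror the repository's actual classes or constraint format.

# Illustrative sketch only (not the repo's API): a graph is its adjacency
# matrix, and the RL agent edits one entry at a time (graph-narrow) or any
# entry (graph-wide) until the graph satisfies a set of constraints.
import numpy as np

# Hypothetical node types and constraint set: each node type may only be
# connected to the listed types (a "positive list", see Limitations).
node_types = ["source", "converter", "sink"]
constraints = {
    "source": ["converter"],
    "converter": ["source", "sink"],
    "sink": ["converter"],
}

def is_valid(adj, types):
    """Check that every edge connects two node types allowed by the constraints."""
    for i, j in zip(*np.nonzero(adj)):
        if types[j] not in constraints[types[i]]:
            return False
    return True

def narrow_step(adj, i, j, action):
    """Graph-narrow style edit: the agent sees one cell (i, j) and decides
    whether to toggle the (undirected) edge there. 0 = keep, 1 = toggle."""
    new_adj = adj.copy()
    if action == 1:
        new_adj[i, j] = new_adj[j, i] = 1 - new_adj[i, j]
    return new_adj

adj = np.zeros((3, 3), dtype=int)          # empty graph over the three nodes
adj = narrow_step(adj, 0, 1, action=1)     # connect source -- converter
print(is_valid(adj, node_types))           # True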

Demo

For a demo of how to control a trained model to generate a graph for a set of constraints, see the demo.ipynb Jupyter notebook.
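
If you only want a rough picture of what the notebook does, the sketch below shows a generic generation loop; the environment id, model path, and the stable-baselines3-style API are assumptions for illustration and will differ from the actual notebook and repository code.

# Rough sketch of the demo workflow (illustrative only; the actual notebook
# uses the repository's own environment ids, wrappers, and model files).
import gym
from stable_baselines3 import PPO

# Hypothetical names: environment id and checkpoint path are placeholders.
env = gym.make("graph-narrow-v0")
model = PPO.load("models/graph_narrow_model.zip")

obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)  # trained policy picks an edit
    obs, reward, done, info = env.step(action)           # apply edit to the adjacency matrix

# At the end of an episode, the environment holds the generated graph.
print(info)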

Limitations

  • The generation of larger graphs is limited. To generate larger graphs, we recommend concatenating subgraphs generated by one model with different configurations (see the paper for details and the sketch after this list).
  • Currently, the constraint definition is very simple: only positive lists are possible; for instance, it is not possible to define a minimum/maximum number of connections per node type.
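
As a rough illustration of the subgraph-concatenation workaround mentioned above, the sketch below joins two independently generated adjacency matrices into one block matrix and adds a bridge edge; the bridge choice and all names are illustrative, and the paper describes the actual procedure.

# Illustrative sketch of the workaround for larger graphs: generate several
# small subgraphs (e.g. with the same model but different configurations),
# place them on the diagonal of a larger adjacency matrix, and add a few
# edges between the blocks to join them.
import numpy as np

def concatenate_graphs(adj_a, adj_b, bridges):
    """Place adj_a and adj_b on the diagonal of a larger matrix and connect
    them with the given (node_in_a, node_in_b) bridge edges."""
    n_a, n_b = adj_a.shape[0], adj_b.shape[0]
    combined = np.zeros((n_a + n_b, n_a + n_b), dtype=int)
    combined[:n_a, :n_a] = adj_a
    combined[n_a:, n_a:] = adj_b
    for i, j in bridges:
        combined[i, n_a + j] = combined[n_a + j, i] = 1
    return combined

# Two hypothetical subgraphs generated separately, joined by one bridge edge.
sub_a = np.array([[0, 1], [1, 0]])
sub_b = np.array([[0, 1], [1, 0]])
large = concatenate_graphs(sub_a, sub_b, bridges=[(1, 0)])
print(large.shape)  # (4, 4)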

Future work

  • Improve the scalability of the method (e.g., use CNN or GNN layers for feature extraction).
  • Experiment with additional constraint definitions to extend the capabilities of constraint modeling.

Bibliography

If you use this code, please cite it as follows (BibTeX):

@inproceedings{rupp_gpcgrl_2024,
  author={Rupp, Florian and Eckert, Kai},
  booktitle={2024 IEEE Conference on Games (CoG)}, 
  title={G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning}, 
  year={2024},
  doi={10.1109/CoG60054.2024.10645633}}

Used code

Khalifa et al.: PCGRL: Procedural Content Generation via Reinforcement Learning.

  • The code in /gym_pcgrl is partially taken from the original PCGRL code base (MIT License).
  • For this research, it has been extended and adjusted.
@inproceedings{khalifa_pcgrl_2020,
	title = {Pcgrl: {Procedural} content generation via reinforcement learning},
	volume = {16},
	booktitle = {Proceedings of the {AAAI} {Conference} on {Artificial} {Intelligence} and {Interactive} {Digital} {Entertainment}},
	author = {Khalifa, Ahmed and Bontrager, Philip and Earle, Sam and Togelius, Julian},
	year = {2020},
	pages = {95--101},
}
