# blendtorch v0.2

**blendtorch** is a Python framework to seamlessly integrate [Blender](https://www.blender.org) renderings into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations in real time and thus avoid stalling model training in many cases.

Feature summary
 - ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports sending arbitrary pickle-able objects alongside images/videos. Built-in recording capability to replay data without Blender.<br/>More info [\[examples/datagen\]](examples/datagen)
 - ***OpenAI Gym Support***: Create and run remotely controlled Blender gyms to train reinforcement agents. Blender serves as simulation, visualization, and interactive live manipulation environment.<br/>More info [\[examples/control\]](examples/control)

24 | 10 |
|
25 |
| -## Code outline |
26 |
| -```Python |
27 |
| -import torch.utils.data as data |
The figure below visualizes a single image/label batch received by PyTorch from four parallel Blender instances. Each Blender process repeatedly performs motion simulations of randomized cubes.

<p align="center">
<img src="etc/result_physics.png" width="500">
</p>
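
For a first impression, the following is a minimal sketch of how such a stream might be consumed on the PyTorch side. The names `btt.BlenderLauncher`, `btt.Receiver`, the launcher keywords, and the message keys are assumptions here; see [\[examples/datagen\]](examples/datagen) for the authoritative API.

```python
import torch.utils.data as data

import blendtorch.btt as btt  # PyTorch-side view of blendtorch


class MyDataset(data.Dataset):
    '''Map-style dataset that pulls samples from running Blender instances.'''

    def __init__(self, launcher):
        # Assumption: a Receiver reads messages published by the Blender scripts.
        self.recv = btt.Receiver(launcher)

    def __len__(self):
        # Virtual length; the underlying stream is unbounded.
        return 100

    def __getitem__(self, idx):
        # Each message is a dict assembled by the Blender-side script,
        # e.g. {'image': ..., 'xy': ..., 'id': ...}.
        d = self.recv(timeoutms=5000)
        return d['image'], d['xy']


# Launch four Blender instances, each running the given scene and script.
with btt.BlenderLauncher(num_instances=4, script='cube.py', scene='cube.blend') as bl:
    dl = data.DataLoader(MyDataset(bl), batch_size=4, num_workers=0)
    images, coords = next(iter(dl))  # one collated batch from four processes
```

The `num_workers=0` setting avoids pickling the receiver across `DataLoader` worker processes.
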
## Getting started
 1. Read the installation instructions below.
 1. To get started with **blendtorch** for generating training data, read [\[examples/datagen\]](examples/datagen).
 1. To learn about using **blendtorch** for creating reinforcement learning environments, read [\[examples/control\]](examples/control).

## Installation

**blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` (in [pkg_pytorch](./pkg_pytorch)) and `blendtorch.btb` (in [pkg_blender](./pkg_blender)), providing the PyTorch and Blender views on **blendtorch**, respectively.

### Prerequisites
This package has been tested with
 - [Blender](https://www.blender.org/) >= 2.83 (Python 3.7)
 - [PyTorch](http://pytorch.org) >= 1.5 (Python 3.7/3.8)
running on Windows 10 and Linux.

Other versions might work as well, but have not been tested.

### Clone this repository
```
git clone https://github.com/cheind/pytorch-blender.git <DST>
```

### Extend `PATH`
Ensure the Blender executable is in your environment's `PATH`. On Windows this can be accomplished by
```
set PATH=c:\Program Files\Blender Foundation\Blender 2.83;%PATH%
```

### Install **blendtorch** Blender part
```
blender --background --python <DST>/scripts/install_btb.py
```
installs `blendtorch-btb` into the Python environment bundled with Blender.

### Install **blendtorch** PyTorch part
```
pip install -e <DST>/pkg_pytorch
```
installs `blendtorch-btt` into the Python environment that you intend to run PyTorch from. While not required, we advise installing OpenAI Gym if you intend to use **blendtorch** for reinforcement learning
```
pip install gym
```
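
Once Gym is installed, a **blendtorch** environment is meant to be driven through the standard Gym loop. The sketch below is illustrative only: the environment id `blendtorch-cartpole-v0` and the registration side effect of the import are assumptions; see [\[examples/control\]](examples/control) for the actual setup.

```python
import gym

import blendtorch.btt  # assumption: importing registers the Blender-backed envs

# Hypothetical environment id; the real id is defined in examples/control.
env = gym.make('blendtorch-cartpole-v0')
obs = env.reset()
for _ in range(100):
    # Random agent; Blender performs simulation and rendering in the background.
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```
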
### Developer instructions
This step is optional. If you plan to run the unit tests
```
pip install -r requirements_dev.txt
pytest tests/
```

### Troubleshooting
Run
```
blender --version
```
and check that the correct Blender version (>=2.83) is printed to the console. Next, ensure that `blendtorch-btb` installed correctly
```
blender --background --python-use-system-env --python-expr "import blendtorch.btb as btb; print(btb.__version__)"
```
which should print the **blendtorch** version number on success. Finally, ensure that `blendtorch-btt` installed correctly
```
python -c "import blendtorch.btt as btt; print(btt.__version__)"
```
which should likewise print the **blendtorch** version number on success.

## Architecture
Please see [\[examples/datagen\]](examples/datagen) and [\[examples/control\]](examples/control) for an in-depth architectural discussion.

## Cite
The code accompanies our [academic work](https://arxiv.org/abs/1907.01879) in the field of machine learning from artificial images. When using **blendtorch**, please cite the following work:
```
@inproceedings{robotpose_etfa2019_cheind,
  author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
  booktitle={
    24th IEEE International Conference on
    Emerging Technologies and Factory Automation (ETFA)
  },
  year={2019},
  pages={1536-1539},
  doi={10.1109/ETFA.2019.8868243},
  isbn={978-1-7281-0303-7},
}
```

## Runtimes
The following table shows mean runtimes per batch (batch size 8) and per image for a simple cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating.

| Blender Instances | Runtime sec/batch | Runtime sec/image | Arguments |
|:-:|:-:|:-:|:-:|
| 1 | 0.236 | 0.030 | UI refresh |
| 2 | 0.140 | 0.018 | UI refresh |
| 4 | 0.099 | 0.012 | UI refresh |
| 5 | 0.085 | 0.011 | no UI refresh |

Note: if no image transfer is needed, e.g. in reinforcement learning of physical simulations, rates of 2000Hz are easily achieved.

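
For reference, a timing loop of the following kind can reproduce such numbers. This is a sketch only, assuming the hypothetical streaming dataset from the example above; the actual measurement code lives in [benchmarks/benchmark.py](./benchmarks/benchmark.py).

```python
import time

import torch.utils.data as data


def time_batches(ds, batch_size=8, n=10):
    '''Return mean seconds per batch and per image over n batches.

    The dataset must yield at least n+1 batches.
    '''
    dl = data.DataLoader(ds, batch_size=batch_size, num_workers=0)
    it = iter(dl)
    next(it)  # warm-up batch, excluded from timing
    t0 = time.time()
    for _ in range(n):
        next(it)
    per_batch = (time.time() - t0) / n
    return per_batch, per_batch / batch_size
```
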
## Caveats
- Although offscreen rendering is supported in Blender 2.8x, it requires a UI frontend and thus cannot be used in `--background` mode.
- The renderings produced by Blender are by default in linear color space and will therefore appear darker than expected when displayed; a possible conversion for display is sketched below.
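
A common remedy is to apply the standard linear-to-sRGB transfer function before display. The sketch below assumes floating point images with linear intensities in [0,1]:

```python
import numpy as np


def linear_to_srgb(img):
    '''Map linear [0,1] intensities to sRGB for display (IEC 61966-2-1).'''
    return np.where(img <= 0.0031308,
                    12.92 * img,
                    1.055 * np.power(img, 1.0 / 2.4) - 0.055)
```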