# blendtorch

**blendtorch** is a Python framework to seamlessly integrate [Blender](http://blender.org) renderings into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations at 60 FPS and thus avoid stalling model training in many cases.
Feature summary
- Blender Eevee support for real-time rendering.
- Seamless streaming into PyTorch data pipelines.
- Supports arbitrary pickle-able objects to be sent alongside images/videos.
- Built-in recording capability to replay data without Blender.

## Minimal sample
Running [demo.py](./demo.py) using the [cube](./scenes/) scene
```
python demo.py cube
```
will generate batch visualizations in `./tmp/output_##.png` like the following


This image is generated by reading from 2 Blender instances that randomly perturb a minimal scene. Individual results (images + corner annotations) are received through a standard PyTorch `Dataset`. We configure a `DataLoader` to form batches of size 4, iterate it, and create an output image for each batch using `matplotlib`.
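
For orientation, here is a condensed sketch of the consumer side in plain PyTorch, assuming the channel and launcher types described in the Architecture section below; exact constructor arguments and message keys may differ from [demo.py](./demo.py).

```
import torch.utils.data as data

from blendtorch import btt  # PyTorch-side sub-package

class CubeDataset(data.Dataset):
    '''Streams images and corner annotations from running Blender instances.'''

    def __init__(self, channel):
        # `channel` is assumed to be a btt.BlenderInputChannel
        # connected to the launched Blender instances.
        self.channel = channel

    def __len__(self):
        # Streaming source: any virtual epoch length works.
        return 16

    def __getitem__(self, idx):
        # Assumed message layout: a dict carrying the rendered image,
        # the projected cube corners and the sending instance id.
        msg = self.channel.recv(timeoutms=5000)
        return msg['image'], msg['xy'], msg['btid']

# Hypothetical launch and iteration; see demo.py for the real arguments.
with btt.BlenderLauncher(num_instances=2, scene='cube.blend') as bl:
    ds = CubeDataset(bl.channel)  # assumed accessor for the input channel
    dl = data.DataLoader(ds, batch_size=4, num_workers=0)
    for images, xys, btids in dl:
        pass  # e.g. visualize each batch with matplotlib
```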

Shown below is a batch visualization from 4 Blender instances running a physics-enabled falling-cubes scene.



To reproduce, run
```
python demo.py cube_physics
```

## Cite
The code accompanies our [academic work](https://arxiv.org/abs/1907.01879) in the field of machine learning from artificial images. When using, please cite the following work:
```
@inproceedings{robotpose_etfa2019_cheind,
  author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
  booktitle={24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)},
  year={2019},
  publisher={IEEE},
  pages={1536-1539},
  doi={10.1109/ETFA.2019.8868243},
  isbn={978-1-7281-0303-7},
}
```

## Prerequisites
This package has been tested with the following packages:
- [Blender](https://www.blender.org/) >= 2.83 (Python 3.7)
- [PyTorch](http://pytorch.org) >= 1.50 (Python 3.7)

Other versions might work as well, but have not been tested.

## Installation
First install the prerequisites, then clone **blendtorch** to `<SRC>`
```
git clone https://github.com/cheind/pytorch-blender.git <SRC>
```
Next, ensure the Blender executable can be found via the `PATH` environment variable, then install the Python dependencies into Blender's packaged Python distribution
```
blender --background --python <SRC>/pkg_blender/install_dependencies.py
```
To access **blendtorch** from PyTorch and Blender, we currently recommend updating your `PYTHONPATH` as follows (Windows)
```
set PYTHONPATH=%PYTHONPATH%;<SRC>/pkg_pytorch;<SRC>/pkg_blender
```
or (Mac or GNU/Linux)
```
export PYTHONPATH="${PYTHONPATH}:<SRC>/pkg_pytorch:<SRC>/pkg_blender"
```

## Runtimes
The following table shows the mean runtime per batch (batch size 8) and per image for a simple cube scene rendered at 640x480xRGBA. See [benchmark.py](./benchmark.py) for details. The timings include rendering, transfer, decoding and batch collation.

| Blender Instances | Runtime sec/batch | Runtime sec/image |
|:-:|:-:|:-:|
| 1 | 0.236 | 0.030 |
| 2 | 0.140 | 0.018 |
| 4 | 0.099 | 0.012 |
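
The per-batch figures can be reproduced with a timing loop along the following lines (a sketch; [benchmark.py](./benchmark.py) is the authoritative version):

```
import time

def mean_batch_time(dl, n=50):
    # `dl` is a DataLoader as in the minimal sample sketch above.
    start = time.time()
    for _, batch in zip(range(n), dl):
        pass  # consuming a batch includes decode and collate cost
    return (time.time() - start) / n  # mean seconds per batch
```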

## Architecture
**blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` in folder [pkg_pytorch](./pkg_pytorch) and `blendtorch.btb` in folder [pkg_blender](./pkg_blender), providing the PyTorch and Blender views on **blendtorch**, respectively.

### PyTorch
At the top level, `blendtorch.btt` provides `BlenderLauncher` to launch and close Blender instances, and a communication channel, `BlenderInputChannel`, to receive data from those instances. Communication is based on [ZMQ](https://zeromq.org/), utilizing a `PUSH/PULL` pattern to support various kinds of parallelism. In addition, `blendtorch.btt` provides a raw `Recorder` that saves pickled Blender messages, which can later be replayed using `FileInputChannel`.
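
A record-then-replay session might look roughly like this; the `Recorder` and `FileInputChannel` names come from the description above, while constructor arguments and method names are assumptions.

```
from blendtorch import btt

# `channel` as in the dataset sketch in the minimal sample section.
rec = btt.Recorder('cubes.rec')             # assumed constructor: output file
for _ in range(1000):
    rec.save(channel.recv(timeoutms=5000))  # persist pickled messages

# Later: replay the identical message stream without Blender running.
replay = btt.FileInputChannel('cubes.rec')
msg = replay.recv()
```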

### Blender
The package `blendtorch.btb` provides offscreen rendering capabilities (`OffScreenRenderer`), animation control (`Controller`), and a `BlenderOutputChannel` to publish any pickle-able message. When Blender instances are launched by `blendtorch.btt.BlenderLauncher`, each instance receives specific arguments that determine binding addresses and a **blendtorch** instance id, which can later be used to determine which instance sent a given message.
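
On the Blender side, a publisher script might be structured as follows; this is a sketch meant to run inside Blender's Python, and everything beyond the class names above (argument handling, `publish` signature, message keys) is an assumption.

```
import bpy
from blendtorch import btb

# Assumed: btt.BlenderLauncher passes a binding address and an
# instance id to every Blender process it starts (parsing omitted).
BIND_ADDR, BTID = 'tcp://127.0.0.1:11000', 0

renderer = btb.OffScreenRenderer()                    # offscreen rendering
channel = btb.BlenderOutputChannel(BIND_ADDR, BTID)   # hypothetical signature

def publish_frame():
    # Render the current view and publish it together with any
    # pickle-able annotations; the keyword names are illustrative.
    rgba = renderer.render()
    channel.publish(image=rgba,
                    frameid=bpy.context.scene.frame_current,
                    btid=BTID)
```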

## Caveats
- Although offscreen rendering is supported in Blender 2.8x, it requires a UI frontend and thus cannot run in `--background` mode.
- The renderings produced by Blender are in linear color space and will therefore appear darker than expected when displayed. See the `gamma_correct` transform in [demo.py](./demo.py) to fix this; a minimal sketch follows below.
- Currently we do not support a feedback channel from PyTorch to Blender.
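
A minimal gamma correction, assuming float RGB(A) images in `[0, 1]`, could look like this; demo.py's actual transform may differ in detail.

```
import numpy as np

def gamma_correct(img, gamma=2.2):
    # Map linear color values to display-ready intensities
    # using a simple power law.
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)
```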