
Commit dbd34f8

Merge branch 'release/v1.0'
2 parents 4f156ce + a876979

31 files changed (+992, -321 lines)

Readme.md

Lines changed: 68 additions & 76 deletions
@@ -1,100 +1,92 @@
-# pytorch-blender
+# blendtorch
 
-Seamless integration of Blender renderings into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. This repository contains a minimal demonstration that harvests images and meta data from ever-changing Blender renderings.
+**blendtorch** is a Python framework to seamlessly integrate [Blender](http://blender.org) renderings into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations at 60FPS and thus avoid stalling model training in many cases.
 
+Feature summary
+- Blender Eevee support for real-time rendering.
+- Seamless streaming into PyTorch data pipelines.
+- Supports arbitrary pickle-able objects to be sent alongside images/videos.
+- Builtin recording capability to replay data without Blender.
+
+## Minimal sample
+Running [demo.py](./demo.py) using the [cube](./scenes/) scene
 ```
-python pytorch_sample.py
+python demo.py cube
 ```
-renders a set of images of randomly rotated cubes to `./tmp/output_##.png`, such as the following
+will generate batch visualizations in `./tmp/output_##.png` like the following
 
 ![](etc/result.png)
 
-This image is generated by 4 Blender instances, randomly perturbing a minimal scene. The results are collated in a `BlenderDataset`. A PyTorch `DataLoader` with batch size 4 is used to grab from the dataset and produce the figure.
+This image is generated by reading from 2 Blender instances that randomly perturb a minimal scene. Individual results (images + corner annotations) are received through a standard PyTorch `Dataset`. We configure a `DataLoader` to form batches of size 4, iterate it, and create an output image for each batch using `matplotlib`.
+
+Shown below is a batch visualization from 4 Blender instances running a physics-enabled falling-cubes scene.
+
+![](etc/result_physics.png)
+
+To reproduce, run
+```
+python demo.py cube_physics
+```
 
+## Cite
 The code accompanies our [academic work](https://arxiv.org/abs/1907.01879) in the field of machine learning from artificial images. When using it, please cite the following work
 ```
-@misc{cheindkpts2019,
-  Author = {Christoph Heindl and Sebastian Zambal and Josef Scharinger},
-  Title = {Learning to Predict Robot Keypoints Using Artificially Generated Images},
-  Year = {2019},
-  Eprint = {arXiv:1907.01879},
-  Note = {To be published at ETFA 2019},
+@inproceedings{robotpose_etfa2019_cheind,
+  author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
+  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
+  booktitle={24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)},
+  year={2019},
+  publisher={IEEE},
+  pages={1536-1539},
+  doi={10.1109/ETFA.2019.8868243},
+  isbn={978-1-7281-0303-7},
 }
 ```
 
-## Code outline
-```Python
-import torch.utils.data as data
-
-import blendtorch as bt
-
-# Standard PyTorch Dataset convention
-class MyDataset:
-
-    def __init__(self, blender_launcher, transforms=None):
-        self.recv = bt.Receiver(blender_launcher)
-        self.transforms = transforms
-
-    def __len__(self):
-        # Virtually anything you'd like to end episodes.
-        return 100
-
-    def __getitem__(self, idx):
-        # Data is a dictionary of {image, xy, id},
-        # see publisher script
-        d = self.recv(timeoutms=5000)
-        return d['image'], d['xy'], d['id']
-
-kwargs = {
-    'num_instances': 2,
-    'script': 'blender.py',
-    'scene': 'scene.blend',
-}
+## Prerequisites
+This package has been tested with the following packages
+- [Blender](https://www.blender.org/) >= 2.83 (Python 3.7)
+- [PyTorch](http://pytorch.org) >= 1.50 (Python 3.7)
 
-with bt.BlenderLauncher(**kwargs) as bl:
-    ds = MyDataset(bl)
-    dl = data.DataLoader(ds, batch_size=4, num_workers=0)
+Other versions might work as well, but have not been tested.
 
-    for idx in range(10):
-        x, coords, ids = next(iter(dl))
-        print(f'Received from {ids}')
+## Installation
+First install the prerequisites and clone **blendtorch** to `<SRC>`
+```
+git clone https://github.com/cheind/pytorch-blender.git <SRC>
+```
+Next, ensure the Blender executable can be found via the `PATH` environment variable and install the Python dependencies into Blender's packaged Python distribution
+```
+blender --background --python <SRC>/pkg_blender/install_dependencies.py
+```
+To access **blendtorch** from PyTorch and Blender, we currently recommend updating your `PYTHONPATH` as follows (Windows)
+```
+set PYTHONPATH=%PYTHONPATH%;<SRC>/pkg_pytorch;<SRC>/pkg_blender
+```
+or (Mac or GNU/Linux)
+```
+export PYTHONPATH="${PYTHONPATH}:<SRC>/pkg_pytorch:<SRC>/pkg_blender"
 ```
 
 ## Runtimes
+The following table shows the mean runtimes per batch (batch size 8) and per image for a simple cube scene (640x480xRGBA). See [benchmark.py](./benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating.
 
-The runtimes for the demo scene (really quick to render) are shown below.
-
-| Blender Instances | Runtime ms/batch |
-|:-:|:-:|
-| 1 | `103 ms ± 5.17 ms` |
-| 2 | `43.7 ms ± 10.3 ms` |
+| Blender Instances | Runtime sec/batch | Runtime sec/image |
+|:-:|:-:|:-:|
+| 1 | 0.236 | 0.030 |
+| 2 | 0.140 | 0.018 |
+| 4 | 0.099 | 0.012 |
 
-The above timings include rendering, transfer and encoding/decoding. Depending on the complexity of renderings you might want to tune the number of instances.
+## Architecture
+**blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` in folder [pkg_pytorch](./pkg_pytorch) and `blendtorch.btb` in folder [pkg_blender](./pkg_blender), providing the PyTorch and Blender views on **blendtorch**, respectively.
 
-## Prerequisites
-The following packages need to be available in your PyTorch environment and Blender environment:
-- Python >= 3.7
-- [Blender](https://www.blender.org/) >= 2.79
-- [PyTorch](http://pytorch.org) >= 0.4
-- [PyZMQ](https://pyzmq.readthedocs.io/en/latest/)
-- [Pillow/PIL](https://pillow.readthedocs.io/en/stable/installation.html)
-
-Both packages are installable via `pip`. In order to add packages to your Blender packaged Python distribution, execute the following commands (usually administrator privileges are required on Windows)
+### PyTorch
+At the top level, `blendtorch.btt` provides `BlenderLauncher` to launch and close Blender instances, and a communication channel, `BlenderInputChannel`, to receive data from those instances. Communication is based on [ZMQ](https://zeromq.org/), utilizing a `PUSH/PULL` pattern to support various kinds of parallelism. Besides, `blendtorch.btt` provides a raw `Recorder` that saves pickled Blender messages, which can later be replayed using `FileInputChannel`.
 
-```
-"<BLENDERPATH>2.79\python\bin\python.exe" -m ensurepip
-"<BLENDERPATH>2.79\python\bin\python.exe" -m pip install pyzmq
-"<BLENDERPATH>2.79\python\bin\python.exe" -m pip install pillow
-```
-where `<BLENDERPATH>` is the file path to the directory containing the Blender executable.
-
-**Note** The Blender executable needs to be in your PATH. On Windows it does not suffice to temporarily modify the PATH variable, as no derived shell is spawned and temporary environment variables are not passed on.
-
-## How it works
-An instance of [BlenderLaunch](blendtorch/launcher.py) is responsible for starting and stopping background Blender instances. The script `blender.py` and additional arguments are passed to the starting Blender instance. `blender.py` creates a publisher socket for communication and starts producing random renderings. Meanwhile, a PyTorch dataset uses a [Receiver](blendtorch/receiver.py) instance to read data from publishers.
+### Blender
+The package `blendtorch.btb` provides offscreen rendering capabilities (`OffScreenRenderer`), animation control (`Controller`) and a `BlenderOutputChannel` to publish any pickle-able message. When Blender instances are launched by `blendtorch.btt.BlenderLauncher`, each instance receives specific arguments that determine binding addresses and **blendtorch** instance ids, which can later be used to determine which instance sent specific messages.
 
 ## Caveats
-- In background mode, Blender `ViewerNodes` are not updated, so renderings have to be written to files. Currently, the Blender script re-imports the written image and sends it as a message to any subscriber. This way, we do not need to keep track of which files have already been read and can be deleted, which simplifies communication.
-- In this sample, only the main composite rendering is transmitted. You might want to use `FileOutputNode` instead, to save multiple images per frame.
-- Currently you need to use `num_workers=0` when creating a PyTorch `DataLoader`, as the `Receiver` object is not capable of multi-process pickling.
-
+- Although offscreen rendering is supported in Blender 2.8x, it requires a UI frontend and thus cannot run in `--background` mode.
+- The renderings produced by Blender are in linear color space and will thus appear darker than expected when displayed. See the `gamma_correct` transform in [demo.py](./demo.py) to fix this.
+- Currently we do not have support for a feedback channel from PyTorch to Blender.
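The second caveat above notes that Blender emits linear-light images that display too dark. What a gamma correction transform does can be sketched with a simple power-law encoding; this is a hedged, minimal stand-in assuming the common display gamma of 2.2, not the repository's actual `gamma_correct` from demo.py:

```python
# Illustrative stand-in for a gamma correction transform (NOT the actual
# `gamma_correct` in demo.py): encode linear-light intensities in [0, 1]
# with a simple power law, assuming a display gamma of 2.2.

def gamma_correct(x, gamma=2.2):
    """Map a linear intensity to gamma-encoded (display) space."""
    return x ** (1.0 / gamma)

def gamma_expand(x, gamma=2.2):
    """Inverse mapping, back to linear light."""
    return x ** gamma

# Linear mid-grey encodes to a noticeably larger value, i.e. it displays
# brighter; without this step, renderings look darker than intended.
print(gamma_correct(0.5))
```

Real pipelines typically use the piecewise sRGB transfer function instead of a pure power law, but the visual effect is the same.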

benchmark.py

Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
+import torch.utils.data as data
+import argparse
+import time
+from contextlib import ExitStack
+
+from blendtorch import btt
+
+from demo import MyDataset, gamma_correct
+
+BATCH = 8
+INSTANCES = 4
+WORKER_INSTANCES = 2
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('scene', help='Blender scene name to run')
+    args = parser.parse_args()
+
+    with ExitStack() as es:
+        bl = es.enter_context(
+            btt.BlenderLauncher(
+                num_instances=INSTANCES,
+                script=f'scenes/{args.scene}.py',
+                scene=f'scenes/{args.scene}.blend'
+            )
+        )
+        channel = btt.BlenderInputChannel(addresses=bl.launch_info.addresses)
+        ds = MyDataset(channel, stream_length=256)
+        dl = data.DataLoader(ds, batch_size=BATCH, num_workers=WORKER_INSTANCES, shuffle=False)
+
+        t0 = None
+        imgshape = None
+
+        for item in dl:
+            if t0 is None:  # first batch is warm-up
+                t0 = time.time()
+                imgshape = item[0].shape
+
+        t1 = time.time()
+        N = len(ds) - BATCH
+        B = len(ds)//BATCH - 1
+        print(f'Time {(t1-t0)/N:.3f}sec/image, {(t1-t0)/B:.3f}sec/batch, shape {imgshape}')
+
+if __name__ == '__main__':
+    main()
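The final printout in benchmark.py divides the elapsed time by image and batch counts that discount the warm-up batch. That bookkeeping can be checked in isolation, with the constants copied from the script:

```python
# Sanity check of the averaging in benchmark.py: the first batch is treated
# as warm-up (t0 is taken when it arrives), so it is excluded from both
# counts before dividing the elapsed time.
STREAM_LENGTH = 256  # matches MyDataset(channel, stream_length=256)
BATCH = 8            # matches the BATCH constant

N = STREAM_LENGTH - BATCH        # images timed after warm-up
B = STREAM_LENGTH // BATCH - 1   # batches timed after warm-up

print(N, B)  # 248 images over 31 batches
```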

blender.py

Lines changed: 0 additions & 70 deletions
This file was deleted.

blendtorch/__init__.py

Lines changed: 0 additions & 4 deletions
This file was deleted.

blendtorch/launcher.py

Lines changed: 0 additions & 86 deletions
This file was deleted.
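The Architecture section of the updated Readme describes a ZMQ `PUSH/PULL` transport between several Blender producers and a PyTorch consumer. The fan-in idea can be sketched without ZMQ using a thread-safe queue; this is an analogy only, not the actual blendtorch transport, and the `btid` field name is hypothetical:

```python
import threading
import queue

# Several producers (standing in for Blender instances) push messages into a
# shared channel; a single consumer (the PyTorch side) pulls them in arrival
# order, regardless of origin. An instance id lets the consumer attribute
# each message to its sender.

channel = queue.Queue()

def producer(instance_id, n_frames):
    # Each simulated instance publishes one message per rendered frame.
    for frame in range(n_frames):
        channel.put({'btid': instance_id, 'frame': frame})

threads = [threading.Thread(target=producer, args=(i, 4)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the channel: 2 instances x 4 frames = 8 messages.
messages = [channel.get() for _ in range(channel.qsize())]
print(len(messages))
```

Unlike this queue, ZMQ `PUSH/PULL` also works across process and machine boundaries, which is why it suits launched Blender instances.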
