**blendtorch** is a Python framework to seamlessly integrate [Blender](http://blender.org) into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations in real-time and thus avoid stalling model training in many cases.
Feature summary
- ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be sent alongside images/videos. Built-in recording capability to replay data without Blender. Bi-directional communication channels allow Blender simulations to adapt during network training (see the sketch after this list).<br/>More info [\[examples/datagen\]](examples/datagen), [\[examples/compositor_normals_depth\]](examples/compositor_normals_depth), [\[examples/densityopt\]](examples/densityopt)
- ***OpenAI Gym Support***: Create and run remotely controlled Blender gyms to train reinforcement agents. Blender serves as simulation, visualization, and interactive live manipulation environment.<br/>More info [\[examples/control\]](examples/control)
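To give a flavor of the streaming API, below is a minimal sketch of the PyTorch side, modeled on [\[examples/datagen\]](examples/datagen). The scene/script names, socket name, and item keys are assumptions borrowed from that example and may differ in detail.

```
import torch.utils.data as data
import blendtorch.btt as btt

# Launch Blender instances running a data-generating scene/script.
# Paths and argument names follow examples/datagen and are assumptions here.
launch_args = dict(
    scene='cube.blend',
    script='cube.blend.py',
    num_instances=4,
    named_sockets=['DATA'],
)

with btt.BlenderLauncher(**launch_args) as bl:
    # Stream images/annotations published by all Blender instances.
    addr = bl.launch_info.addresses['DATA']
    ds = btt.RemoteIterableDataset(addr, max_items=256)
    dl = data.DataLoader(ds, batch_size=8, num_workers=0)
    for item in dl:
        # Each item is a dict; keys depend on what the Blender script sends.
        img, xy = item['image'], item['xy']
        print(img.shape, xy.shape)
```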
The figure below visualizes the basic concept of **blendtorch** used in the context of generating artificial training data for a real-world detection task.
<div align="center">
<img src="etc/blendtorch_intro_v3.svg" width="90%">
</div>
## Getting started
1. Read the installation instructions below
1. To get started with **blendtorch** for training data generation, read [\[examples/datagen\]](examples/datagen).
1. To learn about using **blendtorch** for creating reinforcement learning environments, read [\[examples/control\]](examples/control).
## Cite
The code accompanies our academic work [[1]](https://arxiv.org/abs/1907.01879), [[2]](https://arxiv.org/abs/2010.11696) in the field of machine learning from artificial images. Please consider the following publications when citing **blendtorch**:
```
@inproceedings{robotpose_etfa2019_cheind,
  author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
  booktitle={
    24th IEEE International Conference on
    Emerging Technologies and Factory Automation (ETFA)
  },
  year={2019}
}

@inproceedings{blendtorch_icpr2020_cheind,
  author={Christoph Heindl and Lukas Brunner and Sebastian Zambal and Josef Scharinger},
  title={BlendTorch: A Real-Time, Adaptive Domain Randomization Library},
  booktitle={
    1st Workshop on Industrial Machine Learning
    at International Conference on Pattern Recognition (ICPR2020)
  },
  year={2020}
}
```
## Installation
**blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` (in [pkg_pytorch](./pkg_pytorch)) and `blendtorch.btb` (in [pkg_blender](./pkg_blender)), providing the PyTorch and Blender views on **blendtorch**.
Ensure the Blender executable is in your environment's lookup `PATH`. On Windows this can be accomplished by
```
set PATH=c:\Program Files\Blender Foundation\Blender 2.91;%PATH%
```
### Complete Blender settings
Open Blender at least once and complete the initial settings. If this step is skipped, some of the tests (especially those related to RL) will fail (Blender 2.91).
Finally, verify the installation by importing both sub-packages, which should print the **blendtorch** version number on success.
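A minimal sanity check of the PyTorch-side package, assuming `blendtorch.btt` exposes a `__version__` attribute (the Blender-side package `blendtorch.btb` can only be imported from within Blender's Python):

```
# Run in the Python environment used for training.
import blendtorch.btt as btt

print(btt.__version__)
```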
## Architecture
Please see [\[examples/datagen\]](examples/datagen) and [\[examples/control\]](examples/control) for an in-depth architectural discussion. Bi-directional communication is explained in [\[examples/densityopt\]](examples/densityopt).
## Runtimes
The following tables show the mean runtimes per batch (batch size 8) and per image for a simple Cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating. Reported timings are for Blender 2.8; Blender 2.9 performs equally well on this scene, but is usually faster for more complex renderings.
This directory ([\[examples/compositor_normals_depth\]](examples/compositor_normals_depth)) showcases synthetic data generation using **blendtorch** for supervised machine learning. In particular, we use composite rendering to extract normals and depths from a randomized scene. The scene is composed of a fixed plane and a number of parametric 3D supershapes. Using physics, we drop a random initial constellation of objects onto the plane. Once the objects come to rest (we speed up the physics, so this roughly happens after a single frame), we publish dense camera depth and normal information.
<palign="center">
6
+
<imgsrc="etc/normals_depth.png"width="500">
7
+
</p>
### Composite rendering
This sample uses the compositor to access different render passes. Unfortunately, Blender (2.9) does not offer a straightforward way to access the results of various render passes in memory. Therefore, `btb.CompositeRenderer` requires `FileOutput` nodes for temporary storage of data. For this purpose, a fast OpenEXR reader, [py-minexr](https://github.com/cheind/py-minexr), was developed and integrated into **blendtorch**.
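For illustration, here is a minimal sketch of reading such a temporary file with py-minexr. The file name and channel names are hypothetical and depend on how the `FileOutput` nodes are configured.

```
import minexr

# Hypothetical file written by a FileOutput node in the compositor.
with open('normals_0001.exr', 'rb') as fp:
    reader = minexr.load(fp)
    # Select channels into an HxWxC numpy array.
    normals = reader.select(['Color.R', 'Color.G', 'Color.B'])
```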
### Normals
Camera normals are generated by a custom geometry-based material. Since colors must be in the range (0,1) but normals are in (-1,1), a transformation is applied to make them compatible with color ranges. Hence, in PyTorch, apply the inverse transformation to recover true normals.
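A minimal sketch of that inverse mapping, assuming the decoded normal map arrives as a tensor `n` with values in (0,1):

```
import torch

def decode_normals(n: torch.Tensor) -> torch.Tensor:
    # Map color-encoded normals from (0,1) back to (-1,1).
    return n * 2.0 - 1.0
```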