
Commit 31a2768

Merge branch 'release/v0.4'

2 parents: bf5ee25 + 5e58154


76 files changed: +2139 / -283 lines

.gitignore

Lines changed: 2 additions & 3 deletions

@@ -103,8 +103,7 @@ venv.bak/
 # mypy
 .mypy_cache/
 
-tmp/*
-examples/datagen/tmp/*
+**/tmp/*
 *.blend1
-
+**/tmp/*
 !__keep__

.travis.yml

Lines changed: 3 additions & 0 deletions

@@ -1,8 +1,11 @@
 
+dist: xenial
 language: python
 python:
 - 3.7
 - 3.8
+services:
+- xvfb
 
 cache:
   pip: true

Readme.md

Lines changed: 40 additions & 28 deletions

@@ -1,31 +1,55 @@
-# blendtorch v0.2
-![](https://travis-ci.org/cheind/pytorch-blender.svg?branch=develop)
+# blendtorch
+[![](https://travis-ci.org/cheind/pytorch-blender.svg?branch=develop)](https://travis-ci.org/cheind/pytorch-blender)
 
 **blendtorch** is a Python framework to seamlessly integrate [Blender](http://blender.org) into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations in real-time and thus avoid stalling model training in many cases.
 
 Feature summary
-- ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be send alongside images/videos. Built-in recording capability to replay data without Blender.</br>More info [\[examples/datagen\]](examples/datagen)
+- ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be sent alongside images/videos. Built-in recording capability to replay data without Blender. Bi-directional communication channels allow Blender simulations to adapt during network training.</br>More info [\[examples/datagen\]](examples/datagen), [\[examples/compositor_normals_depth\]](examples/compositor_normals_depth), [\[examples/densityopt\]](examples/densityopt)
 - ***OpenAI Gym Support***: Create and run remotely controlled Blender gyms to train reinforcement agents. Blender serves as simulation, visualization, and interactive live manipulation environment.
 </br>More info [\[examples/control\]](examples/control)
 
-The figure below visualizes a single image/label batch received by PyTorch from four parallel Blender instances. Each Blender process repeatedly performs motion simulations of randomized cubes.
+The figure below visualizes the basic concept of **blendtorch** used in the context of generating artificial training data for a real-world detection task.
 
-<p align="center">
-<img src="etc/result_physics.png" width="500">
-</p>
+<div align="center">
+<img src="etc/blendtorch_intro_v3.svg" width="90%">
+</div>
 
 ## Getting started
 1. Read the installation instructions below
 1. To get started with **blendtorch** for generating training data, read [\[examples/datagen\]](examples/datagen).
 1. To learn about using **blendtorch** for creating reinforcement learning environments, read [\[examples/control\]](examples/control).
 
+## Cite
+The code accompanies our academic work [[1]](https://arxiv.org/abs/1907.01879), [[2]](https://arxiv.org/abs/2010.11696) in the field of machine learning from artificial images. Please consider the following publications when citing **blendtorch**
+```
+@inproceedings{robotpose_etfa2019_cheind,
+  author={Christoph Heindl, Sebastian Zambal, Josef Scharinger},
+  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
+  booktitle={
+    24th IEEE International Conference on
+    Emerging Technologies and Factory Automation (ETFA)
+  },
+  year={2019}
+}
+
+@inproceedings{blendtorch_icpr2020_cheind,
+  author = {Christoph Heindl, Lukas Brunner, Sebastian Zambal and Josef Scharinger},
+  title = {BlendTorch: A Real-Time, Adaptive Domain Randomization Library},
+  booktitle = {
+    1st Workshop on Industrial Machine Learning
+    at International Conference on Pattern Recognition (ICPR2020)
+  },
+  year = {2020},
+}
+```
+
 ## Installation
 
 **blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` (in [pkg_pytorch](./pkg_pytorch)) and `blendtorch.btb` (in [pkg_blender](./pkg_blender)), providing the PyTorch and Blender views on **blendtorch**.
 
 ### Prerequisites
 This package has been tested with
-- [Blender](https://www.blender.org/) >= 2.83 (Python 3.7)
+- [Blender](https://www.blender.org/) >= 2.83/2.91 (Python 3.7)
 - [PyTorch](http://pytorch.org) >= 1.50 (Python 3.7/3.8)
 running Windows 10 and Linux.
 

@@ -39,9 +63,12 @@ git clone https://github.com/cheind/pytorch-blender.git <DST>
 ### Extend `PATH`
 Ensure the Blender executable is in your environment's lookup `PATH`. On Windows this can be accomplished by
 ```
-set PATH=c:\Program Files\Blender Foundation\Blender 2.83;%PATH%
+set PATH=c:\Program Files\Blender Foundation\Blender 2.91;%PATH%
 ```
 
+### Complete Blender settings
+Open Blender at least once and complete the initial settings. If this step is missed, some of the tests (especially the tests related to RL) will fail (Blender 2.91).
+
 ### Install **blendtorch** Blender part
 ```
 blender --background --python <DST>/scripts/install_btb.py

@@ -56,6 +83,7 @@ installs `blendtorch-btt` into the Python environment that you intend to run PyT
 ```
 pip install gym
 ```
+
 ### Developer instructions
 This step is optional. If you plan to run the unit tests
 ```

@@ -79,27 +107,11 @@ python -c "import blendtorch.btt as btt; print(btt.__version__)"
 which should print the **blendtorch** version number on success.
 
 ## Architecture
-Please see [\[examples/datagen\]](examples/datagen) and [examples/control\]](examples/control) for an in-depth architectural discussion.
-
-## Cite
-The code accompanies our [academic work](https://arxiv.org/abs/1907.01879) in the field of machine learning from artificial images. When using please cite the following work
-```
-@inproceedings{robotpose_etfa2019_cheind,
-  author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
-  title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
-  booktitle={
-    24th IEEE International Conference on
-    Emerging Technologies and Factory Automation (ETFA)
-  },
-  year={2019},
-  pages={1536-1539},
-  doi={10.1109/ETFA.2019.8868243},
-  isbn={978-1-7281-0303-7},
-}
-```
+Please see [\[examples/datagen\]](examples/datagen) and [\[examples/control\]](examples/control) for an in-depth architectural discussion. Bi-directional communication is explained in [\[examples/densityopt\]](examples/densityopt).
 
 ## Runtimes
-The following tables show the mean runtimes per batch (8) and per image for a simple Cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating.
+
+The following tables show the mean runtimes per batch (8) and per image for a simple Cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating. Reported timings are for Blender 2.8. Blender 2.9 performs equally well on this scene, but is usually faster for more complex renderings.
 
 | Blender Instances | Runtime sec/batch | Runtime sec/image | Arguments|
 |:-:|:-:|:-:|:-:|
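For readers skimming this diff, the data-streaming workflow advertised in the Readme reduces to a few lines on the PyTorch side. The following is a minimal sketch based on the `btt.BlenderLauncher` / `btt.RemoteIterableDataset` calls used by the example scripts in this commit; the `cube.blend` / `cube.blend.py` paths and the `'image'` key are illustrative placeholders, not files or names introduced by this change.

```python
from pathlib import Path

import torch.utils.data as data
import blendtorch.btt as btt


def main():
    # Illustrative scene/script paths; point these at your own Blender scene.
    scene_dir = Path('examples/datagen')
    launch_args = dict(
        scene=scene_dir/'cube.blend',
        script=scene_dir/'cube.blend.py',
        num_instances=2,          # parallel Blender processes
        named_sockets=['DATA'],   # socket served by btb.DataPublisher on the Blender side
    )
    with btt.BlenderLauncher(**launch_args) as bl:
        addr = bl.launch_info.addresses['DATA']
        ds = btt.RemoteIterableDataset(addr, max_items=32)
        dl = data.DataLoader(ds, batch_size=8, num_workers=0)
        for item in dl:
            # Keys depend on what the Blender-side script publishes.
            print('received', item['image'].shape)


if __name__ == '__main__':
    main()
```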

benchmarks/benchmark.py

Lines changed: 22 additions & 3 deletions

@@ -2,17 +2,20 @@
 import argparse
 from pathlib import Path
 import torch.utils.data as data
+import matplotlib.pyplot as plt
+import numpy as np
+
 from blendtorch import btt
 
 BATCH = 8
 INSTANCES = 4
-WORKER_INSTANCES = 2
+WORKER_INSTANCES = 4
 NUM_ITEMS = 512
 EXAMPLES_DIR = Path(__file__).parent/'..'/'examples'/'datagen'
 
 def main():
     parser = argparse.ArgumentParser()
-    parser.add_argument('--scene', help='Blender scene name to run', default='cube')
+    parser.add_argument('scene', help='Blender scene name to run', default='cube')
     args = parser.parse_args()
 
     launch_args = dict(

@@ -31,20 +34,36 @@ def main():
         time.sleep(5)
 
         t0 = None
+        tlast = None
         imgshape = None
 
+        elapsed = []
         n = 0
         for item in dl:
+            n += len(item['image'])
             if t0 is None:  # 1st is warmup
                 t0 = time.time()
+                tlast = t0
                 imgshape = item['image'].shape
-            n += len(item['image'])
+            elif n % (50*BATCH) == 0:
+                t = time.time()
+                elapsed.append(t - tlast)
+                tlast = t
+                print('.', end='')
         assert n == NUM_ITEMS
 
         t1 = time.time()
         N = NUM_ITEMS - BATCH
         B = NUM_ITEMS//BATCH - 1
         print(f'Time {(t1-t0)/N:.3f}sec/image, {(t1-t0)/B:.3f}sec/batch, shape {imgshape}')
 
+        fig, _ = plt.subplots()
+        plt.plot(np.arange(len(elapsed)), elapsed)
+        plt.title('Receive times between 50 consecutive batches')
+        save_path = EXAMPLES_DIR / 'tmp' / 'batches_elapsed.png'
+        fig.savefig(str(save_path))
+        plt.close(fig)
+        print(f'Figure saved to {save_path}')
+
 if __name__ == '__main__':
     main()
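A note on usage under the changed argument parsing: the scene name is now a positional argument (argparse ignores `default=` for a required positional), so a typical invocation would be `python benchmark.py cube`. The per-50-batch timing plot is written to `examples/datagen/tmp/batches_elapsed.png`, a path that the updated `.gitignore` pattern `**/tmp/*` keeps out of version control.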

etc/blendtorch_intro_v3.svg

Lines changed: 1 addition & 0 deletions

etc/export_paths.bat

Lines changed: 1 addition & 2 deletions

@@ -1,4 +1,3 @@
 @echo off
-set PATH=c:\Program Files\Blender Foundation\Blender 2.83;%PATH%
-set PYTHONPATH=%~dp0..\pkg_blender;%~dp0..\pkg_pytorch;%PYTHONPATH%
+set PATH=c:\Program Files\Blender Foundation\Blender 2.90;%PATH%
 @echo on

etc/result.png

-79.1 KB (binary file not shown)
examples/compositor_normals_depth/Readme.md

Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
+## Compositor Render Support
+
+This directory showcases synthetic data generation using **blendtorch** for supervised machine learning. In particular, we use composite rendering to extract normals and depths from a randomized scene. The scene is composed of a fixed plane and a number of parametric 3D supershapes. Using physics, we drop a random initial constellation of objects onto the plane. Once the objects come to rest (we speed up the physics, so this roughly happens after a single frame), we publish dense camera depth and normal information.
+
+<p align="center">
+<img src="etc/normals_depth.png" width="500">
+</p>
+
+### Composite rendering
+This sample uses the compositor to access different render passes. Unfortunately, Blender (2.9) does not offer a straightforward way to access the result of various render passes in memory. Therefore, `btb.CompositeRenderer` requires `FileOutput` nodes for temporary storage of data. For this purpose, a fast OpenEXR reader, [py-minexr](https://github.com/cheind/py-minexr), was developed and integrated into **blendtorch**.
+
+### Normals
+Camera normals are generated by a custom geometry-based material. Since colors must be in the range (0,1) but normals are in (-1,1), a transformation is applied to make them compatible with color ranges. Hence, in PyTorch, apply the following transformation to recover the true normals
+```python
+true_normals = (normals - 0.5)*np.array([2., 2., -2.]).reshape(1,1,1,-1)  # BxHxWx3
+```
+
+### Run
+
+To recreate these results run [generate.py](./generate.py)
+```
+python generate.py
+```
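On the PyTorch side, the same color-to-normal un-mapping can be expressed with tensors; the comment in `generate.py` below uses exactly this form. A minimal sketch, assuming batches arrive as `BxHxWx3` float tensors in the (0,1) color range (the helper name `decode_normals` is illustrative, not part of the library):

```python
import torch


def decode_normals(normals: torch.Tensor) -> torch.Tensor:
    """Map color-coded normals in (0,1) back to camera-space normals in (-1,1)."""
    scale = torch.tensor([2., 2., -2.]).view(1, 1, 1, -1)  # broadcasts over BxHxWx3
    return (normals - 0.5) * scale
```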
examples/compositor_normals_depth/compositor_normals_depth.blend

Binary file not shown.

examples/compositor_normals_depth/compositor_normals_depth.blend.py

Lines changed: 64 additions & 0 deletions

@@ -0,0 +1,64 @@
+
+import blendtorch.btb as btb
+import numpy as np
+import bpy
+
+SHAPE = (30, 30)
+NSHAPES = 70
+
+
+def main():
+    # Update python-path with current blend file directory
+    btb.add_scene_dir_to_path()
+    import scene_helpers as scene
+
+    def pre_anim(meshes):
+        # Called before each animation
+        # Randomize supershapes
+        for m in meshes:
+            scene.update_mesh(m, sshape_res=SHAPE)
+
+    def post_frame(render, pub, animation):
+        # After frame
+        if anim.frameid == 1:
+            imgs = render.render()
+            pub.publish(
+                normals=imgs['normals'],
+                depth=imgs['depth']
+            )
+
+    # Parse script arguments passed via blendtorch launcher
+    btargs, _ = btb.parse_blendtorch_args()
+
+    # Fetch camera
+    cam = bpy.context.scene.camera
+
+    bpy.context.scene.rigidbody_world.time_scale = 100
+    bpy.context.scene.rigidbody_world.substeps_per_frame = 300
+
+    # Setup supershapes
+    meshes = scene.prepare(NSHAPES, sshape_res=SHAPE)
+
+    # Data source
+    pub = btb.DataPublisher(btargs.btsockets['DATA'], btargs.btid)
+
+    # Setup default image rendering
+    cam = btb.Camera()
+    render = btb.CompositeRenderer(
+        [
+            btb.CompositeSelection('normals', 'Out1', 'Normals', 'RGB'),
+            btb.CompositeSelection('depth', 'Out1', 'Depth', 'V'),
+        ],
+        btid=btargs.btid,
+        camera=cam,
+    )
+
+    # Setup the animation and run endlessly
+    anim = btb.AnimationController()
+    anim.pre_animation.add(pre_anim, meshes)
+    anim.post_frame.add(post_frame, render, pub, anim)
+    anim.play(frame_range=(0, 1), num_episodes=-1,
+              use_offline_render=False, use_physics=True)
+
+
+main()
examples/compositor_normals_depth/generate.py

Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@
+from pathlib import Path
+
+import blendtorch.btt as btt
+import matplotlib.pyplot as plt
+import numpy as np
+import torch
+from torch.utils import data
+
+
+def main():
+    # Define how we want to launch Blender
+    launch_args = dict(
+        scene=Path(__file__).parent/'compositor_normals_depth.blend',
+        script=Path(__file__).parent/'compositor_normals_depth.blend.py',
+        num_instances=1,
+        named_sockets=['DATA'],
+    )
+
+    # Launch Blender
+    with btt.BlenderLauncher(**launch_args) as bl:
+        # Create remote dataset and limit max length to 4 elements.
+        addr = bl.launch_info.addresses['DATA']
+        ds = btt.RemoteIterableDataset(addr, max_items=4)
+        dl = data.DataLoader(ds, batch_size=4, num_workers=0)
+
+        for item in dl:
+            normals = item['normals']
+            # Note, normals are color-coded (0..1), to convert back to original
+            # range (-1..1) use
+            # true_normals = (normals - 0.5) * \
+            #     torch.tensor([2., 2., -2.]).view(1, 1, 1, -1)
+            depth = item['depth']
+            print('Received', normals.shape, depth.shape,
+                  depth.dtype, np.ptp(depth))
+
+        fig, axs = plt.subplots(2, 2)
+        axs = np.asarray(axs).reshape(-1)
+        for i in range(4):
+            axs[i].imshow(depth[i, :, :, 0], vmin=1, vmax=2.5)
+        fig, axs = plt.subplots(2, 2)
+        axs = np.asarray(axs).reshape(-1)
+        for i in range(4):
+            axs[i].imshow(normals[i, :, :])
+        plt.show()
+
+
+if __name__ == '__main__':
+    main()
