<div itemscope itemtype="http://schema.org/Dataset">
  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
    <meta itemprop="name" content="TensorFlow Datasets" />
  </div>

  <meta itemprop="name" content="dmlab" />
  <meta itemprop="description" content="The Dmlab dataset contains frames observed by the agent acting in the DeepMind Lab environment, which are annotated by the distance between the agent and various objects present in the environment. The goal is to evaluate the ability of a visual model to reason about distances from the visual input in 3D environments. The Dmlab dataset consists of 360x480 color images in 6 classes. The classes are {close, far, very far} x {positive reward, negative reward}. To use this dataset: ```python import tensorflow_datasets as tfds ds = tfds.load('dmlab', split='train') for ex in ds.take(4): print(ex) ``` See [the guide](https://www.tensorflow.org/datasets/overview) for more information on [tensorflow_datasets](https://www.tensorflow.org/datasets). " />
  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/dmlab" />
  <meta itemprop="sameAs" content="https://github.com/google-research/task_adaptation" />
  <meta itemprop="citation" content="@article{zhai2019visual, title={The Visual Task Adaptation Benchmark}, author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and Pierre Ruyssen and Carlos Riquelme and Mario Lucic and Josip Djolonga and Andre Susano Pinto and Maxim Neumann and Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and Michael Tschannen and Marcin Michalski and Olivier Bousquet and Sylvain Gelly and Neil Houlsby}, year={2019}, eprint={1910.04867}, archivePrefix={arXiv}, primaryClass={cs.CV}, url = {https://arxiv.org/abs/1910.04867} }" />
</div>

# `dmlab`

The Dmlab dataset contains frames observed by the agent acting in the DeepMind
Lab environment, which are annotated by the distance between the agent and
various objects present in the environment. The goal is to evaluate the
ability of a visual model to reason about distances from the visual input in 3D
environments. The Dmlab dataset consists of 360x480 color images in 6 classes.
The classes are {close, far, very far} x {positive reward, negative reward}.

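To use this dataset:

```python
import tensorflow_datasets as tfds

# Load the training split and print a few examples.
ds = tfds.load('dmlab', split='train')
for ex in ds.take(4):
  print(ex)
```

See [the guide](https://www.tensorflow.org/datasets/overview) for more
information on [tensorflow_datasets](https://www.tensorflow.org/datasets).
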
* URL:
  [https://github.com/google-research/task_adaptation](https://github.com/google-research/task_adaptation)
* `DatasetBuilder`:
  [`tfds.image.dmlab.Dmlab`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/image/dmlab.py)
* Version: `v2.0.0`
* Versions:

  * **`2.0.0`** (default)

* Size: `2.81 GiB`
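
The same metadata is available programmatically through the `DatasetBuilder`
interface; a minimal sketch using standard `tfds` calls, nothing specific to
this dataset:

```python
import tensorflow_datasets as tfds

# The builder exposes the version and feature metadata listed above.
builder = tfds.builder('dmlab')
print(builder.info.version)  # 2.0.0

# Downloads and prepares the ~2.81 GiB of data unless already cached.
builder.download_and_prepare()
ds = builder.as_dataset(split='train')
```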

## Features

```python
FeaturesDict({
    'filename': Text(shape=(), dtype=tf.string),
    'image': Image(shape=(360, 480, 3), dtype=tf.uint8),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=6),
})
```
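
Each example is a dictionary with these three keys. A minimal sketch of
inspecting them, assuming TensorFlow 2.x eager execution:

```python
import tensorflow_datasets as tfds

# Take one example and check each feature against the spec above.
ds = tfds.load('dmlab', split='train')
for ex in ds.take(1):
    print(ex['image'].shape)       # (360, 480, 3), dtype uint8
    print(ex['label'].numpy())     # integer class id in [0, 6)
    print(ex['filename'].numpy())  # bytes filename of the source frame
```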

## Statistics

Split      | Examples
:--------- | -------:
ALL        | 110,913
TRAIN      | 65,550
TEST       | 22,735
VALIDATION | 22,628

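All three named splits can be requested in a single call; a short sketch
(split names follow the usual lowercase TFDS convention):

```python
import tensorflow_datasets as tfds

# Passing a list to `split` returns one tf.data.Dataset per split.
ds_train, ds_val, ds_test = tfds.load(
    'dmlab', split=['train', 'validation', 'test'])
```
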
## Homepage

* [https://github.com/google-research/task_adaptation](https://github.com/google-research/task_adaptation)

## Supervised keys (for `as_supervised=True`)

`(u'image', u'label')`
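
With `as_supervised=True`, each element is the `(image, label)` tuple named
above instead of a feature dictionary. A sketch of a typical input pipeline
built on top of it (the preprocessing choices here are illustrative, not part
of the dataset):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Elements are (image, label) tuples rather than dictionaries.
ds = tfds.load('dmlab', split='train', as_supervised=True)

# Scale images to [0, 1] floats, then batch and prefetch for training.
ds = (ds
      .map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label),
           num_parallel_calls=tf.data.experimental.AUTOTUNE)
      .batch(32)
      .prefetch(tf.data.experimental.AUTOTUNE))
```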

## Citation

```
@article{zhai2019visual,
  title={The Visual Task Adaptation Benchmark},
  author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and
          Pierre Ruyssen and Carlos Riquelme and Mario Lucic and
          Josip Djolonga and Andre Susano Pinto and Maxim Neumann and
          Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and
          Michael Tschannen and Marcin Michalski and Olivier Bousquet and
          Sylvain Gelly and Neil Houlsby},
  year={2019},
  eprint={1910.04867},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/1910.04867}
}
```