Commit 805e861

Conchylicultor authored and copybara-github committed

Automated documentation update

PiperOrigin-RevId: 280547812

1 parent: ab1bb0a

File tree

11 files changed: +5102 −1093 lines


docs/api_docs/python/tfds/testing/RaggedConstant.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -18,7 +18,6 @@ source</a>
 ## Class `RaggedConstant`
 
 <!-- Start diff -->
-
 Container of tf.ragged.constant values.
 
 <!-- Placeholder for "Used in" -->
```

docs/catalog/_toc.yaml

Lines changed: 4 additions & 0 deletions

```diff
@@ -62,6 +62,8 @@ toc:
     title: deep_weeds
   - path: /datasets/catalog/diabetic_retinopathy_detection
     title: diabetic_retinopathy_detection
+  - path: /datasets/catalog/dmlab
+    title: dmlab
   - path: /datasets/catalog/downsampled_imagenet
     title: downsampled_imagenet
   - path: /datasets/catalog/dsprites
@@ -212,6 +214,8 @@ toc:
     title: imdb_reviews
   - path: /datasets/catalog/lm1b
     title: lm1b
+  - path: /datasets/catalog/math_dataset
+    title: math_dataset
   - path: /datasets/catalog/multi_nli
     title: multi_nli
   - path: /datasets/catalog/multi_nli_mismatch
```
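Both hunks slot the new catalog entries into an alphabetically sorted toc. A small sketch of that invariant, using the neighboring titles from the hunks above (pure Python; `bisect.insort` stands in for whatever the doc generator actually does):

```python
import bisect

# Neighboring toc titles from the two hunks above. bisect.insort keeps the
# list sorted, landing each new entry exactly where the diff inserts it.
image_toc = ["deep_weeds", "diabetic_retinopathy_detection",
             "downsampled_imagenet", "dsprites"]
bisect.insort(image_toc, "dmlab")

text_toc = ["imdb_reviews", "lm1b", "multi_nli", "multi_nli_mismatch"]
bisect.insort(text_toc, "math_dataset")
```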

docs/catalog/dmlab.md

Lines changed: 80 additions & 0 deletions (new file)

<div itemscope itemtype="http://schema.org/Dataset">
  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
    <meta itemprop="name" content="TensorFlow Datasets" />
  </div>
  <meta itemprop="name" content="dmlab" />
  <meta itemprop="description" content="&#10;  The Dmlab dataset contains frames observed by the agent acting in the&#10;  DeepMind Lab environment, which are annotated by the distance between&#10;  the agent and various objects present in the environment. The goal is to&#10;  evaluate the ability of a visual model to reason about distances&#10;  from the visual input in 3D environments. The Dmlab dataset consists of&#10;  360x480 color images in 6 classes. The classes are&#10;  {close, far, very far} x {positive reward, negative reward}.&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load('dmlab', split='train')&#10;for ex in ds.take(4):&#10;  print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;information on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/dmlab" />
  <meta itemprop="sameAs" content="https://github.com/google-research/task_adaptation" />
  <meta itemprop="citation" content="@article{zhai2019visual,&#10;  title={The Visual Task Adaptation Benchmark},&#10;  author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and&#10;          Pierre Ruyssen and Carlos Riquelme and Mario Lucic and&#10;          Josip Djolonga and Andre Susano Pinto and Maxim Neumann and&#10;          Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and&#10;          Michael Tschannen and Marcin Michalski and Olivier Bousquet and&#10;          Sylvain Gelly and Neil Houlsby},&#10;  year={2019},&#10;  eprint={1910.04867},&#10;  archivePrefix={arXiv},&#10;  primaryClass={cs.CV},&#10;  url = {https://arxiv.org/abs/1910.04867}&#10;}" />
</div>

# `dmlab`

The Dmlab dataset contains frames observed by the agent acting in the DeepMind
Lab environment, which are annotated by the distance between the agent and
various objects present in the environment. The goal is to evaluate the
ability of a visual model to reason about distances from the visual input in
3D environments. The Dmlab dataset consists of 360x480 color images in 6
classes: {close, far, very far} x {positive reward, negative reward}.
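The six classes are the cross product of three distance buckets and two reward signs. A plain-Python sketch of that label structure (the exact class strings and their index order are illustrative assumptions, not taken from the builder):

```python
from itertools import product

# Three distance buckets crossed with two reward signs -> 6 classes,
# matching num_classes=6 in the dataset's FeaturesDict. The strings and
# their ordering here are assumptions for illustration only.
DISTANCES = ["close", "far", "very far"]
REWARDS = ["positive reward", "negative reward"]

classes = [f"{d} / {r}" for d, r in product(DISTANCES, REWARDS)]
assert len(classes) == 6
```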
* URL:
    [https://github.com/google-research/task_adaptation](https://github.com/google-research/task_adaptation)
* `DatasetBuilder`:
    [`tfds.image.dmlab.Dmlab`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/image/dmlab.py)
* Version: `v2.0.0`
* Versions:

    * **`2.0.0`** (default):

* Size: `2.81 GiB`

## Features

```python
FeaturesDict({
    'filename': Text(shape=(), dtype=tf.string),
    'image': Image(shape=(360, 480, 3), dtype=tf.uint8),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=6),
})
```
## Statistics

Split      | Examples
:--------- | -------:
ALL        | 110,913
TRAIN      | 65,550
TEST       | 22,735
VALIDATION | 22,628
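As a quick arithmetic check, the per-split counts in the table sum to the ALL row:

```python
# Split sizes copied from the statistics table above.
splits = {"TRAIN": 65_550, "TEST": 22_735, "VALIDATION": 22_628}
total = sum(splits.values())
assert total == 110_913  # the ALL row

# Rough split fractions: about 59% train, 20.5% test, 20.4% validation.
fractions = {name: n / total for name, n in splits.items()}
```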
## Homepage

* [https://github.com/google-research/task_adaptation](https://github.com/google-research/task_adaptation)

## Supervised keys (for `as_supervised=True`)

`(u'image', u'label')`

## Citation

```
@article{zhai2019visual,
  title={The Visual Task Adaptation Benchmark},
  author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and
          Pierre Ruyssen and Carlos Riquelme and Mario Lucic and
          Josip Djolonga and Andre Susano Pinto and Maxim Neumann and
          Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and
          Michael Tschannen and Marcin Michalski and Olivier Bousquet and
          Sylvain Gelly and Neil Houlsby},
  year={2019},
  eprint={1910.04867},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url = {https://arxiv.org/abs/1910.04867}
}
```

--------------------------------------------------------------------------------

docs/catalog/duke_ultrasound.md

Lines changed: 6 additions & 3 deletions

```diff
@@ -2,12 +2,14 @@
 <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
 <meta itemprop="name" content="TensorFlow Datasets" />
 </div>
+
 <meta itemprop="name" content="duke_ultrasound" />
 <meta itemprop="description" content="DukeUltrasound is an ultrasound dataset collected at Duke University with a&#10;Verasonics c52v probe. It contains delay-and-sum (DAS) beamformed data&#10;as well as data post-processed with Siemens Dynamic TCE for speckle&#10;reduction, contrast enhancement and improvement in conspicuity of&#10;anatomical structures. These data were collected with support from the&#10;National Institute of Biomedical Imaging and Bioengineering under Grant&#10;R01-EB026574 and National Institutes of Health under Grant 5T32GM007171-44.&#10;A usage example is available&#10;[here](https://colab.research.google.com/drive/1R_ARqpWoiHcUQWg1Fxwyx-ZkLi0IZ5qs).&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load('duke_ultrasound', split='train')&#10;for ex in ds.take(4):&#10;  print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;information on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
 <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/duke_ultrasound" />
-<meta itemprop="sameAs" content="https://arxiv.org/abs/1908.05782" />
+<meta itemprop="sameAs" content="https://github.com/ouwen/mimicknet" />
 <meta itemprop="citation" content="@article{DBLP:journals/corr/abs-1908-05782,&#10;  author = {Ouwen Huang and&#10;            Will Long and&#10;            Nick Bottenus and&#10;            Gregg E. Trahey and&#10;            Sina Farsiu and&#10;            Mark L. Palmeri},&#10;  title = {MimickNet, Matching Clinical Post-Processing Under Realistic Black-Box&#10;           Constraints},&#10;  journal = {CoRR},&#10;  volume = {abs/1908.05782},&#10;  year = {2019},&#10;  url = {http://arxiv.org/abs/1908.05782},&#10;  archivePrefix = {arXiv},&#10;  eprint = {1908.05782},&#10;  timestamp = {Mon, 19 Aug 2019 13:21:03 +0200},&#10;  biburl = {https://dblp.org/rec/bib/journals/corr/abs-1908-05782},&#10;  bibsource = {dblp computer science bibliography, https://dblp.org}&#10;}" />
 </div>
+
 # `duke_ultrasound`
 
 DukeUltrasound is an ultrasound dataset collected at Duke University with a
@@ -19,7 +21,8 @@ and Bioengineering under Grant R01-EB026574 and National Institutes of Health
 under Grant 5T32GM007171-44. A usage example is available
 [here](https://colab.research.google.com/drive/1R_ARqpWoiHcUQWg1Fxwyx-ZkLi0IZ5qs).
 
-* URL: [https://arxiv.org/abs/1908.05782](https://arxiv.org/abs/1908.05782)
+* URL:
+    [https://github.com/ouwen/mimicknet](https://github.com/ouwen/mimicknet)
 * `DatasetBuilder`:
     [`tfds.image.duke_ultrasound.DukeUltrasound`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/image/duke_ultrasound.py)
 * Version: `v1.0.0`
@@ -69,7 +72,7 @@ VALIDATION | 278
 
 ## Homepage
 
-* [https://arxiv.org/abs/1908.05782](https://arxiv.org/abs/1908.05782)
+* [https://github.com/ouwen/mimicknet](https://github.com/ouwen/mimicknet)
 
 ## Supervised keys (for `as_supervised=True`)
 `(u'das/dB', u'dtce')`
```
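For both catalog pages in this commit, the supervised keys determine how `as_supervised=True` flattens each feature dict into an `(input, target)` pair. A plain-Python sketch of that mapping (the helper name is ours, and the dict values are placeholders, not real tensors):

```python
def as_supervised_pair(example, supervised_keys):
    """Mimic tfds' as_supervised=True: select the (input, target) pair
    from a full feature dict using the dataset's supervised keys."""
    input_key, target_key = supervised_keys
    return example[input_key], example[target_key]

# Placeholder example dicts; real ones come from tfds.load(...).
dmlab_example = {"filename": "f.png", "image": "<360x480x3 image>", "label": 3}
x, y = as_supervised_pair(dmlab_example, ("image", "label"))

duke_example = {"das/dB": "<das data>", "dtce": "<dtce data>"}
u, v = as_supervised_pair(duke_example, ("das/dB", "dtce"))
```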
