Commit bda77c7

Conchylicultor authored and copybara-github committed
Automated documentation update
PiperOrigin-RevId: 281137731
1 parent 00927c8 commit bda77c7

File tree

9 files changed: +424 −382 lines changed


docs/catalog/_toc.yaml

Lines changed: 2 additions & 0 deletions

````diff
@@ -206,6 +206,8 @@ toc:
    title: c4
  - path: /datasets/catalog/definite_pronoun_resolution
    title: definite_pronoun_resolution
+ - path: /datasets/catalog/esnli
+   title: esnli
  - path: /datasets/catalog/gap
    title: gap
  - path: /datasets/catalog/glue
````
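The new `esnli` entry lands between `definite_pronoun_resolution` and `gap`, which keeps the catalog toc in alphabetical order. A minimal sketch of that invariant, using the titles visible in the hunk above:

```python
# Catalog titles surrounding the insertion point, in the order they
# appear in the patched _toc.yaml hunk.
entries = ['c4', 'definite_pronoun_resolution', 'esnli', 'gap', 'glue']

# Inserting 'esnli' at this position preserves sortedness.
print(entries == sorted(entries))  # True
```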

docs/catalog/dmlab.md

Lines changed: 0 additions & 5 deletions

````diff
@@ -2,14 +2,12 @@
   <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
     <meta itemprop="name" content="TensorFlow Datasets" />
   </div>
-
   <meta itemprop="name" content="dmlab" />
   <meta itemprop="description" content="&#10; The Dmlab dataset contains frames observed by the agent acting in the&#10; DeepMind Lab environment, which are annotated by the distance between&#10; the agent and various objects present in the environment. The goal is to&#10; is to evaluate the ability of a visual model to reason about distances&#10; from the visual input in 3D environments. The Dmlab dataset consists of&#10; 360x480 color images in 6 classes. The classes are&#10; {close, far, very far} x {positive reward, negative reward}&#10; respectively.&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load('dmlab', split='train')&#10;for ex in ds.take(4):&#10; print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;informations on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
   <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/dmlab" />
   <meta itemprop="sameAs" content="https://github.com/google-research/task_adaptation" />
   <meta itemprop="citation" content="@article{zhai2019visual,&#10; title={The Visual Task Adaptation Benchmark},&#10; author={Xiaohua Zhai and Joan Puigcerver and Alexander Kolesnikov and&#10; Pierre Ruyssen and Carlos Riquelme and Mario Lucic and&#10; Josip Djolonga and Andre Susano Pinto and Maxim Neumann and&#10; Alexey Dosovitskiy and Lucas Beyer and Olivier Bachem and&#10; Michael Tschannen and Marcin Michalski and Olivier Bousquet and&#10; Sylvain Gelly and Neil Houlsby},&#10; year={2019},&#10; eprint={1910.04867},&#10; archivePrefix={arXiv},&#10; primaryClass={cs.CV},&#10; url = {https://arxiv.org/abs/1910.04867}&#10; }" />
 </div>
-
 # `dmlab`
 
 The Dmlab dataset contains frames observed by the agent acting in the DeepMind
@@ -32,7 +30,6 @@ respectively.
 *   Size: `2.81 GiB`
 
 ## Features
-
 ```python
 FeaturesDict({
     'filename': Text(shape=(), dtype=tf.string),
@@ -55,11 +52,9 @@ VALIDATION | 22,628
 *   [https://github.com/google-research/task_adaptation](https://github.com/google-research/task_adaptation)
 
 ## Supervised keys (for `as_supervised=True`)
-
 `(u'image', u'label')`
 
 ## Citation
-
 ```
 @article{zhai2019visual,
   title={The Visual Task Adaptation Benchmark},
````
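The `&#10;` runs inside the `itemprop` attributes above are HTML-escaped newlines, so each attribute carries the same multi-line text that appears in the rendered page body. A minimal sketch of decoding such a value with Python's standard library (the string here is an abbreviated copy of the `dmlab` description attribute, not the full one):

```python
import html

# Abbreviated copy of the escaped description attribute from the
# <meta itemprop="description" ...> tag above.
escaped = ("The Dmlab dataset contains frames observed by the agent&#10;"
           "acting in the DeepMind Lab environment.")

# html.unescape turns each &#10; entity back into a newline character.
decoded = html.unescape(escaped)
print(decoded.splitlines())
```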

docs/catalog/duke_ultrasound.md

Lines changed: 0 additions & 2 deletions

````diff
@@ -2,14 +2,12 @@
   <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
     <meta itemprop="name" content="TensorFlow Datasets" />
   </div>
-
   <meta itemprop="name" content="duke_ultrasound" />
   <meta itemprop="description" content="DukeUltrasound is an ultrasound dataset collected at Duke University with a &#10;Verasonics c52v probe. It contains delay-and-sum (DAS) beamformed data &#10;as well as data post-processed with Siemens Dynamic TCE for speckle &#10;reduction, contrast enhancement and improvement in conspicuity of &#10;anatomical structures. These data were collected with support from the&#10;National Institute of Biomedical Imaging and Bioengineering under Grant &#10;R01-EB026574 and National Institutes of Health under Grant 5T32GM007171-44.&#10;A usage example is avalible &#10;[here](https://colab.research.google.com/drive/1R_ARqpWoiHcUQWg1Fxwyx-ZkLi0IZ5qs).&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load('duke_ultrasound', split='train')&#10;for ex in ds.take(4):&#10; print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;informations on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
   <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/duke_ultrasound" />
   <meta itemprop="sameAs" content="https://github.com/ouwen/mimicknet" />
   <meta itemprop="citation" content="@article{DBLP:journals/corr/abs-1908-05782,&#10; author = {Ouwen Huang and&#10; Will Long and&#10; Nick Bottenus and&#10; Gregg E. Trahey and&#10; Sina Farsiu and&#10; Mark L. Palmeri},&#10; title = {MimickNet, Matching Clinical Post-Processing Under Realistic Black-Box&#10; Constraints},&#10; journal = {CoRR},&#10; volume = {abs/1908.05782},&#10; year = {2019},&#10; url = {http://arxiv.org/abs/1908.05782},&#10; archivePrefix = {arXiv},&#10; eprint = {1908.05782},&#10; timestamp = {Mon, 19 Aug 2019 13:21:03 +0200},&#10; biburl = {https://dblp.org/rec/bib/journals/corr/abs-1908-05782},&#10; bibsource = {dblp computer science bibliography, https://dblp.org}&#10;}" />
 </div>
-
 # `duke_ultrasound`
 
 DukeUltrasound is an ultrasound dataset collected at Duke University with a
````

docs/catalog/esnli.md

Lines changed: 76 additions & 0 deletions

````diff
@@ -0,0 +1,76 @@
+<div itemscope itemtype="http://schema.org/Dataset">
+  <div itemscope itemprop="includedInDataCatalog" itemtype="http://schema.org/DataCatalog">
+    <meta itemprop="name" content="TensorFlow Datasets" />
+  </div>
+
+  <meta itemprop="name" content="esnli" />
+  <meta itemprop="description" content="&#10;The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to&#10;include human-annotated natural language explanations of the entailment&#10;relations.&#10;&#10;&#10;To use this dataset:&#10;&#10;```python&#10;import tensorflow_datasets as tfds&#10;&#10;ds = tfds.load('esnli', split='train')&#10;for ex in ds.take(4):&#10; print(ex)&#10;```&#10;&#10;See [the guide](https://www.tensorflow.org/datasets/overview) for more&#10;informations on [tensorflow_datasets](https://www.tensorflow.org/datasets).&#10;&#10;" />
+  <meta itemprop="url" content="https://www.tensorflow.org/datasets/catalog/esnli" />
+  <meta itemprop="sameAs" content="https://github.com/OanaMariaCamburu/e-SNLI" />
+  <meta itemprop="citation" content="&#10;@incollection{NIPS2018_8163,&#10;title = {e-SNLI: Natural Language Inference with Natural Language Explanations},&#10;author = {Camburu, Oana-Maria and Rockt&quot;{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},&#10;booktitle = {Advances in Neural Information Processing Systems 31},&#10;editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},&#10;pages = {9539--9549},&#10;year = {2018},&#10;publisher = {Curran Associates, Inc.},&#10;url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}&#10;}&#10;" />
+</div>
+
+# `esnli`
+
+The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
+include human-annotated natural language explanations of the entailment
+relations.
+
+*   URL:
+    [https://github.com/OanaMariaCamburu/e-SNLI](https://github.com/OanaMariaCamburu/e-SNLI)
+*   `DatasetBuilder`:
+    [`tfds.text.esnli.Esnli`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/text/esnli.py)
+
+`esnli` is configured with `tfds.core.dataset_builder.BuilderConfig` and has the
+following configurations predefined (defaults to the first one):
+
+*   `plain_text` (`v0.0.1`) (`Size: 195.04 MiB`): Plain text import of e-SNLI
+
+## `esnli/plain_text`
+
+Plain text import of e-SNLI
+
+Versions:
+
+*   **`0.0.1`** (default):
+
+### Statistics
+
+Split      | Examples
+:--------- | -------:
+ALL        | 569,033
+TRAIN      | 549,367
+VALIDATION | 9,842
+TEST       | 9,824
+
+### Features
+
+```python
+FeaturesDict({
+    'explanation': Text(shape=(), dtype=tf.string),
+    'hypothesis': Text(shape=(), dtype=tf.string),
+    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=3),
+    'premise': Text(shape=(), dtype=tf.string),
+})
+```
+
+### Homepage
+
+*   [https://github.com/OanaMariaCamburu/e-SNLI](https://github.com/OanaMariaCamburu/e-SNLI)
+
+## Citation
+
+```
+@incollection{NIPS2018_8163,
+title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
+author = {Camburu, Oana-Maria and Rockt"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
+booktitle = {Advances in Neural Information Processing Systems 31},
+editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
+pages = {9539--9549},
+year = {2018},
+publisher = {Curran Associates, Inc.},
+url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
+}
+```
+
+--------------------------------------------------------------------------------
````
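The split statistics in the new `esnli` page are internally consistent: the three named splits sum to the ALL row. A quick check of that arithmetic, with the counts copied from the table above:

```python
# Per-split example counts from the esnli statistics table.
splits = {'TRAIN': 549_367, 'VALIDATION': 9_842, 'TEST': 9_824}

total = sum(splits.values())
print(total)  # 569033, matching the ALL row
```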
