7 files changed: +294 −131 lines

@@ -78,7 +78,7 @@ Callers must pass arguments as keyword arguments.
 ```python
 as_dataset(
     split=None,
-    batch_size=1,
+    batch_size=None,
     shuffle_files=None,
     as_supervised=False
 )
@@ -95,10 +95,10 @@ Callers must pass arguments as keyword arguments.
     which subset(s) of the data to read. If None (default), returns all splits
     in a dict `<key: tfds.Split, value: tf.data.Dataset>`.
 *   <b>`batch_size`</b>: `int`, batch size. Note that variable-length features
-    will be 0-padded if `batch_size > 1`. Users that want more custom behavior
-    should use `batch_size=1` and use the `tf.data` API to construct a custom
-    pipeline. If `batch_size == -1`, will return feature dictionaries of the
-    whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
+    will be 0-padded if `batch_size` is set. Users that want more custom
+    behavior should use `batch_size=None` and use the `tf.data` API to construct
+    a custom pipeline. If `batch_size == -1`, will return feature dictionaries
+    of the whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
 *   <b>`shuffle_files`</b>: `bool`, whether to shuffle the input files. Defaults
     to `True` if `split == tfds.Split.TRAIN` and `False` otherwise.
 *   <b>`as_supervised`</b>: `bool`, if `True`, the returned `tf.data.Dataset`
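The hunk above documents that variable-length features are 0-padded whenever `batch_size` is set. A minimal sketch of that padding behavior in plain Python (a hypothetical `pad_batch` helper for illustration, not the tfds implementation):

```python
def pad_batch(examples, pad_value=0):
    """Stack variable-length feature lists into one batch,
    0-padding shorter examples to the longest length, as the
    batch_size docs above describe."""
    max_len = max(len(ex) for ex in examples)
    return [ex + [pad_value] * (max_len - len(ex)) for ex in examples]

batch = pad_batch([[1, 2, 3], [4], [5, 6]])
# Every row now has the length of the longest example.
```

With `batch_size=None` (the new default), no such padding happens and callers can batch however they like via the `tf.data` API.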
@@ -109,7 +109,7 @@ Callers must pass arguments as keyword arguments.
 ```python
 as_dataset(
     split=None,
-    batch_size=1,
+    batch_size=None,
     shuffle_files=None,
     as_supervised=False
 )
@@ -126,10 +126,10 @@ Callers must pass arguments as keyword arguments.
     which subset(s) of the data to read. If None (default), returns all splits
     in a dict `<key: tfds.Split, value: tf.data.Dataset>`.
 *   <b>`batch_size`</b>: `int`, batch size. Note that variable-length features
-    will be 0-padded if `batch_size > 1`. Users that want more custom behavior
-    should use `batch_size=1` and use the `tf.data` API to construct a custom
-    pipeline. If `batch_size == -1`, will return feature dictionaries of the
-    whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
+    will be 0-padded if `batch_size` is set. Users that want more custom
+    behavior should use `batch_size=None` and use the `tf.data` API to construct
+    a custom pipeline. If `batch_size == -1`, will return feature dictionaries
+    of the whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
 *   <b>`shuffle_files`</b>: `bool`, whether to shuffle the input files. Defaults
     to `True` if `split == tfds.Split.TRAIN` and `False` otherwise.
 *   <b>`as_supervised`</b>: `bool`, if `True`, the returned `tf.data.Dataset`
@@ -87,7 +87,7 @@ Callers must pass arguments as keyword arguments.
 ```python
 as_dataset(
     split=None,
-    batch_size=1,
+    batch_size=None,
     shuffle_files=None,
     as_supervised=False
 )
@@ -104,10 +104,10 @@ Callers must pass arguments as keyword arguments.
     which subset(s) of the data to read. If None (default), returns all splits
     in a dict `<key: tfds.Split, value: tf.data.Dataset>`.
 *   <b>`batch_size`</b>: `int`, batch size. Note that variable-length features
-    will be 0-padded if `batch_size > 1`. Users that want more custom behavior
-    should use `batch_size=1` and use the `tf.data` API to construct a custom
-    pipeline. If `batch_size == -1`, will return feature dictionaries of the
-    whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
+    will be 0-padded if `batch_size` is set. Users that want more custom
+    behavior should use `batch_size=None` and use the `tf.data` API to construct
+    a custom pipeline. If `batch_size == -1`, will return feature dictionaries
+    of the whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
 *   <b>`shuffle_files`</b>: `bool`, whether to shuffle the input files. Defaults
     to `True` if `split == tfds.Split.TRAIN` and `False` otherwise.
 *   <b>`as_supervised`</b>: `bool`, if `True`, the returned `tf.data.Dataset`
@@ -12,7 +12,7 @@ tfds.load(
     name,
     split=None,
     data_dir=None,
-    batch_size=1,
+    batch_size=None,
     download=True,
     as_supervised=False,
     with_info=False,
@@ -71,9 +71,9 @@ of hundreds of GiB to disk. Refer to the `download` argument.
 <a href="../tfds/Split.md#TEST"><code>tfds.Split.TEST</code></a>).
 *   <b>`data_dir`</b>: `str` (optional), directory to read/write data. Defaults
     datasets are stored.
-*   <b>`batch_size`</b>: `int`, set to > 1 to get batches of examples. Note that
-    variable length features will be 0-padded. If `batch_size=-1`, will return
-    the full dataset as `tf.Tensor`s.
+*   <b>`batch_size`</b>: `int`, if set, add a batch dimension to examples. Note
+    that variable length features will be 0-padded. If `batch_size=-1`, will
+    return the full dataset as `tf.Tensor`s.
 *   <b>`download`</b>: `bool` (optional), whether to call
     <a href="../tfds/core/DatasetBuilder.md#download_and_prepare"><code>tfds.core.DatasetBuilder.download_and_prepare</code></a>
     before calling `tf.DatasetBuilder.as_dataset`. If `False`, data is expected
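The `tfds.load` hunk above gives `batch_size` three documented modes: `None` (the new default, no batch dimension), `-1` (the whole dataset at once), and a positive `n` (batches of `n` examples). A plain-Python sketch of those semantics on a list of examples (a hypothetical `apply_batching` helper for illustration, not tfds code):

```python
def apply_batching(examples, batch_size=None):
    """Sketch of the documented batch_size modes:
    None -> examples unchanged (no batch dimension),
    -1   -> the whole dataset as a single batch,
    n    -> successive batches of n examples."""
    if batch_size is None:
        return examples
    if batch_size == -1:
        return [examples]
    return [examples[i:i + batch_size]
            for i in range(0, len(examples), batch_size)]

data = [10, 20, 30, 40, 50]
unbatched = apply_batching(data)          # same 5 examples
full = apply_batching(data, -1)           # one batch of all 5
pairs = apply_batching(data, 2)           # batches of 2 (last is short)
```

This is why the default change from `1` to `None` matters: with `1`, every example silently gained a leading batch dimension of size 1.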
@@ -82,7 +82,7 @@ Callers must pass arguments as keyword arguments.
 ```python
 as_dataset(
     split=None,
-    batch_size=1,
+    batch_size=None,
     shuffle_files=None,
     as_supervised=False
 )
@@ -99,10 +99,10 @@ Callers must pass arguments as keyword arguments.
     which subset(s) of the data to read. If None (default), returns all splits
     in a dict `<key: tfds.Split, value: tf.data.Dataset>`.
 *   <b>`batch_size`</b>: `int`, batch size. Note that variable-length features
-    will be 0-padded if `batch_size > 1`. Users that want more custom behavior
-    should use `batch_size=1` and use the `tf.data` API to construct a custom
-    pipeline. If `batch_size == -1`, will return feature dictionaries of the
-    whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
+    will be 0-padded if `batch_size` is set. Users that want more custom
+    behavior should use `batch_size=None` and use the `tf.data` API to construct
+    a custom pipeline. If `batch_size == -1`, will return feature dictionaries
+    of the whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
 *   <b>`shuffle_files`</b>: `bool`, whether to shuffle the input files. Defaults
     to `True` if `split == tfds.Split.TRAIN` and `False` otherwise.
 *   <b>`as_supervised`</b>: `bool`, if `True`, the returned `tf.data.Dataset`
@@ -62,7 +62,7 @@ __init__(
 ```python
 as_dataset(
     split=None,
-    batch_size=1,
+    batch_size=None,
     shuffle_files=None,
     as_supervised=False
 )
@@ -79,10 +79,10 @@ Callers must pass arguments as keyword arguments.
     which subset(s) of the data to read. If None (default), returns all splits
     in a dict `<key: tfds.Split, value: tf.data.Dataset>`.
 *   <b>`batch_size`</b>: `int`, batch size. Note that variable-length features
-    will be 0-padded if `batch_size > 1`. Users that want more custom behavior
-    should use `batch_size=1` and use the `tf.data` API to construct a custom
-    pipeline. If `batch_size == -1`, will return feature dictionaries of the
-    whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
+    will be 0-padded if `batch_size` is set. Users that want more custom
+    behavior should use `batch_size=None` and use the `tf.data` API to construct
+    a custom pipeline. If `batch_size == -1`, will return feature dictionaries
+    of the whole dataset with `tf.Tensor`s instead of a `tf.data.Dataset`.
 *   <b>`shuffle_files`</b>: `bool`, whether to shuffle the input files. Defaults
     to `True` if `split == tfds.Split.TRAIN` and `False` otherwise.
 *   <b>`as_supervised`</b>: `bool`, if `True`, the returned `tf.data.Dataset`