Generate a tf.data.Dataset from text files in a directory
text_dataset_from_directory(
  directory,
  labels = "inferred",
  label_mode = "int",
  class_names = NULL,
  batch_size = 32L,
  max_length = NULL,
  shuffle = TRUE,
  seed = NULL,
  validation_split = NULL,
  subset = NULL,
  follow_links = FALSE,
  ...
)
directory: Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing text files for a class. Otherwise, the directory structure is ignored.

labels: Either "inferred" (labels are generated from the directory structure), NULL (no labels), or a list of integer labels of the same size as the number of text files found in the directory. Labels should be sorted according to the alphanumeric order of the text file paths (obtained via os.walk(directory) in Python).
label_mode: One of:
"int": the labels are encoded as integers (e.g. for the sparse_categorical_crossentropy loss).
"categorical": the labels are encoded as a categorical vector (e.g. for the categorical_crossentropy loss).
"binary": the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for the binary_crossentropy loss).
NULL: no labels.
class_names: Only valid if labels is "inferred". This is the explicit list of class names (must match the names of the subdirectories). Used to control the order of the classes (otherwise alphanumeric order is used).

batch_size: Size of the batches of data. Default: 32.

max_length: Maximum size of a text string. Texts longer than this will be truncated to max_length.

shuffle: Whether to shuffle the data. Default: TRUE. If set to FALSE, sorts the data in alphanumeric order.

seed: Optional random seed for shuffling and transformations.

validation_split: Optional float between 0 and 1, fraction of data to reserve for validation.

subset: One of "training" or "validation". Only used if validation_split is set.

follow_links: Whether to visit subdirectories pointed to by symlinks. Defaults to FALSE.

...: For future compatibility (unused presently).
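As a rough sketch of how these arguments fit together (the directory path "texts/", the split fraction, and the seed below are placeholders, not values from this page), a matched training/validation pair can be built like this; swap label_mode for "categorical" or "binary" as described above if your loss requires it:

library(keras)

# "texts/" is a hypothetical directory with one subdirectory of .txt files per class.
train_ds <- text_dataset_from_directory(
  "texts/",
  labels = "inferred",
  label_mode = "int",        # or "categorical" / "binary", see above
  batch_size = 32L,
  validation_split = 0.2,
  subset = "training",
  seed = 1337L               # use the same seed for both subsets
)

val_ds <- text_dataset_from_directory(
  "texts/",
  labels = "inferred",
  label_mode = "int",
  batch_size = 32L,
  validation_split = 0.2,
  subset = "validation",
  seed = 1337L
)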
If your directory structure is:
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
Then calling text_dataset_from_directory(main_directory, labels = 'inferred') will return a tf.data.Dataset that yields batches of texts from the subdirectories class_a and class_b, together with labels 0 and 1 (0 corresponding to class_a and 1 corresponding to class_b).
Only .txt
files are supported at this time.
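Continuing the example above, a minimal sketch of inspecting one batch might look like the following. Here main_directory stands in for the example path, and iterating with reticulate's as_iterator()/iter_next() is generic tf.data usage rather than anything specific to this function:

library(keras)
library(reticulate)

# main_directory is the placeholder path from the structure shown above.
ds <- text_dataset_from_directory("main_directory", labels = "inferred")

# Each element is one batch: a tensor of raw text strings and a tensor of
# integer labels (0 for class_a, 1 for class_b).
batch <- iter_next(as_iterator(ds))
texts  <- batch[[1]]
labels <- batch[[2]]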