
How To Create Customized Dataset For Google Tensorflow Attention Ocr?

I am able to create a TFRecord file according to this question, but I don't know whether I should write all images into a single TFRecord file or create multiple TFRecord files.

Solution 1:

whether I should write all images into a single TFRecord file or create multiple TFRecord files

It depends on the size of the training data and has an impact on parallel prefetching to fill queues. I'd recommend ~1000 samples per shard (a TFRecord file with a num-of-total suffix, e.g. /path/to/my/dataset-00000-of-00512).
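The sharding scheme above can be sketched as follows. This is a minimal example, not the FSNS tooling itself; the helper names (`num_shards`, `shard_path`, `write_sharded_tfrecords`) are my own, and it assumes you already have serialized `tf.train.Example` protos:

```python
import math

def num_shards(total_examples, samples_per_shard=1000):
    """Number of shards needed at roughly 1000 samples per shard."""
    return max(1, math.ceil(total_examples / samples_per_shard))

def shard_path(prefix, shard_index, total_shards):
    """Build a shard filename like /path/to/my/dataset-00000-of-00512."""
    return "%s-%05d-of-%05d" % (prefix, shard_index, total_shards)

def write_sharded_tfrecords(serialized_examples, prefix, samples_per_shard=1000):
    """Spread serialized tf.train.Example protos across multiple shard files."""
    import tensorflow as tf  # imported here so the naming helpers above need no TF
    total = num_shards(len(serialized_examples), samples_per_shard)
    for i in range(total):
        chunk = serialized_examples[i * samples_per_shard:(i + 1) * samples_per_shard]
        with tf.io.TFRecordWriter(shard_path(prefix, i, total)) as writer:
            for record in chunk:
                writer.write(record)
```

With 2500 examples this would produce three files, `dataset-00000-of-00003` through `dataset-00002-of-00003`.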

What content should be in "charset_filename" file?

It is a text file that defines the mapping between integer ids and the corresponding characters. Each line has the format: <id><TAB><character>. One of the rows in the file should define an id for the <nul> character - a special character the model outputs when it reaches the end of a sequence, used to pad the output to a fixed length.

For example, here is an excerpt from the FSNS dataset's charset file:

0    
133 <nul>
1   l
2   ’
3   é
4   t

Note that the <SPACE> character has id=0.
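A parser for this format can be sketched as below. The function name `read_charset` is my own; the key detail is splitting on the first tab only, because the character field itself may be a space (as with id 0 above):

```python
def read_charset(lines):
    """Parse charset lines of the form <id><TAB><character> into an id->char dict."""
    charset = {}
    for line in lines:
        line = line.rstrip('\n')
        if not line:
            continue
        # Split on the first tab only: the character itself may be a space.
        key, sep, char = line.partition('\t')
        if sep:
            charset[int(key)] = char
    return charset
```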

Should it be a collection of all possible characters in the dataset?

Yes. This file should define id-to-character mappings for all characters in the dataset.

When generating the TFRecord file, we converted characters to integer ids; should this file include characters or their ids?

Both. Each line in the file should be in the form <id><TAB><character>.
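The character-to-id conversion mentioned above, including padding with the <nul> id to a fixed length, can be sketched like this. The function name `encode_text` and the reverse mapping `char_to_id` are my own illustration, using ids from the FSNS excerpt earlier:

```python
def encode_text(text, char_to_id, null_id, max_seq_len):
    """Map each character to its integer id and pad with <nul> ids to max_seq_len."""
    ids = [char_to_id[ch] for ch in text]
    if len(ids) > max_seq_len:
        raise ValueError("text longer than max_seq_len")
    return ids + [null_id] * (max_seq_len - len(ids))
```

For example, with the mapping {' ': 0, 'l': 1, 'é': 3, 't': 4} and null_id=133, encoding 'tl' to length 5 yields [4, 1, 133, 133, 133].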
