MSRA Text Detection 500 Database (MSRA-TD500)

2014-01-13 (v. 1)
avg.: 4.4 by 12 users

Contact author

Cong Yao

Huazhong University of Science and Technology

yaocong2010@gmail.com

You can cite this dataset as: Cong Yao, MSRA Text Detection 500 Database (MSRA-TD500), v. 1, ID: MSRA-TD500_1, URL: https://tc11.cvc.uab.es/datasets/MSRA-TD500_1

Dataset Information

Keywords

Text Detection, Natural Image, Arbitrary Orientation

Description

 

Figure 1. Typical images from MSRA-TD500. Note the red rectangles: they indicate that the text within them is labelled as difficult (due to blur or occlusion).

Please check the Task page to download the dataset.

 

The MSRA Text Detection 500 Database (MSRA-TD500) is collected and released publicly as a benchmark for evaluating text detection algorithms. Its purpose is to track recent progress in text detection in natural images, especially advances in detecting text of arbitrary orientations.

The MSRA Text Detection 500 Database (MSRA-TD500) contains 500 natural images, taken from indoor (office and mall) and outdoor (street) scenes using a pocket camera. The indoor images are mainly signs, doorplates, and caution plates, while the outdoor images are mostly guide boards and billboards against complex backgrounds. The image resolutions vary from 1296×864 to 1920×1280.

The dataset is challenging because of both the diversity of the text and the complexity of the backgrounds. The text may appear in different languages (Chinese, English, or a mixture of both), fonts, sizes, colors, and orientations. The background may contain vegetation (e.g. trees and bushes) and repeated patterns (e.g. windows and bricks), which can be hard to distinguish from text.

The dataset is divided into two parts: a training set and a test set. The training set contains 300 images randomly selected from the original dataset; the remaining 200 images constitute the test set. All images in the dataset are fully annotated. The basic annotation unit is the text line (see Figure 1) rather than the word, which is used in the ICDAR datasets, because it is hard to partition Chinese text lines into individual words based on their spacing; even for English text lines, word partition is non-trivial without high-level information.

Technical Details

Figure 2. Ground truth generation. (a) Human annotations. The annotators are required to locate and bound each text line using a four-vertex polygon (red dots and yellow lines). (b) Ground truth rectangles (green). Each ground truth rectangle is generated automatically by fitting a minimum-area rectangle to the polygon.

The procedure of ground truth generation is shown in Figure 2. Because existing evaluation methods for text detection are designed for horizontal text only, we propose a new evaluation protocol (see [1] for details). Minimum-area rectangles (green in Figure 2 (b)) are used in our protocol because they are much tighter than axis-aligned rectangles (red in Figure 2 (b)).
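The minimum-area rectangle fit mentioned above can be sketched in a few lines. The rotating-calipers observation is that the minimum-area enclosing rectangle of a convex polygon has one side collinear with a polygon edge, so it suffices to try each edge direction. This is an illustrative pure-Python sketch, not the tool used to build the dataset (in practice a library routine such as OpenCV's `cv2.minAreaRect` does the same job).

```python
import math

def min_area_rect(points):
    """Area of the smallest rotated rectangle enclosing a convex
    polygon given as an ordered list of (x, y) vertices.

    The minimum-area enclosing rectangle has one side collinear with
    a polygon edge, so we try each edge direction and keep the
    smallest bounding box found.
    """
    best_area = float("inf")
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0:
            continue
        # Unit vector along the edge, and its normal.
        ux, uy = (x2 - x1) / length, (y2 - y1) / length
        # Project every vertex onto the edge direction and the normal;
        # the extents of the projections give the candidate rectangle.
        proj_u = [px * ux + py * uy for px, py in points]
        proj_v = [-px * uy + py * ux for px, py in points]
        area = (max(proj_u) - min(proj_u)) * (max(proj_v) - min(proj_v))
        best_area = min(best_area, area)
    return best_area

# A square rotated by 45 degrees: the fitted rectangle is the square
# itself (area 2), while its axis-aligned bounding box has area 4.
diamond = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (1.0, -1.0)]
print(round(min_area_rect(diamond), 6))  # → 2.0
```

The example shows why minimum-area rectangles are preferred for oriented text: for the 45°-rotated square the fitted rectangle is half the area of the axis-aligned box, i.e. a much tighter bound.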

In particular, to accommodate difficult text (too small, occluded, blurry, or truncated) that is hard for text detection algorithms, each text instance considered difficult is given an additional “difficult” label (note the red rectangles in Figure 1). Missed detections of such difficult text are not penalized.
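The effect of the “difficult” label on scoring can be sketched as below: ground-truth instances flagged as difficult are simply excluded from the recall denominator, so missing them costs nothing. This is a hypothetical illustration, not the official evaluation code of [1]; for brevity it uses axis-aligned IoU and a fixed overlap threshold, whereas the actual protocol works with rotated rectangles.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    The real protocol uses rotated rectangles; axis-aligned IoU keeps
    this sketch short."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def recall_ignoring_difficult(gt, detections, thresh=0.5):
    """Recall over ground-truth boxes, excluding difficult ones from
    the denominator. `gt` is a list of (box, is_difficult) pairs;
    the names and threshold here are illustrative only."""
    counted = [box for box, difficult in gt if not difficult]
    hits = sum(1 for box in counted
               if any(iou(box, d) >= thresh for d in detections))
    return hits / len(counted) if counted else 1.0

gt = [((0, 0, 10, 10), False),   # normal text line
      ((20, 20, 25, 25), True)]  # difficult (blurred): miss not penalized
dets = [(1, 1, 10, 10)]          # only the normal line is detected
print(recall_ignoring_difficult(gt, dets))  # → 1.0
```

Because the difficult instance is dropped from the denominator, a detector that finds only the normal text line still scores perfect recall, which is exactly the behaviour the labelling scheme is meant to produce.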


References

[1] C. Yao, X. Bai, W. Liu, Y. Ma and Z. Tu. Detecting Texts of Arbitrary Orientations in Natural Images. In Proc. CVPR, 2012.
