Dataset for the competition on Post-OCR Text Correction 2017 (Post-OCR 2017)
Text Correction, OCR
The dataset is multilingual (50% French, 50% English, with perhaps a few expressions borrowed from other languages such as Latin). It consists of texts published in monographs and periodicals over the last four centuries. It contains more than 12 million characters and includes both noisy OCR-ed texts and the corresponding Gold Standard (GS), aligned at the character level.
The documents come from different collections (e.g. BnF, British Library) supported by various projects (e.g. Europeana Newspapers, IMPACT, Gutenberg, Perseus, Wikisource and Bank of Wisdom).
Error rates, and thus difficulty, vary with the nature of the documents. In both the English and French parts of the dataset, historical newspapers, which have been reported to be especially challenging, represent more than 25% of the data.
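As a rough illustration of what "error rate" means at the character level, the sketch below computes a character error rate (CER) between a noisy OCR string and its gold standard using plain Levenshtein edit distance. This is a generic illustration, not the competition's own alignment or scoring procedure; the example strings are invented.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def cer(ocr: str, gs: str) -> float:
    """Character error rate: edit distance normalised by GS length."""
    return levenshtein(ocr, gs) / max(len(gs), 1)


# Three character substitutions over a 19-character gold standard.
print(round(cer("Tne qvick brown f0x", "The quick brown fox"), 3))  # → 0.158
```

Higher CER generally means harder material, which is why newspaper-heavy portions of the dataset are considered more challenging.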
Important notice about the archive:
- Even ids (document numbers) were used exclusively for Task 1 (Detection).
- Odd ids (document numbers) were used exclusively for Task 2 (Correction).
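The even/odd split above can be applied programmatically. The sketch below assigns a document to a task by the parity of its numeric id; the filename pattern (a trailing integer id) is an assumption for illustration, not the archive's documented naming scheme.

```python
import re


def task_for(doc_name: str) -> str:
    """Return the task a document belongs to, based on its numeric id.

    Assumes the document number is the last run of digits in the name;
    adjust the pattern to the archive's actual naming convention.
    """
    numbers = re.findall(r"\d+", doc_name)
    if not numbers:
        raise ValueError(f"no document number in {doc_name!r}")
    doc_id = int(numbers[-1])
    return "Task 1 (Detection)" if doc_id % 2 == 0 else "Task 2 (Correction)"


print(task_for("eng_monograph_14"))  # even id → Task 1 (Detection)
print(task_for("fr_periodical_7"))   # odd id  → Task 2 (Correction)
```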