A method for automatically translating print books into electronic Braille books

SCIENCE CHINA Information Sciences, Volume 59, Issue 7: 072101 (2016). https://doi.org/10.1007/s11432-016-5575-z

  • Received: Nov 10, 2015
  • Accepted: Dec 29, 2015
  • Published: Jun 16, 2016

Abstract

In this paper, a method for automatically translating scanned images of print books into electronic Braille books is proposed, with the objective of reducing the time and cost of producing Braille books. The proposed method consists of three processes: identifying the character and image areas in a scanned page, automatically translating the characters into Braille and the images into tactile graphics, and positioning the resulting Braille and tactile graphics on an electronic Braille page. Experimental results show that the proposed method drastically reduces the time required to translate a print book into an electronic Braille book. Despite this reduction, the books it produces convey information to visually impaired readers as effectively as manually produced Braille books, demonstrating the method's feasibility in practical applications. The proposed method is therefore expected to significantly reduce the time and cost of producing Braille books and to make more reading materials available to the visually impaired, contributing substantially to their knowledge and welfare.
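
The sketch below is a minimal Python illustration of the three-stage pipeline described in the abstract, not the paper's implementation: it assumes Tesseract (via the pytesseract package) for character recognition and Pillow for imaging, neither of which the abstract names, and it maps text to uncontracted English (Grade-1) Braille cells, whereas the paper itself follows the Korea Braille Regulations. The area-segmentation and page-layout stages are reduced to placeholders.

```python
# Minimal sketch of the abstract's pipeline under stated assumptions;
# NOT the paper's algorithms. Requires: pillow, pytesseract (+ Tesseract).
from PIL import Image, ImageFilter
import pytesseract

# Unicode Braille: dot d of a 6-dot cell sets bit (d - 1) above U+2800.
_LETTER_DOTS = {
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
    'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4),
    'j': (2, 4, 5), 'k': (1, 3), 'l': (1, 2, 3), 'm': (1, 3, 4),
    'n': (1, 3, 4, 5), 'o': (1, 3, 5), 'p': (1, 2, 3, 4),
    'q': (1, 2, 3, 4, 5), 'r': (1, 2, 3, 5), 's': (2, 3, 4),
    't': (2, 3, 4, 5), 'u': (1, 3, 6), 'v': (1, 2, 3, 6),
    'w': (2, 4, 5, 6), 'x': (1, 3, 4, 6), 'y': (1, 3, 4, 5, 6),
    'z': (1, 3, 5, 6),
}

def text_to_braille(text):
    """Map letters to Grade-1 Braille cells; everything else to a blank cell."""
    cells = []
    for ch in text.lower():
        dots = _LETTER_DOTS.get(ch)
        if dots:
            cells.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
        elif ch == '\n':
            cells.append('\n')      # keep line breaks for page layout
        else:
            cells.append('\u2800')  # blank cell (digits/punctuation omitted)
    return ''.join(cells)

def image_to_tactile(region, width=30):
    """Crude tactile graphic: edge-detect, downsample, threshold to a dot grid."""
    edges = region.convert('L').filter(ImageFilter.FIND_EDGES)
    height = max(1, round(edges.height * width / edges.width))
    small = edges.resize((width, height))
    px = small.load()
    return [''.join('o' if px[x, y] > 128 else ' ' for x in range(width))
            for y in range(height)]

def translate_page(path):
    """Pipeline with stage 1 (area segmentation) left trivial: the whole page
    is treated as one character area and OCR'd. A real implementation would
    also detect image areas and interleave image_to_tactile() output."""
    page = Image.open(path)
    text = pytesseract.image_to_string(page)
    return text_to_braille(text)

if __name__ == '__main__':
    print(text_to_braille('braille'))  # -> a row of Braille cells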


Acknowledgments

The present research was supported by the research fund of Dankook University in 2013.

