Development of an Application to Identify Thai Banknotes by Voice for the Blind via Smartphone

ณัฐวดี หงษ์บุญมี
กาญจนา แสงตาล

Abstract

The aim of this research is to develop an application that identifies Thai banknotes by voice for blind users on smartphones running the Android operating system. The research applies deep learning for image recognition and text-to-speech for spoken output on the smartphone. The process began with collecting 2,700 images of Thai banknotes, which were analyzed to build an image classification model using a Convolutional Neural Network based on the MobileNet architecture through the TensorFlow library. The trained model was then packaged into a smartphone application developed in Android Studio using the Java language, with the Text-to-Speech engine providing the spoken output. In testing, the model achieved an accuracy of 95.00%, and the application classified banknotes correctly 84.00% of the time. A user evaluation found an average satisfaction score of 4.33 with a standard deviation of 0.56, a good level. It can be concluded that this application makes it easier for blind users to identify banknotes: denominations are announced by voice, and the application is convenient to use anywhere, at any time. It also promotes independence, allowing blind users to manage everyday tasks more conveniently.
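As a rough illustration of the pipeline the abstract describes (a MobileNet classifier exported to the phone, with spoken output via Android's Text-to-Speech engine), the Java sketch below runs an on-device TensorFlow Lite model on a photo and announces the top prediction. This is a minimal sketch, not the authors' code: the model file name, label set, 224x224 input size, and pixel normalization are assumptions made for illustration, since the abstract does not specify them.

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;
import android.speech.tts.TextToSpeech;
import org.tensorflow.lite.Interpreter;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Locale;

/** Sketch: classify a banknote photo with a MobileNet TFLite model, then speak the result. */
public class BanknoteSpeaker {

    // Hypothetical label order; it must match the order used when the model was trained.
    private static final String[] LABELS = {"20 Baht", "50 Baht", "100 Baht", "500 Baht", "1000 Baht"};
    private static final int INPUT_SIZE = 224; // assumed MobileNet input resolution

    private Interpreter interpreter;
    private TextToSpeech tts;

    public BanknoteSpeaker(Context context) throws IOException {
        // "banknote_mobilenet.tflite" is a hypothetical asset name for the exported model.
        interpreter = new Interpreter(loadModel(context, "banknote_mobilenet.tflite"));
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(new Locale("th")); // Thai voice, if installed on the device
            }
        });
    }

    /** Memory-map the model bundled in the APK's assets folder. */
    private static MappedByteBuffer loadModel(Context context, String assetName) throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(assetName);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            FileChannel channel = in.getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    /** Run one forward pass and announce the most likely denomination. */
    public void classifyAndSpeak(Bitmap photo) {
        Bitmap scaled = Bitmap.createScaledBitmap(photo, INPUT_SIZE, INPUT_SIZE, true);

        // NHWC float input with pixels scaled to [0, 1]; the real app may normalize differently.
        float[][][][] input = new float[1][INPUT_SIZE][INPUT_SIZE][3];
        for (int y = 0; y < INPUT_SIZE; y++) {
            for (int x = 0; x < INPUT_SIZE; x++) {
                int pixel = scaled.getPixel(x, y);
                input[0][y][x][0] = ((pixel >> 16) & 0xFF) / 255.0f; // R
                input[0][y][x][1] = ((pixel >> 8) & 0xFF) / 255.0f;  // G
                input[0][y][x][2] = (pixel & 0xFF) / 255.0f;         // B
            }
        }

        float[][] output = new float[1][LABELS.length];
        interpreter.run(input, output);

        // Pick the class with the highest score and speak it aloud.
        int best = 0;
        for (int i = 1; i < LABELS.length; i++) {
            if (output[0][i] > output[0][best]) best = i;
        }
        tts.speak(LABELS[best], TextToSpeech.QUEUE_FLUSH, null, "banknote");
    }
}

In the actual application the Bitmap would come from the device camera, and interpreter.close() and tts.shutdown() should be called when the hosting activity is destroyed.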

Article Details

How to Cite
[1] หงษ์บุญมี ณ. and แสงตาล ก., “Development of an Application to Identify Thai Banknotes by Voice for the Blind via Smartphone”, JIST, vol. 9, no. 2, pp. 24–34, Dec. 2019.
Section
Research Article: Multidisciplinary
