Development of an Application for Identifying Thai Banknotes by Voice for the Blind via Smartphone
Abstract
The aim of this research is to develop an application that identifies Thai banknotes by voice for the blind on smartphones running the Android operating system. The research applies deep learning for image recognition and text-to-speech for smartphone pronunciation. The process began with the collection of 2,700 images of Thai banknotes, which were analyzed to build an image classification model using a Convolutional Neural Network based on the MobileNet architecture through the TensorFlow library. The model was then deployed as a smartphone application developed in Android Studio with the Java language, using Text-to-Speech for pronunciation. In performance testing, the model achieved an accuracy of 95.00%, and the application classified banknotes correctly 84.00% of the time. In the user evaluation, average satisfaction was 4.33 with a standard deviation of 0.56, a good level. It can be concluded that the application makes it easier for the blind to identify banknotes, since denominations are announced by voice and the application can be used conveniently anywhere, at any time. It also supports the daily lives of the blind, allowing them to carry out basic activities more independently.
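The abstract outlines a two-stage pipeline: training a MobileNet-based Convolutional Neural Network on banknote images with TensorFlow, then embedding the model in an Android application that announces the result via Text-to-Speech. As a rough illustration of that pipeline, and not the authors' actual code, the Python sketch below performs MobileNet transfer learning with TensorFlow/Keras and converts the trained model to TensorFlow Lite for on-device use; the dataset folder "banknotes", the five denomination classes, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of the pipeline described in the abstract (assumed details,
# not the authors' implementation): MobileNet transfer learning in
# TensorFlow/Keras, followed by conversion to TensorFlow Lite for Android.
import tensorflow as tf

IMG_SIZE = (224, 224)   # MobileNet's default input resolution
NUM_CLASSES = 5         # e.g. 20, 50, 100, 500, 1000 baht (assumed classes)

# Assumed folder layout: banknotes/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "banknotes", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "banknotes", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

# MobileNet pretrained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Convert to TensorFlow Lite so the Android application can run the
# classifier on-device and announce the predicted denomination.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("banknote_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```

On the device, the Android application described in the abstract would load such a model, classify the camera image, and pass the predicted denomination to Android's TextToSpeech engine for pronunciation.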
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.