The Effects of Partial Occlusion in Image-based CAPTCHAs on Pass Rates
Abstract
- Much CAPTCHA research aims to design and develop new kinds of CAPTCHAs that better protect computer systems from internet bots. Text-based CAPTCHAs are the most common in practice, and their text has become increasingly distorted and camouflaged in response to ever more advanced bots. However, this growing distortion makes CAPTCHA text difficult for humans to decode, increasing the time needed to get into systems. This research proposed new image-based CAPTCHAs developed from principles of human visual perception, so that the CAPTCHAs are more usable for humans yet still hard for automated bots to decipher. The proposed image-based CAPTCHAs were generated by making images incomplete according to three factors: the partial-occlusion method (constant or random occlusion of the image), the level of occlusion (30% or 45%), and the number of small interference frames overlaid on the image (no frames, two frames, or three frames). The proposed CAPTCHAs were evaluated by accuracy (pass rate) together with user satisfaction. A within-subjects experiment was run with 40 participants. The results indicated that participants were more satisfied with the image-based CAPTCHAs than with text-based CAPTCHAs at the 95% confidence level and, importantly, that the accuracy rate of the image-based CAPTCHAs was significantly higher than that of the text-based ones. Response time varied with the completeness of the image CAPTCHAs. The most usable image-based CAPTCHAs were those made incomplete by constant occlusion of 30% of the original picture, with either zero or two small interference frames.
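The three generation factors described above (occlusion method, occlusion level, and number of interference frames) can be sketched as a small image-manipulation routine. The function below is a minimal illustration, not the authors' actual implementation: the interpretation of "constant" occlusion as a fixed contiguous band, the 10-pixel frame size, and all names and parameters are assumptions for demonstration only.

```python
import numpy as np

def occlude(img, fraction=0.30, mode="constant", n_frames=0, rng=None):
    """Return a copy of a grayscale image array with `fraction` of its
    pixels occluded (set to 0), plus `n_frames` small hollow rectangles
    drawn as interference. Illustrative only; the paper's exact
    occlusion scheme is not specified."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = img.astype(float).copy()
    h, w = img.shape[:2]
    if mode == "constant":
        # "Constant" occlusion interpreted here as blanking a fixed
        # contiguous band covering `fraction` of the image rows.
        out[: int(round(h * fraction)), :] = 0
    else:
        # "Random" occlusion: scatter blanked pixels uniformly so that
        # on average `fraction` of the image is removed.
        out[rng.random((h, w)) < fraction] = 0
    # Overlay small hollow 10x10 frames as interference (frame size is
    # an arbitrary choice for this sketch).
    for _ in range(n_frames):
        y = int(rng.integers(0, h - 10))
        x = int(rng.integers(0, w - 10))
        out[y : y + 10, x] = 255
        out[y : y + 10, x + 9] = 255
        out[y, x : x + 10] = 255
        out[y + 9, x : x + 10] = 255
    return out
```

For example, the condition the study found most usable (30% constant occlusion, zero or two frames) would correspond to `occlude(img, 0.30, "constant", n_frames=0)` or `n_frames=2` under this sketch.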
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.