Tracking Vehicles in the Presence of Occlusions
doi: 10.14456/mijet.2018.10
Keywords:
Vehicle tracking, occlusions, traffic violations, red lights
Abstract
Thailand loses about 25,000 people every year to road trauma: this motivated this study, which designed and evaluated a simple system to discourage ‘Red Light Running’ – failure to observe red traffic lights. Our system is simple, cheap and flexible – consisting of a single camera, a portable computer and the ability to send images to a mobile phone using the public network. However, to be flexible, cameras were set only 1-2 m above the road, which meant that many occlusions were observed – the major challenge for the system software. A rule-based system was used to resolve most occlusions. In our tests, vehicles were completely and correctly tracked in 83% of frames. This was sufficient to allow images of 95% of Red Light Runners to be transmitted to a monitoring station, so that offenders could potentially be stopped.
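The abstract does not spell out the occlusion rules, so as a rough illustration only, the sketch below shows one common rule-based scheme of the kind described: each track's bounding box is advanced by a constant-velocity prediction, detections are matched to predictions by intersection-over-union (IoU), and when two tracks claim the same merged blob, both identities are kept alive and coasted on their own predictions until the vehicles separate. All names here (`Track`, `step`, `iou`, the 0.3 threshold) are our assumptions, not the paper's.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class Track:
    """One tracked vehicle: id, current box, and a crude velocity estimate."""
    def __init__(self, tid, box):
        self.tid, self.box, self.vel = tid, box, (0.0, 0.0)
        self.occluded = False

    def predict(self):
        # Constant-velocity prediction of the next box position.
        dx, dy = self.vel
        x1, y1, x2, y2 = self.box
        return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

    def update(self, box):
        # Re-estimate velocity from the shift of successive box centres.
        cx_old, cy_old = (self.box[0] + self.box[2]) / 2, (self.box[1] + self.box[3]) / 2
        cx_new, cy_new = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        self.vel = (cx_new - cx_old, cy_new - cy_old)
        self.box, self.occluded = box, False

def step(tracks, detections, thresh=0.3):
    """One frame: match detections to predicted boxes; resolve merged blobs."""
    claims = {}  # detection index -> tracks claiming it
    for t in tracks:
        pred = t.predict()
        best, i = max(((iou(pred, d), j) for j, d in enumerate(detections)),
                      default=(0.0, -1))
        if best >= thresh:
            claims.setdefault(i, []).append(t)
        else:
            t.occluded, t.box = True, pred   # lost: coast on prediction
    for i, claimants in claims.items():
        if len(claimants) == 1:
            claimants[0].update(detections[i])
        else:
            # Rule: a merged blob covers several vehicles. Keep every
            # identity alive, each coasting on its own prediction.
            for t in claimants:
                t.occluded, t.box = True, t.predict()
    return tracks
```

In this scheme a merged blob never destroys an identity, which is what allows a Red Light Runner to keep its track (and hence its evidence images) across a brief occlusion; a real implementation would add track creation/deletion and, as the cited OpenCV references suggest, a proper Kalman filter in place of the constant-velocity update.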
References
[2] AUSTRALIAN AUTOMOBILE ASSOCIATION. Cost of Road Trauma in Australia. Australian Automobile Association, 2015.
[3] ROYAL THAI POLICE (2013), loc. cit.; WHO and Bloomberg Initiative for Global Road Safety. Road Safety Institutional and Legal Assessment Thailand. http://www.searo.who.int/thailand/areas/rs-legal-eng11.pdf, 2015.
[4] RETTING, R., WILLIAMS, A., GREENE, M. Red-Light Running and Sensible Countermeasures: Summary of Research Findings. Transp. Res. Rec. J. Transp. Res. Board, vol. 1640, pp. 23–26, Jan. 1998.
[5] WORLD HEALTH ORGANIZATION, “Global status report on road safety,” Inj. Prev., p. 318, 2013.
[6] CHOUDHURY, S. K., SA, P. K., BAKSHI, S., MAJHI, B. An Evaluation of Background Subtraction for Object Detection Vis-a-Vis Mitigating Challenging Scenarios. IEEE Access, vol. 4, pp. 6133–6150, 2016.
[7] BABAEE, M., DINH, D., RIGOLL, G. A deep convolutional neural network for video sequence background subtraction. Pattern Recognition, vol. 76, pp. 635–649, 2018.
[8] HARITAOGLU, I., HARWOOD, D., DAVIS, L. S. W4: Real-time surveillance of people and their activities. IEEE TPAMI, vol. 22(8), pp. 809–830, 2000.
[9] STAUFFER, C., GRIMSON, W. Adaptive background mixture models for real-time tracking. IEEE Conference on Computer Vision and Pattern Recognition, pp. 2246–2252, 1999.
[10] ZIVKOVIC, Z., VAN DER HEIJDEN, F. Efficient adaptive density estimation per image pixel for the task of background subtraction, Pattern Recognit. Lett., vol. 27(7), pp. 773–780, 2006.
[11] OpenCV: cv::BackgroundSubtractorKNN Class Reference. [Online]. Available: http://docs.opencv.org/trunk/db/d88/classcv_1_1BackgroundSubtractorKNN.html. [Accessed: 02-Sep-2017].
[12] XU, Y., DONG, J., ZHANG, B., XU, D. Background modeling methods in video analysis: A review and comparative evaluation. CAAI Trans. Intel. Technology, vol. 1, pp. 43–60, 2016.
[13] PICCARDI, M., JAN, T. Mean-shift background image modelling. Int Conf Image Processing, Singapore, pp. 3399–3402, 2004.
[14] ELGAMMAL, A., DURAISWAMI, R., HARWOOD, D., DAVIS, L. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE, vol. 90(7), pp. 1151–1163, 2002.
[15] MITTAL, A., PARAGIOS, N. Motion-Based Background Subtraction using Adaptive Kernel Density Estimation. IEEE Conf Comp Vision Pat Recog, pp. 302–309, 2004.
[16] LO, B., VELASTIN, S., Automatic congestion detection system for underground platforms. Proc. 2001 Intl Symp on Intelligent Multimedia, Video and Speech Processing, Hong Kong, pp. 158-161, 2001.
[17] WANG, H., SUTER, D. A consensus-based method for tracking: Modelling background scenario and foreground appearance. Pattern Recognition, 40(3), pp. 1091-1105, 2007.
[18] EL BAF, F., BOUWMANS, T., VACHON, B. A Fuzzy approach for background subtraction. ICIP 2008: pp. 2648-2651, 2008.
[19] CULIBRK, D., MARQUES, O., SOCEK, D., KALVA, H., FURHT, B. Neural Network Approach to Background Modeling for Video Object Segmentation. IEEE Trans Neural Networks, vol. 18(6), pp. 1614–1627, 2007.
[20] YAO, J., ODOBEZ, J. Multi-layer background subtraction based on color and texture. IEEE Comp Vision and Pattern Recognition, pp 1-8, 2007.
[21] ZHOU, T., TAO, D. GoDec: Randomized low-rank & sparse matrix decomposition in noisy case. Intl Conf on Machine Learning, pp. 33–40, 2011.
[22] HARDAS, A., BADE, D., WALI, V. Moving Object Detection using Background Subtraction, Shadow Removal and Post Processing. IJCA Proceedings on International Conference on Computer Technology, 2015.
[23] SUBUDHI, B., GHOSH, S., SHIU, S., GHOSH, A. Statistical feature bag based background subtraction for local change detection. Inf. Sci. (NY)., vol. 366, pp. 31–47, 2016.
[24] SAJID, H., CHEUNG, S. Background subtraction for static & moving camera. ICIP 2015, pp. 4530–4534, 2015.
[25] SAJID, H., CHEUNG, S. Universal Multimode Background Subtraction. IEEE Trans Image Processing, vol. 26(7), pp. 3249–3260, 2017.
[26] WANG, Y., JODOIN, P., PORIKLI, F., KONRAD, J., BENEZETH, Y., ISHWAR, P. CDnet 2014: An Expanded Change Detection Benchmark Dataset. Proc IEEE Conf. Computer Vision and Pattern Recognition Workshops, pp. 393–400, 2014.
[27] JEEVA, S., SIVABALAKRISHNAN, M. Survey on background modeling and foreground detection for real time video surveillance. Procedia Comput. Sci., vol. 50, pp. 566–571, 2015.
[28] HOFMANN, M., TIEFENBACHER, P., RIGOLL, G. Background Segmentation with Feedback: The Pixel-Based Adaptive Segmenter. IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 38-43, 2012
[29] MADDALENA L., PETROSINO, A. A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications. IEEE Trans Image Process., vol 17(7), pp 1168-1177, 2008.
[30] BARNICH, O., VAN DROOGENBROECK, M. ViBe: A universal background subtraction algorithm for video sequences. IEEE Trans. Image Process., vol. 20(6), pp. 1709–1724, 2011.
[31] LEE, J., PARK, M. An Adaptive Background Subtraction Method Based on a Kernel Density Estimation. Sensors, vol. 12, pp. 12279–12300, 2012.
[32] SOOKSATRA, S., KONDO, T. Red traffic light detection using fast radial symmetry transform. 11th Intl Conf Electrical Engineering/Electronics, Computer, Telecom and Inf Tech (ECTI-CON 2014), 2014.
[33] GUO, H., GAO, Z., YANG, X., JIANG, X. Modeling Pedestrian Violation Behavior at Signalized Crosswalks in China: A Hazards-Based Duration Approach. Traffic Injury Prevention, vol. 12, pp. 96–103, 2011.
[34] DIAZ, M., CERRI, P., PIRLO, G., FERRER, M. A Survey on Traffic Light Detection. New Trends in Image Analysis and Processing - ICIAP 2015 Workshops, pp. 201–208, 2015.
[35] YILMAZ, A., JAVED, O., SHAH, M. Object tracking: A survey. ACM Comput. Surv. 38 (4), 13, 2006.
[36] SMEULDERS, A., CHU, D., CUCCHIARA, R., CALDERARA, S., DEHGHAN, A., SHAH, M. Visual Tracking: An Experimental Survey. IEEE TPAMI, vol. 36(7), pp. 1442–1468, 2014.
[37] WU, Y., LIM, J., YANG, M. Object Tracking Benchmark. IEEE TPAMI, vol. 37(9), pp. 1834–1848, 2015. doi: 10.1109/TPAMI.2014.2388226.
[38] KRISTAN, M. et al., The Visual Object Tracking VOT2015 Challenge Results. Visual Object Tracking Workshop IEEE Intl Conf. Computer Vision Workshop (ICCVW), 2015.
[39] NAM, H., HAN, B. Learning multi-domain convolutional neural networks for visual tracking. Proc IEEE Conf. Computer Vision Pattern Recognition, pp. 4293–4302, 2016.
[40] KRISTAN, M. et al. The Visual Object Tracking (VOT2018) Challenge Results. ECCVW, pp. 1–27, 2018.
[41] LEE, B., LIEW, L., CHEAH, W., WANG, Y. Occlusion handling in video object tracking. IOP Conf. Series Earth Env Science, vol. 18(1), 2014.
[42] ZHAO S, ZHANG S, ZHANG L. Towards Occlusion Handling: Object Tracking With Background Estimation. IEEE Trans Cybern., vol. 48(7), pp. 2086-2100, 2018.
[43] MOTRO, M., GHOSH, J. Measurement-wise Occlusion in Multi-object Tracking, arXiv preprint arXiv:1805.08324. 2018.
[44] MILAN, A., LEAL-TAIXE, L., SCHINDLER, K., CREMERS, D., ROTH, S., REID, I. Multiple Object Tracking Benchmark. https://motchallenge.net, Accessed 23.10.2018.
[45] DOYLE, D., JENNINGS, A., BLACK, J. Optical flow background estimation for real-time pan/tilt camera object tracking. Measurement, vol. 48, pp. 195–207, 2014.
[46] KALE, K., PAWAR, S., DHULEKAR, A. Moving Object Tracking using Optical Flow and Motion Vector Estimation. 4th Intl Conf. Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), pp. 1–6, 2015.
[47] Occam’s Razor. Wikipedia, https://en.wikipedia.org/wiki/Occam’s_razor. Accessed 7 Nov 2018.
[48] SHIMADA, A., NAGAHARA, H., TANIGUCHI, R. Background modeling based on bidirectional analysis. Computer Vision Pattern Recognition, pp. 1979–1986, 2013.
[49] HAINES, T., XIANG, T. Background Subtraction with Dirichlet Process Mixture Models. IEEE Trans Pattern Analysis Machine Intelligence, vol. 36(4), pp. 670–683, 2014. doi: 10.1109/TPAMI.2013.239.
[50] SEIDEL, F., HAGE, C., KLEINSTEUBER, M. pROST: a smoothed ℓp-norm robust online subspace tracking method for background subtraction in video. Machine Vision and Applications, vol. 25(5), pp. 1227–1240, 2014.
[51] LI, X., WANG, K., WANG, W., LI, Y. A Multiple Object Tracking Method using Kalman Filter. Inf. Autom., vol. 1(1), pp. 1862–1866, 2010.
[52] OpenCV: cv::KalmanFilter Class Reference. [Online]. Available: https://docs.opencv.org/3.4/dd/d6a/classcv_1_1KalmanFilter.html. [Accessed: 07-Nov-2018].
[53] JENOPTIK AG. Flexible systems and services for Traffic Safety. 2016. [Online]. Available: https://www.jenoptik.com/products/traffic-safety-systems/combined-speed-redlight-enforcement-monitoring. [Accessed: 08-Nov-2018].
[54] SMARTVISION TECHNOLOGY CO. LTD. Red Light Camera (RLC). www.smartvisoncompany.com/redlightcamera.html. [Accessed: 08-Nov-2018].
[55] KLUBSUWAN, K., KOODTALANG, W., MUNGSING, S. Traffic violation detection using multiple trajectories evaluation of vehicles. Proc. - Int. Conf. Intell. Syst. Model. Simulation, ISMS, vol. 5 (12), pp. 220–224, 2013.
[56] KIM, K., CHALIDABHONGSE, T., HARWOOD, D., DAVIS, L. Real-time foreground background segmentation using codebook model. Real Time Imaging, vol. 11(3), pp. 172–185, 2005.
[57] MENG, C. Traffic Violation Detection. MEng thesis, Faculty of Engineering, Mahasarakham University, 2018.
[58] REDDY, V. SANDERSON, C., LOVELL, B. Improved foreground detection via block-based classifier cascade with probabilistic decision integration. IEEE Trans. Circuits Syst. Video Technol., vol. 23(1), pp. 83–93, 2013.
[59] WREN, C., AZARBAYEJANI, A., DARRELL, T., PENTLAND, A. Pfinder: Real-time tracking of the human body. IEEE TPAMI, vol. 19(7), pp. 780–785, 1997.
[60] KAEWTRAKULPONG, P., BOWDEN, R. An Improved Adaptive Background Mixture Model for real-time tracking with shadow detection. Video-Based Surveillance Systems, Springer, pp. 135–144, 2002.
[61] RODRIGUEZ, P., WOHLBERG, B. Fast principal component pursuit via alternating minimization. Proc. IEEE Int. Conf. Image Process., pp. 69–73, Sep 2013.
Copyright (c) 2018 Mahasarakham International Journal of Engineering Technology
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.