Simple Online and Real-time Tracking with Feature Matching Enhancement for Re-identification after Occlusion


Koksal Chou
Natsuda Kaothanthong
Chawalit Jeenanunta

Abstract

Occlusion is a key problem in people tracking in computer vision: tracking continuity breaks down because the information about a tracked object is lost while it is hidden behind another object. This study proposes a tracking algorithm that is robust to occlusion. The algorithm builds on Simple Online and Real-time Tracking (SORT), which combines a deep neural network detector, a Kalman Filter, and the Hungarian algorithm. A feature extraction step captures the appearance of each object before and after occlusion, addressing multi-object tracking in the presence of occlusion. This mitigates SORT's lack of appearance memory, which is a crucial requirement for robust multi-object tracking. The experiment is performed on 13 test videos containing multiple people walking past each other, with and without background noise. The results show that the proposed method increases the multi-object tracking accuracy of SORT and correctly re-identifies objects after occlusion events.
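The idea described in the abstract can be sketched in a few lines of code. The sketch below is illustrative only and is not the authors' implementation: the paper's deep detector, Kalman Filter, and Hungarian assignment are replaced by simple stand-ins (greedy IoU matching), and the appearance feature is assumed to be a plain vector compared by cosine similarity. Class and threshold names (`Tracker`, `iou_thresh`, `feat_thresh`) are hypothetical. The key point it demonstrates is the enhancement the paper proposes: a track that disappears keeps its appearance feature in a gallery, so a detection that cannot be matched by motion/overlap can still be re-identified by appearance after occlusion.

```python
# Minimal sketch of SORT-style tracking with an appearance-feature
# gallery for re-identification after occlusion (illustrative stand-in,
# not the paper's implementation).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cosine(u, v):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

class Tracker:
    def __init__(self, iou_thresh=0.3, feat_thresh=0.8):
        self.iou_thresh = iou_thresh
        self.feat_thresh = feat_thresh
        self.next_id = 0
        self.active = {}   # track id -> last box (tracks seen last frame)
        self.gallery = {}  # track id -> appearance feature (kept after loss)

    def update(self, detections):
        """detections: list of (box, feature). Returns list of (id, box)."""
        new_active, out, assigned = {}, [], set()
        for box, feat in detections:
            # 1. Associate with active tracks by overlap
            #    (greedy stand-in for SORT's Hungarian assignment).
            best, best_score = None, self.iou_thresh
            for tid, tbox in self.active.items():
                s = iou(box, tbox)
                if tid not in assigned and s > best_score:
                    best, best_score = tid, s
            # 2. Otherwise, try re-identification against stored
            #    features of tracks lost to occlusion.
            if best is None:
                for tid, tfeat in self.gallery.items():
                    if (tid not in self.active and tid not in assigned
                            and cosine(feat, tfeat) > self.feat_thresh):
                        best = tid
                        break
            # 3. Otherwise, start a new track.
            if best is None:
                best = self.next_id
                self.next_id += 1
            assigned.add(best)
            new_active[best] = box
            self.gallery[best] = feat  # refresh stored appearance
            out.append((best, box))
        # Tracks missing this frame leave `active` but keep their gallery
        # feature, which is what enables re-identification after occlusion.
        self.active = new_active
        return out
```

A track that vanishes for a frame (occlusion) and reappears far from its last box would lose its identity under pure IoU matching; here the gallery lookup in step 2 restores the original track ID, mirroring the re-identification behavior the paper reports.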


Article Details

How to Cite
Chou, K., Kaothanthong, N., & Jeenanunta, C. (2019). Simple Online and Real-time Tracking with Feature Matching Enhancement for Re-identification after Occlusion. INTERNATIONAL SCIENTIFIC JOURNAL OF ENGINEERING AND TECHNOLOGY (ISJET), 3(2), 34-41. Retrieved from https://ph02.tci-thaijo.org/index.php/isjet/article/view/188741
Section
Research Article
