Tracking System for Living Beings and Objects: Integration of Accessible Mathematical Contributions and Graph Theory in Tracking System Design

Authors

  • Anass Ariss Department of Computer Science, Faculty of Sciences, Mohammed V University in Rabat, Rabat 10000, Morocco. Author
  • Imane Ennejjai Department of Computer Science, Faculty of Sciences, Mohammed V University in Rabat, Rabat 10000, Morocco. Author
  • Jamal Mabrouki Laboratory of Spectroscopy, Molecular Modelling, Materials, Nanomaterial, Water and Environment, CERNE2D, Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco. Author
  • Asmaa Lamjid Department of Computer Science, Faculty of Sciences, Mohammed V University in Rabat, Rabat 10000, Morocco. Author
  • Nassim Kharmoum Department of Computer Science, Faculty of Sciences, Mohammed V University in Rabat, Rabat 10000, Morocco. Author
  • Soumia Ziti Department of Computer Science, Faculty of Sciences, Mohammed V University in Rabat, Rabat 10000, Morocco. Author

DOI:

https://doi.org/10.56294/dm2024.376

Keywords:

Tracking, Tracking System, Graph, Graph Learning, Hypergraph

Abstract

This paper presents a theoretical framework for a tracking system, generalizing the formulation of tracking for living beings and objects. Most tracking systems are developed within a specific framework, for tracking in either limited or unlimited space, and often rely on technical tools dedicated to tracking either living beings or objects. In this study, we propose a system theory that formulates the task of tracking both living beings and objects. Graphical modeling is widely employed in tracking to establish correct connections between the tracked elements and the other components of the system. Basing a tracking system on graphs, in both its theoretical and practical aspects, remains the optimal method for achieving a high-performing, relevant system that adapts to various situations. This paper introduces a tracking system based on graph learning and hypergraphs that fully leverages direct and indirect relations while considering the order between the system's multiple links. Tracking is thus formulated as a search problem on graphs and hypergraphs, with vertices representing the elements of the system (living beings or objects) and edges representing the types of connections between these elements. We define a law governing the relationships between the vertices, managing the data shared between the elements of the system and other processes. Furthermore, examples of single- and multi-context tracking situations demonstrate that the proposed system, in its theoretical foundation, outperforms existing systems.
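The abstract's formulation (vertices as tracked elements, hyperedges as connection types, tracking as a search problem) can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual construction: the class name, the edge labels, and the use of breadth-first search are all assumptions made for the example.

```python
from collections import defaultdict, deque

class TrackingHypergraph:
    """Minimal hypergraph: vertices are tracked elements (living beings or
    objects); each hyperedge groups all vertices sharing one connection
    type. Labels such as 'same_room' below are hypothetical examples."""

    def __init__(self):
        self.edges = defaultdict(set)       # edge label -> member vertices
        self.membership = defaultdict(set)  # vertex -> labels it belongs to

    def add_edge(self, label, vertices):
        """Register a hyperedge connecting an arbitrary set of vertices."""
        self.edges[label].update(vertices)
        for v in vertices:
            self.membership[v].add(label)

    def neighbors(self, v):
        """All vertices reachable from v through any shared hyperedge."""
        out = set()
        for label in self.membership[v]:
            out |= self.edges[label]
        out.discard(v)
        return out

    def track(self, source, target):
        """Tracking as graph search: BFS from source to target, returning
        the chain of intermediate elements, or None if unreachable."""
        parent = {source: None}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            if v == target:
                path = []
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return path[::-1]
            for u in self.neighbors(v):
                if u not in parent:
                    parent[u] = v
                    queue.append(u)
        return None

# Hypothetical usage: track a person to an object via shared connections.
g = TrackingHypergraph()
g.add_edge("same_room", {"alice", "keycard"})
g.add_edge("owner_of", {"keycard", "laptop"})
print(g.track("alice", "laptop"))  # a chain alice -> keycard -> laptop
```

A hypergraph is used rather than a plain graph so that one connection type can bind more than two elements at once; collapsing each hyperedge to pairwise edges would lose that grouping, which is what the abstract means by leveraging indirect relations.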

References

1. K. Okuma, A. Taleghani, N. De Freitas, J. J. Little, and D. G. Lowe. A boosted particle filter: Multitarget detection and tracking. In Computer Vision-ECCV 2004, pages 28–39. Springer, 2004. DOI: https://doi.org/10.1007/978-3-540-24670-1_3

2. B. Leibe, K. Schindler, N. Cornelis, and L. Van Gool. Coupled object detection and tracking from static cameras and moving vehicles. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(10):1683–1698, 2008. DOI: https://doi.org/10.1109/TPAMI.2008.170

3. A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social LSTM: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961–971, 2016.

4. C. Huang, B. Wu, and R. Nevatia. Robust object tracking by hierarchical association of detection responses. In Computer Vision–ECCV 2008, pages 788–801. Springer, 2008. DOI: https://doi.org/10.1007/978-3-540-88688-4_58

5. Y. Xiang, A. Alahi, and S. Savarese. Learning to track: Online multi-object tracking by decision making. In Proceedings of the IEEE International Conference on Computer Vision, pages 4705–4713, 2015. DOI: https://doi.org/10.1109/ICCV.2015.534

6. S. Ali and M. Shah. Floor fields for tracking in high density crowd scenes. In Computer Vision–ECCV 2008, pages 1–14. Springer, 2008. DOI: https://doi.org/10.1007/978-3-540-88688-4_1

7. A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese. Learning social etiquette: Human trajectory understanding in crowded scenes. In European Conference on Computer Vision, pages 549–565. Springer, 2016. DOI: https://doi.org/10.1007/978-3-319-46484-8_33

8. M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. Van Gool. Robust tracking-by-detection using a detector confidence particle filter. In Computer Vision, 2009 IEEE 12th International Conference on, pages 1515–1522. IEEE, 2009. DOI: https://doi.org/10.1109/ICCV.2009.5459278

9. S. Pellegrini, A. Ess, K. Schindler, and L. Van Gool. You’ll never walk alone: Modeling social behavior for multi-target tracking. In Computer Vision, 2009 IEEE 12th International Conference on, pages 261–268. IEEE, 2009. DOI: https://doi.org/10.1109/ICCV.2009.5459260

10. W. Choi and S. Savarese. Multiple target tracking in world coordinate with single, minimally calibrated camera. In Computer Vision–ECCV 2010, pages 553–567. Springer, 2010.

11. C. Dicle, O. I. Camps, and M. Sznaier. The way they move: Tracking multiple targets with similar appearance. In Proceedings of the IEEE International Conference on Computer Vision, pages 2304–2311, 2013. DOI: https://doi.org/10.1109/ICCV.2013.286

12. K. Yamaguchi, A. C. Berg, L. E. Ortiz, and T. L. Berg. Who are you with and where are you going? In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1345–1352. IEEE, 2011.

13. P. Scovanner and M. F. Tappen. Learning pedestrian dynamics from the real world. In Computer Vision, 2009 IEEE 12th International Conference on, pages 381–388. IEEE, 2009.

14. S. Pellegrini, A. Ess, and L. Van Gool. Improving data association by joint modeling of pedestrian trajectories and groupings. In European Conference on Computer Vision, pages 452–465. Springer, 2010.

15. Z. Wu, A. Thangali, S. Sclaroff, and M. Betke. Coupling detection and data association for multiple object tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1948–1955. IEEE, 2012.

16. S. Oron, A. Bar-Hillel, and S. Avidan. Real-time tracking-with-detection for coping with viewpoint change. Machine Vision and Applications, 26(4):507–518, 2015. DOI: https://doi.org/10.1007/s00138-015-0676-z

17. S. Tang, B. Andres, M. Andriluka, and B. Schiele. Multi-person tracking by multicut and deep matching. In European Conference on Computer Vision, pages 100–111. Springer, 2016. DOI: https://doi.org/10.1007/978-3-319-48881-3_8

18. N. Le, A. Heili, and J.-M. Odobez. Long-term time-sensitive costs for CRF-based tracking by detection. In European Conference on Computer Vision, pages 43–51. Springer, 2016.

19. H. Izadinia, V. Ramakrishna, K. M. Kitani, and D. Huber. Multi-pose multi-target tracking for activity understanding. In Applications of Computer Vision (WACV), 2013 IEEE Workshop on, pages 385–390. IEEE, 2013. DOI: https://doi.org/10.1109/WACV.2013.6475044

20. A. R. Zamir, A. Dehghan, and M. Shah. GMCP-tracker: Global multi-object tracking using generalized minimum clique graphs. In Computer Vision–ECCV 2012, pages 343–356. Springer, 2012.

21. B. Y. S. Khanloo, F. Stefanus, M. Ranjbar, Z.-N. Li, N. Saunier, T. Sayed, and G. Mori. A large margin framework for single camera offline tracking with hybrid cues. Computer Vision and Image Understanding, 116(6):676–689, 2012. DOI: https://doi.org/10.1016/j.cviu.2012.01.004

22. D. Helbing and P. Molnar. Social force model for pedestrian dynamics. Physical Review E, 51(5):4282, 1995. DOI: https://doi.org/10.1103/PhysRevE.51.4282

23. D. Held, S. Thrun, and S. Savarese. Learning to track at 100 fps with deep regression networks. In European Conference on Computer Vision, pages 749–765. Springer, 2016. DOI: https://doi.org/10.1007/978-3-319-46448-0_45

24. S. Hong and B. Han. Visual tracking by sampling tree-structured graphical models. In European Conference on Computer Vision, pages 1–16. Springer, 2014. DOI: https://doi.org/10.1007/978-3-319-10590-1_1

25. M. Hu, S. Ali, and M. Shah. Detecting global motion patterns in complex videos. In Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, pages 1–5. IEEE, 2008. DOI: https://doi.org/10.1109/ICPR.2008.4760950

26. X. Zhao, D. Gong, and G. Medioni. Tracking using motion patterns for very crowded scenes. In Computer Vision–ECCV 2012, pages 315–328. Springer, 2012. DOI: https://doi.org/10.1007/978-3-642-33709-3_23

27. P. Mordohai and G. Medioni. Dimensionality estimation, manifold learning and function approximation using tensor voting. Journal of Machine Learning Research, 11(Jan):411–450, 2010.

28. L. Kratz and K. Nishino. Tracking with local spatio-temporal motion patterns in extremely crowded scenes. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 693–700. IEEE, 2010. DOI: https://doi.org/10.1109/CVPR.2010.5540149

29. L. Kratz and K. Nishino. Tracking pedestrians using local spatio-temporal motion patterns in extremely crowded scenes. IEEE transactions on pattern analysis and machine intelligence, 34(5):987–1002, 2012. DOI: https://doi.org/10.1109/TPAMI.2011.173

30. M. Rodriguez, S. Ali, and T. Kanade. Tracking in unstructured crowded scenes. In 2009 IEEE 12th International Conference on Computer Vision, pages 1389–1396. IEEE, 2009. DOI: https://doi.org/10.1109/ICCV.2009.5459301

31. E. Ristani and C. Tomasi. Tracking multiple people online and in real time. In Asian Conference on Computer Vision, pages 444–459. Springer, 2014. DOI: https://doi.org/10.1007/978-3-319-16814-2_29

32. H. Nam and B. Han. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4293–4302, 2016. DOI: https://doi.org/10.1109/CVPR.2016.465

33. M. Zhai, M. J. Roshtkhari, and G. Mori. Deep learning of appearance models for online object tracking. arXiv preprint arXiv:1607.02568, 2016.

34. A. Milan, S. Roth, and K. Schindler. Continuous energy minimization for multitarget tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1):58–72, 2014. DOI: https://doi.org/10.1109/TPAMI.2013.103

35. K. Shafique, M. W. Lee, and N. Haering. A rank constrained continuous formulation of multi-frame multi-target tracking problem. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008. DOI: https://doi.org/10.1109/CVPR.2008.4587577

36. Q. Yu, G. Medioni, and I. Cohen. Multiple target tracking using spatio-temporal markov chain monte carlo data association. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2007. DOI: https://doi.org/10.1109/CVPR.2007.382991

37. B. Zhan, D. N. Monekosso, P. Remagnino, S. A. Velastin, and L.-Q. Xu. Crowd analysis: a survey. Machine Vision and Applications, 19(5-6):345–357, 2008.

38. L. Leal-Taixé, C. Canton-Ferrer, and K. Schindler. Learning by tracking: Siamese CNN for robust target association. arXiv preprint arXiv:1604.07866, 2016. DOI: https://doi.org/10.1109/CVPRW.2016.59

39. L. Wen, Z. Lei, S. Lyu, S. Z. Li, and M.-H. Yang. Exploiting hierarchical dense structures on hypergraphs for multi-object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):1983–1996, 2016. DOI: https://doi.org/10.1109/TPAMI.2015.2509979

40. B. Yang and R. Nevatia. Multi-target tracking by online learning of non-linear motion patterns and robust appearance models. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1918–1925. IEEE, 2012. DOI: https://doi.org/10.1109/CVPR.2012.6247892

41. S. Oron, A. Bar-Hillel, and S. Avidan. Extended Lucas-Kanade tracking. In European Conference on Computer Vision, pages 142–156. Springer, 2014. DOI: https://doi.org/10.1007/978-3-319-10602-1_10

42. B. Yang and R. Nevatia. An online learned CRF model for multi-target tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2034–2041. IEEE, 2012. DOI: https://doi.org/10.1109/CVPR.2012.6247907

43. F. Solera, S. Calderara, E. Ristani, C. Tomasi, and R. Cucchiara. Tracking social groups within and across cameras. IEEE Transactions on Circuits and Systems for Video Technology.

44. Dawei Du, Honggang Qi, Longyin Wen, Qi Tian, Qingming Huang, and Siwei Lyu. 2017. Geometric hypergraph learning for visual tracking. IEEE TC 47, 12 (2017), 4182–4195. DOI: https://doi.org/10.1109/TCYB.2016.2626275

45. Idir Filali, Mohand Saïd Allili, and Nadjia Benblidia. 2016. Multi-scale salient object detection using graph ranking and global–local saliency refinement. Elsevier Signal Proc. Image 47 (2016), 380–401. DOI: https://doi.org/10.1016/j.image.2016.07.007

46. Meng Li and Howard Leung. 2016. Multiview skeletal interaction recognition using active joint interaction graph. IEEE TM 18, 11 (2016), 2293–2302. DOI: https://doi.org/10.1109/TMM.2016.2614228

47. H. K. Meena, K. K. Sharma, and S. D. Joshi. 2017. Improved facial expression recognition using graph signal processing. IET EL 53, 11 (2017), 718–720. DOI: https://doi.org/10.1049/el.2017.0420

48. H. Nam, M. Baek, and B. Han. 2016. Modeling and propagating CNNs in a tree structure for visual tracking. CoRR. Retrieved from abs/1608.07242.

49. Tao Wang and Haibin Ling. 2018. Gracker: A graph-based planar object tracker. IEEE TPAMI 40, 6 (2018), 1494–1501. DOI: https://doi.org/10.1109/TPAMI.2017.2716350

50. D. Yeo, J. Son, B. Han, and J. H. Han. 2017. Superpixel-based tracking-by-segmentation using Markov chains. In Proceedings of the CVPR. IEEE, 511–520. DOI: https://doi.org/10.1109/CVPR.2017.62

51. Dawei Du, Honggang Qi, Wenbo Li, Longyin Wen, Qingming Huang, and Siwei Lyu. 2016. Online deformable object tracking based on structure-aware hyper-graph. TIP 25, 8 (2016), 3572–3584. DOI: https://doi.org/10.1109/TIP.2016.2570556

52. W. Hu, T. Tan, L. Wang, S. Maybank, A survey on visual surveillance of object motion and behaviors, IEEE Trans. Syst. Man Cybern., Part C, Appl. Rev. 34 (3) (2004) 334–352. DOI: https://doi.org/10.1109/TSMCC.2004.829274

53. X. Wang, Intelligent multi-camera video surveillance: a review, Pattern Recognit. Lett. 34 (1) (2013) 3–19. DOI: https://doi.org/10.1016/j.patrec.2012.07.005

54. J. Candamo, M. Shreve, D.B. Goldgof, D.B. Sapper, R. Kasturi, Understanding transit scenes: a survey on human behavior-recognition algorithms, IEEE Trans. Intell. Transp. Syst. 11 (1) (2010) 206–224. DOI: https://doi.org/10.1109/TITS.2009.2030963

55. B. Zhan, D.N. Monekosso, P. Remagnino, S.A. Velastin, L.-Q. Xu, Crowd analysis: a survey, Mach. Vis. Appl. 19 (5–6) (2008) 345–357. DOI: https://doi.org/10.1007/s00138-008-0132-4

56. I.S. Kim, H.S. Choi, K.M. Yi, J.Y. Choi, S.G. Kong, Intelligent visual surveillance-a survey, Int. J. Control. Autom. Syst. 8 (5) (2010) 926–939. DOI: https://doi.org/10.1007/s12555-010-0501-4

57. D.A. Forsyth, O. Arikan, L. Ikemoto, J. O’Brien, D. Ramanan, et al., Computational studies of human motion: part 1, tracking and motion synthesis, Found. Trends Comput. Graph. Vis. 1 (2–3) (2006) 77–254. DOI: https://doi.org/10.1561/0600000005

58. A. Yilmaz, O. Javed, M. Shah, Object tracking: a survey, ACM Comput. Surv. 38 (4) (2006) 13.

59. X. Li, W. Hu, C. Shen, Z. Zhang, A. Dick, A.V.D. Hengel, A survey of appearance models in visual object tracking, ACM Trans. Intell. Syst. Technol. 4 (4) (2013) 58. DOI: https://doi.org/10.1145/2508037.2508039

60. Y. Wu, J. Lim, M.-H. Yang, Online object tracking: a benchmark, in: Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Anchorage, AL, USA, 2013, pp. 2411–2418. DOI: https://doi.org/10.1109/CVPR.2013.312

61. L. Leal-Taixé, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: towards a benchmark for multi-target tracking, arXiv:1504.01942, http://arxiv.org/abs/1504.01942.

62. Z.-Q. Zhao, P. Zheng, S.-t. Xu, X. Wu, Object detection with deep learning: a review, IEEE Trans. Neural Netw. Learn. Syst. 30 (11) (2019) 3212–3232. DOI: https://doi.org/10.1109/TNNLS.2018.2876865

63. S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE TPAMI, vol. 39, no. 6, pp. 1137–1149, 2017. DOI: https://doi.org/10.1109/TPAMI.2016.2577031

64. D. Impiombato, S. Giarrusso, T. Mineo, O. Catalano, C. Gargano, G. La Rosa, F. Russo, G. Sottile, S. Billotta, G. Bonanno, S. Garozzo, A. Grillo, D. Marano, and G. Romeo, "You Only Look Once: Unified, Real-Time Object Detection," Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 794, pp. 185–192, 2015. DOI: https://doi.org/10.1016/j.nima.2015.05.028

65. J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” 2016. DOI: https://doi.org/10.1109/CVPR.2017.690

66. P. Felzenszwalb, D. McAllester, and D. Ramanan, "A discriminatively trained, multiscale, deformable part model," in Proc. CVPR, 2008. DOI: https://doi.org/10.1109/CVPR.2008.4587597

67. Y. Tian, A. Dehghan, and M. Shah, “On detection, data association and segmentation for multi-target tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2018.

68. L. Wen, D. Du, S. Li, X. Bian, and S. Lyu, “Learning nonuniform hypergraph for multi-object tracking,” arXiv preprint arXiv:1812.03621, 2018.

69. H. Sheng, Y. Zhang, J. Chen, Z. Xiong, and J. Zhang, "Heterogeneous association graph fusion for target association in multiple object tracking," IEEE Transactions on Circuits and Systems for Video Technology, 2018. DOI: https://doi.org/10.1109/TCSVT.2018.2882192

70. K. Shafique and M. Shah, “A noniterative greedy algorithm for multiframe point correspondence,” IEEE transactions on pattern analysis and machine intelligence, vol. 27, no. 1, pp. 51–65, 2005. DOI: https://doi.org/10.1109/TPAMI.2005.1

71. D. Reid et al., “An algorithm for tracking multiple targets,” IEEE transactions on Automatic Control, vol. 24, no. 6, pp. 843–854, 1979. DOI: https://doi.org/10.1109/TAC.1979.1102177

72. G. Shu, A. Dehghan, O. Oreifej, E. Hand, and M. Shah, "Part-based multiple-person tracking with partial occlusion handling," in Proc. CVPR. IEEE, 2012, pp. 1815–1821. DOI: https://doi.org/10.1109/CVPR.2012.6247879

73. A. Roshan Zamir, A. Dehghan, and M. Shah, “GMCP-tracker: Global multi-object tracking using generalized minimum clique graphs,” in Lecture Notes in Computer Science, 2012. DOI: https://doi.org/10.1007/978-3-642-33709-3_25

74. B. Wu and R. Nevatia, "Detection and tracking of multiple, partially occluded humans by Bayesian combination of edgelet based part detectors," International Journal of Computer Vision, vol. 75, no. 2, pp. 247–266, 2007. DOI: https://doi.org/10.1007/s11263-006-0027-7

75. A. Dehghan, S. Modiri Assari, and M. Shah, "GMMCP tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking," in Proc. CVPR, 2015, pp. 4091–4099. DOI: https://doi.org/10.1109/CVPR.2015.7299036

76. H. Pirsiavash, D. Ramanan, and C. C. Fowlkes, “Globally-optimal greedy algorithms for tracking a variable number of objects,” Proc. CVPR, pp. 1201–1208, 2011. DOI: https://doi.org/10.1109/CVPR.2011.5995604

77. A. A. Butt and R. T. Collins, “Multi-target tracking by lagrangian relaxation to min-cost network flow,” in Proc. CVPR, 2013, pp. 1846–1853. DOI: https://doi.org/10.1109/CVPR.2013.241

78. J. Berclaz, F. Fleuret, E. Türetken, and P. Fua, "Multiple object tracking using k-shortest paths optimization," IEEE TPAMI, vol. 33, no. 9, pp. 1806–1819, 2011. DOI: https://doi.org/10.1109/TPAMI.2011.21

79. H. B. Shitrit, J. Berclaz, F. Fleuret, and P. Fua, “Multi-commodity network flow for tracking multiple people,” IEEE TPAMI, vol. 36, no. 8, pp. 1614–1627, 2014. DOI: https://doi.org/10.1109/TPAMI.2013.210

80. Tutte, William Thomas. Graph theory. Vol. 21. Cambridge University Press, 2001.

81. West, Douglas Brent. Introduction to graph theory. Vol. 2. Upper Saddle River: Prentice Hall, 2001.

82. Gould, Ronald. Graph theory. Courier Corporation, 2012.

83. Bollobás, Béla. Modern graph theory. Vol. 184. Springer Science & Business Media, 1998.

84. Bondy, John Adrian, and Uppaluri Siva Ramachandra Murty. Graph theory with applications. Vol. 290. London: Macmillan, 1976. DOI: https://doi.org/10.1007/978-1-349-03521-2

85. Hallinan, Maureen T. "Tracking: From theory to practice." Sociology of Education 67.2 (1994): 79-84.

86. Yilmaz, Alper, Omar Javed, and Mubarak Shah. ”Object tracking: A survey.” Acm computing surveys (CSUR) 38.4 (2006): 13-es. DOI: https://doi.org/10.1145/1177352.1177355

87. McKenna, Stephen J., et al. ”Tracking groups of people.” Computer vision and image understanding 80.1 (2000): 42-56. DOI: https://doi.org/10.1006/cviu.2000.0870

88. Zhou, Xingyi, Vladlen Koltun, and Philipp Krähenbühl. "Tracking objects as points." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV. Cham: Springer International Publishing, 2020. DOI: https://doi.org/10.1007/978-3-030-58548-8_28

89. Bretto, Alain. Hypergraph Theory: An Introduction. Mathematical Engineering. Cham: Springer, 2013.

90. Hopkins, Brian, and Robin J. Wilson. "The truth about Königsberg." The College Mathematics Journal 35.3 (2004): 198–207. DOI: https://doi.org/10.1080/07468342.2004.11922073

91. Bretto, Alain. Hypergraph Theory: An Introduction. Mathematical Engineering. Cham: Springer, 2013. DOI: https://doi.org/10.1007/978-3-319-00080-0

92. Hallinan, Maureen T. "Tracking: From theory to practice." Sociology of Education 67.2 (1994): 79-84. DOI: https://doi.org/10.2307/2112697

Published

2024-08-16

Issue

Section

Original

How to Cite

1.
Ariss A, Ennejjai I, Mabrouki J, Lamjid A, Kharmoum N, Ziti S. Tracking System for Living Beings and Objects: Integration of Accessible Mathematical Contributions and Graph Theory in Tracking System Design. Data and Metadata [Internet]. 2024 Aug. 16 [cited 2026 Feb. 25];3:.376. Available from: https://dm.ageditor.ar/index.php/dm/article/view/376