Korea Society for Naval Science & Technology
[ Article ]
Journal of the KNST - Vol. 7, No. 2, pp.93-106
ISSN: 2635-4926 (Print)
Print publication date 30 Jun 2024
Received 08 Apr 2024 Revised 19 Apr 2024 Accepted 13 May 2024
DOI: https://doi.org/10.31818/JKNST.2024.6.7.2.93

Indoor Autonomous Surveillance Robot Based on RGB-D Sensor

Kyeongmo Kang1, * ; Won-jong Kim2
1LT, ROK Navy/Instructor, Department of Mechanical System Engineering, Republic of Korea Naval Academy
2Associate professor, Dept. of Mechanical Engineering, Texas A&M University

Correspondence to: *Kyeongmo Kang Dept. of Mechanical System Engineering, Republic of Korea Naval Academy 1 Jungwon-ro, Jinhae-gu, Changwon-si, Gyeongsangnam-do, 51704, Republic of Korea Tel: +82-55-907-5316 E-mail: kmkang@navy.ac.kr

Ⓒ 2024 Korea Society for Naval Science & Technology

Abstract

This article presents an autonomous surveillance robot with a low-cost Red-Green-Blue-Depth (RGB-D) sensor. The robot incorporates Simultaneous Localization and Mapping (SLAM), autonomous patrol, face recognition, and human tracking. Based on mathematical modeling, the control system of the robot is designed with proportional-integral-derivative (PID) controllers. Autonomous patrol is achieved through the control system and the Robot Operating System (ROS) Navigation Stack. A Convolutional Neural Network (CNN) model is employed for face recognition. For human tracking, a position-control system is developed based on skeleton tracking. The integration of these functions into a single system results in a low-cost surveillance robot, which is tested in real-life environments.
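The discrete PID control the abstract mentions can be sketched as follows. This is an illustrative Python sketch only: the class, gains, and the toy first-order wheel model in the usage note are hypothetical assumptions, not the authors' actual controller design.

```python
class PID:
    """Discrete PID controller of the kind used for wheel-velocity control.

    Gains (kp, ki, kd) and the sample time dt are assumed values for
    illustration; a real robot would tune them against the plant model.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running sum approximating the error integral
        self.prev_error = 0.0    # previous error for the difference derivative

    def update(self, setpoint, measurement):
        # Proportional term on the current error.
        error = setpoint - measurement
        # Rectangular-rule integration of the error.
        self.integral += error * self.dt
        # Backward-difference approximation of the error derivative.
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

For example, driving a toy first-order plant `v += u * dt` with a purely proportional controller (`kp=2.0, ki=0, kd=0, dt=0.1`) converges to the commanded velocity, since the error shrinks by a factor of `1 - kp*dt = 0.8` each step.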

Keywords:

Autonomous Patrol, Digital Control, RGB-D Sensor, ROS, Surveillance Robot

Acknowledgments

This research was supported by a research grant from the Maritime Research Institute of the Republic of Korea Naval Academy.
