Developing YOLO to Lock onto a Single Palm Object Among Several Uniform Objects

Agustinus Rudatyo Himamunanto(1*), Jean Pinter Son Zalukhu(2)
(1) 
(2) Universitas Kristen Immanuel
(*) Corresponding Author
DOI : 10.35889/progresif.v20i2.2409

Abstract

The hand is one of the body parts most frequently used in daily human activities. In line with the development of digital and computing technology, the role of the hand has the potential to become even broader: hands can serve as an operational means or as a control-input model for a device. A problem arises when the camera captures more than one hand, which can cause ambiguity because multiple hand-gesture control inputs appear at once. The genuine hand control input must therefore be locked, or tagged, in such a way that the other hands can be ignored. In this research, the Google Media Pipe Hand (GMPH) framework is used to mark the hand area in the input image, and the YOLO framework then recognizes the marked hand, locking onto it among the visuals of the other hands. Based on tests involving 800 video samples containing one or more hand-gesture images, the modified YOLO achieved an accuracy of 97.5%.

Keywords: Hand; Markers; Google Media Pipe Hand; YOLO Framework
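The locking step described in the abstract, selecting the one detection that carries the marker and then keeping it across frames while other hands are ignored, can be illustrated with a small sketch. This is not the authors' implementation; the box representation, the marker point, and the IoU threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box for one detected hand."""
    x1: float
    y1: float
    x2: float
    y2: float

def contains(box: Box, point: Tuple[float, float]) -> bool:
    """True if the marker point lies inside the box."""
    px, py = point
    return box.x1 <= px <= box.x2 and box.y1 <= py <= box.y2

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def lock_hand(detections: List[Box], marker: Tuple[float, float]) -> Optional[int]:
    """Lock onto the detection whose box contains the marker; ignore the rest."""
    for i, box in enumerate(detections):
        if contains(box, marker):
            return i
    return None

def track_locked(prev: Box, detections: List[Box], min_iou: float = 0.3) -> Optional[Box]:
    """In later frames, keep the detection that best overlaps the locked box."""
    best = max(detections, key=lambda d: iou(prev, d), default=None)
    if best is not None and iou(prev, best) >= min_iou:
        return best
    return None
```

In the paper's pipeline the marker would come from the GMPH hand-area output and the detections from YOLO; here both are passed in as plain coordinates so the selection logic stands on its own.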

 



