COMPARISON OF CNN AND YOLO IN AN ATTENDANCE-BASED FACE RECOGNITION SYSTEM


Nurfadillah
Ida
Darniati
Rizki Yusliana Bakti
Titin Wahyuni
Muhammad Faisal

Abstract

Face recognition based on image data has been widely applied in automated attendance systems; however, it still faces challenges related to accuracy and efficiency under varying lighting conditions and facial pose variations. This study aims to compare the performance of Convolutional Neural Network (CNN) and You Only Look Once (YOLO) methods for face detection and recognition in a deep learning–based attendance system. The dataset consists of facial images collected from students in a limited campus environment with several variations in viewpoint and illumination. The research stages include image preprocessing, training of CNN and YOLO models, and performance evaluation using accuracy, precision, recall, and computation time metrics. The experimental results indicate that YOLO outperforms CNN in terms of detection speed and performance stability, while CNN demonstrates competitive classification performance on limited datasets. This study provides empirical insights into the characteristics of both methods in attendance system scenarios and can serve as a reference for selecting appropriate models for real-world implementation. The main limitations of this study are the dataset size and the restricted data acquisition scope.
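The abstract evaluates both models with accuracy, precision, and recall. As a minimal sketch (not the authors' code), the snippet below computes these three metrics from true and predicted identity labels, using macro averaging over classes; the label values are hypothetical examples.

```python
def evaluate(y_true, y_pred):
    """Return accuracy, macro-precision, and macro-recall for label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    # Accuracy: fraction of predictions that match the true identity.
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls = [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # Guard against division by zero for classes never predicted/present.
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    # Macro averaging: every identity contributes equally.
    return (accuracy,
            sum(precisions) / len(precisions),
            sum(recalls) / len(recalls))

# Example: five attendance predictions over three student identities.
acc, prec, rec = evaluate(["a", "a", "b", "c", "c"],
                          ["a", "b", "b", "c", "a"])
```

In this example, three of five predictions are correct, so accuracy is 0.6, while the macro-averaged precision and recall both come to 2/3.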


How to Cite
Nurfadillah, Ida, Darniati, Yusliana Bakti, R., Wahyuni, T., & Faisal, M. (2026). PERBANDINGAN CNN DAN YOLO PADA SISTEM PENGENALAN WAJAH BERBASIS PRESENSI. Jurnal Informatika Progres, 18(1), 93-101. https://doi.org/10.56708/progres.v18i1.532
