COMPARATIVE ANALYSIS OF YOLOV5SM, YOLOV8, AND YOLOV11 FOR IMAGE-BASED TEMPEH QUALITY RECOGNITION
DOI: https://doi.org/10.33480/jitk.v11i4.7930

Keywords: Augmentation, Computer Vision, Object Detection, Tempeh Quality, YOLO

Abstract
Tempeh is a traditional Indonesian fermented food whose quality is influenced by fermentation and environmental conditions. Quality assessment is still commonly performed manually, leading to subjectivity and inconsistency. This study compares three modern object detection models—YOLOv5sM, YOLOv8, and YOLOv11—for digital image–based tempeh quality recognition. A dataset of 1,000 images (500 good and 500 defective) was collected using a Logitech C270 camera under controlled lighting conditions. YOLOv5sM was trained with data augmentation (Mosaic, flip, rotation), while YOLOv8 and YOLOv11 were trained without augmentation to isolate architectural differences. All models were trained for 100 epochs using identical hyperparameters and evaluated on a 10% test set. Results show that YOLOv11 achieved the highest accuracy (98%), outperforming YOLOv8 (94%) and YOLOv5sM (88%). Although mAP@0.5 reached 99.5% across all models, stricter evaluation using mAP@0.5:0.95 revealed performance differences (96.2%, 96.9%, and 97.0%, respectively). The superior performance of YOLOv11 is attributed to its C3K2 and C2PSA modules, which enhance fine-grained feature extraction and localization precision. These findings indicate that YOLOv11 is the most suitable architecture for automated tempeh quality inspection.
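The abstract's contrast between a near-identical mAP@0.5 and a differentiating mAP@0.5:0.95 comes down to the IoU threshold a detection must clear to count as correct. A minimal sketch of that threshold effect, using hypothetical bounding boxes rather than the paper's data:

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt   = (10, 10, 110, 110)   # hypothetical ground-truth tempeh region
pred = (18, 14, 118, 114)   # slightly offset prediction

score = iou(gt, pred)       # ≈ 0.79 for these boxes
print(score >= 0.50)        # True  — counts as correct at the loose mAP@0.5 threshold
print(score >= 0.90)        # False — fails near the strict end of the 0.5:0.95 sweep
```

Because mAP@0.5:0.95 averages over thresholds from 0.5 to 0.95, a model whose boxes are only roughly placed still scores well at 0.5 but loses credit at the higher thresholds, which is where the localization advantage attributed to YOLOv11's C3K2 and C2PSA modules would surface.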
License
Copyright (c) 2026 Isa Mahfudi, Mila Kusumawardani, Moechammad Sarosa, Chandrasena Setiadi

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.





