COMPARISON OF SIFT AND ORB METHODS IN IDENTIFYING THE FACE OF BUDDHA STATUE
Statues are immobile, artistically stylized heritage objects, which makes their faces a distinctive target for facial recognition. Identifying similarities between statues can provide an important reference for tourism by distinguishing statue faces that differ while sharing nearly identical characteristics across countries, especially in Indonesia, where recognition is based on the condition, color, and shape of the face. The purpose of this study is to apply various types of transformations to the original images, prepared partly by hand, and to calculate matching evaluation parameters for each algorithm: the number of keypoints in the image, the matching rate, and the required execution time. To confirm the efficiency of the proposed method, experiments were carried out on a private dataset of statue images captured under low-light conditions and in different poses. Images of the Buddha's face were matched against Buddha statue face images available in a database, comparing the results produced by the SIFT and ORB methods under the various transformations. The output is the matched image returned by the best algorithm for each type of distortion. The faces tested include images of recognized Buddha statue faces, as well as photographs of some original statues that were not stored because of unclear lighting and camera-distance factors. The results show that the ORB method generates fewer keypoints than the SIFT method, and that SIFT's average recognition rate and processing time perform better: SIFT achieved an average of 100% recognition with a matching rate of 2% in 0.400285 s, while ORB achieved a matching rate of 1% in 0.400961 s.
Copyright (c) 2023 Linda Marlinda, Fikri Budiman, Ruri Suko Basuki, Ahmad Zainul Fanani
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.