DRIP INFUSION MONITORING AND DATA LOGGING SYSTEM BASED ON YOLOv5

Authors

  • Giri Wahyu Wiriasto University of Mataram
  • Andika Rizaldy Department of Electrical Engineering, University of Mataram
  • Putu Aditya Wiguna Department of Medical and Health Science, University of Mataram
  • Indira Puteri Kinasih Faculty of Science, Universiti Brunei Darussalam

DOI:

https://doi.org/10.33480/jitk.v11i1.6818

Keywords:

convolutional neural network, drip infusion, intravenous infusion, YOLOv5

Abstract

Intravenous (IV) infusion delivers medication or fluids directly into the patient's body and requires an accurate drops-per-minute (TPM) rate to ensure the correct dosage is administered. Manual counting, which is still widely used, tends to be inefficient and carries a high risk of human error, so a more reliable automated approach is needed. In this study, we developed a prototype of an automatic infusion monitoring system based on the CNN-YOLOv5 architecture. The system records a one-minute IV drip video on a mobile device and processes it on a server to compute the TPM automatically: YOLOv5 detects each drip, Deep SORT tracks it, and a unique ID assigned to each droplet ensures it is counted only once until it exits the frame. The calculation results are stored in a patient database that we designed. We also explored the effect of dataset background on accuracy. Testing was conducted on 48 videos (30 fps) with two background types, white (LBP) and black (LBH), and drip rates of 20, 30, 40, and 50 TPM with varying durations. The black background yielded higher accuracy, reaching 0.79 compared to 0.58 on the white background, both with a precision of 1.00. The system detected drips with high precision and good accuracy, particularly on LBP for rates below 40 TPM and on LBH for rates below 50 TPM.
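The counting logic described in the abstract (YOLOv5 detection, Deep SORT tracking, one count per unique droplet ID, then scaling to drops per minute) can be sketched as follows. This is a minimal illustration only, assuming the per-frame track IDs from Deep SORT are already available; the function names `count_drops` and `tpm_from_video` are hypothetical and not the authors' actual implementation.

```python
# Hypothetical sketch of the counting stage: each Deep SORT track ID is a
# droplet, counted exactly once regardless of how many frames it spans.

def count_drops(track_ids_per_frame):
    """Count each track ID exactly once across all frames of the clip."""
    seen = set()
    for ids in track_ids_per_frame:
        seen.update(ids)
    return len(seen)

def tpm_from_video(track_ids_per_frame, fps=30):
    """Drops per minute: scale the unique-drop count to a 60 s window."""
    duration_s = len(track_ids_per_frame) / fps
    return count_drops(track_ids_per_frame) * 60.0 / duration_s

# Toy example: three droplets tracked across a four-frame clip; droplet 1
# persists for two frames but is still counted once.
frames = [[1], [1, 2], [2], [3]]
print(count_drops(frames))  # 3 unique droplets
```

For a full one-minute clip the scaling factor is 1, so the TPM equals the unique-ID count directly; the `fps` argument only matters for shorter or longer recordings.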

Downloads

Download data is not yet available.

References

K. Ernstmeyer and E. Christman, Eds., Nursing Skills [Internet]. Eau Claire, WI: Chippewa Valley Technical College (Open Resources for Nursing), 2021, ch. 23, "IV Therapy Management." [Online]. Available: https://www.ncbi.nlm.nih.gov/books/NBK593209/

K. Venkatesh, S. S. Alagundagi, V. Garg, K. Pasala, D. Karia, and M. Arora, "DripOMeter: An open-source opto-electronic system for intravenous (IV) infusion monitoring," HardwareX, vol. 12, p. e00345, 2022, doi: 10.1016/j.ohx.2022.e00345.

S. Song, S. Yan, S. Zhang, and Y. Jiang, "Design of an infusion monitoring system based on image processing," J. Phys.: Conf. Ser., vol. 2037, no. 1, p. 012109, Sep. 2021, doi: 10.1088/1742-6596/2037/1/012109.

K. Kan and W. C. Levine, "Infusion Pumps," in Anesthesia Equipment, 3rd ed., J. Ehrenwerth, J. B. Eisenkraft, and J. M. Berry, Eds. W.B. Saunders, 2021, pp. 351-367, doi: 10.1016/B978-0-323-67279-5.00016-9.

S. A. Kadiran, E. Supriyanto, and M. Y. Maghribi, “Sistem Monitoring dan Controlling Cairan Infus Berbasis Website,” J. Riset Rekayasa Elektro, vol. 5, no. 1, 2023, doi: 10.30595/jrre.v5i1.17743.

M. Z. Samsono Hadi, H. Mahmudah and L. Thania, "Design of Monitoring System for Infused Liquid Volume Based Wireless Communication," 2021 International Conference on Computer Science and Engineering (IC2SE), Padang, Indonesia, 2021, pp. 1-6, doi: 10.1109/IC2SE52832.2021.9792048.

N. Giaquinto, M. Scarpetta, M. A. Ragolia, and P. Pappalardi, “Real-time drip infusion monitoring through a computer vision system,” in IEEE Med. Meas. Appl. (MeMeA), 2020, doi: 10.1109/MeMeA49120.2020.9137359.

N. Giaquinto, M. Scarpetta, M. Spadavecchia, and G. Andria, “Deep learning-based computer vision for real-time intravenous drip infusion monitoring,” IEEE Sens. J., vol. 21, no. 13, 2021, doi: 10.1109/JSEN.2020.3039009.

Ultralytics, "YOLOv5," GitHub. [Online]. Available: https://github.com/ultralytics/yolov5 [Accessed: Jul. 17, 2025].

U. Nepal and H. Eslamiat, “Comparing YOLOv3, YOLOv4 and YOLOv5 for Autonomous Landing Spot Detection in Faulty UAVs,” Sensors, vol. 22, no. 2, 2022, doi: 10.3390/s22020464.

B. Gao, “Research on Two-Way Detection of YOLO V5s+Deep Sort Road Vehicles Based on Attention Mechanism,” in J. Phys.: Conf. Ser., vol. 2303, 2022, doi: 10.1088/1742-6596/2303/1/012057.

A. Rizaldy, “Set Anak Background Putih,” roboflow.com. Accessed: Jan. 16, 2024. [Online]. Available: https://universe.roboflow.com/andika-rizaldy/set-anak-bp

A. Rizaldy, “Set Anak Background Hitam,” roboflow.com. Accessed: Jan. 16, 2024. [Online]. Available: https://universe.roboflow.com/andika-rizaldy/set-anak-bh

Q. Lin, G. Ye, J. Wang, and H. Liu, “RoboFlow: a Data-centric Workflow Management System for Developing AI-enhanced Robots,” in Proc. Mach. Learn. Res., 2021.

M. A. Barayan et al., “Effectiveness of Machine Learning in Assessing the Diagnostic Quality of Bitewing Radiographs,” Appl. Sci. (Switzerland), vol. 12, no. 19, 2022, doi: 10.3390/app12199588.

R. Xu, H. Lin, K. Lu, L. Cao, and Y. Liu, “A forest fire detection system based on ensemble learning,” Forests, vol. 12, no. 2, 2021, doi: 10.3390/f12020217.

D. Permana and J. Sutopo, “Aplikasi Pengenalan Abjad Sistem Isyarat Bahasa Indonesia (SIBI) Dengan Algoritma YOLOv5,” J. Inform., vol. 11, no. 2, 2023.

R. Pereira, G. Carvalho, L. Garrote, and U. J. Nunes, “Sort and Deep-SORT Based Multi-Object Tracking for Mobile Robotics: Evaluation with New Data Association Metrics,” Appl. Sci. (Switzerland), vol. 12, no. 3, 2022, doi: 10.3390/app12031319.

A. Abed, B. Akrout, and I. Amous, "Deep learning-based few-shot person re-identification from top-view RGB and depth images," Neural Comput. Appl., vol. 36, pp. 19365-19382, 2024, doi: 10.1007/s00521-024-10239-6.

Thien, “Vehicle Detection and Counting System on Streamlit.” GitHub, Mar. 25, 2023. Accessed: Aug. 18, 2023. [Online]. Available: https://github.com/npq-thien/Vehicle_Detection_and_Counting_System/activity?activity_type=direct_push

I. Markoulidakis, I. Rallis, I. Georgoulas, G. Kopsiaftis, A. Doulamis, and N. Doulamis, “Multiclass Confusion Matrix Reduction Method and Its Application on Net Promoter Score Classification Problem,” Technologies (Basel), vol. 9, no. 4, 2021, doi: 10.3390/technologies9040081.

O. C. Novac et al., “Analysis of the Application Efficiency of TensorFlow and PyTorch in Convolutional Neural Network,” Sensors, vol. 22, no. 22, 2022, doi: 10.3390/s22228872.

A. A. Khan, A. A. Laghari, and S. A. Awan, “Machine Learning in Computer Vision: A Review,” EAI Endorsed Trans. Scalable Inf. Syst., vol. 8, no. 32, 2021, doi: 10.4108/eai.21-4-2021.169418.

D. Khurana, A. Koli, K. Khatter, and S. Singh, “Natural language processing: state of the art, current trends and challenges,” Multimed Tools Appl., vol. 82, no. 3, 2023, doi: 10.1007/s11042-022-13428-4.

Published

2025-08-27

How to Cite

[1]
G. W. Wiriasto, A. Rizaldy, P. A. Wiguna, and I. P. Kinasih, “DRIP INFUSION MONITORING AND DATA LOGGING SYSTEM BASED ON YOLOv5”, jitk, vol. 11, no. 1, pp. 171–179, Aug. 2025.