Developing Sign Language Translator by Using OpenCV


  • Zulfan Honggala Putra, Politeknik Caltex Riau


This research uses a Convolutional Neural Network (CNN), one of the methods used in Deep Learning, to achieve a higher success rate in hand detection. Hand-sign detection is performed with the YOLOv3 algorithm. Sign language is detected from video input captured by a smartphone camera showing the signs. The dataset covers the 26 letters of the alphabet, and the results are analyzed with the following parameters: accuracy, precision, recall, F1 score, IoU (Intersection over Union), and mAP (mean Average Precision). The author uses 300 images, covering the letters A to Z, as the dataset. The results show that YOLO, as the object detection system, can recognize hands consistently, with an accuracy of 50%-90%, using pre-trained weights that were trained on the ImageNet dataset and are already able to recognize color, texture, and other low-level features. These pre-trained weights achieve an mAP of 82.90%, a precision of 99%, and an average IoU of 85.39%. Overall, the system shows good performance in detecting sign language.
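Among the evaluation parameters listed above, IoU (Intersection over Union) is the one most specific to object detection: it measures how much a predicted bounding box overlaps a ground-truth box. As a minimal sketch (not taken from this paper's implementation; the box format and example coordinates are assumptions for illustration), IoU for axis-aligned boxes can be computed as follows:

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Example with two partially overlapping 10x10 boxes (hypothetical values)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.142857...
```

A detection is typically counted as correct when its IoU with the ground-truth box exceeds a threshold (commonly 0.5); precision, recall, and mAP are then computed from those matched detections.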