Developing a Sign Language Translator Using OpenCV
Abstract
This research applies a Convolutional Neural Network (CNN), one of the methods used in deep learning, to achieve a higher success rate in hand detection. Hand-sign detection uses the YOLOv3 algorithm, and sign language is detected in video input from a smartphone camera showing the signs. The data consist of the 26 letters of the alphabet and are analyzed using the following parameters: accuracy, precision, recall, F1 score, IoU (Intersection over Union), and mAP (mean Average Precision). The authors use a dataset of 300 pictures covering the letters A to Z. The results show that YOLO, as the object-detection system, can recognize hands consistently, with an accuracy of 50%-90%, when using pre-trained weights trained on the ImageNet dataset, which are already able to recognize color, texture, and similar features. These pre-trained weights achieve an mAP of 82.90%, a precision of 99%, and an average IoU of 85.39%. This research demonstrates good performance in detecting sign language.
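The evaluation metrics named in the abstract (precision, recall, F1 score, and IoU) follow standard definitions for object detection. The following is a minimal sketch of how these quantities are typically computed; the box format `(x1, y1, x2, y2)` and the function names are assumptions for illustration, not the paper's actual code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 score from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

In a YOLO-style evaluation, a detection usually counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP averages the per-class average precision over all 26 letter classes.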