You Only Gesture Once (YouGo): American Sign Language Translation using YOLOv3
Mehul Nanda
10.25394/PGS.12221963.v1
https://hammer.purdue.edu/articles/thesis/You_Only_Gesture_Once_YouGo_American_Sign_Language_Translation_using_YOLOv3/12221963
<div>This study focused on creating and proposing a model that could accurately and precisely predict the occurrence of an American Sign Language (ASL) gesture for a letter of the English alphabet using the You Only Look Once (YOLOv3) algorithm. The training dataset for this study was custom created and further divided into clusters based on the uniqueness of each ASL sign; three diverse clusters were created. Each cluster was trained with the Darknet framework. Testing was conducted using images and videos on the fully trained model of each cluster, and the Average Precision for each letter in each cluster and the Mean Average Precision for each cluster were recorded.</div><div>In addition, a Word Builder script was created. This script combined the trained models of all three clusters into a comprehensive system that builds words when the trained models are supplied with images of English-alphabet letters as depicted in ASL.</div>
2020-05-01 17:52:25
Object Detection
Neural Networks
YOLO
YOLOv3
Sign Language
Sign Language Translation
ASL
Image Processing
Convolutional Neural Network
Artificial Intelligence and Image Processing
Computer Vision
Neural, Evolutionary and Fuzzy Computation
Pattern Recognition and Data Mining