An enhanced convolutional neural network model for translating Kenyan Sign Language into text in English

dc.contributor.author: Muthui Nancy Njoki
dc.date.accessioned: 2026-04-23T08:02:05Z
dc.date.available: 2026-04-23T08:02:05Z
dc.date.issued: 2024
dc.description: A Thesis Submitted to the Graduate School in Partial Fulfillment of the Requirements for the Award of the Degree of Master of Science in Computer Science of Chuka University. Supervisors: Dr. Edna Chebet Too, Prof. David Gitonga Mwathi
dc.description.abstract: Most people communicate and socialize through speech. Deaf and mute people, however, cannot interact with society verbally and instead rely on non-verbal modes of communication such as sign language, which conveys meaning through body movements, hand gestures, facial expressions, and the positioning of the hands, fingers, and arms. While sign language bridges the gap between those who can hear and those who cannot, it is by no means universally understood, and this barrier leads to frustration and social exclusion of deaf people. A translation tool that converts sign language into easily understandable written language could therefore facilitate smooth communication between hearing and hard-of-hearing persons. Although much research is ongoing in this area, little attention has been given to translating Kenyan Sign Language (KSL) into the languages commonly spoken in Kenya. Moreover, most translation tools struggle with changing environmental conditions and with the signer's movement, both of which alter background lighting. This work translates KSL into English text through an experimental approach using a deep learning CNN model, DenseNet121, with input images preprocessed by Contrast-Limited Adaptive Histogram Equalization (CLAHE). The architecture was developed, trained, and tested on the dataset provided by the Kenyan Sign Language Classification Hackathon, achieving an accuracy of 91.5%. The proposed model will help bridge communication gaps and include people who are hard of hearing in educational, health, and employment opportunities.
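The preprocessing step named in the abstract (CLAHE applied before a DenseNet121 classifier) can be sketched as follows. This is a simplified, NumPy-only illustration of the core CLAHE idea — clipped per-tile histogram equalization — and not the thesis's actual implementation: production CLAHE (e.g. OpenCV's `cv2.createCLAHE`) also bilinearly interpolates between tile mappings to hide tile borders, and the tile grid and clip-limit values below are illustrative assumptions.

```python
import numpy as np

def _equalize_tile(tile, clip_limit=0.01, n_bins=256):
    """Clipped histogram equalization of one grayscale tile (uint8)."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    # Clip the histogram so no bin dominates; this limits how much
    # contrast (and noise) can be amplified -- the "CL" in CLAHE.
    clip = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess // n_bins  # redistribute excess
    # Build the equalizing lookup table from the normalized CDF.
    cdf = hist.cumsum()
    lut = np.floor(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[tile]

def clahe(image, grid=(8, 8), clip_limit=0.01):
    """Apply clipped equalization independently per tile of a 2-D uint8 image.

    Real CLAHE interpolates between neighboring tile mappings; that step
    is omitted here for brevity, so faint tile seams may be visible.
    """
    h, w = image.shape
    out = np.empty_like(image)
    th, tw = h // grid[0], w // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * th, h if i == grid[0] - 1 else (i + 1) * th)
            xs = slice(j * tw, w if j == grid[1] - 1 else (j + 1) * tw)
            out[ys, xs] = _equalize_tile(image[ys, xs], clip_limit)
    return out
```

In a full pipeline the equalized frames would then be resized and fed to a DenseNet121 (e.g. the Keras `tf.keras.applications.DenseNet121` model) for sign classification; the equalization makes hand regions easier to discriminate under the uneven background lighting the abstract mentions.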
dc.identifier.citation: Njoki, M. N. (2024). An enhanced convolutional neural network model for translating Kenyan Sign Language into text in English (Master’s thesis, Chuka University).
dc.identifier.uri: https://repository.chuka.ac.ke/handle/123456789/22566
dc.language.iso: en
dc.publisher: Chuka University
dc.subject: Kenyan Sign Language
dc.subject: Convolutional neural networks
dc.subject: Deep learning
dc.subject: Sign language translation
dc.subject: Assistive technology
dc.subject: Computer vision
dc.subject: Text translation
dc.title: An enhanced convolutional neural network model for translating Kenyan Sign Language into text in English
dc.type: Thesis

Files

Original bundle

Name: Njoki's Final Thesis.pdf
Size: 3.37 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission