
Browsing by Author "Muthui Nancy Njoki"

An enhanced convolutional neural network model for translating Kenyan Sign Language into text in English
(Chuka University, 2024) Muthui Nancy Njoki
Most people communicate and socialize through verbal means, such as talking. Mute and deaf people, however, cannot interact with society through speech, so they rely on non-verbal modes of communication. Non-verbal communication uses body movements, hand gestures, and facial expressions; in sign language, meaning is carried by specific patterns in the positioning and movement of the hands, fingers, arms, and face. While sign language bridges the gap between those who can hear and those who cannot, it is by no means universally understood, and this barrier leads to frustration and social exclusion of deaf people. A translation tool that converts sign language into easily understandable written language could therefore facilitate smooth communication between hearing and hard-of-hearing persons. Although much research is ongoing in this area, little attention has been given to translating Kenyan Sign Language (KSL) into the languages commonly spoken in Kenya. Moreover, most translation tools struggle with changing environmental conditions and with the signer's movement, both of which alter background lighting. This work translates KSL into English text through an experimental approach using a deep learning CNN model, DenseNet121, with input images preprocessed by Contrast-Limited Adaptive Histogram Equalization (CLAHE). The architecture was developed, trained, and tested on the dataset provided by the Kenyan Sign Language Classification Hackathon, achieving an accuracy of 91.5%. The proposed model can help bridge communication gaps and include hard-of-hearing people in educational, health, and employment opportunities.
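The CLAHE preprocessing the abstract names can be sketched in pure NumPy. This is a simplified, tile-wise variant (it clips each tile's histogram and equalizes per tile, but omits the bilinear blending between tiles that full CLAHE performs); the tile count and clip limit below are illustrative values, not parameters reported in the thesis:

```python
import numpy as np

def clahe(img, tiles=8, clip_limit=40, n_bins=256):
    """Simplified Contrast-Limited Adaptive Histogram Equalization.

    Splits a uint8 grayscale image into tiles x tiles regions, clips each
    region's histogram at `clip_limit`, redistributes the clipped excess
    uniformly across bins, and equalizes each tile independently.
    Assumes image dimensions are divisible by `tiles`.
    """
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    out = np.empty_like(img)
    for i in range(tiles):
        for j in range(tiles):
            region = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, 256))
            # Contrast limiting: clip tall bins, spread the excess evenly.
            excess = np.maximum(hist - clip_limit, 0).sum()
            hist = np.minimum(hist, clip_limit) + excess // n_bins
            # Standard histogram equalization via the cumulative distribution.
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = cdf[region]
    return out

# Demo on a synthetic low-contrast image (values confined to 0..63).
rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)
eq = clahe(img, tiles=4)
print(img.max(), eq.max())  # the equalized image spans a wider intensity range
```

In practice one would use `cv2.createCLAHE` from OpenCV, which implements the full interpolated algorithm; the sketch above only shows the clip-and-redistribute idea that distinguishes CLAHE from plain histogram equalization.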

DSpace software copyright © 2002-2026 LYRASIS
