Project

# | Title | Team Members | TA | Documents | Sponsor |
---|---|---|---|---|---|
40 | eyeAssist | Annamika Dua, Sahil Kamesh, Veda Menon | Xihang Wu | design_document1.pdf, final_paper1.pdf, other1.pdf, presentation1.pdf, proposal1.pdf | |
Veda Menon, Sahil Kamesh, Annamika Dua - vedanta2, skamesh2, ad8

Title: eyeAssist

Problem: People who are visually impaired often struggle with tasks that most of us take for granted. Reading any kind of text, whether a book or an important document, can be difficult. An audiobook is only an option if the title was pre-recorded before being sold to the public, which leaves most printed material inaccessible. Navigating both indoors and outdoors without some kind of assistance can also be a challenge. These obstacles are frustrating and a significant obstruction in daily life.

Solution Overview: Our solution to both of these problems is a pair of multi-purpose glasses that lets the visually impaired navigate their home with ease, with the added benefit of reading text aloud to them in real time.

To address navigation, we will implement an obstacle detection system using ultrasonic sensors. The ultrasonic sensor module will detect objects in the wearer's path and estimate how far away they are. Based on that distance, the glasses will alert the user to the presence of these objects, allowing them to avoid any obstacles.

The glasses will also solve the problem of reading text from a book, document, etc. using computer vision with OCR (Optical Character Recognition) technology. A camera module mounted on the glasses will capture real-time images of the text placed in front of the glasses. These images will then be processed and converted into digital characters. A text-to-speech feature will then read the recognized text aloud to the wearer, either through a speaker or through earphones.
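The distance-based alerting described above can be sketched in software. A minimal sketch follows; the speed-of-sound constant and the warning thresholds are assumed values for illustration and would need tuning on the actual glasses:

```python
SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at room temperature

def echo_to_distance_cm(echo_time_s: float) -> float:
    """Convert an ultrasonic round-trip echo time (seconds) to a
    one-way distance in centimeters."""
    return echo_time_s * SPEED_OF_SOUND_CM_S / 2

def alert_level(distance_cm: float,
                warn_cm: float = 150.0,
                danger_cm: float = 50.0) -> str:
    """Map a measured distance to an alert level that the feedback
    subsystem (haptic or voice) could act on. The two thresholds are
    hypothetical and not fixed by this proposal."""
    if distance_cm <= danger_cm:
        return "danger"
    if distance_cm <= warn_cm:
        return "warn"
    return "clear"
```

In practice the echo time would come from timing the sensor's echo pin on the Pi's GPIO; the conversion and thresholding logic stays the same.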
To avoid the impractical scenario of detecting obstacles and alerting the user while they are simply sitting still, we will also include an accelerometer to detect whether the user is stationary or in motion. The glasses will only perform obstacle detection when they detect that the user is in motion.

Solution Components:

Subsystem #1 - Obstacle detection: Ultrasonic sensors determine potential obstructions in the user's path, and an accelerometer determines whether the user is moving and therefore needs obstacle detection enabled. Warnings are conveyed via haptic feedback or speakers.

Subsystem #2 - Real-time text-to-speech translation: A Raspberry Pi with a camera module captures real-time video of the text, which is then processed using OCR to extract the characters. A speech synthesis module running on the Pi projects the speech output to a loudspeaker or a set of earphones via the headphone jack.

Criterion for Success:
- Detect potential obstacles near the user while in motion and notify them using haptic and/or voice feedback
- Capture the text the user is looking at and extract it accurately
- Convert the extracted text to speech and read it to the user through a speaker or earphones
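The stationary-versus-moving check in Subsystem #1 could be approximated by comparing acceleration magnitude against gravity over a short window of samples. This is one possible approach, not a fixed design; the 0.8 m/s² threshold is a hypothetical value that would need tuning against real accelerometer data:

```python
import math

GRAVITY_MS2 = 9.81  # expected magnitude when the wearer is stationary

def is_moving(samples, threshold_ms2: float = 0.8) -> bool:
    """Decide whether the wearer is in motion from a window of
    (x, y, z) accelerometer samples in m/s^2.

    A stationary user shows acceleration magnitudes close to gravity;
    a sustained average deviation above `threshold_ms2` is treated as
    motion, which would enable the obstacle detection subsystem.
    """
    deviations = [abs(math.sqrt(x * x + y * y + z * z) - GRAVITY_MS2)
                  for x, y, z in samples]
    return sum(deviations) / len(deviations) > threshold_ms2
```

Averaging over a window rather than reacting to single samples helps reject brief jolts, such as the user shifting in a chair.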
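Subsystem #2's capture-recognize-speak loop can be expressed as glue logic with the camera, OCR engine, and synthesizer injected as callables. On the Pi these would wrap the camera module, an OCR engine such as Tesseract, and a speech synthesizer; those are assumed component choices for illustration, not commitments of the proposal:

```python
from typing import Callable

def read_text_aloud(capture_frame: Callable[[], object],
                    ocr: Callable[[object], str],
                    speak: Callable[[str], None]) -> str:
    """One pass of the capture -> OCR -> speech pipeline.

    The three stages are injected so the flow can be exercised without
    hardware; swapping in real camera/OCR/TTS wrappers does not change
    this control logic.
    """
    frame = capture_frame()       # grab one image of the page
    text = ocr(frame).strip()     # extract digital characters
    if text:                      # stay silent when no text was found
        speak(text)
    return text
```

Keeping the pipeline as a pure function of its three stages also makes the "extract text accurately" success criterion testable in isolation from the audio output.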