Project

| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 36 | EyeCU - Assistive Eyewear | Abishek Venkit, Irfan Suzali, Nikhil Mehta | Shuai Tang | design_document1.pdf, design_document2.pdf, design_document3.pdf, design_document4.pdf, final_paper1.pdf, proposal2.pdf, proposal1.pdf | |
Project Members: Irfan Suzali (isuzali2), Abishek Venkit (avenkit2), Nikhil Mehta (nikhilm3)

Title: EyeCU

Problem: Living a normal life can be difficult for the visually impaired. Although people with visual impairments can be largely independent in most scenarios, there are still day-to-day activities where assistance is required, such as reading the ingredients on a package of food or identifying an unfamiliar object. We would like to provide this market with another layer of connection to their surroundings.

Solution Overview: Our solution is assistive eyewear for people with vision impairments. The eyewear will include a camera to capture the user's field of view, a Bluetooth module, and a battery to power the device. The visual information will be sent over Bluetooth to the user's smartphone, which will compute the necessary contextual information about the scene in front of the user and output audio, either through the phone's speaker or through a pair of headphones connected to the smartphone. Additionally, we could add a microphone to the eyewear, allowing the user to issue voice commands specifying what type of information they want about their surroundings. This feature is not essential to the function of our product, but could be added given the time and need.
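The smartphone-side flow described above (receive a frame over Bluetooth, classify it, speak the result) can be sketched as follows. This is a minimal illustration under stated assumptions: `receive_frame`, `classify`, and `speak` are hypothetical placeholders for the Bluetooth receiver, the vision model, and the phone's text-to-speech engine, not real APIs from the proposal.

```python
def receive_frame() -> bytes:
    """Placeholder for the Bluetooth receiver; returns raw image bytes."""
    return b"\xff\xd8"  # stand-in for a JPEG frame from the eyewear camera

def classify(image: bytes) -> tuple[str, float]:
    """Placeholder for the on-phone / cloud classifier: (label, confidence)."""
    return ("bear", 0.20)  # stand-in result

def describe(label: str, confidence: float) -> str:
    """Format the spoken feedback in the wording the criteria specify."""
    return f"There is a {label} in front of you with {confidence:.0%} confidence"

def speak(message: str) -> None:
    """Placeholder for the phone's text-to-speech engine."""
    print(message)

# One pass of the capture -> classify -> speak loop.
frame = receive_frame()
label, conf = classify(frame)
speak(describe(label, conf))  # prints "There is a bear in front of you with 20% confidence"
```

In a real implementation the loop would run continuously and the classifier call would dominate the 2-second latency budget, so the Bluetooth transfer and audio output need to stay well under that bound.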
Solution Components:

Hardware Subsystem:
- Integrated eyewear including a camera, Bluetooth module, and battery
- Dedicated circuit to store and transmit images via Bluetooth
- Low-power circuitry for 24+ hours of use

Software Subsystem:
- Bluetooth system to receive images
- AI/computer vision system on the smartphone to analyze images and generate contextual information (may connect to a cloud classification service such as Azure)
- Text-to-speech to output audio information to the user

Secondary Subsystem (optional):

Hardware:
- Microphone built into the eyewear, also connected over Bluetooth

Software:
- Natural language processing to understand commands from the user, used to specify what type of feedback the user wants on their field of vision

Criteria for Success:
- A successful solution will improve the quality of life of the visually impaired; success will depend on the speed, accuracy, and usefulness of the glasses.
- The glasses must take clear, detailed photos of the surroundings to transmit to the phone.
- Processing the image and returning a response must be quick (under 2 seconds).
- The device must allow the user to choose between different usage modes: text-to-speech, object classification, and possibly others.
- The feedback given to the user must be helpful (give them information otherwise unknown to them). For text, the device must recite the text back to the user with at least 95% accuracy. For object classification, the device must recite the object back to the user along with its confidence (e.g., "There is a bear in front of you with 20% confidence").

Idea Post Link: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=35979
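The 95% text-accuracy criterion needs a concrete metric to be testable. One possible choice, not specified in the proposal, is a character-level similarity ratio between the recognized text and the ground truth, which Python's standard `difflib` provides:

```python
import difflib

def text_accuracy(recognized: str, ground_truth: str) -> float:
    """Character-level similarity (0.0-1.0) between OCR output and true text."""
    return difflib.SequenceMatcher(None, recognized, ground_truth).ratio()

def meets_criterion(recognized: str, ground_truth: str, threshold: float = 0.95) -> bool:
    """True if the recognized text meets the 95% accuracy target."""
    return text_accuracy(recognized, ground_truth) >= threshold

# Example: one dropped character in a short label still passes the threshold.
print(meets_criterion("Ingredients: sugar, sat", "Ingredients: sugar, salt"))
```

Word-level accuracy or edit distance would be equally defensible metrics; whichever is chosen should be fixed before testing so the 95% target is unambiguous.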