| # | Title | Team Members | TA | Documents | Sponsor |
|---|-------|--------------|----|-----------|---------|
| 48 | Real-time braille translator | Aayush Raj, Ashmita Chatterjee, Matthew Price | Shuai Tang | design_document2.pdf, design_document3.pdf, design_document4.pdf, design_document5.pdf, design_document6.pdf, final_paper1.pdf, proposal1.pdf, proposal2.pdf | |

**Team members**:
- Ashmita Chatterjee (ashmita2)
- Aayush Raj (aayushr2)
- Matthew Price (mjprice2)

**Problem**: Visually impaired people have a difficult time reading text that is not written in braille. Public places such as restaurants and libraries rarely provide braille menus or braille editions of most books. This restricts their independence and limits the information they can access.

**Solution**: Our solution is a handheld device that assists people in such situations. The device scans a piece of paper and translates the printed text into braille in real time. Each letter of the English alphabet has a braille equivalent, and the device exploits that mapping: it scans a small section of a sheet of paper, translates that portion into braille, and displays it on a refreshable braille display.

**Solution Components**:
- **Subsystem 1**: Camera system
This unit will include a camera and a flash so that pictures can be captured in poorly lit environments. A light sensor will measure the amount of incoming light and adjust the flash accordingly. The unit will then send the captured image to the processor for image processing.
- **Subsystem 2**: Image Processing Unit
This unit will live mostly on the microcontroller, which needs to be powerful enough to perform OCR in real time. The OCR step takes an image and produces a string of characters that is then stored in a buffer on a flash memory chip. A section of this buffer (maybe 5 characters) will be shown on the braille display at a time. We plan to use an off-the-shelf library to convert the captured image into a string of text; a rough sketch of this step follows the list below.
- **Subsystem 3**: Braille display unit
This unit will comprise the components that produce the braille output. It takes a string of 5 characters as input, pushes the correct pins upward, and holds them in position until the next string arrives. To do this, we will need a map from each character in the English alphabet (along with the digits and some punctuation) to its braille equivalent. A braille character consists of 6 dots (2 columns x 3 rows) raised in a specific pattern. Once this unit converts a character into its braille version, we store the braille character as bits, which are then turned into the electrical signals that push up the respective dots on the refreshable braille display (see the second sketch after this list). We have spoken to Gregg at the machine shop and have come up with some ideas for building a refreshable braille display. One option is to use a miniature solenoid for each dot of a braille character; since each character consists of 6 dots in a specific pattern, we would use 6 solenoids per character. The miniature solenoids available online are small enough that 6 of them fit in a rectangle compact enough to read as a single legible braille character. Link to miniature solenoid. These solenoids need a continuous signal to stay raised and can be controlled individually. Gregg mentioned that the shop can help us source these parts and mount them in a metal plate so that we only need to worry about the signals going into the braille display.
- **Subsystem 4**: Next button state machine
Since the braille display can only show a few characters at a time compared to the amount of text scanned by the camera, we will need a state machine that steps the display through the scanned text in order. The state machine will use a “next button” input signal to decide which portion of the scanned text to display (this paging logic is included in the second sketch after this list).
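
As a rough illustration of the image-processing step, the sketch below feeds a captured frame through an off-the-shelf OCR library and stores the result in a text buffer. This is a minimal sketch, not a committed design: it assumes Tesseract's C API as the library (the proposal does not name one) and a processor capable of running it, and `capture_grayscale_frame` is a hypothetical stand-in for the camera subsystem's output.

```c
/* Sketch: image -> text buffer using the Tesseract C API (one possible
 * "off-the-shelf" OCR library). capture_grayscale_frame() is a hypothetical
 * camera-driver call standing in for Subsystem 1's output. */
#include <stdio.h>
#include <string.h>
#include <tesseract/capi.h>

#define IMG_W 640
#define IMG_H 480
#define TEXT_BUF_SIZE 1024

static char text_buffer[TEXT_BUF_SIZE];   /* later paged 5 characters at a time */

/* Hypothetical camera hook: fills `img` with an 8-bit grayscale frame. */
extern int capture_grayscale_frame(unsigned char *img, int width, int height);

int scan_page_to_buffer(void)
{
    static unsigned char img[IMG_W * IMG_H];
    if (capture_grayscale_frame(img, IMG_W, IMG_H) != 0)
        return -1;

    TessBaseAPI *ocr = TessBaseAPICreate();
    if (TessBaseAPIInit3(ocr, NULL, "eng") != 0) {   /* English language data */
        TessBaseAPIDelete(ocr);
        return -1;
    }

    /* 1 byte per pixel, row stride == width for a packed grayscale image */
    TessBaseAPISetImage(ocr, img, IMG_W, IMG_H, 1, IMG_W);
    char *text = TessBaseAPIGetUTF8Text(ocr);

    strncpy(text_buffer, text ? text : "", TEXT_BUF_SIZE - 1);
    text_buffer[TEXT_BUF_SIZE - 1] = '\0';

    TessDeleteText(text);
    TessBaseAPIEnd(ocr);
    TessBaseAPIDelete(ocr);
    return (int)strlen(text_buffer);
}
```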
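
The braille display and next-button subsystems could share logic along the lines of the second sketch below. Again, this is a minimal sketch under stated assumptions: the dot encoding (bit i of the mask = dot i+1) is our own convention, `solenoid_write` is a hypothetical driver call that latches the six solenoids of one cell, and only the letters a-z are mapped here; digits and punctuation would extend the table.

```c
/* Sketch: character -> 6-dot pattern lookup and "next button" paging.
 * Bit i of a pattern corresponds to braille dot i+1 (dots 1-3 left column,
 * dots 4-6 right column). solenoid_write() is a hypothetical driver call
 * that energizes the six solenoids of one cell from a 6-bit mask. */
#include <ctype.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CELLS 5                       /* characters shown at once */

/* Grade-1 braille patterns for 'a'..'z'. */
static const uint8_t BRAILLE_LETTER[26] = {
    0x01, 0x03, 0x09, 0x19, 0x11, 0x0B, 0x1B, 0x13, 0x0A, 0x1A,  /* a-j */
    0x05, 0x07, 0x0D, 0x1D, 0x15, 0x0F, 0x1F, 0x17, 0x0E, 0x1E,  /* k-t */
    0x25, 0x27, 0x3A, 0x2D, 0x3D, 0x35                           /* u-z */
};

extern void solenoid_write(int cell, uint8_t dot_mask);  /* hypothetical driver */

static uint8_t char_to_dots(char c)
{
    if (isalpha((unsigned char)c))
        return BRAILLE_LETTER[tolower((unsigned char)c) - 'a'];
    return 0x00;                      /* unknown characters shown as a blank cell */
}

/* Raise the dots for up to CELLS characters and hold them (the solenoids
 * stay up as long as the signal is held, per the machine-shop discussion). */
static void show_window(const char *text, size_t offset, size_t len)
{
    for (int cell = 0; cell < CELLS; ++cell) {
        char c = (offset + cell < len) ? text[offset + cell] : ' ';
        solenoid_write(cell, char_to_dots(c));
    }
}

/* Next-button state: each press advances the window by CELLS characters
 * and wraps back to the start of the buffer at the end of the text. */
static size_t window_offset = 0;

void on_next_button(const char *text)
{
    size_t len = strlen(text);
    window_offset += CELLS;
    if (window_offset >= len)
        window_offset = 0;
    show_window(text, window_offset, len);
}
```

Wrapping back to the start of the buffer when the window runs past the end is just one simple policy; the real state machine might instead hold on the last window or signal that a new scan is needed.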

**Criterion for success**:
- The device is able to capture an image
- The device is able to successfully convert the captured image to text
- The device is able to successfully display the braille version of the captured text (or part of it)

This project was pitched to us by our friend Abhijoy Nandi, a senior in Industrial Design here at the University of Illinois. He works on concept designs for interesting projects and is curious to see one of these concepts working.
Link to the concept design of this product: https://www.abhijoynandidesigns.com/samanya
