Project 15: Driver Sleep Detection and Alarming System

Area Award: Computer Vision

Team Members: Chenyang Xu, Xiangyu Chen, Yixiao Nie

Documents: design_document0.pdf, final_paper0.pdf, presentation0.ppt, proposal0.pdf

While driving alone on a highway or over a long period of time, it is easy for the driver to fall asleep and cause an accident. We therefore propose a driver anti-sleep alarm system that could effectively address this problem.
Our system will use a camera (Kinect) to track the driver's eyes, send the information back to a microcontroller, and sound an alarm when danger signs appear. The system therefore has four main parts: the camera, the microcontroller, the power supply, and the sound warning system.
In the following paragraphs we'll further discuss the image processing and face recognition algorithms and the hardware (the power supply and sound warning systems).

Algorithms:
This project will use the Kinect camera, which has three modes: RGB mode, depth mode, and IR mode. The RGB mode is used for daytime detection, while the IR mode is used for night detection. The depth mode will not be used.
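As an illustrative sketch of how the day/night mode switch might work (assuming the libfreenect Python bindings; the brightness threshold below is an arbitrary placeholder, not a measured value):

    import freenect
    import numpy as np

    DARK_THRESHOLD = 40  # placeholder: mean 8-bit brightness below which the scene counts as "night"

    def grab_frame():
        """Grab a grayscale frame, falling back to the IR stream when the scene is dark."""
        rgb, _ = freenect.sync_get_video(0, freenect.VIDEO_RGB)
        if rgb.mean() < DARK_THRESHOLD:
            # Too little visible light: use the Kinect's IR stream instead.
            ir, _ = freenect.sync_get_video(0, freenect.VIDEO_IR_8BIT)
            return ir, "ir"
        # Convert RGB to grayscale for the downstream eye detector.
        gray = np.dot(rgb[..., :3], [0.299, 0.587, 0.114]).astype(np.uint8)
        return gray, "rgb"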
The emphasis of the algorithm is detecting the driver's eye motion. When the driver feels sleepy, their eyelids draw closer together; the camera should be able to detect this change so the system can decide whether the driver is drowsy. During the daytime, the RGB mode of the camera works well for detection. At night, however, RGB mode may perform poorly because of the low light, in which case we may use the IR mode to detect the driver's eyes. Another way to handle low light is histogram equalization, an algorithm that stretches an image's intensity range to improve contrast. In addition, in order to run the algorithm in real time, we may need an ARM board with a GPU that can handle image processing quickly.
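A minimal sketch of this detection loop, assuming OpenCV and its stock Haar cascades (grab_frame is the hypothetical helper from the previous sketch, trigger_alarm stands in for the signal sent to the warning hardware, and the frame-count threshold is a placeholder to be tuned):

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    CLOSED_FRAMES_LIMIT = 15  # placeholder: consecutive "no open eyes" frames before alarming

    def eyes_visible(gray):
        """Return True if an open eye is detected inside a detected face."""
        gray = cv2.equalizeHist(gray)  # histogram equalization helps in low-contrast frames
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half of the face
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
                return True
        return False

    closed = 0
    while True:
        gray, _ = grab_frame()  # hypothetical helper from the mode-switch sketch above
        closed = 0 if eyes_visible(gray) else closed + 1
        if closed >= CLOSED_FRAMES_LIMIT:
            trigger_alarm()  # hypothetical: tell the PCB to flash the LEDs / sound the buzzer
            closed = 0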

Hardware: The hardware system is composed of three parts. First, we will try to use a BeagleBoard-xM as the small computer that interacts with the Kinect and controls the LEDs and sound system. We will install a Linux distribution and the Kinect SDK on the BeagleBoard and implement our algorithms for face recognition and eyelid-closure detection.
The second part is the power supply unit for the BeagleBoard, the Kinect camera, and the microprocessor on our PCB. Since the Kinect camera is connected to the BeagleBoard, it draws power directly from the BeagleBoard. The BeagleBoard in turn draws its power from the supply unit we design, which must deliver approximately 10 W (with the camera) according to the technical data on the BeagleBoard website. The power unit includes a voltage filter, a DC-DC converter, and the corresponding microcontroller. It has one USB input port and one USB output port: the output powers the camera and the BeagleBoard, while the input receives power from a pre-purchased inverter that connects to the car's power supply.

The third part is the feedback alarming system, including sound and light. The light warning consists of 5 regular red LEDs powered by the PCB we design; their flash frequency can be adjusted according to the driver's condition. The sound warning is implemented with a speaker or a buzzer, powered at a different voltage by another DC-DC converter on the PCB. The car's power supply can provide sufficient power, above 90 W. The PCB receives control signals from the BeagleBoard, draws power from the inverter, and sends power out to the BeagleBoard and camera. The planned board has a microcontroller for controlling the DC-DC converters and communicating with the alarming system.
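As a sketch of how the BeagleBoard side might scale the LED flash rate with the driver's condition (using the Linux sysfs GPIO interface; the pin numbers are hypothetical and the GPIOs are assumed to be already exported):

    import time

    LED_PINS = [130, 131, 132, 133, 134]         # hypothetical BeagleBoard-xM GPIO numbers
    GPIO_VALUE = "/sys/class/gpio/gpio{}/value"  # Linux sysfs GPIO interface

    def flash_leds(drowsiness, duration=3.0):
        """Flash the five warning LEDs; the flash rate rises with drowsiness (0.0 to 1.0)."""
        period = 1.0 / (1.0 + 4.0 * drowsiness)  # 1 Hz when alert, up to 5 Hz when very drowsy
        state = "0"
        end = time.time() + duration
        while time.time() < end:
            state = "1" if state == "0" else "0"
            for pin in LED_PINS:
                with open(GPIO_VALUE.format(pin), "w") as f:
                    f.write(state)
            time.sleep(period / 2)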

Possible challenges include the efficiency of the face- and eyelid-detection algorithms during daytime and nighttime. The communication and interaction between the BeagleBoard, the camera, and the warning system will be another major obstacle. Moreover, stabilizing and filtering the voltage drawn from the car's power supply is also an intricate experimental task.

VoxBox Robo-Drummer

Craig Bost, Nicholas Dulin, Drake Proffitt

Featured Project

Our group proposes to create a robot drummer which would respond to human voice "beatboxing" input, via a conventional dynamic microphone, and translate the input into the corresponding drum-hit performance. For example, if the human user issues a bass-kick voice sound, the robot will recognize it and strike the bass drum, and likewise for the hi-hat/snare and clap. Our design will cover a minimum of 3 different drum-hit types (bass hit, snare hit, clap hit) and respond with minimal latency.

This would involve amplifying the analog signal (as dynamic mics produce fairly low-level signals), which would then be sampled by a dsPIC33F DSP/MCU (or a comparable chipset) and processed for trigger-event recognition. This entails applying Short-Time Fourier Transform (STFT) analysis to provide spectral-content data to our event-detection algorithm (i.e., recognizing the "control" signal from the human user). The MCU functionality of the dsPIC33F would be used to relay the trigger commands to the actuator circuits controlling the robot.
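As an offline prototype of this pipeline (plain NumPy rather than dsPIC33F firmware; the onset threshold and centroid bands are placeholders to be tuned against real recordings):

    import numpy as np

    FRAME = 256        # samples per STFT frame
    HOP = 128          # hop size between frames
    ONSET_RATIO = 5.0  # placeholder: frame-energy jump ratio that counts as a hit

    def classify_hits(samples, rate=16000):
        """Find onsets via frame-energy jumps, then label each hit by spectral centroid."""
        window = np.hanning(FRAME)
        prev_energy = 1e-9
        hits = []
        for start in range(0, len(samples) - FRAME, HOP):
            frame = samples[start:start + FRAME] * window
            energy = float(np.sum(frame ** 2))
            if energy / prev_energy > ONSET_RATIO:
                spectrum = np.abs(np.fft.rfft(frame))
                freqs = np.fft.rfftfreq(FRAME, 1.0 / rate)
                centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
                # Placeholder bands: low centroid ~ bass kick, mid ~ snare, high ~ clap
                if centroid < 400:
                    hits.append((start, "bass"))
                elif centroid < 2000:
                    hits.append((start, "snare"))
                else:
                    hits.append((start, "clap"))
            prev_energy = max(energy, 1e-9)
        return hits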

The robot in question would be small, about the size of a ventriloquist dummy. The "drum set" would be scaled accordingly (think pots and pans, like a child would play with). Actuators would likely be based on solenoids rather than motors.

Beyond these minimal capabilities, we would add analog pre-filtering of the input audio signal and amplification of the drum hits as bonus features, if development and implementation go better than expected.
