# Project 39: Hand Gesture Controlled Audio Effects System
TA: Zicheng Ma
Team Members:
Sarthak Singh (singh94)
Zachary Baum (zbaum2)
Sergio Bernal (sergiob2)

Problem
Audio production, in both amateur and professional settings, lacks intuitive, hands-free control over audio effects. This restricts users' creativity and efficiency, particularly in live performance or other situations where physical interaction with equipment is difficult.

Solution Overview
Our project aims to develop a gesture-controlled audio effects processor. This device will allow users to manipulate audio effects through hand gestures, providing a more dynamic and expressive means of audio control. The device will use motion sensors to detect gestures, which will then adjust various audio effect parameters in real-time.

Solution Components:

Gesture Detection Subsystem:
The Gesture Detection Subsystem uses a camera to track hand movements and orientations. The camera connects to a Raspberry Pi, which then sends signals to our custom PCB. The subsystem processes sensor data in real time, minimizing latency and filtering out inaccuracies. Users can customize gesture-to-effect mappings, allowing for personalized control schemes. This subsystem is integrated with the audio processing unit, ensuring that gestures are seamlessly translated into the desired audio effect alterations.
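
The customizable gesture-to-effect mapping could be sketched as a small lookup layer on the Raspberry Pi. This is a minimal illustration only: the gesture names, effect names, parameter ranges, and the `gesture_to_command` helper are all hypothetical, not part of the final design.

```python
# Hypothetical sketch: translating recognized gestures into effect-parameter
# commands on the Raspberry Pi before they are sent to the custom PCB.
# Gesture names, effects, and ranges below are illustrative assumptions.

# User-customizable gesture-to-effect mapping table.
GESTURE_MAP = {
    "palm_up":   ("reverb", "wet_mix"),
    "fist":      ("delay", "feedback"),
    "tilt_left": ("delay", "time_ms"),
}

# Allowed parameter ranges, used to clamp the raw gesture intensity (0.0-1.0).
PARAM_RANGES = {
    ("reverb", "wet_mix"): (0.0, 1.0),
    ("delay", "feedback"): (0.0, 0.95),
    ("delay", "time_ms"):  (20.0, 800.0),
}

def gesture_to_command(gesture: str, intensity: float):
    """Translate a recognized gesture plus a 0-1 intensity into an
    (effect, parameter, value) command for the audio processor."""
    if gesture not in GESTURE_MAP:
        return None  # unrecognized gesture: ignore rather than glitch the audio
    effect, param = GESTURE_MAP[gesture]
    lo, hi = PARAM_RANGES[(effect, param)]
    value = lo + (hi - lo) * min(max(intensity, 0.0), 1.0)
    return (effect, param, value)
```

Keeping the mapping in a plain table like this is what makes per-user customization cheap: swapping control schemes is a data change, not a code change.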


Audio Processing Subsystem:

The Audio Processing Subsystem uses DSP algorithms to modify audio signals in real time. It includes various audio effects, such as reverb and delay, whose parameters change based on the user's hand gestures detected by the Gesture Detection Subsystem. This part of the system lets users customize these effects easily. The DSP works closely with the gesture system, making it simple for users to control audio effects through gestures alone. Specifically, we are using an STM32 microcontroller on a custom PCB to handle this subsystem.
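
As one concrete example of the effects this subsystem would run, a feedback delay can be sketched in a few lines. This is an illustration in Python; on the STM32 the same structure would be a fixed-point routine over a circular buffer, and the sample rate and parameter names here are assumptions.

```python
# Minimal sketch of a feedback delay, one of the effects the DSP would run.
# SAMPLE_RATE and the parameter names are illustrative assumptions.

SAMPLE_RATE = 48_000

class Delay:
    def __init__(self, time_ms: float, feedback: float, mix: float):
        self.buf = [0.0] * max(1, int(SAMPLE_RATE * time_ms / 1000.0))
        self.idx = 0           # circular read/write position
        self.feedback = feedback
        self.mix = mix         # 0.0 = dry only, 1.0 = wet only

    def process(self, x: float) -> float:
        delayed = self.buf[self.idx]
        # Write the input plus the attenuated echo back into the buffer.
        self.buf[self.idx] = x + delayed * self.feedback
        self.idx = (self.idx + 1) % len(self.buf)
        return (1.0 - self.mix) * x + self.mix * delayed

def process_block(delay: Delay, samples):
    return [delay.process(s) for s in samples]
```

The gesture commands would then simply write new values into `feedback`, `mix`, or the delay time between processed blocks.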

Control Interface Subsystem:
The Control Interface Subsystem in our audio effects processor provides a user-friendly interface for displaying current audio effect settings and other relevant information. This subsystem includes a compact screen that shows the active audio effects, their parameters, and the intensity levels set by the gesture controls. It is designed for clarity and ease of use, ensuring that users can quickly glance at the interface to get the necessary information during live performances or studio sessions.

Power Subsystem:

The Power Subsystem for our audio effects processor is simple and direct. It plugs into a standard AC power outlet and includes a power supply unit that converts AC to the DC voltages needed for the processor, sensors, and control interface. This design ensures steady and reliable power, suitable for long use periods, without the need for batteries.

Criterion for Success:
Our solution will enable users to intuitively control multiple audio effects in real time through gestures. The device will be responsive, accurate, and capable of differentiating between a wide range of gestures. It will be compatible with a variety of audio equipment and settings, from studio to live performance.

Alternatives:

Existing solutions are predominantly foot-pedal or knob-based controllers, which limit the range of expression and require physical contact. Our gesture-based solution offers a more versatile and engaging approach, allowing a broader range of expression and interaction with audio effects.

# Musical Hand

Team Members:

- Ramsey Foote (rgfoote2)

- Michelle Zhang (mz32)

- Thomas MacDonald (tcm5)

# Problem

Musical instruments come in all shapes and sizes; however, transporting instruments often involves bulky and heavy cases. Not only can transporting instruments be a hassle, but the initial purchase and maintenance of an instrument can be very expensive. We would like to solve this problem by creating an instrument that is lightweight, compact, and low maintenance.

# Solution

Our project involves a wearable system on the chest and both hands. The left hand will be used to dictate the pitches of three “strings” using relative angles between the palm and fingers. For example, from a flat horizontal hand a small dip in one finger is associated with a low frequency. A greater dip corresponds to a higher frequency pitch. The right hand will modulate the generated sound by adding effects such as vibrato through lateral motion. Finally, the brains of the project will be the central unit, a wearable, chest-mounted subsystem responsible for the audio synthesis and output.

Our solution would provide an instrument that is lightweight and easy to transport. We will be utilizing accelerometers instead of flex sensors to limit wear and tear, which would solve the issue of expensive maintenance typical of more physical synthesis methods.

# Solution Components

The overall solution has three subsystems: a right hand, a left hand, and a central unit.

## Subsystem 1 - Left Hand

The left hand subsystem will use four digital accelerometers total: three on the fingers and one on the back of the hand. These sensors will be used to determine the angle between the back of the hand and each of the three fingers (ring, middle, and index) being used for synthesis. Each angle will correspond to an analog signal for pitch with a low frequency corresponding to a completely straight finger and a high frequency corresponding to a completely bent finger. To filter out AC noise, bypass capacitors and possibly resistors will be used when sending the accelerometer signals to the central unit.
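
The angle-to-pitch step above could be sketched as follows: estimate each finger's bend as the angle between the gravity vector from its accelerometer and the one on the back of the hand, then map that angle linearly to a frequency. The specific frequency range (roughly C4 to C6) and the linear mapping are assumptions for illustration, not final design values.

```python
import math

# Illustrative sketch: finger bend angle from two 3-axis gravity vectors,
# then angle -> pitch. Frequency range and mapping are assumptions.

def bend_angle_deg(finger_g, hand_g):
    """Angle between the finger and back-of-hand gravity vectors, in degrees."""
    dot = sum(a * b for a, b in zip(finger_g, hand_g))
    mag = math.sqrt(sum(a * a for a in finger_g)) * math.sqrt(sum(b * b for b in hand_g))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def angle_to_freq(angle_deg, f_lo=261.63, f_hi=1046.50, max_angle=90.0):
    """Straight finger (0 deg) -> low pitch; fully bent (90 deg) -> high pitch."""
    t = min(max(angle_deg / max_angle, 0.0), 1.0)
    return f_lo + t * (f_hi - f_lo)
```

Clamping both the cosine and the normalized angle keeps sensor noise from producing out-of-range pitches.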

## Subsystem 2 - Right Hand

The right hand subsystem will use one accelerometer to determine the broad movement of the hand. This information will determine how much vibrato is applied to the output sound. This system will need the accelerometer, bypass capacitors (0.1 uF), and possibly some resistors, if required by the communication scheme used (SPI or I2C).
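
One way the motion-to-vibrato mapping could work is to let the measured lateral motion scale the depth of a low-frequency sinusoidal modulation of the base pitch. The 5 Hz modulation rate and the maximum ±2% depth below are illustrative assumptions.

```python
import math

# Sketch: right-hand motion (normalized to [0, 1]) scales vibrato depth.
# The LFO rate and maximum depth are illustrative assumptions.

def vibrato_freq(base_hz, t, motion, rate_hz=5.0, max_depth=0.02):
    """Instantaneous output frequency at time t with motion-scaled vibrato."""
    depth = max_depth * min(max(motion, 0.0), 1.0)
    return base_hz * (1.0 + depth * math.sin(2.0 * math.pi * rate_hz * t))
```

With no motion the depth collapses to zero, so a still right hand leaves the left hand's pitches unmodified.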

## Subsystem 3 - Central Unit

The central subsystem uses data from the gloves to determine and generate the correct audio. To do this, two microcontrollers from the STM32F3 series will be used. The left and right hand subunits connect to the central unit through cabling. One microcontroller receives information from the sensors on both gloves and uses it to calculate the correct frequencies. The other microcontroller uses these frequencies to generate the actual audio. Using two separate microcontrollers allows the logic to take longer, accounting for slower human response time, while still meeting the need for quick audio updates. At the output, a second-order multiple-feedback filter will remove switching noise while also allowing us to set a gain. This will be built from an LM358 op-amp along with the resistors and capacitors needed to set the filter response and gain. The output will then go to an audio jack connected to a speaker. In addition, bypass capacitors, pull-up resistors, pull-down resistors, and the necessary programming circuits will be implemented on this board.
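
The output filter stage can be sanity-checked with the standard multiple-feedback (MFB) low-pass relations: passband gain K = -R2/R1 and corner frequency f0 = 1 / (2π√(R2·R3·C1·C2)). The component values below are assumptions chosen to illustrate a corner near 20 kHz (above the audio band, below the switching noise); they are not the final design values.

```python
import math

# Design check for a second-order multiple-feedback (MFB) low-pass stage.
# Standard MFB relations; the component values are illustrative assumptions.

def mfb_lowpass(r1, r2, r3, c1, c2):
    """Return (passband_gain, corner_freq_hz) for an MFB low-pass stage."""
    gain = -r2 / r1                                   # K = -R2/R1
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(r2 * r3 * c1 * c2))
    return gain, f0

# Example: unity inverting gain with a corner near 20 kHz.
gain, f0 = mfb_lowpass(r1=10e3, r2=10e3, r3=6.34e3, c1=1e-9, c2=1e-9)
```

A quick check like this makes it easy to retune the corner or gain when the DAC switching frequency or speaker sensitivity changes.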

# Criterion For Success

The minimum viable product will consist of two wearable gloves and a central unit, connected together via cords. The user will be able to adjust three separate notes played simultaneously using the left hand, and apply a sound effect using the right hand. The output audio should be clearly audible from a speaker.
