# Project 35: Acoustic Motion Tracking

Team Members: Hojin Chun, Sean Nachnani
TA: Yuchen He
Documents: design_review
Group Members:
Sean Nachnani (nachnan2)
Kevin Chun (hchun8)

General Description:
The project idea is to use sound rather than video as a means of motion recognition. Current smart devices are limited to interpreting a user's needs through natural language processing alone. We want to expand on this and allow devices to respond to simple gestures as well.
The current plan is to build a 4-input microphone array with an ADC that supports at least a 48 kHz sample rate, paired with a speaker that can reproduce sounds up to at least 24 kHz. We will start by sending pseudo-random pulses across a large bandwidth and correlating the transmitted signal with the received input from the microphones. Time permitting, we will switch to FMCW (frequency-modulated continuous-wave) ranging, borrowed from radar, as the basis for this approach. This will let us achieve accurate distance and velocity measurements and potentially transmit in the inaudible range.
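To illustrate the FMCW idea, here is a minimal simulated sketch (the chirp band, chirp duration, and target distance are illustrative choices, not values from this proposal): mixing the transmitted chirp with its delayed echo produces a low-frequency beat tone whose frequency is proportional to range.

```python
import numpy as np

fs = 48_000               # sample rate (Hz)
c = 343.0                 # speed of sound (m/s)
f0, f1 = 18_000, 24_000   # assumed chirp sweep, near-inaudible band (Hz)
T = 0.04                  # chirp duration (s)
B = f1 - f0               # sweep bandwidth (Hz)

t = np.arange(int(T * fs)) / fs
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * (B / T) * t**2))  # linear up-chirp

# Simulate an echo from a target 1.0 m away (2.0 m round trip).
delay = 2.0 * 1.0 / c
rx = np.cos(2 * np.pi * (f0 * (t - delay) + 0.5 * (B / T) * (t - delay) ** 2))

# Mixing (multiplying) tx and rx yields a beat tone at f_b = (B/T) * delay.
beat = tx * rx
spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)

# The sum-frequency components land well above the beat tone; search low bins only.
mask = freqs < 2_000
f_beat = freqs[mask][np.argmax(spectrum[mask])]
distance = f_beat * T * c / (2 * B)
print(f"estimated distance: {distance:.2f} m")
```

The same math gives velocity from the Doppler shift between successive chirps, which is why FMCW is attractive over plain pulses.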
I have spent the last month prototyping this device using a Raspberry Pi and a speaker array. I've gotten the pseudo-random pulse approach to work, coding all the signal processing in Python, mainly with the PyAudio and SciPy libraries. The prototype currently samples at 44.1 kHz and uses a speaker that can play up to 20 kHz. I was able to achieve accurate measurements within the range of a normal living room (about the size of a smaller classroom in ECEB).
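The pulse-correlation step above can be sketched in a few lines of NumPy/SciPy. This is a simulation rather than the prototype's actual PyAudio capture code; the probe length, echo attenuation, noise level, and target distance are all made up for illustration.

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
fs = 44_100     # prototype's sample rate (Hz)
c = 343.0       # speed of sound (m/s)

# Pseudo-random probe: a 50 ms white +/-1 sequence.
probe = rng.choice([-1.0, 1.0], size=int(0.05 * fs))

# Simulate the mic recording: the probe returns attenuated after the
# round trip to a target 1.5 m away, buried in noise.
delay_samples = int(round(2 * 1.5 / c * fs))
rx = np.zeros(len(probe) + delay_samples + 100)
rx[delay_samples:delay_samples + len(probe)] += 0.2 * probe
rx += 0.5 * rng.standard_normal(len(rx))

# Cross-correlate the recording with the known probe; the peak lag
# is the round-trip delay in samples.
corr = correlate(rx, probe, mode="valid")
lag = int(np.argmax(corr))
distance = lag / fs * c / 2
print(f"estimated distance: {distance:.2f} m")
```

The correlation peak stands far above the noise floor because the probe's energy is spread over the whole sequence, which is what makes this work at living-room ranges.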
We plan on building the microphone array from 4 MEMS microphones with appropriate ADCs to sample at up to 48 kHz. By Nyquist, this lets us work with signals up to 24 kHz, which gives us enough bandwidth for accurate measurements. We'll also use a controller (most likely a Raspberry Pi) to sample the microphones and perform the necessary DSP. The system will be designed to plug into a regular power outlet.
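Multiple microphones are what turn ranging into tracking: the arrival-time difference across a mic pair fixes the source bearing. A minimal simulated sketch, assuming a 10 cm spacing between two of the mics (the actual array geometry is not yet fixed):

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(1)
fs = 48_000   # planned ADC sample rate (Hz)
c = 343.0     # speed of sound (m/s)
d = 0.10      # assumed spacing between two mics in the array (m)

# A source at 30 degrees off broadside reaches the second mic later by
# tau = d * sin(angle) / c.
angle_true = np.deg2rad(30.0)
tau = d * np.sin(angle_true) / c
shift = int(round(tau * fs))      # integer-sample approximation

sig = rng.standard_normal(int(0.1 * fs))
mic1 = sig
mic2 = np.roll(sig, shift)        # delayed copy at the second mic

# The peak of the cross-correlation gives the inter-mic delay in samples.
corr = correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)
angle_est = np.degrees(np.arcsin(np.clip(lag / fs * c / d, -1.0, 1.0)))
print(f"estimated bearing: {angle_est:.1f} degrees")
```

With four mics there are multiple such pairs, so the delays can be combined to localize the source in 2D/3D rather than along a single axis.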

Related Research Papers:
CAT: High-Precision Acoustic Motion Tracking
FingerIO: Using Active Sonar for Fine-Grained Finger Tracking

Featured Project: Prosthetic Control Board

Caleb Albers, Daniel Lee

Psyonic is a local start-up that has been working on a prosthetic arm with an impressive set of features while remaining affordable. The current iteration of the main hand board is functional, but it is limited in computational power and scalability. To address this, Psyonic wishes to switch to a production-ready chip that improves on the current microcontroller by using a more modern architecture. During this change, a few new features would be added to improve safety, allow for easier debugging, and fix some issues present in the current implementation. The board is also slated to communicate with several other boards found in the hand. Additionally, we are looking at the possibility of improving the longevity of the product with methods such as conformal coating and potting.

Core Functionality:

Replace the microcontroller, change the connectors, and write software to send control signals to the motor drivers.

Tier 1 functions:

Add additional communication interfaces (I2C) and a temperature sensor.

Tier 2 functions:

Set up a framework for communication with the other boards, and improve board longevity.

Overview of proposed changes by affected area:

Microcontroller/Architecture Change:

Teensy -> production-ready chip (most likely ARM-based, e.g. the STM32 family of processors)


Support the new microcontroller, add additional communication interfaces (I2C), and change to a more robust connector. (A PCB will need to be designed for both the main control board and the finger sensors.)


Addition of a temperature sensor to provide temperature feedback to the microcontroller.


Change from the Arduino IDE to a new toolchain. (ARM has various base libraries such as Mbed and can be configured for use with Eclipse as an IDE.) Lay out a framework to allow communication with boards found in other parts of the arm.