# 38: Smart Glasses for the Blind

ECE 445 Instructor's Award

TA: Sanjana Pingali
# Team Members
- Ahmed Nahas (anahas2)
- Siraj Khogeer (khogeer2)
- Abdulrahman Maaieh (amaaieh2)

# Problem:
The underlying motive behind this project is the heart-wrenching fact that, with all the developments in science and technology, the visually impaired have been left with nothing but a simple white cane, a stick among today’s scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired that gives them an alternative way of “seeing” through sound. The idea revolves around a glasses/headset form factor that allows the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

# Solution:
Our objective is to create smart glasses/a headset that allows the visually impaired to ‘see’ through sound. The general idea is to capture the user’s surroundings with depth imagers and a standard camera, then translate that data into audio that allows the user to perceive their surroundings.

We’ll use two low-power I2C ToF imagers to build a depth map of the user’s surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3 WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (for object recognition, as well as for the depth-map-to-sound algorithm, which is fairly complex and builds on research papers we’ve found).
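As a rough illustration of the offloading step, here is a minimal sketch of how one sensor update could be packed into a fixed-size binary message for the app to parse; the field layout (a frame counter plus two 8x8 uint16 depth frames in millimeters) is a placeholder assumption, not a finalized protocol.

```python
import struct

# Hypothetical wire format for one update sent from the ESP32 to the app:
# a 4-byte frame counter followed by two 8x8 arrays of uint16 distances in mm.
FRAME_FMT = "<I" + "H" * (2 * 8 * 8)   # little-endian, 4 + 256 bytes

def pack_frames(counter, left_frame, right_frame):
    """left_frame/right_frame: flat lists of 64 distances in millimeters."""
    return struct.pack(FRAME_FMT, counter, *left_frame, *right_frame)

def unpack_frames(payload):
    values = struct.unpack(FRAME_FMT, payload)
    counter, rest = values[0], values[1:]
    return counter, list(rest[:64]), list(rest[64:])

# Example: round-trip a dummy pair of frames.
if __name__ == "__main__":
    dummy = [1000] * 64                      # every pixel reads 1 m
    msg = pack_frames(42, dummy, dummy)
    n, left, right = unpack_frames(msg)
    print(n, len(msg), left[0], right[-1])   # 42 260 1000 1000
```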



---

# Subsystems:
## Subsystem 1: Microcontroller Unit
We will use an ESP32 as the MCU, mainly for its Wi-Fi capabilities as well as enough processing power to interface with all of our sensors.
- ESP32-S3 WROOM : https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089


## Subsystem 2: ToF Depth Imagers/Cameras Subsystem
This subsystem is the main sensor subsystem for getting the depth map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.

There will be two ToF sensors to provide a wide combined FOV; each connects to the ESP32 MCU over its own I2C bus. Each sensor provides an 8x8 pixel depth array over a 63-degree FOV.
- x2 SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013
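As a rough sketch of how the two frames could be handled once they reach the app, the snippet below stitches the two 8x8 arrays into one 8x16 depth map and reports the nearest obstacle and which half of the view it occupies; the side-by-side orientation and millimeter units are assumptions for illustration.

```python
import numpy as np

def merge_depth_frames(left_frame, right_frame):
    """Stitch two 8x8 ToF frames (distances in mm) into one 8x16 depth map.
    Assumes the left sensor covers the left half of the combined FOV."""
    left = np.asarray(left_frame, dtype=float).reshape(8, 8)
    right = np.asarray(right_frame, dtype=float).reshape(8, 8)
    return np.hstack([left, right])          # shape (8, 16)

def nearest_obstacle(depth_map):
    """Return (distance_mm, 'left' or 'right') for the closest pixel."""
    row, col = np.unravel_index(np.argmin(depth_map), depth_map.shape)
    side = "left" if col < depth_map.shape[1] // 2 else "right"
    return depth_map[row, col], side

# Example with synthetic data: a close object on the right.
left = np.full((8, 8), 3000.0)               # ~3 m background
right = np.full((8, 8), 3000.0)
right[4, 2] = 900.0                          # ~0.9 m obstacle
print(nearest_obstacle(merge_depth_frames(left, right)))  # nearest ~0.9 m, right
```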

## Subsystem 3: SPI Camera Subsystem
This subsystem will allow us to capture a color image of the user’s surroundings. A captured image will allow us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This feedback is only given when prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they will hear a description of their surroundings. Hence we don’t need real-time object recognition; a frame rate as low as 1 fps is sufficient, in contrast to the depth maps, which do need lower latency. This is exciting because having such an input allows for other ML features/integrations that can be scaled drastically beyond this course.
- x1 Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/
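The button-triggered recognition step might look roughly like the sketch below, which runs a single captured frame through an off-the-shelf detector and turns the detections into a short sentence for text-to-speech; the ultralytics YOLO package and the yolov8n.pt weights are placeholder choices, not a committed design.

```python
from collections import Counter

import cv2
from ultralytics import YOLO   # placeholder detector; any model with class labels works

model = YOLO("yolov8n.pt")     # small pretrained COCO model

def describe_frame(image_path):
    """Run one captured frame through the detector and build a short
    sentence the app can hand to text-to-speech."""
    image = cv2.imread(image_path)
    result = model(image)[0]
    labels = [model.names[int(c)] for c in result.boxes.cls]
    if not labels:
        return "No recognizable objects ahead."
    counts = Counter(labels)
    parts = [f"{n} {name}" + ("s" if n > 1 else "") for name, n in counts.items()]
    return "I can see " + ", ".join(parts) + "."

# Triggered only when the user presses the button on the glasses.
print(describe_frame("frame.jpg"))
```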

## Subsystem 4: Stereo Audio Circuit
This subsystem is in charge of converting the digital audio from the ESP32 and app into a stereo output to be used with earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add an adjustable volume option through a potentiometer.

- DAC circuit
- 2x op-amps for stereo output, TLC27L1ACP: https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP
- SJ1-3554NG (AUX) jack for connecting speakers/earphones: https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709
- Bone conduction transducer (optional, to be tested): allows a bone-conduction audio output, easily integrated around the ear in place of earphones; to be tested for effectiveness and replaced with earphones otherwise. https://www.adafruit.com/product/1674

## Subsystem 5: App Subsystem
- React Native app/webpage that connects directly to the ESP
- Does the heavy processing for the spatial-awareness algorithm as well as the object recognition or scene description algorithms (using libraries such as YOLO, OpenCV, and TFLite); a sketch of the spatial-audio step follows this list
- Sends the audio output back to the ESP to be played through the stereo audio circuit
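A minimal sketch of the depth-map-to-sound idea is shown below, under two simplifying assumptions: closer obstacles sound louder, and the column of the nearest pixel sets the left/right pan. The real algorithm will be more sophisticated and follow the research papers mentioned above.

```python
import numpy as np

SAMPLE_RATE = 16000
TONE_HZ = 440.0
MAX_RANGE_MM = 4000.0

def depth_to_stereo(depth_map, duration_s=0.1):
    """Convert one depth map (mm) into a short stereo buffer.
    Closer obstacles -> louder tone; column of the nearest pixel -> pan."""
    depth = np.asarray(depth_map, dtype=float)
    row, col = np.unravel_index(np.argmin(depth), depth.shape)
    dist = depth[row, col]

    loudness = np.clip(1.0 - dist / MAX_RANGE_MM, 0.0, 1.0)   # 0 far .. 1 near
    pan = col / (depth.shape[1] - 1)                          # 0 left .. 1 right

    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * TONE_HZ * t) * loudness
    left = tone * (1.0 - pan)
    right = tone * pan
    return np.stack([left, right], axis=1)    # shape (samples, 2)

# Example: nearest pixel at 1 m on the far right -> tone mostly in the right ear.
frame = np.full((8, 16), 3000.0)
frame[3, 15] = 1000.0
buf = depth_to_stereo(frame)
print(buf.shape, buf[:, 0].max(), buf[:, 1].max())
```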

## Subsystem 6: Battery and Power Management
This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes in the unregulated battery voltage and steps it up or down according to each component's needs.

- Main power supply
  - Lithium-ion battery pack
- Voltage regulators
  - Linear, buck, and boost regulators for the MCU, sensors, and DAC
- Enclosure and routing
  - Plastic enclosure for the battery pack



---

# Criterion for Success

**Obstacle Detection:**
- Be able to distinguish an obstacle that is 1 meter away from an obstacle that is 3 meters away.
- Be able to differentiate between obstacles on the right vs. the left side of the user.
- Be able to perceive an object moving from left to right or right to left in front of the user.

**MCU:**
- Offload data from the sensor subsystems onto the application through a Wi-Fi connection.
- Control and receive data from the sensors (ToF imagers and SPI camera) using SPI and I2C.
- Receive audio from the application and pass it to the DAC for stereo output.

**App/Webpage:**
- Successfully connects to the ESP through Wi-Fi or BLE
- Processes data (ML and depth-map algorithms)
- Processes images using ML for object recognition
- Transforms the depth map into spatial audio
- Sends audio back to the ESP for audio output

**Audio:**
- Have working stereo output on the PCB for use with wired earphones or built-in speakers
- Have Bluetooth working on the app if a user wants to use wireless audio
- Potentially add hardware volume control

**Power:**
- Be able to operate the device from battery power, with safe voltage levels and regulation.
- 5.5 V max

---

# Healthy Chair

Featured Project

Team Members:

- Wang Qiuyu (qiuyuw2)
- Ryan Chen (ryanc6)
- Alan Tokarsky (alanmt2)

## Problem

The majority of the population sits for most of the day, whether it’s students doing homework or employees working at a desk. In particular, during the Covid era, when many people are either working at home or quarantining for long periods of time, they tend to work out less and sit longer, making them more likely to develop obesity, hemorrhoids, and even heart disease. In addition, sitting too long is detrimental to one’s bottom and urinary tract and can result in urinary urgency, and poor sitting posture can lead to reduced blood circulation, joint and muscle pain, and other health-related issues.

## Solution

Our team is proposing a project to develop a healthy chair that aims to address the problems mentioned above by reminding people when they have been sitting for too long, using a fan to cool off the chair, and making people aware of their unhealthy leaning posture.

1. It uses thin film pressure sensors under the chair’s seat to detect the presence of a user, and pressure sensors on the chair’s back to detect the user’s leaning posture.
2. It uses a temperature sensor under the chair’s seat; if the seat’s temperature goes beyond a set threshold, a fan below it is turned on by the microcontroller.
3. It utilizes an LCD display with a programmable user interface. The user is able to input the duration of time after which the chair will alert them.
4. It uses a voice module to remind the user if he or she has been sitting for too long. The sitting time is input by the user and tracked by the microcontroller.
5. It utilizes only a voice chip, instead of an existing speech module, to construct our own voice module.
6. The "smart" chair is able to recognize when the chair surface temperature exceeds a certain temperature within 24 hours and warns the user about it.

## Solution Components

## Signal Acquisition Subsystem

The signal acquisition subsystem is composed of multiple pressure sensors and a temperature sensor. This subsystem provides all the input signals (pressure exerted on the bottom and the back of the chair, as well as the chair’s temperature) that go into the microcontroller. We will be using RP-C18.3-ST thin film pressure sensors and an MLX90614-DCC non-contact IR temperature sensor.
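As a rough sketch of how these raw inputs might be thresholded before the control logic uses them, the snippet below uses placeholder presence, lean, and fan-on thresholds; the actual values would come from calibration on the real chair and the RP-C18.3-ST's response curve.

```python
# Placeholder constants; real values depend on the divider resistor,
# the RP-C18.3-ST response curve, and calibration on the actual chair.
ADC_MAX = 1023                   # 10-bit ADC on the ATmega88A
SEAT_PRESSURE_THRESHOLD = 300    # ADC counts indicating someone is seated
BACK_LEAN_THRESHOLD = 500        # ADC counts of left/right imbalance (one possible rule)
FAN_ON_TEMP_C = 30.0

def person_seated(seat_adc_counts):
    """True if any seat sensor reads above the presence threshold."""
    return any(c > SEAT_PRESSURE_THRESHOLD for c in seat_adc_counts)

def leaning_badly(back_adc_counts):
    """True if the back sensors show a heavy, uneven lean to one side."""
    left, right = back_adc_counts
    return abs(left - right) > BACK_LEAN_THRESHOLD

def fan_should_run(temp_c):
    return temp_c > FAN_ON_TEMP_C

print(person_seated([120, 450, 90]), leaning_badly((800, 150)), fan_should_run(31.5))
```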

## Microcontroller Subsystem

In order to achieve seamless data transfer and have enough I/O for all the sensors, we will use two ATMEGA88A-PU microcontrollers. One microcontroller takes the inputs and serves as the master, and the second one controls the outputs and acts as the slave. We will use I2C communication to let the two microcontrollers talk to each other. The microcontrollers will be programmed with a CH340G USB-to-TTL converter; they will be programmed off the board and then placed into it, to avoid cluttering the PCB with extra circuits.

The microcontroller will be in charge of processing the data that it receives from all input sensors: pressure and temperature. Once it determines that there is a person sitting on the chair, it can use the internal clock to begin tracking how long they have been sitting. The clock will also be used to determine whether the person has stood up for a break. The microcontroller will also use the readings from the temperature sensor to determine if the chair has been overheating and turn on the fan if necessary. A speaker will tell the user to get up and stretch for a while when they have been sitting for too long; we will use the speech module to create the speech that informs the user of their lengthy sitting duration.

The microcontroller will also relay data about posture to the LCD screen for the user. When the thin film pressure sensors on the chair back detect that the user has been leaning against the chair improperly for too long, we will flash the corresponding LEDs to notify the user of their unhealthy sitting posture.
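A minimal sketch of the sitting-time tracking described above is shown below (written in Python for clarity; the real firmware would be C on the ATmega88A). The break length that resets the timer is an assumed placeholder.

```python
import time

class SittingTimer:
    """Tracks continuous sitting time and decides when to speak a reminder.
    A stand-up of at least break_s seconds resets the accumulated sitting time."""

    def __init__(self, limit_s, break_s=120):
        self.limit_s = limit_s          # user-entered limit from the LCD UI
        self.break_s = break_s          # assumed break length that counts as rest
        self.sit_start = None
        self.stand_start = None

    def update(self, seated, now=None):
        """Call periodically with the presence-sensor result.
        Returns True when the voice module should remind the user."""
        now = time.monotonic() if now is None else now
        if seated:
            self.stand_start = None
            if self.sit_start is None:
                self.sit_start = now
            return now - self.sit_start >= self.limit_s
        if self.stand_start is None:
            self.stand_start = now
        if now - self.stand_start >= self.break_s:
            self.sit_start = None       # long enough break: reset the timer
        return False

# Example: 30-minute limit, simulated clock.
timer = SittingTimer(limit_s=30 * 60)
print(timer.update(True, now=0), timer.update(True, now=31 * 60))  # False True
```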

## Implementation Subsystem

The implementation subsystem can be further broken down into three modules: the fan module, the speech module, and the LCD module. This subsystem includes all the outputs controlled by the microcontroller. We will be using an MF40100V2-1000U-A99 fan for the fan module, an ISD4002-240PY voice record chip for the speech module, and an Adafruit 1.54" 240x240 Wide Angle TFT LCD Display with MicroSD (ST7789) for the LCD module.

## Power Subsystem

The power subsystem converts 120 V AC to a lower DC voltage. Since most of the input and output sensors, as well as the ATMEGA88A-PU microcontroller, operate at a DC voltage of around or less than 5 V, we will implement a power subsystem that can switch between a battery and normal power from the wall.

## Criteria for Success

- The thin film pressure sensors on the bottom of the chair are able to detect the pressure of a human sitting on the chair.
- The temperature sensor is able to detect an increase in temperature, and the fan turns on as the temperature goes beyond our set threshold. After the temperature decreases below the threshold, the fan is turned off by the microcontroller.
- The thin film pressure sensors on the back of the chair are able to detect unhealthy sitting posture.
- The outputs of the implementation subsystem, including the speech, fan, and LCD modules, are able to function as described above and inform the user correctly.

## Envision of Final Demo

Our final demo of the healthy chair project is an office chair with grids. The office chair’s back holds several other pressure sensors to detect the person’s leaning posture, and the pressure and temperature sensors are located under the office chair’s seat. After receiving an input time from the user, the healthy chair is able to warn the user that they have been sitting for too long by alerting them through the speech module. The fan below the chair’s seat turns on after the seat’s temperature goes beyond a set threshold. The LCD displays which sensors are activated, and it also receives the user’s time input.
