|1||Prosthetic Hand for Typing
|Dhruv Mathur||Joohyung Kim||design_document1.pdf
Rahul Krishnan (rahulk3), Rohil Hatalkar (rah2)
Currently, the field of prosthetics is making advances in bionic arm technology; however, these solutions are extremely expensive and not widely available to the public. In the modern age, computer use has become prevalent, and people with disabilities (specifically those missing one or both arms) may be at a significant disadvantage due to reduced typing speed or an inability to type at all.
We propose a simple solution to this problem: a prosthetic hand that can be used to type with a user's feet, primarily the two big toes. This device will have two main systems. The first system will have 5 buttons placed below the user's feet, corresponding to the 5 fingers on a hand. The second system is a lightweight prosthetic hand that supports lateral finger movement. We will use Bluetooth to communicate the correct binary response from the foot device to the prosthetic hand. Small motors on the joints of the fingers will extend the correct finger so the intended key is pressed. The user is expected to be able to move their hand forward, backward, up, and down while hovering over the keyboard, so they can press any key on the keyboard using their foot.
Solution Components: (We have provided links for specific parts we are considering.)
Subsystem #1: 5 buttons will be placed on the top of the foot system corresponding to the fingers of the hand. When the user activates the button with their toe, the corresponding binary signal will be sent to the prosthetic hand that will complete the press on the keyboard. The diameter of the button should be about an inch as this would allow a toe to comfortably press the button. We will place the buttons approximately an inch apart to ensure the user does not press multiple buttons simultaneously.
Subsystem #2: An Arduino will be used to read the button states and send the binary response to the prosthetic hand system via Bluetooth.
Subsystem #3: We are considering an HC-05 wireless Bluetooth module, as it is known to work with an Arduino. The Bluetooth link requires a range of at least 1.5 meters, which we take to be the maximum height of a table. This module maintains a quality connection at distances up to 10 meters, which is sufficient for our project.
Subsystem #4: We plan on using a rechargeable battery to power the PCB and Arduino. The battery will supply about 5 V in order to minimize size and cost, so we will keep all of our sensors at or below that voltage. The battery will be recharged over a micro USB cable.
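As a concrete sketch of the "binary response" passed between the two systems, the five button states could be packed into a single byte before transmission over the HC-05 link. The Python below is only a behavioural illustration (the real firmware would be Arduino C++, and the function names and framing are our own assumptions):

```python
def pack_buttons(pressed):
    """Encode five toe-button states (thumb..pinky) as a one-byte bitmask."""
    byte = 0
    for i, state in enumerate(pressed):
        if state:
            byte |= 1 << i  # bit i set means finger i should press its key
    return byte

def unpack_buttons(byte):
    """Decode the bitmask on the prosthetic-hand side."""
    return [bool((byte >> i) & 1) for i in range(5)]

# Example: only the index finger (bit 1) is pressed
msg = pack_buttons([False, True, False, False, False])
assert msg == 0b00010
assert unpack_buttons(msg) == [False, True, False, False, False]
```

A one-byte message keeps the Bluetooth payload minimal and makes simultaneous presses (multiple bits set) easy to detect and reject on the hand side.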
Prosthetic Hand System
Subsystem #1: We will have five motors attached to the joints of each finger. We are considering 3D printing the hand or having the machine shop fabricate it from wood. When the Bluetooth module receives the data packet from the foot device, the corresponding motor will rotate to extend the finger forward and push the key down.
Subsystem #2: An Arduino will be used to receive and handle responses from the Bluetooth module inside the foot device. The Arduino microcontroller will also be responsible for driving the motors correctly.
Subsystem #3: We are considering an HC-05 wireless Bluetooth module, as it is known to work with an Arduino. The Bluetooth link requires a range of at least 1.5 meters, which we take to be the maximum height of a table. This module maintains a quality connection at distances up to 10 meters, which is sufficient for our project.
Subsystem #4: We plan on using a rechargeable battery to power the PCB and Arduino. The battery will supply about 5 V in order to minimize size and cost, so we will keep all of our motors at or below that voltage. The battery will be recharged over a micro USB cable.
Criterion of Success:
The goal of this project is for people with only one hand to be able to type as if with both hands. This typing system should allow the user to press buttons with their toes, which translate into key presses by the prosthetic hand. The foot system has to be easy to use, with limited room for error such as pressing multiple buttons simultaneously. The prosthetic hand must press keys accurately when receiving the binary response from the foot system. The delay between pressing a button with a toe and the motor extending a finger to press the key should be short.
|2||Submetering the ECEB
|Evan Widloski||Jonathon Schuh||other1.pdf
|see in attached file|
|3||Blue Light-Tracking Glasses
|Charles Ross||Jing Jiang||design_document1.pdf
|Erik Lundin [erikjL2], David Yan [davidzy2], Jane Zhao [janejz2]
Blue Light-Tracking Glasses
Problem: The increased use of electronic displays has led to concerns over the effects of visible light on the eyes. While it is now widely known that UV radiation and blue light are very damaging to the eyes, there is currently no device that tracks the amount of exposure.
Existing Alternatives: Blue light glasses are glasses made with polymers or other materials that block blue light. These glasses do not track exposure time and only block light.
Solution Overview: A pair of glasses can be equipped with sensors to detect UV radiation and blue light. If the user is exposed for longer than a determined threshold then indicators will go off depending on the type of light that exceeded exposure.
Sensor package: Consists of at least two photodiodes to detect blue and UV light. Dichroic filters that selectively pass blue light (400–500 nm) will be placed near the photodiodes corresponding to blue light. The circuit will be calibrated to best respond to light coming from a screen at a distance of about 3 feet.
Sensor amplifier: Amplifies signals from the sensors using transistors and determines whether the sensor output reaches the threshold for eye damage and sleep interference. This subsystem also includes a logic circuit that signals the counter to accumulate exposure time in the case of blue light. For UV light, the indicator LED will light up any time the threshold is exceeded.
Power subsystem: Consists of a switch and the circuitry that converts power from lithium-ion batteries for internal use. It interconnects with the indicator, sensor, and timing subsystems, and includes the necessary safety precautions for charging and discharging the battery.
Counter subsystem: Consists of a basic counter IC and 555 timer IC, which will only be enabled when the sensor threshold is reached. When the user specified time has elapsed a signal will be sent to the indicator subsystem.
Indicators: Consists of one red and one yellow LED for blue light and UV respectively. As a threshold is reached, the LED that corresponds to that threshold will light up to alert the user.
User interface: Consists of two knobs at the side of the glasses and an on/off button. One knob connects to a circuit with a variable resistor/capacitor that adjusts timer frequency for the exposure time threshold. The other allows for the adjustment of the light intensity threshold needed to trigger the timer or indicator.
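The counter and indicator behaviour described across these subsystems can be summarized in a few lines. This Python sketch is purely a behavioural model of the analog/IC design above (the hardware uses a 555 timer and counter IC; the normalized intensity units and function names here are assumptions):

```python
def step_counter(exposure_s, intensity, intensity_threshold, dt=1.0):
    """Advance the exposure counter by dt seconds if light exceeds threshold."""
    if intensity >= intensity_threshold:
        return exposure_s + dt
    return exposure_s  # counter holds when the light drops below threshold

def indicator_on(exposure_s, limit_s):
    """Red LED lights once accumulated exposure passes the user-set limit."""
    return exposure_s >= limit_s

# Five one-second sensor samples; only readings above 0.5 count as exposure
t = 0.0
for reading in [0.2, 0.9, 0.9, 0.1, 0.9]:
    t = step_counter(t, reading, 0.5)
assert t == 3.0
assert not indicator_on(t, 5.0)
assert indicator_on(t, 3.0)
```

Note the counter holds rather than resets when blue light disappears, matching the design where exposure accumulates across a session until the user threshold is reached.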
Criterion for Success:
Successful detection of UV radiation and blue light.
Counter counts only when detecting blue light.
Indicators go off when time exposure threshold is surpassed.
User can change intensity and exposure time threshold.
|4||HAULLELUJAH! A SOLUTION TO PACKING A U-HAUL!
|Stephanie Jaster||Arne Fliflet||design_document8.pdf
|Partners: Nathaniel Stoll (NetID: nstoll2), Anthony Lambert (NetID: acl3)
Project Description: There are some common questions people ask when they move: "What size U-Haul do we need to rent?" "How should we arrange things in the U-Haul?" "Which box did we put [object] in?" Wouldn't it be nice if there was a system which could answer all these questions for you? We would like to develop a system which 1) keeps a record of all the things you plan to move, 2) determines what size U-Haul you will need to transport all of your items, and 3) gives recommendations on where to place boxes within the U-Haul to maximize space and minimize damage in transit.
• Digital Tape Measure: Needed to accurately and quickly measure boxes and/or objects
• Digital Tape Measure Transfer: Needed to transfer incoming measurement data from the digital tape measure to the mobile application via the Bluetooth module. (Part of the PCB would be implemented here)
• Bluetooth: Needed as a medium to pass measurement data to mobile application. (Part of PCB would be implemented here)
• Mobile App: Needed as an interface showing users the efficient placement of boxes as well as the relative weight of each box. The application will also interpret the incoming measurement data and fit the box sizes to the packing area. Specifically, the algorithm would calculate the volume of each box and, using that data, order and stack the boxes in a manner that uses the space efficiently. Additionally, so users know what is in each box, they can either use the camera to take a picture of the box contents or type out the contents. The latter option gives users an easy way to search for their items.
• Battery/Power System: Needed only if we have to construct and build our own digital tape measure instead of using an existing one.
• Casing: Needed to protect the digital tape measure transfer and Bluetooth subsystems from any damage.
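The truck-sizing step the app performs can be sketched simply: sum the measured box volumes, allow for packing inefficiency, and pick the smallest truck that fits. The capacities and the 0.8 efficiency factor below are illustrative assumptions, not actual U-Haul specifications:

```python
# Hypothetical truck capacities in cubic feet (assumed values for the sketch)
TRUCK_SIZES_CUFT = {"10 ft": 402, "15 ft": 764, "20 ft": 1016, "26 ft": 1682}

def required_truck(boxes_in, efficiency=0.8):
    """boxes_in: list of (length, width, height) measurements in inches.
    Returns the smallest truck that fits the load, or None if none does."""
    total_cuft = sum(l * w * h for l, w, h in boxes_in) / 1728  # in^3 -> ft^3
    needed = total_cuft / efficiency  # real packing never fills 100% of space
    for name, cap in sorted(TRUCK_SIZES_CUFT.items(), key=lambda kv: kv[1]):
        if cap >= needed:
            return name
    return None

# Fifty 24x18x18 in boxes: 225 ft^3 of goods, ~281 ft^3 with packing slack
assert required_truck([(24, 18, 18)] * 50) == "10 ft"
```

The full placement recommendation (where each box goes) is a 3D bin-packing problem, for which a greedy largest-first heuristic like the ordering described above is a common practical choice.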
Criterion for success: We would like our product to accurately measure boxes in a timely manner (preferably under 10 seconds) and, during that span, send data to the mobile application consistently, with each data set tied to its specific box. After gathering all necessary data, the application should tell us how large a U-Haul we need and show us the most efficient way to fit the items in the designated space. We would like our product to keep track of all items and boxes, and to distribute the weight of the products well within the trailer. Additionally, we hope our design offers users a cheap, cost-efficient way to store and pack their belongings in a moving truck. Moving can be a very laborious, time-consuming task; with our design we hope to make that process much faster.
Other Competition: There are a few mobile applications on the market that keep track of a user's inventory and tell them which items are in each box. Some of these products make use of the user's smartphone camera and some do not. We find these applications to be more cosmetic than informational, serving mainly as a way to track which boxes hold certain items. However, there is currently no mobile application that gives users specific measurements of moving boxes. Additionally, there are a few companies that specialize in moving a user's items for them, but this solution is costly, and workers may not take proper precautions when handling a person's belongings.
|5||Automatic Three Piece Bike Lock
|Vassily Petrov||Arne Fliflet||design_document1.pdf
|Automatic Bike Lock
Problem: Properly securing a bike can take a lot of time and can be done improperly, leaving your bike susceptible to theft. The main way to properly secure a bike involves using a U-lock for the back wheel along with a chain that locks the bike to a rack and secures the front wheel. This takes time, and some cyclists do not use two locks, making it easy for their bike to be taken. Our product seeks to alleviate this. We will create a three-piece bike lock that can lock the front and back wheels and also lock the bike to a rack. The system will only require the user's fingerprint and will lock the bike in a fast and easy process. The main goal of this project is to make bike locking a seamless and secure process.
Solution Overview: Our solution consists of three separate locking mechanisms responsible for locking the rear wheel, locking the front wheel, and locking the bike to a bike rack. A fingerprint sensor along with a microcontroller will be used to lock and unlock the three locks. To prevent theft, an alarm system will also be implemented that audibly signals whenever a lock is being compromised. Two different locking mechanism designs will be used: the middle lock differs from the wheel locks so that the user can secure their bike to different types of bike racks. Together, the five subsystems will let a user simply place their bike next to a rack and use their fingerprint to seamlessly lock it. The locked bike will be incredibly difficult to steal, and the system saves the user time in securing their bike.
Subsystem #1 Microcontroller: This subsystem takes input from the fingerprint sensor and uses this information to control the locking and alarm systems. The microcontroller also controls the alarm system via feedback that it obtains from the locking system; if the feedback, in the form of current, indicates that a lock is being compromised, the microcontroller will set off the alarm. We currently plan on using a Raspberry Pi as the microcontroller, but this may be replaced with an STM32.
Subsystem #2 Fingerprint Sensor: The fingerprint sensor will take user’s thumb fingerprint allowing the locking circuit to unlock the bike lock. This only occurs when successful authentication happens for the owner of the bike lock, otherwise the bike lock will stay locked. After three failed fingerprint authentication attempts then the alarm system will be activated. The sensor will be mounted on the middle lock.
Subsystem #3 Rear/Front Wheel Lock: This subsystem will utilize a metal wire encased by a curved piece of 3D-printed plastic. The metal wire will have two loops at its ends, and one loop is connected to the teeth of a DC motor via epoxy. When the user inputs their fingerprint, the motor will rotate and move the wire to the other side of the wheel, and a proximity sensor will detect when the plastic is fully inserted into the other side of the housing. We will build a prototype actuator that uses a screw, a DC motor, and two gears, moving the screw to secure the wire when the proximity sensor signals that the lock is in the optimal position. This actuator will disengage to allow the user to unlock their bike. To prevent the lock from closing on a spoke, there will also be an IR sensor in the plastic piece; if it detects a spoke, the system will stop. On the other side of the wheel there will be white tape so that the sensor does not stop the lock from moving once it is in the locked position.
Subsystem #4 Middle Lock: This subsystem takes input from the microcontroller. The implementation of this lock differs from the front/rear locks so that the user is free to lock their bike to a variety of different-sized bike racks. The system will consist of a chain that the user wraps around the bike rack; upon fingerprint authentication, a linear actuator will act like a deadbolt to hold the chain in place. A proximity sensor with an LED will let the user know whether they inserted the chain far enough into the lock. A wire will be looped around the chain so that if the chain is cut, the alarm system is activated. This system will also have a hold button that activates only the middle lock, so the chain can be held in place when the bike is not locked.
Subsystem #5 Alarm System: This system is responsible for alerting whenever the locks are being tampered with. This alert comes in the form of three speakers that are in each lock. There will be two ways for the system to activate: an unauthorized user attempts to use the fingerprint sensor and one of the locks are cut or damaged. If the alarm system sounds only the users fingerprint will stop the alarm system. The microcontroller will directly control this system by taking in feedback, in the form of current, from the locking circuits.
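The authentication and alarm rules above form a small state machine. The following Python sketch models that behaviour only (the real logic would run on the Raspberry Pi/STM32; class and method names are our own assumptions):

```python
class LockController:
    """Behavioural model: 3 failed prints or a tamper signal raise the alarm;
    only an authorized print silences it and toggles the locks."""
    MAX_FAILED = 3

    def __init__(self):
        self.failed = 0
        self.alarm = False
        self.locked = True

    def fingerprint(self, authorized):
        if authorized:
            self.failed = 0
            self.alarm = False          # only the owner's print stops the alarm
            self.locked = not self.locked
        else:
            self.failed += 1
            if self.failed >= self.MAX_FAILED:
                self.alarm = True       # third consecutive failure

    def tamper_detected(self):
        self.alarm = True               # cut wire / abnormal current feedback

ctrl = LockController()
for _ in range(3):
    ctrl.fingerprint(authorized=False)
assert ctrl.alarm and ctrl.locked       # alarm sounds, bike stays locked
ctrl.fingerprint(authorized=True)
assert not ctrl.alarm and not ctrl.locked
```

Keeping the failure counter and tamper path in one controller makes it easy to verify the success criteria below against a single piece of logic.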
Criterion for Success:
The high level goals of what our project needs to meet to be effective are:
The Automatic Bike Lock can accurately identify the authorized user’s fingerprint and quickly lock or unlock the bike.
An unauthorized fingerprint will prevent the bike from locking or unlocking. After three successive unauthorized user attempts to use the fingerprint sensor, the bike alarm will sound. If a lock is tampered with or cut, an alarm will sound.
Each lock is able to move its wire or rod and secure it in the desired placement with the appropriate signal from the microcontroller.
The bike lock is manufacturable from at most $200 worth of parts.
|6||Bluetooth Audio Splitter
|Dhruv Mathur||Rakesh Kumar||design_document1.pdf
|Niharika Agrawal, Kathryn Fejer, Nathan Narasimhan and na13, fejer2, nanaras2
In order to share a listening experience, two people currently need to either play the audio out loud, perhaps disturbing others such as on a plane, or acquire two pairs of wired headphones and an eighth-inch cord splitter. Bluetooth 5, a new Bluetooth protocol, allows dual audio, but only within common platforms. For example, you can connect two pairs of Apple headphones to an iPhone, but not a pair of AirPods and a pair of Sony headphones.
We will create a Bluetooth splitter that can take in a Bluetooth audio signal and repeat it to multiple Bluetooth outputs in order to connect multiple people to one device, wirelessly. We would have a Bluetooth receiver and two or more Bluetooth transmitters. Most similar products on the market require an aux cord to the device playing the audio, but it would be easier for the user if that cord were eliminated. Our device would be platform-independent, such that you can connect a pair of Sony and a pair of Bose headphones to the same phone.
Subsystem 1: Computer to Bluetooth receiver
This subsystem will connect the computer to the bluetooth receiver. It will also transfer the music data to the other two bluetooth devices. Lastly, this device will also help pair the other two devices to the respective headphones. This will use a microcontroller chip and a bluetooth chip.
Subsystem 2: Bluetooth to Headphone 1 receiver
This subsystem will connect a bluetooth chip with the first set of headphones. This will primarily consist of the bluetooth chip and a button. Initial pairing and setup will be done through a serial monitor and AT commands.
Subsystem 3: Bluetooth to Headphone 2 receiver
This subsystem will connect the other bluetooth chip with the second set of headphones. This will primarily consist of the bluetooth chip and a button. Initial pairing and setup will be done through a serial monitor and AT commands.
Subsystem 4: Micro
We will use a microcontroller chip (for example, an ATmega328P) on a PCB to connect the Bluetooth chips and control the input and output from each chip. We would use chips like the HC-05 or HC-06, which allow us to interface with a microcontroller for pairing and sending audio. If needed, we could include something akin to a microSD card for buffering or memory.
One concern brought up in office hours was connecting multiple Bluetooth chips to a single Arduino. However, there are two ways to ensure we can send data simultaneously. One, we could use an ATmega328P, connect the Bluetooth chips to spare digital pins, and implement software serial input/outputs. Or, we could use an ATmega2560 chip, which has four hardware serial RX/TX pairs; we could connect three Bluetooth chips to these and transmit and receive data reliably.
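Regardless of which chip is chosen, the splitter's core job reduces to copying every audio chunk read from the source link to each headphone link. A behavioural Python sketch of that forwarding loop (actual code would be C++ on the ATmega, reading and writing serial ports; byte buffers stand in here):

```python
def forward(source_chunks, sinks):
    """Copy each incoming audio chunk to every connected sink, in order."""
    for chunk in source_chunks:
        for sink in sinks:
            sink.extend(chunk)  # on hardware: write chunk to that serial port

# Two headphone links receive identical copies of the stream
left, right = bytearray(), bytearray()
forward([b"\x01\x02", b"\x03"], [left, right])
assert left == right == bytearray(b"\x01\x02\x03")
```

This is also why the ATmega2560's multiple hardware UARTs matter: each sink write must keep up with the incoming audio rate, which software serial may struggle to do on a single-UART part.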
Subsystem 5: Power
Our microcontroller and PCB will be connected to a battery pack, which will then supply power to the Bluetooth chips. The chips operate at 3.3 V, so we will step the supply down for them; a 3.3 V regulator is appropriate for the power rail, with a voltage divider used only to level-shift the RX logic line.
Criteria for Success:
Successful pairing between the computer and our device, and then subsequent pairing of the two separate Bluetooth subsystems with the two separate headphones, such that the data sent by the computer is received by both end devices.
|7||First Person Virtual Reality Interface with RC Car
|Weihang Liang||Jing Jiang||design_document2.pdf
|**Project Members:** Deniz Yıldırım (dy2), Erik Jacobson (erikj2), Sang Baek Han (shan67)
**Title:** First Person Virtual Reality Interface with RC Car
We will use a VR set and the camera module that can display in 180 degrees to give players the perception that they are a small person inside a remote car, making the experience more immersive and fun. People controlling the car would never lose sense of where the car is as they would feel like they are inside the car and would be able to see their surroundings by looking around. A steering wheel and hands could be rendered on top of the video feed to let players control the car by holding the virtual steering wheel with their VR controllers. While there are some remote cars with cameras in the market, they fail at being immersive because they keep using traditional remote controllers and often do not give users the ability to see the car’s surroundings.
Remote Control Subsystem
- We will design [H bridge and PWM circuits](https://www.acmesystems.it/pcb_pwm) to control the speed of DC motors of RC car. H bridge and PWM circuits are connected to the microprocessor [ATmega328](http://ww1.microchip.com/downloads/en/DeviceDoc/ATmega48A-PA-88A-PA-168A-PA-328-P-DS-DS40002061A.pdf), which receives the control input from the controller through Bluetooth. [HC-05](https://www.amazon.com/dp/B00INWZRNC) Bluetooth module is connected to the microprocessor.
- A smartphone app will be used to control the RC car, connecting to it over Bluetooth. This will serve as a prototyping tool until we merge with the VR gear; we will use the VR controllers in the final stage.
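The H-bridge/PWM control above amounts to mapping a signed throttle command onto two direction inputs and a duty cycle. A hedged Python sketch of that mapping (pin semantics and the 8-bit duty range are assumptions; the real code would run on the ATmega328):

```python
def throttle_to_pwm(throttle):
    """Map a throttle command in [-1.0, 1.0] to H-bridge inputs.
    Returns (forward_pin, reverse_pin, duty_0_255)."""
    throttle = max(-1.0, min(1.0, throttle))   # clamp the command
    duty = round(abs(throttle) * 255)          # 8-bit PWM duty cycle
    return (throttle > 0, throttle < 0, duty)

assert throttle_to_pwm(0.5) == (True, False, 128)   # half speed forward
assert throttle_to_pwm(-1.0) == (False, True, 255)  # full speed reverse
assert throttle_to_pwm(0.0) == (False, False, 0)    # coast
```

Driving only one direction input at a time is essential: enabling both sides of an H-bridge simultaneously shorts the supply through the bridge.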
Video Transmission using Wi-Fi Subsystem
- We will use [Raspberry Pi 3 Model A+](https://www.raspberrypi.org/products/raspberry-pi-3-model-a-plus/) to receive the image data from the camera and send the image data to VR gear through WiFi. This Raspberry Pi has Broadcom VideoCore IV MP2 400 MHz GPU and 2.4GHz and 5GHz IEEE 802.11ac Wi-Fi, which can help to reduce the delay of wireless transmission of video.
- We will use [USB Camera with a 180 degree angle fisheye lens](https://www.amazon.com/dp/B00LQ854AG/) for the camera module to provide a 180 degree view. Since we are using a fisheye lens, the raw image data is curvilinear. The curvilinear image will be converted to the rectilinear image using [Fisheye camera model OpenCV](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html).
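To illustrate why this correction is needed: under the common equidistant fisheye model, image radius grows as f·θ, whereas a rectilinear image has radius f·tan(θ), so the fisheye compresses the periphery. A small Python sketch of the radial relationship (the actual conversion would use OpenCV's calibrated fisheye model as linked above):

```python
import math

def fisheye_radius(r_rect, f):
    """Radial image distance under an equidistant fisheye (r = f * theta)
    for a point that a rectilinear lens would place at radius r_rect."""
    theta = math.atan2(r_rect, f)   # angle of the ray from the optical axis
    return f * theta

# On the optical axis the two models agree; toward the edge the fisheye
# compresses: at 45 degrees off-axis, f*pi/4 ~ 0.785*f < f.
assert fisheye_radius(0.0, 100.0) == 0.0
assert fisheye_radius(100.0, 100.0) < 100.0
```

Inverting this per-pixel mapping (plus the lens's calibrated distortion coefficients) is exactly what the OpenCV fisheye remap performs.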
VR Headgear Facing Direction Subsystem
- We will use [Oculus](https://www.oculus.com/?locale=en_US) VR gear for this project. We are most likely to borrow a VR gear from CS498 or [UGL](https://www.library.illinois.edu/mc/lt/emergingtech/). If not, we might buy the used product to work with. We will program with Oculus [SDK](https://developer.oculus.com/).
- VR gear will receive the image data from Raspberry Pi through Wi-Fi transmission and display the video in real-time to the user. VR gear will continuously receive the entire 180 degree view image data from Raspberry Pi. The facing direction of the user will determine which section of the 180 degree view to display to the user.
- We will use a 6 V Li battery as the power source for everything. The H-bridge and PWM circuits and the DC motor will use 6 V directly; a 5 V voltage regulator will supply the remaining parts.
- VR Headgear will be connected to PC to turn on.
**Criterion for Success:**
- Display high-resolution camera input in real time with less than 1 s of delay.
- The real-time image from the camera can be displayed through the VR headgear.
- The camera provides a 180-degree view that pans with user input.
- The direction the VR headgear is facing serves as the user input to rotate the displayed section of the 180-degree view.
|8||WeightB0ard - An Internet connected, weight-sensitive ingredient tray
|Dhruv Mathur||Arne Fliflet||design_document1.pdf
|Thomas Driscoll, August Gress, Kyle Patel - [netid: tfd2][netid: augustg2][netid: kylep2]
In any environment that involves cooking or food preparation, knowing the amount of ingredients on hand is of the utmost importance. This ranges from large-scale restaurants with massive quantities of any given food to athletes doing meal prep, often down to the gram. Between these two extremes sits the average consumer, who relies on unreliable memory and insatiable hunger when shopping instead of on their objective needs.
Solution: An internet connected, weight-sensitive kitchen cabinet/tray that pings a grocery list app. For items such as rice, sugar, flour, protein powder, creatine, etc., a scale could measure the amount at home. If it falls below an ingredient-appropriate threshold, a microcontroller will send an update to a user's phone. Simply checking the app once in the store, or while placing a large order, allows the user to purchase the correct amount of food. It will have 7 separately sized sensors that accurately measure ingredient amounts placed on top of it, which will be a proof-of-concept to show our idea's scalability.
- We will use an ATMEGA328P micro-controller as the main interface for the tray, and an ESP8266 WiFi adapter to connect to the internet. There will also be a power jack for plugging the tray into an electrical socket. We will code using the Arduino IDE. The PCB substrate will likely be basic FR-4 and the board itself will most likely consist of one layer.
- The tray itself would contain 7 separate pressure sensitive plates that can measure the weight of whatever is placed on top of it. The division would be as follows: 1 large sensor, 2 medium sensors, and 4 small sensors. Each plate will require a power source; our goal is to design a circuit such that we can use one input source (i.e. a wall socket) to power the entire tray (including micro-controller).
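The core decision the microcontroller makes is a per-ingredient threshold check. A minimal Python sketch of that logic (thresholds, ingredient names, and the function name are illustrative assumptions; on the tray this would run in the Arduino firmware before pinging the app):

```python
# Hypothetical per-ingredient minimums in grams
THRESHOLDS_G = {"rice": 500, "flour": 300, "protein powder": 200}

def low_items(readings_g):
    """Given {ingredient: measured grams}, return the ingredients whose
    weight fell below their threshold, sorted for a stable grocery list."""
    return sorted(name for name, grams in readings_g.items()
                  if grams < THRESHOLDS_G.get(name, 0))

# Rice has dipped below its 500 g threshold; flour is still plentiful
assert low_items({"rice": 450, "flour": 900}) == ["rice"]
```

Only the flagged list needs to be pushed over Wi-Fi, which keeps the traffic from the ESP8266 small and the GET-only Flask API simple.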
Mobile App (software)
- Our app would be very simple and mostly serve to complement the WeightB0ard. The front-end would be written in React Native to allow cross-platform support while the back-end would be a very simple Flask API that performs GET operations. In the interest of robust data collection, we would have a server hosting a MySQL database that the API pings. The server itself would be hosted on any free platform like AWS.
Criterion for success :
- Able to measure the weight of ingredients placed on pressure sensitive pads
- Weight (in grams) is reported to a mobile application
- Weight sensitive pads are precise enough to accurately measure weight differences on the smaller pads (as well as the larger ones)
- Board will send information about the ingredients on it at any time and works over the Internet (not just a local network)
- Board is compact enough to comfortably fit within a cupboard or half a kitchen cabinet
- The tray is resilient to food spillage, such that no electronics are seriously damaged if a spill occurs.
|9||Automatic Weeding Arm
|Johan Mufuta||Joohyung Kim||design_document1.pdf
|Sowji Akshintala [akshint2]
Sophie Liu [yiqiaol3]
Shuyue (Lucia) Zhang [shuyuez2]
For generations, humans have used manual labor to curb aggressive weeds, which leach nutrients and resources from staple crops. As agricultural demands and farm sizes grew, the industry started to rely heavily on chemical herbicides and pesticides in order to ensure maximum yields. Weed control through herbicide is recently under dramatic controversy for its carcinogenic potential and environmental-contamination concerns. Currently, farms use about 44 gallons of herbicide per acre in order to kill unwanted weeds. This practice comes with risks. Runoff from the herbicide sprays threatens the natural ecosystem, hurting not only native plant species but poisoning some animals as well. Herbicide use has also affected human lives, as research has linked an increase in cancer with the use of glyphosate, a popular weed killer used in the industry. In terms of economics, chemical crop control has been slowly bleeding farmers dry. Agrochemical companies have been selling genetically modified seeds that have herbicides in them, but this only boosts their herbicide sales over time, as weeds have evolved into “superweeds” which require higher and stronger doses of chemicals to kill. This ballooning effect can be clearly noted in the soy industry, where, as of 2008, 92% of soy plants had become glyphosate-resistant, requiring the industry to begin using genetically modified crops with herbicide and liquid herbicide in tandem. This is all while agrochemical companies have quietly quintupled their prices for both genetically modified seeds and chemical herbicide within the last two decades. Ethically, herbicide use must be phased out, but regressing to the use of human labor is not a realistic solution. Modern agriculture needs a way to streamline the repetitive act of finding and destroying specific plants while keeping the desired crops safe and healthy. Naturally, robotics can provide an answer which is both ethical and cost-effective in the long run.
We propose a solution: an automatic robotic weeding arm which can identify post-emergent weeds and cut them with an attached blunted shear. Automated weeders do exist in the industry, but they still rely on herbicide use. Since there are existing agricultural robots on the market that can navigate the difficult terrain of crop fields, such as the TerraSentia, we are not focusing on the robotic base. Rather, we see the arm as a potential extension of a robotic base, allowing us to target the specific problem of chemical-free weed removal. Our arm focuses on the identification of various seedling species and the automation of the weeding process.
The arm is fitted with a camera that not only detects different seedlings through neural-network training but also enables real-time video monitoring from a connected computer screen. Once the arm detects an unwanted plant, it can maneuver and cut the weed with its motorized shear. We decided to cut instead of pulling the weeds because cutting requires less force and is more efficient when treating plants. To accomplish this, the arm will have 4 motorized joints, each with 180 degrees of rotation, allowing the arm to trim weeds on either side. The flexibility of the arm allows it to reach difficult plants effectively. Due to the arm's trainability, it can also be easily repurposed to perform many different agricultural functions. For example, once the arm can learn from various plant databases, it could easily be used to pick fruit or trim foliage just by swapping the shear attachment for other applicable tools.
#Hardware and mechanical components
- Camera: An Arducam 5MP OV5647 Raspberry Pi camera module with motorized focus is connected to Raspberry Pi series board for image detection and real-time video monitoring.
- Motor: Four MG995 servo motors (4.8-7v) with stall torque 12-13kg/cm are used at the joints. The controlled rotation is 180 degrees (90 on each dimension), providing enough flexibility for the joints.
- Skeleton and mechanical support: The skeleton and dimensions of the arm are designed on AutoCAD and will be laser cut using 0.25’’ thick acrylic panel. Appropriate screws will be purchased and installed on the arm to provide the necessary support.
- Battery: First plan: An LP-E8 Li-ion rechargeable battery will power the electrical system at 7.2 V with 13 Wh of capacity. The battery is planned to be taken from a Canon 550D camera to save cost. We plan to solder the battery holder ourselves to hook up the electrical system. Second plan: 4 AA batteries connected in series will power the system at 6 V. That battery holder will be purchased online.
- Raspberry Pi Board: Raspberry Pi 3B+ will control the camera module, communicate with the microcontroller and allow us to record test runs and review them at a later time.
- Ultrasound: We are going to use an HC-SR04 ultrasound module. This module can be controlled through Raspberry Pi to detect the distance of the arm tip to the ground, as part of our robot’s weeding mechanism.
- LED: We are going to use an LED light bulb (no particular restriction) to indicate the status of the arm. For example, when weeds are not detected, the LED is green. When weeds are detected, the LED is red.
- Switch: We will include a switch to turn on the arm.
- Potentiometers: In order to return the arm to its home position, we plan to include 4 potentiometers. Plan 1 is to mount them on the PCB; plan 2 is to mount them on the microcontroller. We have found both approaches feasible and common and will decide based on price and other factors, though we are leaning toward plan 2.
- Control Circuit: A microcontroller will be used to control the 4 motors in the joints and the automatic shears. We could possibly implement our homing mechanism on the microcontroller using the potentiometers.
- PCB: A PCB will be used to host our hardware. Elements on the PCB will include power (battery), the LED, a mounted Raspberry Pi, the microcontroller, and the switch.
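As a rough illustration of the HC-SR04 step in the weeding mechanism: the module reports distance as an echo pulse width, so the Raspberry Pi only needs a one-line conversion. A minimal sketch, assuming ~343 m/s speed of sound at room temperature; `echo_to_cm` is a hypothetical helper, not project code:

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # ~343 m/s at room temperature

def echo_to_cm(echo_seconds: float) -> float:
    """Convert an HC-SR04 echo pulse width to distance in cm.

    The pulse covers the round trip (out and back), so the
    one-way distance is half the distance sound traveled.
    """
    return echo_seconds * SPEED_OF_SOUND_CM_PER_S / 2
```

For example, a 583 microsecond echo corresponds to roughly 10 cm between the arm tip and the ground.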
Our software subsystem aims to achieve weed detection and real-time monitoring. In order to detect and classify weeds among other crop seedlings, we plan to build a neural network model. We will start with a simple model and increase its complexity to achieve higher prediction accuracy. The training dataset will be based mainly on the V2 Plant Seedlings Dataset from Kaggle, which contains images of crop and weed seedlings at different growth stages. We will expand the dataset by adding images taken by the camera module. In addition, by connecting the Raspberry Pi board to a computer, we plan to enable real-time monitoring on the computer screen to better evaluate robot arm performance.
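Once the model produces labels for a held-out set, the >80% accuracy criterion can be checked with a trivial helper. A hedged sketch; the function names and label strings are illustrative and not part of the actual training pipeline:

```python
def accuracy(predicted, actual):
    """Fraction of seedling images classified correctly."""
    if len(predicted) != len(actual):
        raise ValueError("label lists must be the same length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

def meets_criterion(predicted, actual, threshold=0.80):
    """True only if the model strictly exceeds the 80% target."""
    return accuracy(predicted, actual) > threshold
```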
#Criteria for Success
- Detect and classify weed from other crop seedlings with high accuracy (>80%).
- Achieve real-time monitoring by obtaining input video through the camera module.
- Motorized joints move effectively to reach the seedling to cut.
- Motorized shears can safely and effectively snip unwanted plants.
- Achieve effective and efficient hardware and software communication.
 Emily Dixon, “Common weed killer glyphosate increases cancer risk by 41%, study says”, CNN, Feb 14 2019, https://www.cnn.com
 Van Bruggen A.H.C., He M.M., Shin K., Mai V., Jeong K.C.,Finckh M.R., Morris J.G.Jr., “Environmental and health effects of the herbicide glyphosate”, Science of The Total Environment, V616-617, March 2018, pp 255-268, https://doi.org/10.1016/j.scitotenv.2017.10.309
 Pesticide Stewardship Calibration Formula
 Brooke Borel, “Weeds are winning the war against herbicide resistance”, Scientific American Sustainability, July 18 2018, https://www.scientificamerican.com
 Big Ag’s Dirty Little Secret
 Historic Fertilizer, Seed, and Chemical Costs and Projections for 2019
 “The autonomous robot weeder from Ecorobotix”, Ecorobotix, https://www.ecorobotix.com/en/autonomous-robot-weeder/
 EarthSense, Inc. https://www.earthsense.co
 V2 Plant Seedlings Dataset. https://www.kaggle.com/vbookshelf/v2-plant-seedlings-dataset#105.png
 Robotic Arm Position Control http://www.robotoid.com/appnotes/electronics-arm-control-circuitry.html
|10||Plug and Play Modular Keyboard
|Shaoyu Meng||Jonathon Schuh||design_document1.pdf
|Christian Held ; Fangqi Han ; Daniel Chen
When people think of keyboards, they think of large HP keyboards with a number pad and feet that kick up in the back. Full keyboards are useful in situations where the keyboard does not need to move, and for right-handed people. On the go, though, they can be bulky and too large for travel. Additionally, the number pad being on the right can be annoying for left-handed people. There are keyboards that go smaller (similar to the size of a laptop keyboard, which has its own inconveniences), but they sacrifice the full keyboard functionality that all those extra keys provide. For all these reasons, one should be able to pick and choose the right number of keys and orientation to maximize their workflow and work situations.
Solution Overview -
Our solution starts with a relatively portable main keyboard, with the main keys like the letters and numbers called the tenkeyless or 60% keyboard layout, that has a microcontroller and can connect to the computer. Then, there are other modules that can be connected to this main keyboard that will add functionality to the main keyboard. One module would be the number pad, which is necessary for some and needlessly bulky for others. Another would be the function keys (f1, f2…) that have this same tradeoff. Then we can have modules that provide added customization such as a volume knob or other I/O not normally found on a keyboard.
The main workflow for this project starts with the keys. They give a signal to either the microcontroller or I/O expanders (which then feed signals into the microcontroller as well). The microcontroller identifies what key(s) have been pressed and sends serial commands back through the USB to the main computer. The code for the microcontroller should allow the user to change what each key does, so they can have maximum capabilities even in the smallest physical setting. Each module should have a case in order to protect it and the user from one another.
Solution Components –
Subsystem 1 Key wiring: the mechanical switches of the keyboard, and the wiring/PCB between them that will be connected to the main controller or I/O expander. Examples of switches include Cherry or Gateron switches. As brought up in our comments, these mechanical switches generally have a debounce period of about 5 ms, and we think this is short enough that we will not have to deal with this mechanical issue on the software side.
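If the 5 ms debounce ever does need handling in firmware, a time-window filter is one common approach: accept a switch transition only if it arrives a full window after the last accepted one. A sketch of the idea, not the team's firmware; the event format is hypothetical:

```python
DEBOUNCE_S = 0.005  # ~5 ms, typical for Cherry/Gateron switches

def debounce(events, window=DEBOUNCE_S):
    """Filter a list of (timestamp_s, state) switch events.

    An event is accepted only if it arrives at least `window`
    seconds after the previously accepted event; contact chatter
    inside the window is discarded.
    """
    accepted = []
    for t, state in events:
        if not accepted or t - accepted[-1][0] >= window:
            accepted.append((t, state))
    return accepted
```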
Subsystem 2 Microcontroller: interprets key signals from keys or I/O expander into USB signals to be sent to the computer. One microcontroller we are heavily considering is the Teensy 2.0 controller.
Subsystem 3 I/O expanders: connect the outer boards to the main "tenkeyless" board. They are connected with 3.5 mm jack plugs to communicate. These 3.5 mm plugs need to be TRRS cables, and will carry GND, +5 V, and the I2C serial data and serial clock between components with the I/O expanders. (Even though the 3.5 mm cable can carry analog signals, the data will be digital.) We are also interested in more compact alternatives for our connection, but the 3.5 mm plug seems to provide everything we need to send between components and is hardware that we already understand and can implement straightforwardly.
Subsystem 4 Firmware: code for the microcontroller that will interpret the key signals and also control things like caps lock and the function layer (pressing fn on a laptop, which we plan to implement).
Subsystem 5 Programmability: Allows user to modify their keyboard to produce what characters or potentially commands, such as changing the fn layers, having programmable keys, and other features that would help with user productivity and customization, which is one of the main pillars for our design of a modular keyboard.
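The fn-layer and remapping behavior described in the firmware and programmability subsystems boils down to a layered lookup: if fn is held and the key has an fn binding, emit that; otherwise fall through to the base layer. A minimal sketch with hypothetical keymaps (real firmware would store scan codes, not strings):

```python
# Hypothetical keymaps for illustration only.
BASE_LAYER = {"1": "1", "2": "2", "q": "q"}
FN_LAYER = {"1": "F1", "2": "F2"}  # fn+1 -> F1, as on a laptop

def resolve(key, fn_held):
    """Map a physical key to its output, honoring the fn layer.

    Keys without an fn binding fall through to the base layer,
    so fn+q still types "q".
    """
    if fn_held and key in FN_LAYER:
        return FN_LAYER[key]
    return BASE_LAYER[key]
```

User programmability then amounts to letting the host rewrite these dictionaries.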
Subsystem 6 Structural components: casing that protects the user and keyboard from one another. We plan to use 3D printing and perhaps aluminum plates as simple shells for our PCB and controllers. We would also like to explore alternatives.
Criterion for Success -
A working main keyboard and one module that can be connected to add functionality, with the module also disconnectable without crashing everything. While a keyboard with a clean case, nice cables, and new keycaps would be nice, adding functionality and having an interesting, useful way to connect these separate components is what we want to focus on most.
|11||Emergency Vitals Monitor
|Madison Hedlund||Jing Jiang||design_document1.pdf
|Zihong Zeng, zihongz2, Songtao He, songtao2, Brandon Noble, bnoble2
Problem: Administering first aid in disaster situations is an extremely stressful task that is prone to error if not done by a very well-trained professional. Under circumstances with multiple injuries (gunshot, heat/fire-related, earthquake, etc.) and limited bystanders available, treatment of these injuries is typically left to emergency response personnel.
Solution: A uniquely colored automated blood pressure cuff with an extending rod for taking temperature, which doubles as the alignment guide for the pressure cuff's microphone. This simplifies use as much as possible, so that just a few people can attach cuffs to as large a group as necessary in as little time as possible. After activation, the blood pressure cuff will automatically take readings of blood pressure, heart rate, and temperature every thirty seconds. The blood pressure and heart rate can be used to determine an individual's shock index, which would set up a triage system automatically. This data is collated on a device for the user so as to see the ranking of each cuff in use in terms of who needs attention first, and gives first-aid advice on how to treat the conditions detected.
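The triage ranking reduces to computing shock index (heart rate divided by systolic blood pressure) per cuff and sorting worst-first. A minimal sketch, assuming a hypothetical mapping of cuff IDs to readings; the ~0.9 threshold in the comment is a commonly cited rule of thumb, not a project specification:

```python
def shock_index(heart_rate, systolic_bp):
    """Shock index = HR / systolic BP; values above ~0.9
    are commonly treated as a warning sign of shock."""
    return heart_rate / systolic_bp

def triage_order(cuffs):
    """Rank cuff readings worst-first by shock index.

    `cuffs` maps cuff id -> (heart_rate, systolic_bp).
    """
    return sorted(cuffs, key=lambda c: shock_index(*cuffs[c]),
                  reverse=True)
```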
Subsystem1: Automated Blood Pressure Cuff - A modified digital blood pressure cuff. Uses modern blood pressure cuff with colored strip, and rod with IC temperature sensor at the tip. By extending it along the inside of the arm, it would also align the microphone used for the digital cuff, removing the need to ensure it is aligned by the user. Rapidly inflates pressure to 180 mmHg, drops off to room pressure, listening for Korotkoff sounds to determine systolic and diastolic blood pressure, as well as heart rate.
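The oscillometric/auscultatory logic above can be summarized simply: during deflation from 180 mmHg, systolic pressure is the cuff pressure at the first Korotkoff sound and diastolic is the pressure at the last. A sketch over already-thresholded sound samples; the data format is hypothetical and real signal processing (filtering, sound detection) is omitted:

```python
def read_pressures(samples):
    """Estimate systolic/diastolic BP from deflation samples.

    `samples` is a list of (cuff_pressure_mmHg, sound_detected)
    taken while pressure falls from ~180 mmHg. Systolic is the
    pressure at the first Korotkoff sound, diastolic at the last.
    Returns (None, None) if no sounds were detected.
    """
    sounding = [p for p, s in samples if s]
    if not sounding:
        return None, None
    return sounding[0], sounding[-1]
```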
Subsystem2: IoT communication - For prototyping, we'll send information over wifi to be processed online, and then display the ranking of active cuffs in terms of worst vitals at the top.
Subsystem3: User Device - For the finished product, have the cuff information collated on a screen that ranks the active cuffs in terms of worst vitals at the top.
Subsystem4: Local wireless communication - For the finished product, remove the wifi component of the blood pressure cuff and replace it with an alternative frequency communication to communicate with the user device without the presence of wifi or cellular reception.
Criterion for Success:
Basic implementation: Automated blood pressure cuff with blood pressure, heart rate, and temperature readings processed using IoT. Handle multiple cuffs in use, and establish a priority system based on condition (shock index, rate of vitals dropping).
Complete implementation: The above with IoT functionality replaced with local wireless capability, and a device to display that data for the user.
|12||ECE OpenLab Automated Equipment Checkout System
|Dhruv Mathur||Jonathon Schuh||design_document1.pdf
Aditya Bawankule, David Hickox, Alex Ortwig, Abby Starr
(adityab2, dhickox2, aortwig2, amstarr2)
Checking out equipment in the ECE OpenLab takes a long time, and it is something that the OpenLab monitors have to do fairly frequently in our job. There are several steps to checking out equipment, including taking down the i-card information, getting the kit, and making sure all of the components are in the kit. One problem we experience frequently is that the kits often come back disorganized, with cables missing. Additionally, this takes a lot of time away from our other lab monitor duties, such as working on projects, due to the distraction and the time-consuming process of checking out a lab kit.
We would like to make a system that can handle equipment checkout, log data, and keep track of the parts in each box. We could have a locker-style system, with one prox card scanner to check out the equipment, and in each locker a scale to tell whether the returned weight matches the expected amount, as well as an RFID- or Bluetooth-based identifier per individual kit. We could also have logging and a web interface so that the lab monitors can keep track of which kits are checked out, and whether there are any issues with any of the kits.
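The weight check described above is just a tolerance comparison against the kit's expected mass. A sketch with hypothetical names and a placeholder tolerance; as noted in the success criteria, anything outside tolerance should route to a lab monitor rather than trigger an automatic accusation:

```python
def kit_complete(measured_g, expected_g, tolerance_g=5.0):
    """True if a returned kit's weight is within tolerance.

    A generous tolerance keeps false flags rare; out-of-tolerance
    kits are referred to a lab monitor for manual inspection.
    """
    return abs(measured_g - expected_g) <= tolerance_g
```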
- Locker Design & Lock
- Physical Locker housing
- Magnetic lock, is locked when power is off
- Solenoid lock
- Must be connected to the network to communicate with web interface
- NXP 1064 with custom pcb and ethernet connection
- Locker Controls
- Well labeled button interface (either physical or touch) with lcd for info
- RFID System
- Prox card scanner
- Equipment security
- Weight scales at the bottom of each locker ensure that materials are returned correctly assuming this is not cost prohibitive for the material needing to be stored
- Connected to AC Power 24/7 (people should be able to get lab kits at any time of day)
- AC-DC power supply purchased to ensure safety and reliability. DC-DC conversion for various power rails (student designed).
- Web interface
- Provides a system for the lab monitors to monitor and check on the state of the system remotely; also alerts in case of fault states.
- Potential lab user facing side similar to the 391 big brother
- Potentially Microsoft Power BI visualization
- Locker touch screen interface
- User select which locker to open
- Prevents selection of empty lockers
- Displays what contents are in each locker
- Allow users to submit a report about a kit that has broken/missing parts
- Backend data storage
- MySQL database to store users of openlab and prox card information
Criterion for Success:
- Constant access to equipment, provided that ECEB power is running
- Self explanatory locker interface and powerful web interface for administration
- Semi-accurate kit verification system. Ethics regarding potential false accusations are of a high concern and will mean that lab monitor intervention is needed for any issues.
- User input system on locker functions without extreme unnecessary complications
|13||Wearable device for Amusement Parks
|Johan Mufuta||Joohyung Kim||design_document1.pdf
|AMUSEMENT PARK WEARABLE DEVICE
Currently, amusement parks have very inefficient systems where people end up spending most of their time waiting in lines. Further, it is easy for children to get lost, and there is no easy way of finding them. Also, since amusement parks are crowded, it's annoying to carry things like wallets and locker keys around with you all the time, and it's very likely that you may lose them.
Create a wearable device which can be used for various functions such as payments, lockers, and food. The wearable also contains a GPS tracking system, which will be valuable for parents trying to find children in large amusement parks. The parents can set a distance parameter on the wearable that alerts them when their child or loved one is more than a certain distance away from them. An LCD screen can be used as a display for information such as wait times, money remaining, and the location of people.
Optional Feature (Based on time ): Ability to call or send messages to each other. Many times in amusement parks, friends tend to get away from each other and it's annoying to carry your phone around, if the wearable gave you the option to send messages, make calls or use as a walkie talkie then it would be easy to stay in touch with each other in the large parks.
Microcontroller currently preferred : AdaFruit Flora
Possible candidates based on cost and compatibility with other modules (such as WiFi):
The AdaFruit Flora is an Arduino-compatible microcontroller that is commonly used for wearable devices. It has built-in GPS functionality and hence is currently our preferred choice. It is also a very low-power microcontroller, which is essential for a wearable device.
The AdaFruit Flora comes with a GPS module that attaches to it, so we plan on using that for GPS.
The NEO-6M GPS module is compatible with the LilyPad Arduino as an alternative.
For the GPS, we will have a different parent and child watch. So, the parent watches are always tracking their own location as well as all the child watches. If the distance between the parent watch and any child watch linked to it is greater than a certain amount, we will send out an alert.
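The parent-child distance check above is a great-circle calculation between two GPS fixes, compared against the configured limit. A sketch using the standard haversine formula; function names and the alert wrapper are illustrative:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters
    (haversine formula, mean Earth radius)."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def child_alert(parent_fix, child_fix, limit_m):
    """True when a child watch has strayed past the parent's limit."""
    return distance_m(*parent_fix, *child_fix) > limit_m
```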
RFID Tag :
RFID Transmitter (CTRA1816F)
This will include the RFID transmitter connected to the microcontroller above and attached to the watch. It will be used for payments, unlocking lockers, etc.
RFID Reader :
RFID reader (RC522)
This will consist of an RFID reader (RC522) connected to an Arduino. It will be used to mimic the devices in amusement parks used for payments, unlocking, etc.
LCD display on watch:
This display acts as the interface to see wait times, get alerts on when to go to a particular attraction and track your friends and family.
Optional Components if time Permits
*if there is time to enable the walkie talkie feature the messages will be read through the lcd screen
A WiFi module connected to the microcontroller to send and receive data to mobile app for saving user profiles, locking, unlocking, adding chores, etc.
*Optional - Could be used for WIFI calling as well if time permits
A compatible wifi module with AdaFruit Flora: ESP32 WiFi-BT-BLE MCU Module / ESP-WROOM-32 (https://www.adafruit.com/product/3320)
A compatible wifi module with lilypad arduino : ESP8266 Esp-12E wifi module
*Optional Walkie Talkie Module if time permits:
SA818 Walkie talkie module with RDA1846S chip
It has a 5 km range, which can be used for communication, since people usually do not like carrying their phones around amusement parks (especially water parks), and most rides ask riders to remove their phones anyway.
A mobile application that allows the initial set up for being able to use the wearable device inside the theme park. At the start, the user can enter their credit card info in the app and load a certain amount of credits from their credit card into the application. That amount of money will get stored in the wearable device for the user to use, and will be decremented as the user pays, using a local counter on the wearable.
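The "local counter on the wearable" described above amounts to a small wallet that refuses overdrafts; a sketch in integer cents to avoid floating-point rounding (class and method names are hypothetical):

```python
class WearableWallet:
    """Local credit counter decremented at each RFID payment.

    Amounts are kept in integer cents to avoid float rounding.
    """

    def __init__(self, cents):
        self.cents = cents

    def pay(self, cost_cents):
        """Deduct and return True on success; refuse overdrafts."""
        if cost_cents > self.cents:
            return False
        self.cents -= cost_cents
        return True
```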
In the end, the app will show an entire summary of the user’s trip: rides done, wait times experienced, bills for food and merchandise purchased. Also, all the images clicked of the user inside the theme park will be available for the user to see on the application.
Criterion for Success
Locate people wearing the band precisely and display that location on the wearable accurately using GPS.
Wearable properly interacts with the RFID readers for various purposes and the LCD screen displays the appropriate messages.
Software application safely allows credit card payments for loading credits.
The WiFi module can accurately transfer information from the mobile app to the watch and vice versa, for example, loaded credits, photos etc.
|14||Button Remapping for GameCube Games such as Super Smash Bros Melee
|Evan Widloski||Jonathon Schuh||design_document1.pdf
|Michael Qian, Yeda Wu, Srikanth Yaganti, mqian20, yedawu2, syagan2
Problem: In fighting games, it is usually beneficial to remap certain buttons to perform different actions for ease of doing combos. For example, a player might want to remap the X button on their controller from "jump" to "attack". This is present in the game settings of many popular fighting games except for Super Smash Bros Melee for the Nintendo GameCube.
Solution: Create an adapter that sits between the GameCube and the GameCube controller. The controller will plug into the adapter, which plugs into the GameCube. Users will have an interface where they can choose how to remap their buttons. The adapter will then take in the button-press signals from the controller and translate them into button presses based on the remapping. This hardware will also allow button remapping for any other GameCube game.
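At its core the adapter's translation step is a table lookup: each physical button either maps to another button's signal or passes through unchanged. A minimal sketch of that logic (the remap table format is hypothetical; the real adapter works on the GameCube's serial protocol, not strings):

```python
def remap_press(button, remap):
    """Translate a pressed button through the user's remap table.

    `remap` maps a physical button to the button whose signal
    should be sent instead; unmapped buttons pass through.
    """
    return remap.get(button, button)
```

For example, with `{"X": "A"}` a press of X is reported to the console as A, turning X from "jump" into "attack" in Melee's default scheme.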
Subsystem1: Microcontroller
This subsystem will perform the data reconfiguration of the incoming signal from the controller. It needs to remap certain input controls to different output controls while minimizing delay and maintaining compatibility with the GameCube console. Currently, we are looking at the ATmega328.
Subsystem2: Phone App
This subsystem will be the control unit for configuring/remapping buttons on the GameCube controller. It will have a user-friendly UI that will allow users to pick and choose how to remap buttons.
Subsystem3: BlueTooth receiver/Transmitter
This subsystem will allow the phone app and the microcontroller to communicate. When the user submits a new configuration on the app, the Bluetooth receiver will notify the microcontroller how to remap the buttons. We are considering using the HC-05 Arduino wireless Bluetooth receiver: https://www.amazon.com/LeaningTech-HC-05-Module-Pass-Through-Communication/dp/B00INWZRNC
Criterion for success:
Successful sending of button configuration from phone app to adapter.
Controller to console remapping without noticeable delay
|15||SMART AUTOMATIC PASTA / RICE COOKER
Tae Kyung Lee
|Jonathan Hoff||Joohyung Kim||design_document1.pdf
|Teammates: Anusha Kandula (kandula2), Gautam Putcha (gputcha2), Tae Kyung Lee (tlee82)
When we return home from work or school and want to eat as soon as possible, starting a rice cooker and waiting for another 30-40 minutes is inconvenient. In general, the time taken to cook the base/staple of meals (ex: rice, pasta, noodles) is long compared to the remaining steps in a simple cooking process. For example, if we have the pasta cooked, adding pasta sauce to it takes a negligible amount of time. Using a high-pressure rice cooker still takes 15-20 minutes and is more expensive than a simple rice cooker. An effective and unique solution to this problem has not been found yet, even though a lot of people face this issue!
A system that is a fully automatic smart pasta/rice-cooking system and would be an extension on existing very basic rice cookers. Our system would be a module that is connected to a water supply as well as a rice reservoir. This rice reservoir would simply be filled when a bag of rice/pasta is bought from the grocery store. The user, while still at school/work could use our mobile/web application to prepare the desired amount of rice. For example, if the user would like to cook 2 cups of rice, the correct amount of rice would be released from the reservoir into the rice cooker, with the correct associated volume of water, and the cooker would be started so that the rice would be ready for when the user arrives home.
Since a lot of people (especially college students) have unpredictable schedules, it can often be difficult to plan when we may come home or if we may have already eaten by the time we come home. With this system, you can start the cooking process from anywhere, with no preparation beforehand.
We plan to address the safety concerns with the following precautions so that our users are not afraid that their houses might catch on fire:
1) The addition of a smoke detector, which on detection will send a message to the user's mobile phone and immediately cut off power to the entire device.
2) An abnormal rise in heat will also send a message to the user and cut off power to all components.
3) Additional in-built surge protection to safeguard against a potential voltage spike.
There are very few “smart” rice-cookers on the market today, but none with the abilities that we are proposing. An interesting device that we found was the Xiaomi Mi Induction Pressure Rice-Cooker. This has the ability to remotely start the cooking of rice through an application but has one issue: it requires the user to have already put in the rice and water, basically rendering the system as a simple on/off smart switch. Our system on the other hand, would not require the user to prepare for future cooking at all. Since the cooker is already connected to both the rice/pasta and water sources, a user request with the number of cups would begin the cooking process at any time.
Sensor Subsystem: A level sensor will be used to find the accurate measurement of rice/water in the valve and will also be used to warn the user when the container needs a refill. A temperature sensor to detect any abnormal rise in the temperature and potential smoke for any fire hazards. A smoke detector will also be added for safety to cut off power to the entire system in case of a hazard.
Processing Subsystem: A PCB that acts as the processor, placed in the base station, which receives measurement data from the mobile application. The sequence of events is executed in order by the PCB.
Power Subsystem: Base station will be plugged into a standard wall socket plug point.
Communications Subsystem & User Interface: Wireless internet communication using a WiFi module along with a mobile/web application to facilitate remote cooking. The Wifi module will also be integrated into our PCB.
Reservoir controller Subsystem:
This system will aid in dispensing the necessary amount of rice and water into the cooker to prepare for cooking. The two main components of this subsystem will be:
1) A reservoir of rice whose release is governed by a motor-controlled auger. The number of cups as requested by the user is measured by the change in volume as indicated by a level sensor.
2) A reservoir of water whose release is governed by a solenoid valve. The number of cups as requested by the user is measured by the change in volume as indicated by a level sensor.
The behavior of both reservoir releases is controlled by an Arduino. This subsystem is attached to a tripod-like structure above the cooker that holds the reservoirs in place.
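The dispensing math behind the two reservoir releases is a simple volume calculation from the user's requested cup count. A sketch with assumed constants: the 1.25:1 water-to-rice ratio is a placeholder that varies by rice type, and the function name is hypothetical:

```python
CUP_ML = 236.6          # US cup in milliliters
WATER_PER_RICE = 1.25   # assumed ratio; varies by rice/pasta type

def dispense_amounts(cups_rice):
    """Volumes (mL) of rice and water to release for a request."""
    rice_ml = cups_rice * CUP_ML
    water_ml = rice_ml * WATER_PER_RICE
    return rice_ml, water_ml
```

The level sensors then confirm the released volumes match these targets.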
Cook & Warm Switch Subsystem: A single button on the cooker which is turned on (to begin cooking) using a simple DC servo motor.
Hinge system for opening and closing the lid: The Arduino will control the rotation of a stepper motor attached to the lid through an A4988 stepper motor driver carrier, which controls the speed and angle of the motor's turn.
Criterion for Success:
The goal of this project is to have a simple app interface from which a user can begin cooking rice remotely with no preparation whatsoever. The Smart Rice-Cooker Extension System will then take off the lid and dispense the exact amount of rice and water after the program is run. It then closes the lid and turns on the rice cooker for a quick meal ready to eat when you are back home.
A basic sketch of what the final product will look like and a picture of the rice cooker that we want to use:
|16||Single-Handed Video Game Controller
|Megan Roller||Jing Jiang||design_document5.pdf
|Problem: Many people are unable to use a standard video game controller, for example those with the use of only one hand. Consoles like the Nintendo Wii require fewer buttons and are usable with one hand. A similar solution for other platforms is difficult to find.
Solution Overview: We propose a one-handed adaptation of a standard video game controller, allowing nearly all possible button combinations of the two-handed version. Our solution would take on a form loosely based on joystick controllers, but be symmetrical, increasing accessibility.
-Controller Base: Houses joystick sensor, menu buttons non-critical to gameplay, and main microcontroller. Joystick handle can also be twisted to change functionality.
--Not twisted: functions as left circle pad
--Twisted: functions as left directional pad
-Mounted Controls: Four standard buttons Y/X/A/B and right stick for thumb. The left and right triggers will each be mounted on a sideways-oriented button, representing the left and right bumper buttons. Buttons will be centered so as to be symmetrical.
Innovation & Uniqueness: Our project is a redesign of Xbox/Playstation controllers, with changes in mapping and combination of the buttons, so the controller is operable by one hand.
Comparable existing products: flight simulation joysticks and the Xbox Adaptive Controller.
Our project can be competitive for the following reasons:
-Symmetric: other joysticks are either right- or left-handed
-Simple to operate and easy to learn
-Able to handle complex games
Criteria for Success:
A challenge will be compound sensor mechanics (triggers mounted on buttons, twisting joystick), and device shape. We are inexperienced in CAD, so we may seek help with physical design.
Baseline: User can perform all common button combinations and send serially to computer
Enhancements: Connect to games available online, improving user experience, reaction-based/timing games.
|17||John Deere Modular Vehicle Control Board
|David Null||Jing Jiang||proposal1.pdf
|Sumanendra Sanyal, sanyal3
Sam Huhta, shuhta2
Zach Hoegberg, zh2
Modular PCB interface for John Deere equipment
Problem: John Deere currently manufactures an autonomous lawnmower that uses a buried wire to define the boundary of the yard. Furthermore, the hardware in the mower isn't applicable to other John Deere equipment. They would like to eliminate this wire by implementing a localization algorithm using a combination of vision, LIDAR, and other sensors, but the current Vehicle Control Unit does not have the necessary computing power to enable this.
Solution: We will replace the existing board with a modular microcontroller design capable of running Linux applications, consisting of a universal main board and machine-specific perception and vehicle boards. The modular design keeps the high-level automation code on the main board and swaps out perception and vehicle boards designed for a specific piece of equipment. The main board boots a Linux system developed by John Deere for a particular piece of John Deere equipment. The vehicle board will drive motors or send commands to the existing equipment as appropriate. The perception board will accept sensor input, for example from cameras and LIDAR sensors, and include a GPU for running a Deere-provided neural net or machine learning algorithm. The boards will communicate using Ethernet/USB or another method determined to deliver enough speed. All of the boards are custom-built microcontroller boards/PCBs that meet a specific hardware specification. The boards will have to demonstrate basic functionality by running test code (provided by John Deere software developers) and then run the actual code pertinent to the operation of the vehicle (also provided by John Deere). We intend to design these boards with the design of the preexisting (non-modular) board at hand and in close coordination with John Deere PCB engineers and software developers.
Power: SMPS Voltage Regulators so all of the boards receive a stable power supply from the on-board battery.
Main Board: This board is responsible for running the top-level Linux software. Unlike the other two boards, this one is the same for all vehicles. It contains a multi-core ARM processor, and it connects to the other two boards via ethernet/USB cables (or whatever method of communication we choose).
Vehicle Board: This board is responsible for the physical operation of the mower, and is connected to wheel motors, steering, Hall effect sensors, etc. The vehicle board is unique to the type of John Deere equipment; we will be demonstrating the vehicle board for the Tango mower.
Perception Board: This board is responsible for taking in all the data from perception sensors, including cameras, LIDAR, and other potential sensors in the future. This board can be switched out in the future as other sensors are introduced.
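Whatever transport the boards end up using, the main-to-vehicle commands reduce to packing fields into a fixed binary frame. A hedged sketch of one possible 8-byte frame (the layout, field names, and omission of a CRC are all assumptions for illustration, not John Deere's protocol):

```python
import struct

# Hypothetical frame: message type, target board id, a signed
# 16-bit payload (e.g. wheel speed), and a sequence number.
FMT = "<BBhI"  # little-endian; 1 + 1 + 2 + 4 = 8 bytes

def pack_cmd(msg_type, board_id, value, seq):
    """Serialize one command frame for the inter-board link."""
    return struct.pack(FMT, msg_type, board_id, value, seq)

def unpack_cmd(frame):
    """Parse a received frame back into its fields."""
    return struct.unpack(FMT, frame)
```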
Criteria for Success: The main board will boot the provided Linux kernel, and communicate with the vehicle board. The vehicle board will control the Tango drive motors as commanded by the main board. The perception board will deliver sensor data to the main board.
Additionally, Deere would like us to run a neural net on a GPU on the perception board, but they understand that adding that task may be out of the scope of a semester-long project. We will leave this as an optional task if time allows for the time being, and decide in the next few weeks if we can accomplish this task as well.
|18||Low-Cost Integrated Spectrometer
|Charles Ross||Jing Jiang||design_document1.pdf
|Project Members: Lukas Janavicius - janavic2, Drew Ingram - andrewi2
Problem: Optical spectrometers play a critical role in the characterization of chemicals and materials. However, budget digital spectrometers start at over $1000. This cost barrier effectively limits techniques like Raman spectroscopy to universities and other research institutions. We aim to bring the cost of the device to under $100, so as to lower the barrier of entry to spectroscopy techniques.
Solution: A spectrometer's cost lies in its expensive optics and dedicated high-frequency data-collection hardware. Our proposed solution is to eliminate costly optical components by integrating an optical circuit in acrylic plastic and collecting the diffracted light using a linear CCD image sensor driven by a low-cost microcontroller [1, 2].
Solution Components: An integrated photonic circuit, data acquisition hardware, MCU software, and Host software.
Photonics: To spatially resolve the input light's spectrum, we propose fabricating an integrated photonic diffraction element. Specifically, we aim to make an elliptical Echelle grating, such that the output spectrum is tuned by changing the angle of the input waveguide. Although Echelle gratings can support broadband spectrometers, we aim to optimize our photonic circuit for wavelengths of 400-800 nm. Such wavelengths fit within our selected CCD's response curve, while also offering applications in spectrometry.
Data Acquisition: Our proposed solution requires three distinct electronic components: a TCD1103 linear CCD image sensor, a high-speed ADC to read the CCD data, and an ESP32 to acquire and send data to a host device. The Toshiba TCD1103 belongs to a family of devices shown to work with 8-bit MCUs, although this particular model can deliver faster data rates than previously demonstrated. To read the CCD data, the ADC and ESP32 must collect around 1 MSPS and stream the data to a host device.
MCU Software: To extract 1 MSPS, the ESP32 must communicate with the ADC over SPI operating at its maximum frequency, 40 MHz. The contents of the device's memory must be dumped to a host device over Wi-Fi to avoid saturating the memory. We will accommodate these speeds by splitting Wi-Fi and SPI communication across the ESP32's two cores, with a message queue relaying the data between them.
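A quick back-of-envelope check shows why 40 MHz SPI is sufficient for the 1 MSPS target. The 16-clocks-per-sample figure below is an assumption (a 12-bit ADC result padded to a 16-bit frame), not a measured number.

```python
# Feasibility check for the SPI -> Wi-Fi pipeline.
# Assumption: each ADC conversion costs 16 SPI clocks (12-bit result in a
# 16-bit frame); actual overhead depends on the ADC chosen.

SPI_CLOCK_HZ = 40_000_000      # ESP32 SPI master maximum
BITS_PER_SAMPLE = 16           # assumed frame size per conversion
TARGET_SPS = 1_000_000         # 1 MSPS goal

max_sps = SPI_CLOCK_HZ / BITS_PER_SAMPLE   # theoretical ceiling: 2.5 MSPS
utilization = TARGET_SPS / max_sps         # fraction of the bus consumed

print(max_sps, utilization)    # 2500000.0 0.4
```

At roughly 40% bus utilization there is headroom for transaction overhead, which is why the harder constraint is keeping the Wi-Fi core draining the queue fast enough.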
Host Software: Raw data will stream directly to a PyQt application running on the user's PC. After correcting the TCD1103 data with the device's spectral response curve, and calibrating the positional data with a known source, a pyqtgraph plot presents the data to the user.
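The two host-side corrections can be sketched in pure Python. The response-curve values and the two calibration lines below are made-up illustrative numbers, not TCD1103 data.

```python
# Host-side post-processing sketch: response correction + wavelength calibration.
# All numeric values here are placeholders for illustration.

def correct_counts(raw, response):
    """Divide raw CCD counts by the sensor's relative spectral response."""
    return [r / s for r, s in zip(raw, response)]

def pixel_to_wavelength(pixel, p1, w1, p2, w2):
    """Linear calibration from two known emission lines (p1, w1) and (p2, w2)."""
    slope = (w2 - w1) / (p2 - p1)
    return w1 + slope * (pixel - p1)

corrected = correct_counts([100, 200, 150], [0.5, 1.0, 0.75])
wl = pixel_to_wavelength(500, 100, 400.0, 900, 800.0)
print(corrected, wl)   # [200.0, 200.0, 200.0] 600.0
```

In the real application the calibration source would be a lamp or plasma with well-known emission lines, and the fit could use more than two points.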
Criterion for Success and Challenges: We aim to resolve wavelengths in the 400-800 nm range; we will assess the resolving power of our spectrometer by characterizing the emission spectra of an argon plasma. Perhaps the greatest challenge of our project is the design of the photonic circuit: to minimize losses, we must simulate our structure and characterize the lithography process. However, we are unsure of which FDTD package to use in this situation. Fortunately, after our photonic design is verified, we can pattern and develop the circuit in under 10 minutes with only rubbing alcohol, ensuring we can tune the processing parameters rapidly.
[1] X. Ma, M. Li, and J. J. He, “CMOS-compatible integrated spectrometer based on echelle diffraction grating and MSM photodetector array,” IEEE Photonics J., vol. 5, no. 2, 2013.
[2] R. Cheng, C. L. Zou, X. Guo, S. Wang, X. Han, and H. X. Tang, “Broadband on-chip single-photon spectrometer,” Nat. Commun., vol. 10, no. 1, Dec. 2019.
|Ruomu Hao||Jonathon Schuh||design_document1.pdf
|Samira Tungare (samirat2), Sarah Kolak (kolak2), Edward Harper (ewharpe2)
People often like to have music playing through their speakers while participating in other activities in their house or apartment. However, if many people are trying to hold conversations, they may find themselves shouting just to be heard over the music. On the other hand, if there is a lull in the conversation and the music is too quiet, people have to deal with an awkward silence in the room.
Our solution is a set of speakers that automatically adjust their volume based on the ambient noise. If they detect a lot of noise (such as conversation) in the room, the speakers will reduce the volume of the music; if they detect minimal noise, they will raise it.
Microcontroller - The microcontroller will interface with the speakers to control the volume based on the signal received from the noise sensor. We will have to differentiate between the music playing and the external noise in the room, perhaps by filtering out the frequencies of the music.
Bluetooth Connection - A phone will be able to interface with the speaker through bluetooth to play the music
Audio Sensor - One or more audio sensors will be used to detect the intensity of the noise coming from the environment.
Power - The speaker will be powered by a wall adapter, along with voltage regulators to support our different components.
Criterion for Success:
It would be considered a success if the volume of the speaker decreases when louder environmental noise is detected and increases when there is little noise in the area, with smooth transitions between volume levels. This would ensure people won’t have to yell over the music or sit in silence without physically turning the device up or down.
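One simple way to get the smooth, non-jittery transitions described above is to smooth the ambient level with an exponential moving average and only change volume when the smoothed level crosses a threshold. The dB thresholds and step sizes below are illustrative assumptions, not measured values.

```python
# Adaptive-volume sketch: EMA smoothing plus two thresholds so brief noise
# spikes don't cause constant volume jitter. All numbers are placeholders.

class VolumeController:
    def __init__(self, quiet_db=40.0, loud_db=60.0, alpha=0.2):
        self.quiet_db, self.loud_db, self.alpha = quiet_db, loud_db, alpha
        self.smoothed = quiet_db
        self.volume = 50                      # percent

    def update(self, ambient_db):
        # Exponential moving average smooths out short-lived spikes.
        self.smoothed = self.alpha * ambient_db + (1 - self.alpha) * self.smoothed
        if self.smoothed > self.loud_db and self.volume > 20:
            self.volume -= 5                  # conversation detected: duck the music
        elif self.smoothed < self.quiet_db and self.volume < 100:
            self.volume += 5                  # room is quiet: bring the music back up
        return self.volume

ctrl = VolumeController()
for _ in range(30):                           # sustained 70 dB conversation
    ctrl.update(70.0)
print(ctrl.volume)                            # ducked down to the 20% floor
```

Stepping by a few percent per update, rather than jumping straight to a target, is what produces the smooth transition the success criterion asks for.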
|20||Real Time Fire Escape Plan
|Johan Mufuta||Rakesh Kumar||design_document1.pdf
Current fire escape routes are rigid and do not adapt to quickly changing situations during a fire. Fire alarms will tell you only that there is a fire, but not where that fire is located. If a fire has engulfed the escape route, already frightened people can panic and be trapped in a building without knowledge of where to go.
- Our solution is a system with two parts. The first part is a set of sensors that monitor the temperature and smoke levels at various points around the building. This data would be relayed to a master board that communicates with each sensor in the building.
- The building’s existing fire alarm would be connected to this master logger board
- Heat resistant temperature sensor
- Accurate smoke detector
- Wifi connectivity from external sensor array to master logger and from master logger to app
- The second part of the project would be to develop an application which would take this sensor data and determine an optimal path through the building to escape the fire. It would include the ability to select a starting position and would update in real time to show inaccessible paths due to high smoke and fires.
- Once the fire alarm goes off, all sensor data is tracked and saved on this app. This allows for a temperature trend for each room to be recorded locally on each user’s phone and if a sensor fails, previous data will determine if the room is safe to go into.
- This way people who lose connection to the wifi do not lose any data about the fire that was previously recorded
- There will be two PCBs manufactured for this solution.
- One that governs the master logger, receiving data from the sensor array, and transmitting it to an app.
- The other will be a board to transmit the sensor data to that master logger
- The sensor array and master logger boards will be hardwired into the main power of the building while also having a battery backup system
- Master Logger Board - This subsystem will ping each of the sensor modules intermittently over Wi-Fi in order to ensure connectivity of the entire system. When a sensor is triggered, the logger will capture the event using a microcontroller and send the location of the triggered sensor and the data it measured to a phone application.
- Sensor array board(s) - Smoke and Heat Detector. This subsystem will measure the temperature of the room and measure the air quality to determine if smoke is present. We will utilize a thermocouple (https://media.digikey.com/pdf/Data%20Sheets/Digilent%20PDFs/240-080_Web.pdf) for the measurement of the room temperature and a smoke detection module such as the following: https://www.amazon.com/SUKRAGRAHA-Detector-Module-Arduino-Genuino/dp/B01F2X3VY6.
- There ideally will be multiple of these boards made and placed around a building
- All of the components surrounding the sensors will be heat shielded
- Power Subsystem - This subsystem will convert wall power or battery power to the necessary power and control voltages. The system will rely on wall power and use the battery power as a backup.
- Mobile Application - Includes a mapped version of the building and paths are created utilizing optimal path finding algorithms. Assisted with wireless communications to update in real time to adapt to danger.
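The "optimal path finding" in the mobile application could be realized with Dijkstra's algorithm over the building graph, inflating edge weights by the hazard level the sensors report for each room. The rooms, distances, and hazard multipliers below are invented for illustration.

```python
# Hazard-weighted Dijkstra sketch for the escape-route computation.
# graph: {room: {neighbor: distance}}; hazards: {room: multiplier >= 1}.
import heapq

def safest_path(graph, hazards, start, exit_node):
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == exit_node:                     # reconstruct the route
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry
        for v, w in graph[u].items():
            nd = d + w * hazards.get(v, 1.0)   # hazardous rooms cost more
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None                                # no route to the exit

graph = {"A": {"B": 1, "C": 1}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
hazards = {"B": 50.0}                          # fire reported near room B
print(safest_path(graph, hazards, "A", "D"))   # ['A', 'C', 'D']
```

Re-running this whenever new sensor data arrives is what lets the displayed path adapt in real time as rooms become inaccessible.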
Criterion for Success
- Developed application with a displayed blueprint of the building and a display of the path to take. This path should lead through minimal fire and smoke. The path will also update to the changing situation.
- Simulated data can be used as input to determine functionality.
- Successful transmission of data from external sensor to master logger
- Successful transmission of data from master logger to app.
- App saves temp and smoke data in local memory
- According to https://www.ready.gov/home-fires, temperatures at eye level can reach 600 degrees Fahrenheit during a fire. As long as the sensors can withstand a temperature of 500-600 degrees Fahrenheit, failure of a sensor is a non-issue, because the master logger saves the temperature of the room from when the sensor was functional.
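The sensor-failure fallback described above can be sketched as a linear fit over the locally cached readings, extrapolated to the current time. The 150 °F safety cutoff is an illustrative assumption, not a figure from the proposal.

```python
# Fallback sketch: if a sensor dies, extrapolate the room's cached
# temperature trend to judge whether the room is still safe to enter.
# The 150 F cutoff is an assumed placeholder.

def linear_trend(times, temps):
    """Least-squares slope and intercept through cached (time, temp) points."""
    n = len(times)
    mt, mx = sum(times) / n, sum(temps) / n
    slope = (sum((t - mt) * (x - mx) for t, x in zip(times, temps))
             / sum((t - mt) ** 2 for t in times))
    return slope, mx - slope * mt

def room_safe(times, temps, now, cutoff_f=150.0):
    slope, intercept = linear_trend(times, temps)
    return slope * now + intercept < cutoff_f

# Sensor reported for 60 s (rising 2 F/s), then failed; check 30 s later.
t = [0, 20, 40, 60]
x = [70, 110, 150, 190]
print(room_safe(t, x, now=90))   # False: projected ~250 F
```

A room with a flat trend (say 70-73 °F over the same window) would still evaluate as safe, which is exactly the behavior the app needs when connectivity is lost.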
|21||Modular 3D Holographic Display
|Stephanie Jaster||Jing Jiang||design_document1.pdf
Taofik Sulaiman (tosulai2), Pavan (pavanh2), Charles Ekwueme (cekwue2)
Displaying objects in 3D formats has tremendous benefits but is severely limited. Current modes of 3D display are expensive and can be disadvantageous, and at times even harmful to certain users, especially if viewed for extended periods of time (e.g. 3D pictures via red/blue anaglyph glasses). Anaglyph glasses are very tiring on the eyes and lead to headaches, since they distort images and the user’s eye focus. Further, solutions like VR headsets require heavy wearables that also cause eye fatigue and are limited to a single user.
Our main goal is to allow users to better visualize objects in a 3D space without the limitations of a 2D screen and without eye fatigue.
Solution Overview -
General description of idea
Our device would take in 3D model files (e.g. STL/CAD files) via USB or other I/O and then display them as a hologram projection by converting the 3D model into four different 2D images that are projected onto the hologram display. This solves the problem by allowing users to input their own 3D models (as STL/CAD files) and create an interactive display without a wearable or any of the health implications that come with one.
What makes our project unique?
Our project is novel in that we would take the simple homemade hologram experiment that is available for phones and build a version that can sit on a table, display bigger scenes, and allow user input to modify the scene or interact with the object.
Unlike other solutions, our design will decouple the graphics processing and display logic from the control device (i.e. laptop/computer).
Other modes for 3D viewing feature AR/VR devices and 3D images which use glasses.
* Our project will use a 3D model file as an input file in which it will convert this file into a 2D video/image intended to be used for the Holographic display.
* This project would incorporate and require the design of a board that interfaces with the holographic display, and possibly a sensor that tracks user motion. To make the project interesting, we could combine input from different sensors to account for error.
* We will allow limited manipulation of the projected scene by allowing the user to move the object around the scene. There will be no need to re-process the 2D image back to 3D as the object itself will not be modified however its position or orientation may be.
* Display Unit
- This unit will render the hologram via a standard 2D screen and shaped glass.
- This subsystem will include any required video driver and 2D, LCD screen
* IO Peripherals
- This subsystem will encompass input to the processing unit for 3D model input or control signals.
- Likely this will be implemented via a USB controller that takes in the serial input from a laptop/computer.
* 3D->2D mapping algorithm
- No specific algorithm is currently in mind; however, with graphics libraries, specifically OpenGL ES, calculating appropriate projections onboard will be simplified significantly.
* Processing Units
- As a result of graphics processing requirements, likely this will include a low power CPU (e.g. ARM based) and a graphics accelerator of some form.
* Power Subsystem
- Used to power components from other systems reliably. This may include AC/DC converters, wall adapters and or voltage regulators.
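The 3D-to-2D mapping step above can be sketched without OpenGL: rotate each model vertex to one of the four viewpoints, then apply a basic pinhole perspective projection. The focal length and camera distance below are illustrative assumptions, not design values.

```python
# Pinhole-projection sketch of the 3D -> 2D mapping: one projected (u, v)
# pair per face of the pyramid display. Focal length and camera distance
# are made-up illustrative values.
import math

def rotate_y(point, angle_rad):
    """Rotate a vertex about the vertical axis to obtain each of the 4 views."""
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, focal=2.0, camera_z=5.0):
    """Perspective divide: scale x and y by focal length over depth."""
    x, y, z = point
    depth = camera_z - z
    return (focal * x / depth, focal * y / depth)

vertex = (1.0, 1.0, 0.0)
views = [project(rotate_y(vertex, math.radians(90 * k))) for k in range(4)]
print(views)  # four (u, v) pairs, one per 2D view
```

OpenGL ES would do exactly this per vertex in a shader (plus clipping and viewport mapping), which is why the bullet above expects the library to simplify the work significantly.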
Criterion for Success
* Final Milestone 1: Successful static rendering of 3D model to holographic display. (3D to 2D mapping)
* Final Milestone 2: Successful dynamic rendering (changing smoothly and requiring real-time scene calculation) of 3D model to holographic display.
* Final Milestone 3: Accurate user control of the holographic object from serial input or from capacitive sensor input, with a maximum delay of 2 seconds.
* Final Milestone 4: Displayed image has decent resolution, i.e. the image looks clear.
* Final Milestone 5: Can operate for extended periods of time without fail (at least 20 seconds).
|22||Electric Motor Scooter Battery System Upgrade and Custom Battery Management System
|David Null||Jonathon Schuh||design_document1.pdf
|Problem: Fagen Scooters in Champaign, IL ordered its first electric scooter a few years ago in hopes of capitalizing on the electric transportation market. Unfortunately, the model that they ordered was very poorly designed and built by a company called Numi. The battery system on the scooter has failed and even when operational, the scooter was not capable of monitoring its own battery health, discharge metrics or charging metrics. As such, the scooter is not operational and cannot be sold. The company that manufactured the scooter, Numi, has since gone out of business and Fagen Scooters has been unable to reach them for support or documentation of any kind.
Solution Overview: We will design and build a new and robust Battery Management System (BMS) for the Numi scooter. This Battery Management System will contain a diagnostics system so that it can warn the end user of specific problems. This will be very similar to a "check engine light" in a traditional vehicle. This way, the user will have a chance to service the scooter before it totally stops working like it did for Mr. Tom Dillavou, owner of Fagen Scooters. We hope to have warnings for unhealthy charging, unhealthy discharging, high battery temperature, and other things as we see fit. The BMS will also stop power going from the charger to the battery while charging if it detects a large power draw from the battery. This will mitigate the risk of battery damage during the charging process. In addition, we will also design and build a new lithium ion battery pack to give the scooter better all around performance. Mr. Dillavou would like the scooter to be lighter, faster, and for the battery charge to last longer. Additionally, in our process of documenting everything for ECE 445, we will distribute our documentation to Mr. Dillavou so that he will be equipped to support his customer after their future purchase of this scooter.
# New Lithium Ion Battery Pack
We will design and build a new 72 V battery pack from 18650 lithium ion cells. These 18650 cells are the standard in electric vehicles, and most, if not all, EV manufacturers (Tesla, BMW, Nissan, etc.) use them in their own battery packs. We will research cell specifications and design the pack using the most cost-effective method and the highest-quality cells possible. We will be using "name brand" cells (Sony, LG, Panasonic, or Samsung) to ensure that the battery pack is of the highest standard and performance. Cheaper unbranded Chinese lithium ion cells perform significantly worse than name brand cells and have a history of causing fires due to overheating. We will be designing this pack specifically around the Numi Quadhopper scooter owned by Mr. Dillavou, so space requirements will be one of the most important factors in the design process. Additionally, this pack will be designed in conjunction with our custom BMS so that we can provide data to the BMS that is as close to real time as possible, in order to diagnose problems as soon as they arise. This will prevent damage to the battery and ensure that the scooter is behaving properly.
We will need to build the cell array in such a way that we can monitor each portion of the 72 V battery pack. For example, when our BMS throws a voltage "flag" we will need to diagnose the cells within the pack itself. Most of the time, rechargeable battery problems are caused by a few cells in the pack and can be fixed by replacing bad cells individually rather than buying a new pack altogether (which is incredibly expensive). So, to do this, we will wire our pack to send data from each ~3.6 V (single cell) portion, so that we can monitor the voltage of each one. If one of the 20 cell portions is reporting a lower voltage than the others, the end user will easily be able to open the pack, find the problematic cell portion, and fix it rather than throwing the entire pack away and building a new one.
# Custom Battery Management System
We will also be designing and fabricating a custom BMS for the scooter as previously mentioned. This BMS will have the following sub-components:
## In-Line Current/Voltage Sensor Network:
The in-line voltage and current sensors will measure the voltages and currents of several subsystems within the electric bike. These subsystems are the following:
- load current from controller/display for the electric bike
- voltage of the battery
- voltage/current coming from the charger while it’s charging
- voltage across the electric motor
- load current of the electric motor
Due to the larger current draws coming from the systems on the bike, especially the motor, a high power in-line current sensor will be used to measure load currents. Look below for a link to the current sensor. High Voltage Transducers will be used to measure the voltage across the motor and the battery. These devices are capable of taking the current/voltage data and converting it to a lower voltage that can be measured by embedded hardware.
## Control Unit
The next subsystem is the embedded hardware that will take the voltages coming from the sensor network, convert them into digital data, analyze the data, control the diagnostics output screen that the user will see, and send a signal to cut charger flow to the battery if it detects a large power draw from the battery.
The data will be delivered to an STM32 ARM processor, which will be able to determine (from the sensor data) if there are problems with the battery or electric motor by throwing "flags" if the system is drawing an abnormal amount of current or reporting an abnormal voltage level. As a safety measure, there will also be a relay attached to the battery line to stop current flow from the battery to the motor or from the charger to the battery, depending on whether the problem occurs while the battery is being charged or discharged. The voltages will be converted from analog to digital data using the onboard ADCs on the STM microcontroller.
These "flags" will be triggered when either the current draw or the battery pack voltage is outside of a certain tolerance. For voltage, we will have the controller throw a flag whenever the voltage is 48 V or below. This number was calculated from the data sheet of a Samsung INR18650-25R cell (one of the cells we are considering using). We will be using 20 cells in series to make the 72 V required for the motor. The data sheet reports a discharged cell voltage of about 2.5 V, so multiplying 2.5*20 gives us a discharged pack voltage of about 50 V. This means the battery pack would report about 50 V when it is completely "out of juice" and needs to recharge. Setting the threshold at 48 V or less gives us a good tolerance so that the system WON'T throw flags when the battery is experiencing normal discharge and WILL throw flags when there is an actual problem. A reported voltage below this would indicate a problem with the battery, and a flag would be thrown by the BMS. Additionally, any reported voltage above ~72 V (74 V or more, to be exact) will also throw a flag. We decided on 74 V because of experience with multimeter/voltmeter inaccuracy and component measurement fluctuation. Current will be handled in the same way, except with a max current of 40 A (the max current the existing motor controller can handle). We won't really need a low-point cutoff, because the battery will routinely provide less than 1 A of current when the motor is stopped (i.e. the scooter is at a stop sign or red light).
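The flag logic above can be written out directly; the thresholds (48 V low, 74 V high, 40 A max) come straight from the text.

```python
# BMS flag sketch using the thresholds stated above: pack voltage flagged at
# or below 48 V (deep discharge / fault) or at 74 V and above (overvoltage),
# and load current flagged above the motor controller's 40 A limit.

def bms_flags(pack_voltage, load_current,
              v_low=48.0, v_high=74.0, i_max=40.0):
    flags = []
    if pack_voltage <= v_low:
        flags.append("UNDERVOLTAGE")
    if pack_voltage >= v_high:
        flags.append("OVERVOLTAGE")
    if load_current > i_max:
        flags.append("OVERCURRENT")
    return flags

print(bms_flags(72.0, 10.0))   # [] -> normal riding
print(bms_flags(47.5, 45.0))   # ['UNDERVOLTAGE', 'OVERCURRENT']
```

On the STM32, this function's output would drive both the diagnostics LEDs and the decision to open the battery-line relay.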
## Relay System
A relay will be connected to a digital output pin on the microcontroller, and it will be used to cut flow from the charger to the battery if there is too much power going to the battery at any time.
## Diagnostics Output
This will be an array of LEDs that will communicate if there are any issues (flags) with the battery system, and what kind of issue it is.
The microcontroller and sensors will be powered from a 5 V voltage regulator circuit that steps the 12 V line available from the stock motor controller down to the 5 V needed by the microcontroller. The link to the regulator that will be used is below.
Link to Samsung 18650 cell data sheet: (https://vruzend.com/wp-content/uploads/2017/09/SAMSUNG-INR18650-25R.pdf)
Link to in-line current sensor: (https://www.digikey.com/product-detail/en/allegro-microsystems/ACS770LCB-050B-PFF-T/620-1541-5-ND/4473980?utm_adgroup=Sensors%2C%20Transducers&utm_source=google&utm_medium=cpc&utm_campaign=Shopping_Allegro%20Microsystems_0620_Co-op&utm_term=&utm_content=Sensors%2C%20Transducers&gclid=EAIaIQobChMI8bvnm9-r5wIVhcDACh3BtAHYEAQYAiABEgI5n_D_BwE)
Link to voltage regulator (stepping 12 V line down to 5 V for microcontroller): (https://www.digikey.com/product-detail/en/analog-devices/LT1086CT-5%23PBF/LT1086CT-5%23PBF-ND/891719?utm_adgroup=Integrated%20Circuits&utm_source=google&utm_medium=cpc&utm_campaign=Shopping_Linear%20Technology%2FAnalog%20Devices_0161_Co-op&utm_term=&utm_content=Integrated%20Circuits&gclid=EAIaIQobChMIi_Pg8-Cr5wIVUSCtBh2HYANHEAQYBSABEgKfIfD_BwE)
Criteria for Success:
- The scooter runs and drives from the new battery pack that we design and build.
- The new scooter battery pack weighs less than the depleted stock Sealed Lead-Acid battery pack.
- The new scooter battery pack lasts longer than the depleted stock Sealed Lead-Acid battery pack. (more charge cycles)
- The custom BMS can accurately report pack voltage levels and current draw.
- The custom BMS can correctly throw "flags" when voltage or current metrics are outside of thresholds.
|23||Canine Insulin Delivery System
|Chi Zhang||Jing Jiang||design_document1.pdf
|Adam Newhouse, Dillon Hammond (arn2, dillonh2)
It is difficult and inconvenient to manage the regular infusion of insulin in a dog with diabetes. Diabetic dogs require insulin shots every time they have a meal. This is often twice a day. It requires measuring insulin into a syringe, injecting, and disposing of sharps. This is a wasteful process that is also very time intensive. Additionally, the dog may not respond well to needles.
Our solution is a system composed of software and hardware. The hardware component is a wearable insulin pump. This pump connects to an accompanying smartphone app over Bluetooth which allows the owner to dispense insulin doses when desired. The device will also be battery powered and will be charged whenever the insulin reservoir is refilled. The app will track feedings, insulin infusions and any discrete blood glucose measurements. Based on glucose measurements, the insulin dose will be adjusted and tracked over time.
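The dose-adjustment tracking could follow a simple sliding-scale rule. To be clear, the bands and step sizes below are invented placeholders and NOT veterinary guidance; real dosing rules must come from the dog's veterinarian.

```python
# Illustrative sliding-scale dose adjustment for the app's tracking logic.
# All thresholds and step sizes are invented placeholders, not medical advice.

def adjusted_dose(base_units, glucose_mg_dl,
                  low=80, target_high=250, step=0.5):
    """Nudge the base meal-time dose up or down from the logged glucose reading."""
    if glucose_mg_dl < low:
        return max(0.0, base_units - step)   # trending low: back off
    if glucose_mg_dl > target_high:
        return base_units + step             # trending high: small increase
    return base_units                        # in range: keep the current dose

print(adjusted_dose(2.0, 300))   # 2.5
print(adjusted_dose(2.0, 60))    # 1.5
print(adjusted_dose(2.0, 150))   # 2.0
```

The app would log each adjustment alongside the feeding record so the vet can review the trend from the exported data.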
Will connect to pump over Bluetooth Low Energy (BLE)
View pump battery level
Track changes in blood glucose levels (inputted by user)
Track feedings and infusions
Deliver push notifications when a dose/feeding is due
Will export the data for vet usage
Mobile App Backend
Database that will securely store user data
Google Firebase or a similar technology
Canine insulin pump hardware
Reliably and accurately dispense insulin
Rechargeable battery will last at least one week
Battery protection and charging circuitry
Low power ARM microcontroller manages device functionality
Connect to smartphone using BLE
Small enough to fit on a medium sized dog’s collar
Criterion for Success
All of the functionality for each solution component in the previous section works as described. Demonstrate that the correct amount of liquid can be dispensed, as commanded by the Android app.
|Shuai Tang||Arne Fliflet||design_document1.pdf
Our project aims to enhance safety for bikers and runners, who are often much less visible than other vehicles on the road. While lights, reflective clothing, and even some basic signaling apparel exist in the market today, none of these solutions allow users to convey their intentions naturally using gestures. We believe that overcomplicating the interface by which users indicate their intentions could actually lead to rider distraction and decreased safety. Instead, we hope to provide an intuitive UX that piggybacks off of natural hand and arm motions and uses inferences made from the user’s motion to control the signalling lights. This safety apparel will increase users’ visibility and make their intentions and motion clear to others on the road.
Our solution is to create a custom jacket fitted with omnidirectional lights located on the back, chest, and arms that can be used to create turn signals, a brake indicator, and other signals such as ‘pass’, ‘yield’, etc. By fitting each sleeve with an IMU to track arm movements, along with a third IMU on the torso to track the motion of the user’s body, we will be able to detect simple gestures made with the user’s arms, which will then trigger the lights on the jacket to light up and blink in accordance with the user’s intent.
Gesture Identification - This module handles the logic for interfacing between the other components used in the prototype and converting a user’s motion and gestures into outputs that are visible to others in the vicinity via the LED array. We plan to use something along the lines of an Atmel MCU to ingest data from the IMU sensors and decipher motions such as each arm being waved, raised, or lowered. Additionally, we can automatically detect turns, harsh events, and other rider actions, which can be used to turn off signals, activate a high-visibility mode, or even (stretch goal) contact the authorities via a connection to the user’s phone.
Accelerometer - This unit will focus on the acceleration and direction of the user, determining when the user is slowing down or speeding up, and will send data and signals to the LED microcontroller to display a large red light for others to see.
LED Array - This module will implement a simple microcontroller that handles the outputs of the separate logic unit and displays the current action on the LEDs. For example, when the gesture identification microcontroller determines from the IMU sensors that a right turn has been signaled, the gesture will be sent over to the LED array microcontroller, which powers the appropriate LEDs.
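A minimal sketch of the gesture-to-signal mapping might classify the arm IMU's pitch into a standard cyclist hand signal and look up which LED group to blink. The pitch thresholds and LED group names below are illustrative assumptions, not tuned values.

```python
# Gesture-to-LED mapping sketch. Thresholds are assumed placeholders; a real
# classifier would fuse all three IMUs and filter over time.

def classify_gesture(arm, pitch_deg):
    """arm: 'left' or 'right'; pitch_deg: arm angle above horizontal."""
    if arm == "left" and abs(pitch_deg) < 20:
        return "LEFT_TURN"        # left arm held out horizontally
    if arm == "left" and pitch_deg > 60:
        return "RIGHT_TURN"       # left arm raised (standard cyclist signal)
    if arm == "left" and pitch_deg < -60:
        return "BRAKE"            # left arm pointed down
    return "NONE"

SIGNAL_TO_LEDS = {
    "LEFT_TURN": "left sleeve + back-left",
    "RIGHT_TURN": "right sleeve + back-right",
    "BRAKE": "back center (red)",
}

g = classify_gesture("left", 5)
print(g, SIGNAL_TO_LEDS.get(g))   # LEFT_TURN left sleeve + back-left
```

Holding the classification for a short debounce window before lighting the LEDs would prevent ordinary arm motion from flickering the signals.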
Criterion for Success
Our project has a number of high level goals that need to be met in order to achieve an increased safety and satisfaction of our users.
- All lights and symbols must be easily visible to others.
- Right Turn Signal, Left Turn signal, Brake light, etc
- All lights respond accordingly to the proper hand signals for turns
- The product must be simple to use while Biking, Running, Skateboarding, Scootering, etc.
- The product must be easy to set up and simple to put on for ease of access. Usage of the product is seamless.
|25||IN-ROAD VEHICLE SPEEDING MONITOR
|Dhruv Mathur||Arne Fliflet||design_document1.pdf
|Noah Salk [noahs2], Arnav Das [arnavmd2], and Darius Haery [haery2]
Problem: Speed limits exist because authorities determined that abiding by these limits makes driving safer. The most common method of enforcing speed limits utilizes police officers with radar guns and is problematic for multiple reasons:
Police officers are greatly outnumbered by the number of drivers on the road. This results in drivers breaking the speed limits when no police car is in sight and slowing down when one is spotted. The low risk of getting caught is a chance drivers are willing to take in order to get where they're going faster.
Police officers are trained to handle dangerous situations and maintain civility. Having them sit in a car for hours on end waiting for someone to speed is a great underutilization of their abilities.
There have been attempts by some cities to implement speed camera systems. Most, however, have been met with public disapproval over privacy concerns due to the camera. In addition, these cameras are typically placed on the side of smaller roads and wouldn't be feasible for multi-lane highways. Signs that warn drivers of speed-camera zones, combined with the cameras' lack of prevalence, result in the same issue as stationed police officers: when the speed-camera zone ends, drivers begin to speed again.
Solution: A small, packaged speed measurement system can be embedded into the middle of a lane so that cars pass over it, placed frequently enough that drivers must always be aware of their speed in relation to the speed limit. If the measurement system determines that the driver is speeding (within a certain margin based on traffic, weather, and road conditions), a camera angled at the rear license plate takes a picture. Computer vision software will determine the license plate number without human interaction, and the road, weather, and traffic conditions (the first two entered manually each day, the latter measured) will be noted along with the speed of the vehicle, with a bill sent to the driver.
The measurement system will consist of two object sensors (of type to be determined, could be sonar or some type of laser sensor) pointed upward and placed some distance from each other. The time between when the first sensor detects an object and the second sensor detects an object will be used to determine the vehicle speed. The device will wait until both sensors are clear before it resets and waits for another car to pass over. In winter months, a heater could be placed near the sensors and cameras to melt snow covering the device, although this could be unnecessary given that in general drivers use more caution when snow covers the roads.
A 3D printed prototype will hold all electronics: sensors, heaters, a micro-controller, and a camera. The device would be powered by the same power lines that service road lights (it may need an onboard AC-DC converter) and therefore does not require a battery. If time permits, we could develop a communication method (wired or wireless) to interact with the device to extract speeding violations and transmit weather and road conditions.
Speed Sensing - This will be accomplished using two line-break sensors pointed upward. The type of sensor has yet to be determined, as each type will provide different tolerances and complications. Potential sensor types include reflective laser sensors, sonar sensors, photoelectric sensors, etc. Calculating speed is straightforward given the sensor separation distance and the time between each sensor break. We need to make sure the sensors can operate at a high enough frequency and that our micro-controller can read input at a high enough frequency.
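The speed calculation described above can be sketched as follows; the 0.5 m sensor spacing and the 10% enforcement margin are illustrative assumptions, not final design values.

```python
# Sketch of the two-sensor speed measurement. Sensor spacing and the
# speeding margin are assumed placeholder values.
SENSOR_SPACING_M = 0.5  # distance between the two line-break sensors

def vehicle_speed_kmh(t_first: float, t_second: float) -> float:
    """Speed from the time gap between the two sensor breaks."""
    dt = t_second - t_first
    if dt <= 0:
        raise ValueError("second sensor must trigger after the first")
    return (SENSOR_SPACING_M / dt) * 3.6  # m/s -> km/h

def is_speeding(speed_kmh: float, limit_kmh: float, margin: float = 0.10) -> bool:
    """Flag only when the measured speed exceeds the limit plus a margin."""
    return speed_kmh > limit_kmh * (1 + margin)
```

A car breaking the second sensor 20 ms after the first would register 90 km/h under these assumptions.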
License Plate Sensing - This will be accomplished using a camera unit aimed slightly forward to take a picture of the vehicle's license plate after it passes. The mounting angle will be determined experimentally, and the camera will be chosen mostly on price, though it must be able to take the picture before the vehicle gets too far away. OpenCV will be used to build a license plate number classifier that runs on the snapped image, and the program will execute on a micro-controller to be determined. Training will happen off-board prior to implementation on the micro-controller.
Powering the device - Local transportation authorities will be contacted to find out the type of power street lamps receive (voltage levels, AC or DC, etc.). This information will be used to develop power electronics that power our device from the same lines that service street lamps, allowing easy integration into the current road system.
Control unit - A micro-controller will be used to process all sensor and camera data in order to make decisions. This micro-controller should have a relatively fast operating frequency in order to make decisions before a speeding car passes over. It will need enough capabilities to classify a license plate via a neural network.
Criterion for success - If we can successfully measure the speed of a vehicle, determine whether it is speeding based on a number of variables, snap a picture of the license plate, and correctly determine the license plate number from the picture, we will have accomplished the goal of the project. The device needs to be verifiable, which means it needs to be semi-implemented on a road and tested against the criterion.
|26||Mailbox with delivery detection and notification
|Shaoyu Meng||Rakesh Kumar||design_document2.pdf
|Mailbox with delivery detection and notification
Problem: People sometimes forget to check their mailbox for important mail, such as notices about their cars, and when expecting an important delivery it is annoying to check the mailbox several times a day.
Solution: We want to design a smart mailbox that reminds people when a new piece of mail arrives. The mailbox detects newly received mail and sends a notification to peripherals. In addition, the mailbox is solar-powered.
[Subsystem #1:] Sensing System
There are two layers in the mailbox: one is the receiving layer, and the other is for the homeowner to place mail that is waiting to be picked up. There are weight sensors in both layers. In the receiving layer, a weight sensor senses weight changes and a counter counts how many times the sensor detects a change, tracking the number of new mail items. We are also considering a light sensor to detect brightness changes, since mail is mostly delivered during the day.
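The counting logic for the receiving layer could look like the sketch below, where each upward weight step larger than a noise threshold counts as one new mail item; the 5 g threshold is an assumed value, not a measured one.

```python
# Hedged sketch of the receiving-layer mail counter. The noise
# threshold is a placeholder assumption.
THRESHOLD_G = 5.0

def count_new_mail(readings):
    """Count upward weight steps larger than THRESHOLD_G (grams)."""
    count = 0
    last = readings[0] if readings else 0.0
    for w in readings[1:]:
        if w - last > THRESHOLD_G:  # weight jumped: a new item arrived
            count += 1
        last = w
    return count
```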
[Subsystem #2:] Motor System
A motor will control the door of the mailbox. After receiving a signal from the user's phone, the motor will open the mailbox door, so the user does not always have to carry a key. This requires verification and a mobile implementation that can communicate with the mailbox.
[Subsystem #3:] Notification System
This part of the system notifies users once the Sensing System gives positive feedback. It pushes notifications to connected peripherals, indicating that new mail is waiting to be picked up. The peripherals include mobile phones or a dock that can be placed inside the home to show the status of the mailbox.
Criterion for Success and Challenges:
The criterion for success is that a good interaction is established between users and the mailbox: users can rely on the smart mailbox's notifications as credible information about their incoming mail. This project will make the experience of picking up new mail better than before.
The challenges of this project are detecting multiple pieces of mail placed in the mailbox at the same time and designing the way the mailbox communicates with peripherals. We also need to design a peripheral that can show the status of the mailbox, beyond pushing notifications to phones.
|27||Multi-Phase Solar Power Converter with MPPT
|Ruomu Hao||Jonathon Schuh||design_document1.pdf
James Arnold (jpa2)
Justin Meyer (jlmeyer4)
Nate Post (npost2)
Solar panels have unique power curves that peak at a certain voltage and current point, but the panels do not automatically operate at that point. By making a converter with maximum power point tracking (MPPT), systems can extract more energy from them. This converter is for the solar panels on the roof of the ECEB and might not be directly applicable to other solar panels. The panels have another issue due to their tight placement: they partially shade each other during certain times of the day, and a solar panel's output power is limited by its most shaded region. The converter we are proposing therefore measures the power output of three different sections of each panel and tracks the maximum power point of each section to improve efficiency even further. We are also designing this to fit into a microgrid system, so we want to communicate over WiFi how much power we are extracting from each panel. The output of our converter will act like a current source so it can adjust to whatever voltage the DC power rail demands.
# Solution Overview
We will design a power converter that connects to segments of a solar panel, converting the power from each segment separately. Each segment has its own power curve that can be maximized, thus allowing us to extract as much energy from each panel as possible. In addition, this solution will be scalable, with the output of each device being a current source that can inject current onto a governed DC rail, such as one for a pre-inverter stage of a Micro-grid. Finally, to allow for scalability and monitoring in such a large system, we will integrate WiFi data transmission so that the operator of the panels can read individual panels’ power transmission information.
# Solution Components
## Power Converter
We will create a switching power supply based on the topology of the forward converter in order to convert power from multiple terminals of a single solar panel and extract maximum power from all sets of photovoltaic cells.
The optimization function for MPPT must be run on some sort of processor, so a microcontroller that can quickly run the control loop needs to be part of the design. The microprocessor will also need an ADC running at least as fast as the sampling rate. A microcontroller similar to the STM32 would be a good choice.
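One common MPPT optimization the control loop could run is perturb-and-observe. The sketch below assumes the microcontroller samples panel voltage and power each control tick; the step size is an illustrative tuning value, not a design parameter from this proposal.

```python
# Minimal perturb-and-observe MPPT sketch: nudge the reference voltage,
# keep going if power rose, reverse if power fell. The 0.1 V step is an
# assumed value.
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """Return the next reference voltage for the converter."""
    if p >= p_prev:            # power rose: keep moving the same way
        direction = 1 if v >= v_prev else -1
    else:                      # power fell: reverse direction
        direction = -1 if v >= v_prev else 1
    return v + direction * step
```

Each panel section would run its own instance of this loop, which is what lets the three sections track their own maximum power points independently.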
## Wifi Chip
We will most likely use the ESP8266 because it gives us many clock cycles to run our control loops.
# Criterion for success
- Power extraction from solar panels is at least 85% efficient across all cells
- Power output data can be sent across a network to a server/client for display and analytics
- Output can be connected to a governed DC rail and operate based upon that rail's control
|28|| Portable Bluetooth Music Player
|William Zhang||Rakesh Kumar||design_document1.pdf
|# Portable Bluetooth Music Player
Arpan Choudhury arpanc2
Joseph Yang josephy2
Robert Conklin rmc2
## Problem :
Current music playback devices are increasing in size to meet the demand for larger and larger screen sizes. Along with this, the weight of these devices is rather large, as a result of using metals and glass to give users a 'premium feel,' and increasing battery size to maximize the charge life of the device. These factors combine to make good smart devices; however, they also lead to bulky/inconvenient device profiles for physical activity, especially activities like running.
## Solution Overview :
A clip-on wireless music player, capable of storing the user's music and connecting to wireless headphones via Bluetooth, while still being lightweight and convenient to wear while exercising. The music player will use a Cortex-M series microcontroller to interface with a BLE module to communicate with the paired Bluetooth headset, read from flash memory to store and play back audio, and read user input from buttons on the device.
## Solution Components :
### MCU :
For this device, a versatile, low-power, and compact microcontroller is required, as the focus of the project is to design a lightweight, small profile music player. Due to the power-efficient design and low cost of the K32 L2 (K32L2B31VLH0A) MCU, this microcontroller appears to be the current best fit for the design, as it fits all the conditions listed above. Additionally, it has native USB 2.0 support hardware, simplifying the design process, and ensuring that the device will handle USB communication. Along with this, the K32 L2 has a sufficient amount of GPIO pins in addition to the required DMA and I2C connections to handle the flash memory and various peripherals, respectively.
### Bluetooth :
For connectivity to the Bluetooth headphones, a Bluetooth module is required to manage and handle the communication between the headphones and the MCU.
### Memory :
The music player should have enough space so that users can listen to music for the duration of an entire workout. We decided that 2 GB would be a reasonable size for this purpose (roughly 500 songs). While this may be less than the amount of space available on modern smartphones, we only need a few hours of storage capacity at max, and 2 GB is capable of holding significantly more than that.
### Interface :
To reduce the weight, size, and cost of production, we decided to use a simple monochrome OLED display and a button interface. We can limit pausing, playing, and track selection to a single button, using double or triple tapping to skip forward and skip backward, respectively. Mapping these functions to the same button simplifies the design, allowing less space to be used while still providing strong functionality. In addition, there would be a power button to turn the device on and off. Pairing the device could be accomplished using multiple held button presses.
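The single-button scheme above can be sketched as a tap classifier: presses that land within a short window count as one gesture. The 0.4 s window and the assumption that the timestamps form one burst are placeholders for firmware tuning.

```python
# Sketch of the multi-tap classifier for the single control button.
# The tap window is an assumed tuning value.
TAP_WINDOW_S = 0.4

def classify_taps(timestamps):
    """Map a burst of button-press times to a player action."""
    taps = 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev <= TAP_WINDOW_S:  # close enough to join the gesture
            taps += 1
    return {1: "play/pause", 2: "next track", 3: "previous track"}.get(taps, "ignored")
```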
### Battery :
We are planning on using a standard lithium-ion battery and charging system. This is due to the compact and high energy density of these systems, along with the variety of available systems.
### Display :
A small monochrome OLED display, interfacing with the MCU via I2C, allowing for pairing information to be displayed to the user when setting up the Bluetooth connection to a pair of wireless headphones.
## Criterion for Success :
A device capable of receiving and writing audio data to flash memory, and playing back audio from memory via a Bluetooth device. This includes meeting the memory/storage requirements in the design of the device, providing a working user interface to control the device, and configuring the electronic components compactly enough to fit into a slim exterior profile.
|29||Foot Posture Sensor Insole
|William Zhang||Joohyung Kim||design_document2.pdf
|Tyler Schuldt (tschuld2), Isha Sharma (isharm4), Umaiyal Sridas (usrida2)
Some patients with injuries develop a bad foot posture while walking, which can lead to knee issues and muscle mass loss. In growing children, if bad foot posture is not corrected, it can lead to significant muscle loss and even uneven growth in height of legs. Patients with mild cases of cerebral palsy also suffer from this and the only solution currently is physiotherapy and slings.
We propose to design shoes with pressure sensors embedded into the sole which will be prescribed by physiotherapists for patients. The sensors will detect bad foot configurations as determined by a physiotherapist and we will provide haptic feedback (vibrations) to alert the patient and to help them change their habits.
The pressure sensors will be connected to a processor which will be on a PCB that we will design. The processor/battery will be put on either the tongue of the shoe or on an ankle band, depending on size. The band will be an extra piece to wear, but is a much better alternative to bulky slings.
The device will be programmed by the physiotherapist using software we will provide. The device itself will have a start/stop recording button. The software will display a picture of the foot with all the pressure sensor positions shown on it. The physiotherapist will press start and ask the patient to walk and then press stop. After transferring the data over a micro-USB the software will display the different readings from the sensors as they had occurred in real-time. This will let the physiotherapist know what he or she is dealing with. Next, he or she will have to choose when the vibrations go off based upon the relative difference in the readings from the sensors. He or she will then upload this data into the PCB through the micro-USB.
If someone has bad posture, they continuously keep their feet in the same wrong orientation while walking (confirmed by a physiotherapist). Therefore, once programmed, the insole will not need to be continuously updated, though occasional check-ups and reconfigurations may be necessary.
Either on the ankle band or somewhere on the shoe we will have a button that can be pressed which will shut off the vibration in cases where the user doesn't want vibration like sitting or other situations.
# Possible Additional Features:
Counting how many times the vibration went off and recording the data. Use this data over a few months to see if the number of vibrations go down. The physiotherapist will be able to download this number of vibrations data through a micro-USB.
# Solution Components:
*Subsystem 1:* Pressure Sensors - A number of pressure sensors will be embedded into the sole of the shoe. We are looking at two alternatives for this right now:
* 5kg - 50kg Resistive Pressure Sensors
* 50kg Half-bridge Load Cell Body Scale Weighting Sensors
*Subsystem 2:* Power - Replaceable battery
*Subsystem 3:* PCB - This will include a microprocessor to analyze the data incoming from the sensors and a micro-SD to store data.
*Subsystem 4:* User Feedback - We need something to generate vibration for correctional notifications.
* Mini Vibration Motors
*Subsystem 5:* Buttons - We are looking at 4 buttons for the user to control behavior (turn off, turn on, start recording/stop recording). These can be accomplished with simple push buttons. These buttons will be used as input for the processor.
*Software Subsystem 1:* Basic program for the PCB that checks for any of the physiotherapist’s conditions.
*Software Subsystem 2:* Basic program/user interface to show the doctor what happened while he or she collected data and to allow him or her to tweak the conditions under which the feedback goes off.
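As an illustration of the condition-checking program, the sketch below triggers the vibration motor when the outer-edge sensors carry a larger share of the load than a physiotherapist-configured limit. The sensor names and the 0.6 ratio are hypothetical placeholders, not values from this proposal.

```python
# Hedged sketch of one physiotherapist-configured posture condition:
# too much weight on the outside edge of the foot. Sensor keys and the
# ratio limit are illustrative assumptions.
def should_vibrate(readings, outer_keys=("heel_out", "mid_out"), limit=0.6):
    """True when the outer sensors exceed their allowed load share."""
    total = sum(readings.values())
    if total == 0:
        return False  # foot not on the ground; no feedback
    outer = sum(readings[k] for k in outer_keys)
    return outer / total > limit
```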
# Proposed Timetable:
* Milestone 1: Hardware development -- Decide best placement of sensors on insole to get usable data and create PCB design. We may need to consider placement of PCB, sensors, and vibration motors; we want the insole to be comfortable, and lightweight.
* Milestone 2: Software/app development -- Decide how to collect and visualize data onto a simple, user-friendly application which can be used at orthopedic appointments to track progress. The data collected should also help correct posture in real-time through vibrational reminders.
* Milestone 3: Additional features if time permits.
# Criterion for Success:
* The pressure sensor insole can accurately and reliably transmit weight distribution data to microprocessor to map/visualize foot orientation.
* If the user has an incorrect foot posture, the device should set off a vibration reminder to correct foot posture.
* The device can be controlled and data can be visualized through a software application.
|30||Electric Thermos Box
|Ruomu Hao||Arne Fliflet||design_document1.pdf
|Project Members: Zerui An (zeruian2), Tingfeng Yan (ty7), Celine Chung (mwchung2)
Normal thermos cups preserve the temperature of the liquid inside by using a physical structure that slows the dissipation of thermal energy, but we often find the liquid too hot or too cold when using them. If we find the liquid too hot to drink, we might leave the cup open or add some of the same liquid at a lower temperature. These methods either take a long time or cannot be performed under certain conditions. The situation is even worse when we want the liquid to be hotter, since we rarely have any way to heat up the liquid.
# Solution Overview:
We can design an electric thermos cup. This cup can heat up or cool down the drink inside by simply pressing a button, or by setting a desired temperature using the provided buttons.
# Solution Components:
- Subsystem 1 (heating): This module starts heating up the drink once the heating button is pressed (or when desired temp. is higher than current temp.), changing the light color to red at the same time.
- Subsystem 2 (cooling): This module starts cooling down the drink once the cooling button is pressed (or when desired temp. is lower than current temp.), changing the light color to blue at the same time.
- Subsystem 3 (control): This module heats/cools the drink to a user-specified temperature (by sending control signals to subsystem 1 & 2). In case we decide to add a pause button, this module is also responsible for stopping the heating/cooling process when the pause button is pressed.
- Subsystem 4 (display): A screen displaying current liquid temperature, which is measured by a temperature sensor.
- Subsystem 5 (power): Power supply of all the other subsystems.
- Subsystem 6 (safety): This subsystem will take in the data of the temperature sensor and force the system to pause when the temperature is too high or too low. Also activated when the circuit is behaving abnormally (e.g. when the current goes too high)
We now have 3 possible ideas for subsystem 1 (heating subsystem):
(1) By Joule’s law, P = I^2*R, we could use resistors to generate heat. The main challenge of this approach is supplying enough power while keeping the voltage and current under control (to avoid burning the circuit).
(2) We could run a heat engine in reverse (as a heat pump). Compared to approach (1), this approach requires less power (the exact amount is determined by the efficiency of the heat pump). The main challenge of this approach is to build an efficient yet small heat pump.
(3) We could make use of some reversible chemical reaction that absorbs/releases a fair amount of heat. The main challenge of this approach is to find a satisfying reaction and to build a control system for it.
Approaches (2) and (3) can also be applied to subsystem 2 (the cooling subsystem).
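The decision logic of subsystem 3 (control) can be sketched as a simple bang-bang controller; the 0.5 degC dead band, added to avoid rapid toggling between heating and cooling, is an assumed tuning value.

```python
# Sketch of the control subsystem: drive heating or cooling based on
# the user-specified target. The dead band is an assumption.
DEAD_BAND_C = 0.5

def control_action(current_c, target_c):
    """Return which module to drive: 'heat', 'cool', or 'idle'."""
    if current_c < target_c - DEAD_BAND_C:
        return "heat"   # subsystem 1 on; light turns red
    if current_c > target_c + DEAD_BAND_C:
        return "cool"   # subsystem 2 on; light turns blue
    return "idle"       # within the dead band; hold temperature
```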
# Criterion for Success:
Portable size and weight. Heat up and cool down some water in a reasonable amount of time and consume a reasonable amount of energy.
|31||A Modular Light Array
|Megan Roller||Jing Jiang||design_document4.pdf
|Project Members: Noah Feinberg - nbf2 , Ashwin Mukund - amukund2
Problem: During the holidays it's always really fun to set up light displays with cool designs. However, I find that my family tends to set the lights up the same way every time, so I would like a tool that lets me easily make a quick drawing and have it displayed on the lights for me.
Solution: An application where you draw a design, and the design is translated onto a 2D LED panel light display. A user draws an image in the application in grayscale. The grayscale drawing is then translated into a brightness for each light in the display. The 2D LED panel, with lights hooked up in parallel, will be connected to a Raspberry Pi (or similar), which will interpret the grayscale image and update the lights to match the drawn image. Such a system allows simple updating of the lights based on simple images. We decided to use a 2D grid for our light display to make it easier to translate images from the application to the grid itself.
- LED Strips - Display grid/hardware for displaying the patterns drawn. LED lights are the primary components for this subsystem, with the important requirement that we can modulate their brightness. For this project, we would have an 8x8 LED grid panel.
- Control system for LED strips - Distributes signals to each LED in the strip to dictate the brightness of each light within the grid. The control system will uniquely identify each light as simply as possible.
- Network subsystem to connect the control system to the physical interface - A Bluetooth receiver pair, one connected to the control system for dictating commands, and the other receiving the LED signals from the application. The PCB would contain the Bluetooth module that enables this component.
- Front-end interface - User menu to input images to display on the hardware grid; this would most likely be a phone app. Instructions to the controller would be sent through Bluetooth.
- Image analysis - Transformation of the image received in the front-end interface into proper instructions to be sent to the hardware grid. Analysis would be performed on the phone.
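The image-analysis step can be sketched as a mapping from 8x8 grayscale pixel values (0-255) to per-LED brightness duty cycles; row-major ordering of the grid is an assumption of this sketch.

```python
# Sketch of the grayscale-to-brightness translation for the 8x8 grid.
# Row-major LED ordering is assumed.
def image_to_brightness(pixels):
    """Flatten an 8x8 grid of 0-255 values into 64 duty cycles (0.0-1.0)."""
    assert len(pixels) == 8 and all(len(row) == 8 for row in pixels)
    return [round(v / 255.0, 3) for row in pixels for v in row]
```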
Criterion for Success: If we are able to display a sample grayscale image onto the light display with some level of fidelity. Another criterion would be the ability to properly transform a grayscale image from the front-end application into a form that the hardware grid can reproduce.
Side-note: As mentioned in the idea post, this idea is similar to the Christmas light product referenced to us. We are trying to simulate images onto the hardware with the restriction of gray-scale, meaning we will have the ability to modulate brightness, as opposed to an on-off system implemented by the Christmas lights suggested to us. There is also the requirement for this to be modular, meaning multiple displays could be chained together for decoration purposes.
|Chi Zhang||Joohyung Kim||design_document1.pdf
Not everyone knows how to play an instrument. And if they do, they might not know how to do it well and be able to stay in key. But they would like to be able to make some music!
We propose a hand gesture controlled instrument. Much like how a conductor waves their hands around, so would you. The gestures would translate into notes, and things like how quickly you move would make the music faster or slower as well. An accelerometer/gyroscope would be able to detect speeds and direction, while flex resistors on your fingers could control other musical aspects. These flex sensors will control how many notes you could play at a given time. For example, if you want to play a 3 note chord you could hold three fingers down.
The gesture control would be just for one instrument. We plan on making our own tones to use. For each note in a given range of notes, we will produce our own unique sound. In order to ensure that the sounds made by the user are pleasant to the ear, we will make the range of notes possible set to only notes in a specific key.
The gloves will be powered with the help of a lithium ion battery.
An accelerometer/gyroscope sensor placed at your wrist will calculate the speed of your hands as well as the angle. The angle value will be used to distort the chord you play with your fingers.
Flex Sensors will be placed in each finger of the glove to detect the angle at which you bend your fingers and how many fingers are bent.
An ESP8266 WiFi board will accumulate the data and send it to a computer, which will determine the note to play and then play it.
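The finger-to-note mapping above could be sketched as follows: the number of bent fingers picks how many chord tones sound, restricted to a single scale so the output stays in key. The C-major scale, the stacked-thirds voicing, and the 45-degree bend threshold are illustrative assumptions, not choices fixed by this proposal.

```python
# Hedged sketch of note selection from the flex sensors: one chord
# tone per bent finger, confined to C major so everything stays in key.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
BEND_THRESHOLD_DEG = 45  # assumed bend angle that counts as "held down"

def notes_to_play(flex_angles, root_index=0):
    """Stack thirds above the root, one note per bent finger."""
    bent = sum(1 for a in flex_angles if a >= BEND_THRESHOLD_DEG)
    return [C_MAJOR[(root_index + 2 * i) % 7] for i in range(bent)]
```

Holding three fingers down with the root on C would sound a C-E-G triad under these assumptions.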
Criterion for Success
Our gloves will allow the wearer to produce some sort of music in response to their hand/arm gestures and motions.
There’s a company called Stretchsense that makes a hand gesture based glove. However, their product isn’t specialized as a musical instrument, as our project aims to be. It also appears that their gloves produce music based on their orientation with respect to a curved table, not just the gloves themselves. In addition, their product is very expensive and uses expensive sensors, while ours aims to be more cost-efficient.
|33||Puzzle Module for Portable Escape Room
|Weihang Liang||Jonathon Schuh||design_document1.pdf
|Colin Flavin (colinlf2), Nick Russo (nerusso), Helen Swearingen (hes2)
Champaign-Urbana Adventures in Time and Space (CATS) escape room company is working on a portable escape room device that can be moved to different locations and set up/torn down quickly. They need the main puzzle module designed and built for the box as well as a power distribution system that can connect each module together. The device should be able to take in wall power and convert it to useful voltages for each system in the device.
For the puzzle module that we are building, we will build an "invisible maze." This will consist of a few parts. On one side of the Escape Box Device, there will be a screen which will display a path to be followed by Person A. Person A will be on the opposite side of the screen wearing a device with visual and haptic feedback systems (most likely lights and vibrations). Person B will be watching the screen. Person B will see the path on the screen and direct Person A to follow the path. Person A will follow Person B's directions, with the wearable device changing color/vibrating if they get too far off track, in which case they will have to start over. After the maze is completed, the device may be taken off and the players may continue with the rest of the Escape Box Device. This puzzle module will include a camera to track the location of the person wearing the feedback device (which displays an AprilTag fiducial marker), serving as a location marker and an additional source of feedback to the players. There will also be a screen that displays the correct path to follow and the location of the maze-traveler in real time. There would be a foot or so of tolerance for a "proper" path to follow, with an additional 1-3’ of tolerance outside of that as a "warning zone," where the wearable device would provide feedback that the maze-traveler needs to get back on track.
For the power system, we will design a power converter system for them to use in their device. We want a modular sort of approach with a variety of voltage rails that can be used as new modules come in and are replaced. We will make use of a variety of AC/DC converters to meet the needs of the various sub-modules.
This system will coordinate the various modules.
Wearable Device Subsystem
A robe, coat, or backpack with an AprilTag visible on it, and lights and buzzers to give the wearer feedback about how close they are to the path they need to follow. It would have rechargeable batteries or be able to charge from within its place in the crate between games.
A camera will be placed on top of the box. One player will have an AprilTag visible to the camera at all times. The AprilTag will then be read in by the camera and processed on a Raspberry Pi using the AprilTag library. From that, we can extract the distance the person is away from the box.
While one player is in the maze area, another player will read a display in a place obstructed from the first’s view. This display system will display a randomly generated maze, as well as the player currently in the maze. It will give live updates of the players location, providing valuable feedback to the players in the game.
An AC power supply (outlet) will go into the system and then convert it into DC rails for each puzzle on the portable escape room. We plan to use AC/DC converters to accomplish this, as well as design a failsafe circuit for potential power problems.
Processing power may be an issue given the amount of data we need to process. To remedy this, we are planning to use an ARM processor to run the game and display, while a Raspberry Pi runs the AprilTag library for the tracking portion.
#**_CRITERION FOR SUCCESS_**
The goal of this project is to build a fully functional, wall-powered puzzle module for the escape box device. The device should be able to take in wall power and rectify it to DC voltage, stepping down that voltage to power different voltage rails necessary for the device.
For the puzzle module itself, it should be able to choose a path for players to follow and display this path on the screen on one side of the box. On the other side, the camera should be able to find the AprilTag mounted on the wearable device and correctly identify its position relative to the camera’s location, with a 1-3’ margin of error. 1-3’ out of that, the wearable device will provide haptic and visual feedback as a warning. Any further than that, and the wearable device will signal that the run has failed and needs to be restarted. The screen on the opposite side of the box should be able to update real-time to track the location of the maze-walker.
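The zone logic described above can be sketched directly from the stated tolerances: roughly one foot of "proper" path, a 1-3 ft warning band, and a failed run beyond that.

```python
# Sketch of the maze-walker zone classification using the tolerances
# from the proposal (distances in feet).
def zone(distance_from_path_ft):
    """Classify the maze-walker's offset from the displayed path."""
    if distance_from_path_ft <= 1.0:
        return "on_path"
    if distance_from_path_ft <= 3.0:
        return "warning"   # wearable vibrates / changes color
    return "restart"       # run failed; maze must be restarted
```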
|34||Dryer Temperature Probe
|Charles Ross||Arne Fliflet||design_document1.pdf
|Joshua Rodriguez (jkr2), Michael Pauls (mepauls2), and Yoon Park (ypark66)
Improper use and maintenance of laundry dryers lead to the accumulation of lint and fabric softener in the dryer vent. Without the removal of this debris, the internal temperature of the dryer can become too high, resulting in dryer fires that cause substantial property damage and potential bodily harm. A thermocouple can be used to measure the internal temperature of a dryer during operation; a temperature above 250°F indicates a large accumulation of lint that must be cleaned out. However, such a solution is rather pricey, with units costing about $100.
# Solution Overview:
A more cost-effective, user-friendly dryer temperature probe can be created from three components. The temperature probe is a K-type thermocouple that can withstand and measure temperatures of up to 350°F inside the dryer while it is running. The hardware unit is physically placed on top of the dryer while it is running to read the data, convert it from analog to digital, and transmit it via Bluetooth. A smartphone is then used to view the temperature data in real time and indicate whether the temperature is potentially dangerous. This interface is more user-friendly than the LCD display commonly found on handheld thermocouple temperature sensors.
# Solution Components:
## Temperature Sensor:
This will primarily involve a k-type thermocouple. This will allow the temperature of the lint trap to be accurately read. Then this information will need to be converted from an analog signal to digital. A cold junction compensation circuit must be constructed to bias the thermocouple voltage and properly read the temperature value.
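The cold-junction compensation step can be illustrated with the common linear approximation of roughly 41 uV/degC for a K-type thermocouple; a real design would use the NIST reference polynomials rather than this simplification.

```python
# Illustrative cold-junction compensation for a K-type thermocouple,
# using the linear ~41 uV/degC approximation (an assumption; the NIST
# polynomial tables should be used for accurate readings).
SEEBECK_UV_PER_C = 41.0

def hot_junction_temp_c(v_measured_uv, cold_junction_c):
    """Add back the cold-junction contribution, then convert to degC."""
    v_total_uv = v_measured_uv + cold_junction_c * SEEBECK_UV_PER_C
    return v_total_uv / SEEBECK_UV_PER_C
```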
## Power subsystem
This will consist of a 9V battery and a voltage regulator used to power the electronics on the control board as well as the Arduino.
## Control board
This will be made up of an Arduino which will take the thermocouple signal as input, apply conditional logic, and send the resulting information via a Bluetooth module to a smartphone. The hardware will be placed on top of the dryer during operation.
## Smartphone application
An application on a smartphone with Bluetooth capability will be used to display information to the user. Warnings will be displayed if the temperature exceeds a value deemed dangerous for operation.
# Criteria for success:
- Be able to accurately detect the current temperature in the lint trap within a tolerance of 2-3 degrees Celsius. We can test our prototype side-by-side with Greg Tucker’s thermocouple sensor.
- Must be durable enough to withstand dryer operation. The thermocouple itself must be rated properly to operate at up to 400 degrees Fahrenheit which is within the expected operation range of dryers. The physical housing of the circuitry must be sturdy enough to not be damaged by the vibrations from the dryer during operation which could result in unwanted circuit behavior.
- Clearly and accurately depict, in real time, which temperature zone a dryer unit is operating in:
  - Below 185°F: the dryer safety sensors have activated and the unit needs service (clothes will not dry properly). Blue color code.
  - 185-210°F: dryer working optimally (only one cycle needed to dry clothes). Green color code.
  - 220-250°F: safe, but running a little warmer than it should; the dryer should be serviced soon. Yellow color code.
  - Above 250°F: running hotter than recommended and needs service, as temperatures over 250°F begin to burn clothing (cotton/wool). Red color code.
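The zone logic above reduces to a simple threshold function. Note the listed ranges leave a 210-220°F gap; this sketch treats that gap as yellow, which is an assumption on our part:

```python
def dryer_zone(temp_f):
    """Map a dryer temperature (Fahrenheit) to its color-coded zone.
    The 210-220 F gap in the listed ranges is treated as yellow here."""
    if temp_f < 185:
        return "blue"    # safety sensors activated, needs service
    elif temp_f <= 210:
        return "green"   # operating optimally
    elif temp_f <= 250:
        return "yellow"  # running warm, service soon
    else:
        return "red"     # over 250 F, danger of burning clothing
```

The smartphone app would color its display according to the returned zone and raise a warning for the red zone.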
|Stephanie Jaster||Rakesh Kumar||design_document1.pdf
|# **Team (Net-ID):**
Bhavish Bhattar (bbhatta2), Santan Katragadda (skatrag2), Ali Berk Eroglu (aeroglu2)
Problem: Sexual assault and harassment is a common problem in society and is known to be the most under-reported violent crime. Cases are often dismissed or never brought to court because the evidence presented becomes a "he said vs. she said" case in which the victim's pleas struggle to gain validity without concrete proof.
Solution Overview: Our solution seeks to help victims through the use of a wearable device that can record audio and emit a loud siren to attract attention to the situation. The audio will be saved to the user’s phone via bluetooth. Upon deploying the loud siren, the device will reach out to other parties for help.
# **Solution Components:**
- Trigger mechanism: A button on the wearable device that, upon three taps, will begin recording audio and, upon five taps, will deploy an alarm and reach out to parties for help.
- Alarm system: Once the alarm is triggered, the device must be capable of outputting a loud enough siren to attract attention from people in the vicinity. A speaker with an amplification circuit would be used here.
- Audio recording: Device must be able to record audio in the nearby vicinity. We will use a microphone with an amplification circuit to accomplish this.
- Bluetooth: device can send the audio recording via bluetooth to the victim’s mobile device
- Application: will allow the audio to be saved on the phone.
- Memory: Micro SD card to store audio recordings on wearable device
- Microprocessor: Contains logic for managing memory, bluetooth, audio recording, and alarm system.
- Battery: Battery with a charging circuit and USB-to-serial converter
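The microprocessor's tap-count trigger could be sketched as below. The 2-second burst window is an assumed parameter for illustration; the proposal only specifies the tap counts:

```python
def classify_taps(tap_times, window_s=2.0):
    """Classify a burst of button taps into an action.
    tap_times: sorted timestamps (seconds) of button presses.
    Taps within window_s of the first tap count toward the burst:
    5+ taps -> alarm + reach out for help, 3-4 taps -> record audio."""
    if not tap_times:
        return None
    burst = [t for t in tap_times if t - tap_times[0] <= window_s]
    if len(burst) >= 5:
        return "alarm"
    if len(burst) >= 3:
        return "record"
    return None  # too few taps: ignore (avoids accidental triggers)
```

Requiring the full count within a short window helps reject accidental single presses of the wearable's button.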
# **Criteria for Success:**
- Reliable way of initiating audio recording and alarm system
- Saving of audio recordings onto phone
- Reaching out to parties for help upon alarm
- Device with reasonable size that can be wearable
|36||EyeCU - Assistive Eyewear
|Shuai Tang||Jing Jiang||design_document1.pdf
|Project Members: Irfan Suzali (isuzali2), Abishek Venkit (avenkit2), Nikhil Mehta (nikhilm3)
Problem: Living a normal life can be difficult for the visually impaired. Although people with visual impairments can be largely independent in most scenarios, there are still cases where assistance is required in day-to-day activities. An example of this is reading the ingredients on a package of food, or identifying a mysterious object. We would like to provide this market with another layer of connection to their surroundings.
Solution Overview: Our solution is assistive eyewear for people with vision impairments. The eyewear will include a camera to capture the user’s field of view, a bluetooth module, and a battery to power the device. This visual information will be sent over bluetooth to the user’s smartphone, which will then compute the necessary contextual information about the scene in front of them, and output audio through the smartphone. This audio can be played on the phone’s speaker or through a pair of headphones connected to the smartphone. Additionally, we could add a microphone to the eyewear as well, allowing the user to input voice commands to specify what type of information they want about their surroundings. This feature is not essential to the function of our product, but could be an extra feature, given the time and need.
- Integrated Eyewear including camera, bluetooth, and battery
- Dedicated circuit to store and transmit images via bluetooth
- Low power circuitry for 24+ hour use
- Bluetooth system to receive images
- AI/Computer Vision system on smartphone to analyze images and generate context info (May be connected to a cloud classification database like Azure)
- Text to speech to output audio information to user
Secondary Subsystem (optional):
- Microphone built into eyewear, also connected over bluetooth
- Natural language processing to understand commands from the user
- Used to specify what type of feedback the user wants on their field of vision.
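The phone-side feedback step might look roughly like the sketch below, where the object-classification sentence follows the format given in the success criteria. The mode names and result fields are illustrative assumptions, not a defined API:

```python
def compose_feedback(mode, result):
    """Turn a vision result into the sentence handed to text-to-speech.
    mode: 'text' for OCR read-back, 'object' for classification.
    result: dict with 'text' (OCR) or 'label'/'confidence' (classifier)."""
    if mode == "text":
        return result["text"]  # recite recognized text verbatim
    if mode == "object":
        pct = round(result["confidence"] * 100)
        # Matches the example format from the criteria for success.
        return f"There is a {result['label']} in front of you with {pct}% confidence"
    raise ValueError(f"unknown mode: {mode}")
```

The chosen usage mode would come from the optional voice-command subsystem, or default to object classification.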
Criterion for Success:
- A successful solution will improve the quality of life for the visually impaired in numerous ways, and will depend on the speed, accuracy, and usefulness of the glasses.
- The glasses must take clear, detailed photos of the surroundings to transmit to the phone.
- The processing of the images and response must be quick (under 2 seconds).
- The device allows the user to choose between different usage modes : text-to-speech, object classification, and possibly other modes.
- The feedback given to the user must be helpful (give them information otherwise unknown to them). For text, the device must recite the text back to the user (within 95% accuracy). For object classification, the device must recite the object back to the user along with its confidence (eg: “There is a bear in front of you with 20% confidence”).
Idea Post Link: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=35979
|37||Electronic Badge System for Career Fairs
|Shaoyu Meng||Jing Jiang||design_document2.pdf
|Varad Khandelwal, varadk2
Pragya Aneja, pragyaa2
Ninad Godbole, ninadag2
**Electronic Badge System**
1. Career Fairs have extremely long lines. Students spend a lot of time waiting in lines for bigger companies, which results in them missing out on other promising smaller companies.
2. While waiting in queues, students have to manually fill out a form with their contact information which wastes time and creates more inefficiency.
3. Career Fairs end up using a lot of paper as students generally print around 15-20 resumes to then hand out to various companies, which is environmentally unfriendly.
The solution involves each student attending a career fair wearing an electronic badge that displays their name and major; this device would be used to solve the problems mentioned above. Additionally, each company attending the fair would have a unique receiver that can detect the badge whenever a student taps on it. Through the tap, three distinct things will occur:
1. A virtual queue eliminates standing in line for longer than 5-10 minutes. Whenever a student wishes to be added to a company's virtual queue, they tap the company's receiver, which automatically adds them to that queue. They no longer have to worry about standing in long lines, as the badge will buzz lightly to alert them whenever they are ~10-15 students from the front of the queue.
a. For smaller companies that don’t have a line longer than 10-15 students, students could just tap the badge on the receiver for the company and be good to go. They would then just stand in the physical line for the company.
b. For bigger companies, the students would tap their badge on the receiver and then be alerted later, by a light buzz or the LED screen, whenever they are 10-15 students from the front of the queue. After the alert, as soon as they are free, they tap the company's receiver again to be removed from the virtual queue and can then stand in the physical line, which will be only 10-15 students long.
c. Have a button interface to remove oneself from queues and limit the number of virtual queues per student.
2. Eliminating the need for company-specific check-in by transferring basic contact information (name, phone number, email address, major, sponsorship status) whenever the students tap the system for being added on the virtual queue.
3. Transferring resumes to the company and compiling them into a resume bank.
**Solution Components**
**1. Active RFID system**
a. RFID tag/reader system: The badge would be an RFID tag and the company receiver will contain RFID readers. This will be used to exchange basic contact information and student UIN which will be used to put the student on a virtual queue and complete company check-in.
b. Mini LED screen and buzzer: This will display the position in the virtual queue and buzz when the position in the queue is under 15.
c. Buttons to remove yourself from the queue
d. RFID Antennas: They will periodically ping the RFID tags with updated virtual queue position
**2. Bluetooth Module**
a. Both the tag and receiver will contain a Bluetooth module through which resume will be transferred.
b. Resumes can’t be transferred through RFID due to their larger size.
**3. Microcontroller with memory chip** which will receive data from the RFID receiver and maintain the virtual queue and contact information database. Additionally, a Bluetooth or wired connection allows a PC connection to be established, so that the company can transfer the data to its own devices.
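The microcontroller's virtual-queue logic might be sketched as follows. The toggle-on-tap behavior and the 15-student alert threshold follow the description above; the class and method names are illustrative:

```python
from collections import deque

ALERT_POSITION = 15  # buzz badges within this many spots of the front

class VirtualQueue:
    """Per-company virtual queue keyed by badge ID. Tapping the
    receiver toggles membership; badges near the front get buzzed."""
    def __init__(self):
        self.q = deque()

    def tap(self, badge_id):
        """First tap joins the queue; a second tap leaves it."""
        if badge_id in self.q:
            self.q.remove(badge_id)
            return "removed"
        self.q.append(badge_id)
        return "added"

    def position(self, badge_id):
        """1-based position in the queue, shown on the badge's screen."""
        return list(self.q).index(badge_id) + 1

    def badges_to_buzz(self):
        """Badge IDs close enough to the front to be alerted."""
        return [b for i, b in enumerate(self.q) if i < ALERT_POSITION]
```

The RFID antennas would periodically push each badge its `position()`, and the buzzer fires for IDs returned by `badges_to_buzz()`.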
**Criterion for Success**
1. An electronic badge that:
a. Allows the student to tap a receiver (see below) that puts him/her on a virtual queue and alerts him/her when next on the queue through light buzzing.
b. Allows him/her to monitor his position in the queue through a small LCD, embedded on the badge.
c. Contains a button so that the student can remove himself/herself from the queue.
d. Allows the student to tap a receiver to transfer necessary contact information as well as a resume file.
2. A receiving system that:
a. Recognizes a badge uniquely and handles the logic for the virtual queue
b. Establishes a connection to the badge and allows contact information & resume transfer. Additionally gives the ability to transfer the data to a PC.
c. Stores and manages all the contact information transferred to it.
|38||NannyBot for Robots Developing to Walk
||Alejandro Diaz De Argandona Araujo
|Vassily Petrov||Joohyung Kim||design_document1.pdf
A common way for robots to learn to walk is through trial and error. The robot walks, falls, records the failure, and walks again, repeating the process until it learns to walk stably. However, this learning process requires many failures. The process is tedious, as people need to manually pick the robot up and bring it back to the starting place. It can be costly if the robot is damaged when it falls, and even dangerous if a person makes a mistake while putting it back up and bringing it back. There are methods that assist the robot, such as installing a motor and strings to reel the robot in when it falls, but these methods require installation, which may not be possible in all environments.
The process of robots learning to walk would be greatly simplified and much safer if it were aided by another robot specifically designed for the purpose. Our idea is to create a robot, namely a NannyBot, that can aid robots learning to walk. Instead of humans, our robot would follow the robot learning to walk and pick it up when it falls. It would be a wheeled robot, so that it won’t fall over, and it would lift the walking robot using a strap tied to it. The NannyBot will be controlled by a human user via a wired controller (such as a video game controller like an Xbox controller), who will use it to follow the walking robot and bring it back when it falls. It would not require installation anywhere, and would therefore be relatively unrestricted by the environment. The robot will have omnidirectional wheels, allowing it to follow any direction the walking robot may go. The NannyBot will have a skeletal, box-shaped structure with one side open, and will have 4 strings attached to a strap fastened to the walking robot. When the human controlling the NannyBot pushes a button, the 4 strings will wind, lifting the walking robot into the air. The NannyBot will then return the walking robot to where it started. The targets of this NannyBot are walking robots of approximately 30 cm × 30 cm × 50 cm in size and about 5 kg in weight. A possible robot to use this on is the Robotis OP2, which Professor Joohyung Kim, who pitched this project, has. The Robotis OP2 has a default walking speed of 24 cm/sec, which we will aim to match.
Originally, the idea was to have this done completely autonomously, but the TAs we talked to told us that the idea was beyond the scope of the class, and that we needed to simplify it.
The structure of the NannyBot may be difficult to understand, so we attached a photo of a rough drawing of the NannyBot's structure.
Solution Components:
- The NannyBot will have straps attached to it, used to lift the walking robot when it falls. The straps will be connected to the walking robot as well. There will be four strings, each on an upper corner of the NannyBot, which will bring up the walking robot stably when it is lifted.
- The structure of the NannyBot will have a box shape, partially surrounding the walking robot, so that the NannyBot can withstand the walking robot’s weight as it lifts the walking robot
- Aside from the straps, the NannyBot’s dimensions will leave enough clearance from the walking robot that it will not impede the walking robot’s path or have the walking robot crash into the NannyBot when it falls.
- We will use omni directional wheels for the NannyBot, so that it can follow the walking robot in any direction.
- We will require powerful motors with encoders for lifting the walking robot, and powerful motors for the wheels as well, as the NannyBot is expected to return to the beginning place with the walking robot.
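Converting the controller input into commands for the omnidirectional wheels could use a standard four-wheel mixing scheme. This is a generic sketch of that kinematics, not a finalized drive design for the NannyBot:

```python
def omni_wheel_speeds(vx, vy, omega):
    """Standard mixing for a four-wheel omnidirectional (mecanum-style)
    base. vx: forward command, vy: strafe command, omega: rotation
    command, each in [-1, 1]. Returns normalized wheel commands
    (front_left, front_right, rear_left, rear_right)."""
    fl = vx + vy + omega
    fr = vx - vy - omega
    rl = vx - vy + omega
    rr = vx + vy - omega
    # Scale down so no wheel command exceeds magnitude 1.
    m = max(1.0, abs(fl), abs(fr), abs(rl), abs(rr))
    return (fl / m, fr / m, rl / m, rr / m)
```

For pure forward motion all four wheels turn together; for pure strafing the diagonals oppose each other, which is what lets the NannyBot track the walking robot in any direction.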
Criterion for Success:
- Provide a mechanical structure that can support the robot.
- Have the NannyBot move in direction of robot with the help of a controller.
- Have the NannyBot avoid accidentally crashing into the walking robot.
- Upon receiving a command from the controller, the NannyBot should be able to pick up the robot.
- The NannyBot should be strong enough to bring the walking robot back to the starting point.
- Have the NannyBot automatically track the robot.
- Have the NannyBot sense when the robot has fallen down and pick the robot up after this detection.
|39||Automated launcher release for a flapping wing robotic bat
|Jonathan Hoff||Joohyung Kim||design_document1.pdf
|# Team Members:
Abhishek Bhandari (anb4)
Kousthubh Dixit (kmdixit2)
Vyom Thakkar (vnt2)
We are working on a project that was pitched by Jonathan Hoff. Jonathan and his research group developed a bio-inspired robotic flapping-wing bat robot that mimics the agility and efficiency of bats using silicone membrane wings. The initial robot launcher developed by Jonathan did not control the timing of the launch, which leads to different initial positions of the bat's wings and ultimately causes the robot to take different trajectories at launch.
Video link for the bat robot: https://youtu.be/OfwX6X4Nx20
Video link for launcher release: https://youtu.be/C1epTUGQZ3w
# Solution Overview:
We will build an automated launcher release for the robot that allows the user to control the timing of the launch as well as the position of the wings at launch time, which ultimately determines the trajectory the bat robot takes.
# Solution Components:
## [Subsystem #1] Sensors:
We are planning to use about 3-5 IR sensors and 1-2 ultrasonic sensors, all part of our bat launching device. The IR sensors will be spaced equally in the vertical direction (we could 3D print a structure to hold them or build one in the machine shop) such that the lowest sensor corresponds to the lowest height the wing can reach and the highest sensor to the highest point the wing can reach vertically; the remaining IR sensors will sit at specifically chosen points in between. Using the IR sensor outputs and the associated timestamps, we will extrapolate the coordinates of the wings at a given point in time to define 8-10 wing orientations (different heights). The user can choose the specific wing orientation to launch at using three switches (more on this in the switch configuration section). Ultrasonic sensors will be placed at the bottom at an angle so that their output can corroborate the wing coordinate data. (Planning to use: URM06 Analog Ultrasonic Sensor, Oiyagai 5-pc IR Infrared Barrier Module Sensor.)
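The extrapolation step could be as simple as a linear fit through the most recent IR-beam crossings. This sketch assumes roughly constant wing velocity between adjacent sensors, which is an approximation given the sinusoidal flapping motion:

```python
def extrapolate_wing_height(crossings, t):
    """Estimate wing height at time t from the last two IR-beam
    crossings. crossings: list of (timestamp_s, sensor_height) pairs,
    in the order the beams were broken. Assumes the wing's velocity
    is roughly constant between adjacent sensors."""
    (t0, h0), (t1, h1) = crossings[-2:]
    velocity = (h1 - h0) / (t1 - t0)   # signed vertical speed
    return h1 + velocity * (t - t1)    # linear extrapolation forward
```

The launch trigger would fire the servo when the extrapolated height matches the user-selected wing orientation.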
## [Subsystem #2] Motor, batteries and regulator:
A servo motor (Micro Servo - High Powered, High Torque Metal Gear) would be used to flick a switch that would release the tension in the bow that would launch the bat. Additionally, we are planning to use a micro-controller (Arduino), and a boost switching regulator. We intend to power our device using batteries rather than wall supply as the device is going to be used outdoors.
## [Subsystem #3] Switch configuration and control:
In this project the user can control two parameters: the launch delay time and the position of the wings at launch time which in turn determines the trajectory that the bat robot takes. There will be preset desired trajectories of the bat robot based on its wing orientation. These preset trajectories will be enumerated and the user can select the desired one. Our launching system would trigger the launch of the bat when a user chosen wing orientation is met during the flapping of the wing. The user can also specify the launch delay time which will also be preset and enumerated, for example: 5s, 10s, 15s, etc… In order for the user to specify both the launch delay time and the desired trajectory of launch we can use two control knobs (DAVIES #1100) each capable of enumerating seven different possibilities.
# Criterion for Success:
(1) For a given flapping rate (8.5 Hz), the system must be able to accurately detect the position of the robot's wings and trigger a response that launches the robot when signaled by the controller.
(2) The system must also accurately trigger the launch of the robot after a user-specified period of time, using the switches on the controller.
(3) For a series of 10 launches, the difference in launch timing between any two launches should be within an error margin of 15% of the flapping period.
(4) The system must be seamlessly integrated with the launcher to avoid collisions and interference with the robot's launch path.
|40||Glove based Wheelchair Navigation
|Jonathan Hoff||Joohyung Kim||design_document1.pdf
Tanvi Shah(tanviss2), Lakshya Lahoty (llahoty2), Anumay Mishra(aam2)
Glove based Wheelchair Navigation
Individuals with disabilities such as paralysis or cerebral palsy find it hard to navigate a wheelchair using a joystick on the arm of the chair, given their limited hand control and movement. This inspired us to create a pair of gloves that facilitates maneuvering the wheelchair with limited arm movement.
The device would consist of two gloves: one with flex sensors to control the acceleration and speed of the motors (not accounting for direction), and one with a ball tilt sensor or an accelerometer that allows the user to accurately control the direction of the wheelchair: forward, backward, left, and right. The data from the sensors is fed into a microcontroller (an Arduino), which transmits the motion and acceleration/deceleration commands to the motors (the glove is wired to the chair). One PCB will combine the data from the two sensors and relay the information to the motors. We will implement threshold filters that reject faulty readings, ensuring sensor values stay within an acceptable range. There will also be an emergency fail-safe switch that decelerates and stops the wheelchair if needed.
- 5 flex sensors, one for each finger of the left hand: the idea is to use the data and pressure from the folding of the hand to quantify the speed of the motors in the chair.
- Ball tilt sensor or ADXL345 Accelerometer for the second glove to detect the direction of the motion.
- Microcontroller - Arduino will be used to get feedback from the sensors in the gloves and control the speed and direction of the motors of the wheelchair accordingly
- Batteries, rechargeable and powerful enough to last hours
- Push-Button switch which we use to incorporate the emergency fail-safe to decelerate and stop the motors quickly
- Motorized toy car/wheel for the wheelchair simulation
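The speed and direction mapping described above might be sketched as follows. The ADC range and dead-zone values are assumptions for illustration, not calibrated numbers from the proposal:

```python
FLEX_MIN, FLEX_MAX = 200, 800  # assumed valid ADC range for flex sensors

def glove_speed(flex_readings):
    """Average the five flex-sensor readings into a 0..1 speed command.
    Readings outside the expected ADC range are rejected as faulty;
    if all five are faulty, fail safe by commanding zero speed."""
    valid = [r for r in flex_readings if FLEX_MIN <= r <= FLEX_MAX]
    if not valid:
        return 0.0
    avg = sum(valid) / len(valid)
    return (avg - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)

def tilt_direction(ax, ay, dead_zone=0.3):
    """Map accelerometer tilt (in g) from the second glove to a
    direction command, with a dead zone to ignore hand tremor."""
    if abs(ax) < dead_zone and abs(ay) < dead_zone:
        return "neutral"
    if abs(ay) >= abs(ax):
        return "forward" if ay > 0 else "backward"
    return "right" if ax > 0 else "left"
```

Dropping out-of-range readings before averaging is the "threshold filter" behavior, and returning zero speed when all readings are faulty complements the hardware fail-safe switch.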
**Criterion for Success**:
- Successfully be able to control speeds using clenching of the glove
- Successfully be able to navigate the wheelchair by tilting the hand in the desired direction
- Have a fail-safe mechanism to be able to detect harmful speeds and react appropriately to emergency situations
- Be able to filter out faulty data so that sensor readings remain within acceptable bounds.
|41||Door Access Tracker
|Chi Zhang||Arne Fliflet||design_document1.pdf
|##### Patrick Connelly (prc2), Benjamin Wasicki (wasicki2), John Scholl (johnts2)
# Door Access Tracker
Many areas of day-to-day life involve the opening and closing of a door. We believe that better information on the state of a door can improve quality of life. For example, one could monitor a door as a security measure, such as a front door, a liquor closet, or a medicine cabinet. In addition, knowing when the mailbox has been accessed could be time saving, especially for someone who has mobility problems. Therefore, we would like to create a device to solve this problem that is cheap, versatile, and easy to install.
We propose the *Door Access Tracker* to solve this specific problem. This would consist of a four part system:
- **Door Status Sensor** - This is a two-piece system for tracking the state of the door. Our current idea is to mount one magnet on the door itself and another on the frame, allowing the magnets to act on each other only when the door is closed. The magnets' close proximity would pull on a conductive component, opening a circuit and changing the output signal.
- **System Controller** - This is the main computing device, consisting of a battery, a micro-controller, and a WiFi card. These would require very low power and bandwidth. The primary function of this component is to receive a signal from the sensor and pass it to the WiFi card, which provides the data to a remote server.
- **Backend Server** - This would be the server that would receive updates from each system controller and send out updates to each app associated with the specific system controller based on the configurations set in the app. We plan to run a basic container with our server on some cloud computing platform, possibly Google Cloud. As our server requires very little computing power, costs associated with running it would be negligible.
- **Android Application** - This would be the app that would connect to the backend server. It would tell the backend server to associate it with specific system controllers and receive updates based on configurations it sends to the backend server.
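The backend's fan-out behavior can be sketched in a few lines. This is a minimal in-memory model with illustrative names; the real server would persist subscriptions and push notifications over the network:

```python
class DoorServer:
    """Minimal sketch of the backend: controllers report state changes,
    apps subscribe to controller IDs and collect notifications."""
    def __init__(self):
        self.subscriptions = {}  # controller_id -> set of app IDs
        self.inbox = {}          # app_id -> list of pending messages

    def subscribe(self, app_id, controller_id):
        """Associate an app with a specific system controller."""
        self.subscriptions.setdefault(controller_id, set()).add(app_id)
        self.inbox.setdefault(app_id, [])

    def report_state(self, controller_id, is_open):
        """Called when a controller reports a door state change;
        notify every app subscribed to that controller."""
        state = "open" if is_open else "closed"
        for app_id in self.subscriptions.get(controller_id, ()):
            self.inbox[app_id].append(f"door {controller_id} is {state}")
```

Since only state *changes* are reported, the controller's power and bandwidth needs stay minimal, matching the low-power goal above.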
### Criterion for Success
To be effective, our device must meet the following criteria:
- Accurately determine the state of a door
- Reliably send the state of the door to the server upon a state change
- Server sends notifications of door state to the app based on set configurations
- App alerts user based on notification received from server
|42||Vehicle Detection Cane
|Johan Mufuta||Rakesh Kumar||design_document1.pdf
|Neva Manalil (manalil2), Nick Halteman (nth2), Aditi Panwar (apanwa3)
Blind people who use a cane rely on their hearing to determine if it is safe to cross a street. Gas fueled vehicles make a loud noise when driving by, but electric vehicles are virtually silent. With electric vehicles becoming more common it becomes more difficult for blind people to navigate as they cannot easily determine if it is safe to walk.
# Solution Overview
Our solution for determining if an area is safe to walk is a battery-powered cane attachment. When activated by pressing a button, it uses a radar sensor to determine if there are cars or other fast moving vehicles in front of the user and alerts the user with vibration if it is not safe to walk.
# Solution Components
## Sensor Subsystem
The sensor subsystem is responsible for using the Doppler effect to identify moving vehicles. This technology has been developed in recent years for use in fully and partially autonomous cars. By emitting high-frequency microwave “chirps” (above 77 GHz) and “listening” for reflections off objects, their general location and speed (via the Doppler effect) can be determined. Further processing can extract more data about the object, such as size and certain material characteristics (useful for differentiating between cars and other moving objects like people). We plan to use a radar transceiver such as the TEF810X (linked below), which has been designed for automotive use and thus has no problem detecting cars at typical driving distances.
An accompanying radar microcontroller is necessary to control and process data from the radar transceiver. It supports a hardware interface with the radar transceiver and hardware acceleration of common radar signal processing tasks. We intend to, as with the transceiver, use a radar microcontroller designed for automotive use such as the S32R Radar Microcontroller (linked below). This microcontroller is actually designed for use with the TEF810X.
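The core speed calculation from the radar return is a one-line Doppler relation, v = f_d · c / (2 f_c). The vehicle-speed threshold below is an assumption for illustration, not a value from the proposal:

```python
C = 3.0e8  # speed of light, m/s

def doppler_speed(f_carrier_hz, f_shift_hz):
    """Radial speed (m/s) of a reflector from the Doppler shift of a
    radar return: v = f_d * c / (2 * f_c). The factor of 2 accounts
    for the round trip of the reflected wave."""
    return f_shift_hz * C / (2.0 * f_carrier_hz)

def is_vehicle(speed_mps, threshold_mps=3.0):
    """Treat anything moving faster than walking pace as a potential
    vehicle; the 3 m/s threshold is an illustrative assumption."""
    return abs(speed_mps) > threshold_mps
```

At a 75 GHz carrier, for example, a 5 kHz Doppler shift corresponds to a reflector moving at 10 m/s, well above walking pace, so the cane would vibrate to warn the user.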
## User Interface Subsystem
The radar microcontroller lacks the ability to interface with motors, speakers, and buttons, so a secondary microcontroller will be responsible for handling them. The two microcontrollers can communicate through I2C or a similar interface. This gives us flexibility in where some of the processing is done, as only the DSP-intensive tasks have to be completed on the radar microcontroller. The following devices will be controlled by the secondary microcontroller:
Rocker Switch (with raised mark on one side) - turns the device on and off
Push Button - enables car scanning when held down
Vibration Motor - Relays information to the user through various patterns of vibration. This can include the presence of cars, mode of operation, etc.
Piezoelectric Speaker - To make a sound when the battery is about to die
## Power Subsystem
The system will run off two 18650 cells. A USB charger PCB (a board used for making portable phone chargers; example linked below) will allow the cells to be charged with a USB cable. The board will also supply 5 V at set currents. Voltage regulators will correct the voltage for individual components (likely just one for 3.3 V). A voltmeter will measure the battery voltage, and if the voltage becomes low, the piezoelectric speaker will alert the user.
18650 Cells - provides power for the system
USB Charger PCB - handles recharging the batteries and provides 5v
Mini Voltmeter - To keep check on the charge in the battery (may be included in secondary microcontroller)
# Criterion for Success
The device reliably detects moving cars and alerts the user.
The device is easily operated by a blind person.
The device is comfortable and doesn’t infringe upon regular use of the cane.
The device is safe to rely on.
|43||Automatic Parking Meter
|Shuai Tang||Arne Fliflet||design_document1.pdf
Elliot Salvaggio (elliots2), Kishan Surti (ksurti2), Rutu Patel (rpate347)
Oftentimes when driving on campus, we find that paying for parking can be a hassle. It can be hard to estimate how long you will be parked: you may underpay, leading to a very annoying parking ticket, or overpay when you end up staying for less time than expected. Paying parking meters with coins is also inconvenient. Although existing solutions include a mobile app that lets one pay online in 20-minute increments, our solution is still an improvement over this method: it not only eliminates the need for coins, but also the problem of users having to remember to pay online on time, since it can be hard to remember how much time you have left at your parking spot, and people tend to forget to add additional minutes.
**Solution**: We propose a product that will be an easy add-on to the base of parking meters. Current open parking lots have a parking meter for each parking space. Our solution is an app that takes in a person's card details and vehicle information (license plate number), paired with a small unit added underneath the current coin meters. The unit consists of a camera, a QR scanner, a microcontroller, and an LED. At each parking space, a camera takes a picture of the car's license plate, triggered by a proximity sensor when a car enters the spot. Using computer vision, we extract the plate number from the captured images and use it to recognize the parked vehicle (if the license plate matches a registered app user). If we recognize the car, an LED on the unit at the parking space lights green (otherwise red), so the user knows, and the system begins timing their stay. If we are not able to retrieve the vehicle's information, the user can, as a backup, open the app and generate a QR code, which they bring to the built-in QR scanner to identify themselves, turning the LED green. When the user drives off, the proximity sensor detects that the car has left, and the card associated with the user's app account would be charged for the time the vehicle was parked. For the scope of this class, we will omit the functionality that actually charges a user's credit card, and instead record the time spent in the spot and calculate the amount that would be charged, down to the minute.
This solution is an add-on to the current parking meters at each parking spot. So, the user can always pay by cash, if they do not want to use the app.
- Proximity sensors for car detection and phone scanning detection
- Camera to take car's number plate pictures
- QR/Barcode scanner for scanning the code from the application
- Microcontroller (Arduino) for driving the logic of the system
Microprocessor/Microcontroller (Arduino) for decision making based on whether a license plate was successfully captured and can be read.
Computer Vision library (OpenCV) to be able to read a car’s plate and have it look up the owner’s account to display on our App how long their car has been parked in a spot.
IP Camera for license plate detection and the ability to send that data to our processor subsystem to make meaningful use from it.
Proximity sensor to detect whether a car is present in a spot, this helps recognize when a car approaches a parking spot, to trigger the camera to take pictures.
QR code reader that can detect the placement of a QR code inside the vehicle. This is a backup solution if for some reason the Camera cannot detect a license plate due to weather conditions such as snow, or rain.
LEDs will be displayed depending on whether a vehicle has been successfully registered into a spot and the time can begin accumulating.
Mobile Application will also be provided; this is for a vehicle owner to log in and enter their credentials, such as the license plate of their car, so we can quickly look up their plate. This will be a basic app to keep track of our data rather than the focus: we do not plan to do significant software engineering, just enough to show that our system works.
The application will also display the current amount of time a vehicle has been parked in a spot and the appropriate amount to be charged.
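As an illustration of the per-minute billing described above, here is a minimal Python sketch; the rate and the round-up-to-the-next-minute policy are placeholder assumptions, not decided values:

```python
from datetime import datetime

RATE_PER_MINUTE = 0.02  # hypothetical rate in dollars per minute

def parking_charge(entry, departure, rate_per_minute=RATE_PER_MINUTE):
    """Charge for a stay, billed per started minute (ceiling)."""
    seconds = int((departure - entry).total_seconds())
    minutes = -(-seconds // 60)  # ceiling division
    return round(minutes * rate_per_minute, 2)
```

The app would display this running total for as long as the proximity sensor reports the car present.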
- Coin Meters : Coin meters do not solve listed problems, since they are the most basic solution in parking payment
- Paying via app (e.g., Mobile Meter in Urbana-Champaign): It needs manual input of the parking zone, lot, parking space, etc. Moreover, it takes payment for the parking time in advance; if the user leaves earlier than the paid time, the payment is non-refundable, so the user has paid for a spot they are not using. That is the main problem we are trying to solve, while also eliminating the need to carry cash and putting minimal work on the user's side.
- Building parking lots: Existing building parking lots have a similar implementation, however our solution has computer vision added to that solution. There are building parking lots that work by giving a user a QR scanner ticket while the user enters a building, but we are trying to solve a general open parking lot problem where each space is metered.
**Criterion for success**:
- The IP camera on the parking meter is able to capture a vehicle’s plate when it comes to park.
- The microprocessor is able to match a vehicle’s license plate with the respective owner’s account and begin accumulating time parked in that space.
- The parking meter is able to stop the time on the vehicle once it departs from the spot (using proximity sensor) and display the appropriate amount charged in the application.
- The LEDs display as green once a vehicle has successfully been matched and the time begins to accumulate, but display as red once the vehicle departs a spot or a match has not been made.
- The QR scanner acts as a backup if we cannot obtain or read the plate information.
|44||Dog Training Collar
||Gonzalo Pastor Carrascosa
|Megan Roller||Joohyung Kim||design_document1.pdf
Jihyun Lee(jihyunl2), Louis Kim(ltkim), Gonzalo Pastor(gonzalo5)
Many dog owners face the problem of their dog breaking, damaging, or otherwise interacting with an object or area they shouldn’t. Additionally, owners with multiple dogs may have trouble keeping one dog from eating the other’s food.
Our solution is to create a bluetooth-enabled collar system that will spray the dog in the face with citronella spray when the dog approaches the object or region of interest programmed by the user. This spray has been tested and is completely harmless to the animal, with multiple products using this spray already in production.
Our product will be unique in that it allows distance sensitivity to be configured by the user via a free mobile app. The most similar existing products either implement a spraying collar that detects barking rather than distance, or use a remote control to trigger the spray. The first product solves a completely different problem, while the second is inconvenient (when the user is not with the dog, it is completely ineffective).
1. Sensor(BLE) Subsystem:
The RSSI values from the Bluetooth Low Energy controller (nRF52832) will be used to indicate the distance between the dog and the sensor. When the RSSI indicates the dog is closer than the desired range, the controller will signal the spray to deter the dog. Both the sensor and the collar's Bluetooth device will stay on, so that they automatically connect when the dog enters connection range.
2. Spray Subsystem:
For the spray that will be activated after it receives a signal from the BLE controller, we will use a mini 12V water pump with a short thin tube for a spray effect.
3. Power Subsystem:
Since all subsystems will be connected to a power source at all times, we plan to use a rechargeable lithium ion battery that can be charged via a micro USB cable.
4. User Interface:
The user interface (app) will be used to program the controller to set the desired distance.
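Although the firmware will run on the nRF52832, the RSSI-based trigger in the sensor subsystem can be sketched in Python using the standard log-distance path-loss model; the TX power, path-loss exponent, and trigger distance below are placeholder values that would need calibration on the real collar:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance (metres) from RSSI via the log-distance path-loss model.

    tx_power_dbm is the expected RSSI at 1 m; both constants must be
    calibrated for the actual hardware and environment.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def should_spray(rssi_dbm, trigger_distance_m=1.0):
    """Fire the spray when the dog is estimated to be inside the keep-out radius."""
    return rssi_to_distance(rssi_dbm) < trigger_distance_m
```

For example, with these constants an RSSI of -50 dBm maps to well under a metre, so the spray would trigger, while -80 dBm maps to several metres and would not.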
Criterion for Success:
Since we cannot test the effectiveness on an actual dog, the goal would be to accurately activate the spray at the desired distance.
|45||Universal Key and Proximity Lock System
Talita Maria Briganti Barbosa
|Madison Hedlund||Jonathon Schuh||design_document1.pdf
|Jason Hackiewicz (jph2), Talita Barbosa (talitab2), Brant Bedore (bbedor2)
Opening locked doors with the traditional key and lock system can be a hassle. Managing several keys which all belong to various locking systems is difficult to operate when carrying groceries and traditional RFID locks require the user to touch a card to the locking device. Moreover, people with impairments to arm or hand movements such as Parkinson's disease have a particularly difficult time with this task.
# Solution Overview:
Our solution to this problem is a touchless proximity lock. While previous projects have explored proximity locks, this solution differentiates itself by being touchless and using Bluetooth communication instead of a traditional RFID system. The system will include a key and a locking device which are “paired” via Bluetooth. This functionality would allow the user to pair the key with multiple locking systems so it acts as a universal key. When the key is brought within roughly ten feet of the locking system, the door will automatically unlock. This solves the original problem of managing keys and also serves individuals with a physical impairment.
# Solution Components
## Locking Device
- External user interface: This would be represented by a small display shown on the outside of the locking system. The display would show information that anyone outside of the door can see. For example, this display might show information such as the remaining battery for the locking device.
- Internal user interface: The internal user interface displays some of the data that the external user interface displays like the battery life but also displays additional information that only people inside can see. For example, this might display whether the locking system is currently in the locked or unlocked state. This side of the user interface would also have a button that pairs with the universal key when the button on the universal key is pressed at the same time.
- Power System: To power the device a small battery system will be used as the device should run with a low power consumption.
- Bluetooth Communication: This internal part of the locking system will process and correctly interpret a signal from the universal key. When a signal is received from a paired device within the appropriate distance, the door will unlock.
- Alternative mechanical locking system: This subsystem is a backup in case the battery dies or something else in the system stops working properly. It will operate like a traditional lock and key, allowing the user to mechanically unlock the door with a backup key.
## Universal Key
- Power System: The key will contain a small battery system to power the device for use. This battery need not be powerful as the function of the device is minimalistic.
- Bluetooth Communication: This component inside of the key will send a signal when within the appropriate distance from a paired locking system.
- Key Enclosure: The key will be a small device that can easily be carried in someone’s pocket. Therefore, this key enclosure must protect the internal electronics from being damaged. In addition, the enclosure will have a button that allows the device to be paired with the locking device. Once paired, the device should not need to be paired again and will unlock within the set proximity.
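The unlock decision itself is simple, but a single RSSI cutoff would make the door lock and unlock rapidly whenever the key sits right at the edge of range. A Python sketch of the decision logic with hysteresis follows; the two RSSI thresholds are placeholder values that would be tuned to roughly the ten-foot range:

```python
class ProximityLock:
    """Unlock when a paired key's RSSI rises above a threshold; a lower
    re-lock threshold (hysteresis) prevents flapping at the range boundary."""

    def __init__(self, unlock_rssi=-70, lock_rssi=-80):
        self.unlock_rssi = unlock_rssi  # key is close enough: unlock
        self.lock_rssi = lock_rssi      # key has clearly left: re-lock
        self.unlocked = False

    def update(self, rssi):
        """Feed the latest RSSI reading; returns the resulting lock state."""
        if not self.unlocked and rssi >= self.unlock_rssi:
            self.unlocked = True
        elif self.unlocked and rssi < self.lock_rssi:
            self.unlocked = False
        return self.unlocked
```

With readings between the two thresholds the door simply keeps its current state, which is the point of the hysteresis band.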
# Criterion for Success:
Based on the definition of the problem, it is important that the system operates hands free. This was the primary goal of the project and would extend the audience of the device to users with disabilities. The device must also be able to be manually overridden in the case of a malfunction (something that is taken care of by the alternative mechanical locking system). Finally, the device will be designed to be as cost effective as possible as many of the locking systems currently on the market come at a high price tag.
|46||Hip Hop Express Window Equalizer
|Jonathan Hoff||Rakesh Kumar||design_document2.pdf
|This project was pitched by Dr. Patterson and involves collaborating with Architecture and School of Music students on outfitting the Double Dutch Boom Bus.
Dr. Patterson is trying to repurpose a school bus to serve as a mobile musical laboratory where kids can learn about creating music through an interactive experience, and he needs help outfitting the bus with appropriate technology to make that happen.
We want to enhance the Boom Bus experience by integrating the windows as a part of the audio mixing process by using them to control an audio mixer. We will fit the windows with optical sensors, which will be hooked up to a microprocessor that can apply effects to audio. The windows sliding up and down will provide a visualization of how a mixer works and can demonstrate a crucial part of the music making process.
This project will try to address the most important aspect of Dr. Patterson's presentation: pulling more people, especially the younger generation, into the music experience. We believe the best way to do that is to let the audience be a part of the musical experience themselves.
Plenty of equalizers and mixers have been created before using both hardware and software solutions. Our project is unique in that rather than directly using a knob or slider to control the equalizer and sound effects, we will use proximity sensors to control the effects’ strength. In addition, our mixing will be done through a microprocessor, making our unit more condensed as opposed to using software on a personal computer, phone, or similar.
School buses have vertical sliding windows that don’t quite open all the way, so we will attach IR sensors in the gap at the bottom of the windows to measure how open they are without interfering with their function as windows.
We will make a PCB to discretize the signals from the sensors and route them to a microprocessor.
The microprocessor will take in the signals from the PCB and generate a filter, which will then be applied to the sound input from an AUX or RCA cable and then outputted to an AUX or RCA cable connected to the amplifier for the sound system.
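As a sketch of how the microprocessor might map a window position to a band's strength, in Python for illustration; the ADC range, gain limits, and linear mapping are placeholder assumptions that would be tuned by ear on the bus:

```python
def adc_to_openness(adc_value, adc_min=100, adc_max=900):
    """Normalize a raw IR-sensor ADC reading to 0 (closed) .. 1 (fully open)."""
    openness = (adc_value - adc_min) / (adc_max - adc_min)
    return max(0.0, min(1.0, openness))  # clamp sensor noise at the extremes

def window_to_gain_db(openness, min_db=-24.0, max_db=6.0):
    """Map normalized window position linearly to a band gain in dB."""
    return min_db + openness * (max_db - min_db)
```

Each window would feed one such mapping, and the resulting gains would parameterize the filters applied to the audio stream.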
**Criteria for success:**
A successful demonstration would involve sliding the windows of the bus up and down while our device is connected and turned on, and being able to hear the difference in musical qualities, such as differing volume for certain pitches or effects such as distortion, reverb, or tremolo.
|Madison Hedlund||Arne Fliflet||design_document1.pdf
|Fethi Bartu Alp - falp2
Derek Niess - dniess2
John Quinn - jmquinn2
Problem Statement: The most common cause of death in an avalanche is not the impact of the snow itself; rather, people suffocate while trying to find a way out from under a huge amount of snow. After an avalanche hits, a person becomes very dizzy and loses orientation. Seeing only white in every direction, the person buried under the snow cannot tell which way to dig to reach the surface. Digging in the wrong direction, people suffocate and sadly die under the snow.
Solution Overview: What if we could integrate a direction display/pointer into a wristband that constantly shows the direction the person needs to dig in the event of an avalanche? The direction would continually adjust itself to point away from the ground, showing the person which way to dig. The same wristband could also be used by surfers, who face the same threat in a different environment. The display would be an LED arrow in 3-dimensional space (an X pointing into the display and a dot pointing out of it, just as in physics notation).
After some thought we realized that, instead of a wristband that always points opposite to gravity, we need a wristband that constantly shows the direction of the normal force. Since many mountain slopes are inclined surfaces, the shortest route out of an avalanche is to move perpendicular to the ground, which is the direction of the normal force. To accomplish this we will still need sensors to determine the orientation of the user: a magnetometer, an accelerometer, and a gyroscope. Our PCB will take the input data from the sensors and process it to determine the correct direction and display it on the wristband screen.
Possible Additional Features: We could also add a feature to the wristband that warns surfers of undercurrents so they know to avoid certain spots.
Subsystem 1: 2-Dimensional Orientation
We will use a magnetometer, which essentially measures the direction, strength, and the relative change of a magnetic field at a particular location. The magnetometer will be used to determine the 2-dimensional orientation of the wristband.
Subsystem 2: 3rd Dimension
A dual sensor with an accelerometer and a gyroscope will be used to determine the device's rotations and hence the 3rd (z) dimension, completely giving us the 3-dimensional orientation of our wristband.
Subsystem 3: PCB
There are several things we would like to achieve with our PCB. First, since we are looking for the direction of the normal force, our PCB needs to determine the slope of the inclined plane the user is currently on. After implementing the two subsystems above (i.e., once we have a detailed orientation system), we can estimate the slope of the plane from the change in position over the distance already travelled. Using the x, y, z data from only the most recent part of the mountain that the skier has passed through will give us a rough estimate of the slope.
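A rough version of that slope estimate can be sketched in Python; as the paragraph above suggests, it simply uses the straight-line displacement between the oldest and newest of the recent position samples:

```python
import math

def estimate_slope_deg(points):
    """Estimate the slope angle (degrees) from recent (x, y, z) position
    samples in metres, using the displacement between the first and last."""
    x0, y0, z0 = points[0]
    x1, y1, z1 = points[-1]
    run = math.hypot(x1 - x0, y1 - y0)   # horizontal distance travelled
    if run == 0:
        return 90.0                      # no horizontal motion: treat as vertical
    return math.degrees(math.atan2(abs(z1 - z0), run))
```

The direction to display is then the normal to this estimated plane, i.e. perpendicular to the fitted slope rather than straight up.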
Subsystem 4: Power Subsystem
We will need a way to supply power to this wristband in order for the sensors and LED display to function properly. We propose to use a lithium battery to power all of the components of the wristband.
Criterion for Success:
The device reliably shows the right direction at any given angle and position of the wrist
The direction is clearly visible for the user.
The device is safe to use and comfortable to wear.
|48||Real-time braille translator
|Shuai Tang||Rakesh Kumar||design_document2.pdf
- Ashmita Chatterjee (ashmita2)
- Aayush Raj (aayushr2)
- Matthew Price (mjprice2)
**Problem**: Visually impaired people have a difficult time reading texts that aren’t written in braille. Public places like restaurants and libraries don’t provide menus and most books in braille. This restricts their independence and limits the amount of knowledge they can consume.
**Solution**: Our solution is to create a handheld device that can assist people in such circumstances. The idea is to develop a device that can scan a piece of paper and translate the written text into braille in real time. Each letter in the English alphabet has a braille translation, and our device will exploit that feature. The device will be able to scan a small section of a sheet of paper, translate that portion into braille, and display it using a refreshable braille terminal.
- **Subsystem 1**: Camera system
This unit will include a camera and a flash to allow the camera to capture pictures in environments that aren’t well lit. The camera will also have a sensor to detect the amount of light coming in and adjust the flash accordingly. This component will also send the captured picture to the processor for image processing.
- **Subsystem 2**: Image Processing Unit
This unit will mostly be contained in the microcontroller chip. The chip needs to be powerful enough to perform an OCR operation in real time. The OCR conversion will allow us to take an image and produce a string of characters that will then be stored in a buffer in a flash memory chip. A section of this buffer (perhaps 5 characters) will be shown on the braille display at a time. We plan to use an off-the-shelf library to convert the captured image into a string of text.
- **Subsystem 3**: Braille display unit
This unit will comprise the components that show the braille output. It takes a string of 5 characters as input, pushes the correct tips upwards, and has them hold their position until the next string is detected. To perform this operation, we will need a hash map that maps each character in the English alphabet (along with the numbers and some punctuation) to the respective braille character. A braille character consists of 6 dots (2 columns x 3 rows) raised in a specific pattern. Once this unit converts a character into its braille version, we store the braille character as bits, which are then converted into the electrical signals that push the respective dots on the refreshable braille display. We have spoken to Gregg at the machine shop and have come up with some ideas on how to build a refreshable braille display. One potential approach is to use a miniature solenoid to represent a single dot of a braille character. Since each braille character consists of 6 dots in a specific pattern, we would use 6 solenoids per character. The miniature solenoids available online are small enough that 6 of them fit in a rectangle small enough to count as a single legible braille character. Link to miniature solenoid. These solenoids need a continuous signal to stay pushed up and can be individually controlled. Gregg mentioned that the shop can help us get these parts and mount them in a metal plate in such a way that we only need to worry about the signal going into the braille display.
- **Subsystem 4**: Next button state machine
Since the braille display can only show a limited number of characters compared to what is being scanned by the camera, we will need a state machine that will allow the braille display to show relevant information in an order. This state machine will use a “next button” input signal to figure out which portion of the scanned text to display.
**Criterion for success**:
- The device is able to capture an image
- The device is able to successfully convert the captured image to text
- The device is able to successfully display the braille version of the captured text (or part of it)
This project was pitched to us by our friend Abhijoy Nandi who is a senior student in Industrial Design here at the University of Illinois. He works on concept design for interesting projects and is curious to see one of these concepts working.
Link to the concept design of this product : https://www.abhijoynandidesigns.com/samanya
|49||U.S. Army Microgrid
|Vassily Petrov||Jonathon Schuh||design_document1.pdf
|Group Members: Sahil Morrow (sahilsm2), Patrick Yang (pyyang2), Matthew Weberski (mwebers2)
There are locations around the world where traditional energy sources are poorly distributed or not available at all. This can be caused by natural disasters that knock out the grid in the area, leaving people without any electricity. It also makes the job of aid workers more difficult, because they have no way to power the devices they bring to help people in need. The Army Corps of Engineers wants a system that provides an emergency source of power to places in need.
We will develop a small-scale microgrid that will be powered by multiple diesel engines. After reestablishing power, the diesel engines would then be substituted by a clean energy source. The subsystems for our design will include: the two different power supplies, a power converter to transfer power to the grid, a control system, and a monitor system.
Power Supplies: Initially, diesel engines will provide the energy needed to restore power to the grid. Over time, these diesel engines would be transitioned to a clean energy source, such as solar panels, to maintain the grid.
Power Converter: There must be an interface between the systems providing the power to the microgrid after being generated from the different power supplies. The microgrid will provide AC power, while the power supplies will generate DC power. A DC-AC inverter will be required to convert the output power of the supplies to match the required input power of the microgrid.
Control System: A control system will be needed in order to keep the microgrid operating within the desired power range as specified by the client.
Power Monitoring System: We will have sensors attached to the microgrid, along with a UI individuals deploying this microgrid may use to keep track of the performance of the microgrid and how much energy it is capable of providing.
Criterion for Success:
The Army Corps of Engineers wishes for us to design, model and analyze a small-scale prototype of the microgrid.
|50||Hip Hop Xpress - Audio Synchronized LED Party Lighting
|William Zhang||Rakesh Kumar||design_document3.pdf
|Names & NETID - Kowei Chang [koweic2], Paul Donnelly [pauld2], Oluwatosin Akinsanya [oakins2]
Title - Synchronized Music-LED Lighting System (we can change it)
Problem Statement - The Hip Hop Xpress buses have been a symbol of African-American music and history in the Champaign-Urbana area for years. From introducing new emerging DJ music technologies in under-resourced parts of the community to displaying African-American music legends, the bus remains a fusion of African-American art and technology. The “Double Dutch Boom Bus” is the third iteration of the Hip Hop Xpress; however, this project is headed by individuals who lack expertise in electrical and electronic applications. We will be installing an LED lighting system that is synchronized with the pitch of the music playing. The colors and lighting patterns of the LED system will be automated to provide a beautifully synchronized musical experience.
Solution Overview - The solution will be a LED lighting system that is safe and low-voltage around the exterior and potentially the interior of the bus. A potential Bluetooth mobile application can be used to remotely control the different lighting modes of the bus. The incoming audio signals will be processed by a state variable filter to isolate different frequencies. The isolated frequencies of the audio signals will be used by an ARM microcontroller to control the LEDs to change color and brightness.
Solution Components - ARM microcontroller - The memory will store the LED programs and using the integrated A to D converter we can measure the level of the bass signal and define triggers to change colors of lights.
State Variable Filters - This circuit contains op-amps, resistors, and capacitors. This is used to isolate the audio signals that each string of LEDs will work with.
Android App (maybe) - Ideally we can have app control over powering the lighting system on/off, and possibly over choosing the LED programs to run.
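The bass-level trigger described above (measure the filtered signal's level, then switch colors at thresholds) can be sketched in Python; the RMS window, thresholds, and colors are placeholder assumptions to be tuned on the bus:

```python
def bass_level(samples):
    """RMS level of a window of band-filtered ADC samples (centred on zero)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def pick_color(level, thresholds=((50, 'red'), (20, 'purple'), (0, 'blue'))):
    """Map a bass level to an LED colour using descending thresholds."""
    for threshold, color in thresholds:
        if level >= threshold:
            return color
```

On the ARM microcontroller, the same loop would run per audio frame, with one filter/LED-string pair per isolated band.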
Criterion For Success - The Hip Hop Express will be “lit” (pun intended) by colorful LEDs inside and out, in order to promote a fun, high energy, party atmosphere for the members of the community that gather around the bus. We want some of the LEDs to change color along with the bass of the current song being played.
|51||Power Bank Sharing System
|Shaoyu Meng||Jing Jiang||design_document3.pdf
|Fangwei Gao(fangwei2), Hanlei Gu(hgu7), Huiwen Song(hsong38)
Power Bank Sharing System
Think about this scenario: you are out in a restaurant, a shopping mall, or a Starbucks trying to reach your friend, and your phone is out of battery. This could happen to anyone who has a mobile phone. However, most people don't carry cables or power banks with them all day long because, first, this situation normally does not happen; and second, cables entangle and become a mess in the bag, while power banks are not only heavy (inconvenient) but also very expensive for some high-performance models. Once this situation happens, though, people run out of options.
Power bank sharing station system. Place the stations in shopping malls, restaurants, and other public places where people do not have the convenience of staying next to a power outlet to charge their phones. Each station holds several power banks. When customers would like to borrow a power bank, they scan the QR code on the station and the station automatically pops out one power bank with enough charge. When customers would like to return the power bank, they need not return it to the station where they originally borrowed it; they can simply find a nearby station with an empty slot and insert the bank there. This service solves the problem because, first, the banks provide a portability that charging cables do not have; second, people do not need to pay for the bank itself, they are just charged for the service provided.
A deposit and a payment option would be required from the user. Customers simply log in to their mobile app (which is part of our system) and scan the QR code to check out a bank; the charge is automatically placed on their account when they return the power bank, and the deposit is returned at that point. At places where a valid credit system is implemented, like U of I, that credit system can be incorporated so that students can check out banks with their i-card and the fees are posted directly to their student account. Each power bank should have USB-A, USB-C, and Lightning cables to cover customers with different models of cellphones. The bottom side (which connects with the station) should have metallic contacts serving as a connector and "charging cable". That is, while the power banks are in the station, they are charged and locked to the station so they cannot simply be unplugged.
1. The station will have a power unit that powers the station and charges every power bank in it.
2. The slots that hold the power banks should be capable of detecting if the banks are checked out or returned back. It would also be able to detect how much power is left in each bank.
3. Locking unit should be able to lock the banks before the checking out process is completed and release the bank when checked out.
4. Each power bank should be identified when checked out and returned in order to calculate the time and rate.
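The time-and-rate bookkeeping in point 4 can be sketched as a small ledger keyed by power-bank ID; the hourly rate is a placeholder and timestamps are epoch seconds:

```python
class BankLedger:
    """Tracks checkout time per power-bank ID and computes the fee on return."""
    RATE_PER_HOUR = 1.0  # hypothetical rate in dollars

    def __init__(self):
        self.checkouts = {}  # bank_id -> checkout timestamp (epoch seconds)

    def check_out(self, bank_id, timestamp):
        self.checkouts[bank_id] = timestamp

    def check_in(self, bank_id, timestamp):
        """Close out the loan (at any station) and return the fee owed."""
        start = self.checkouts.pop(bank_id)
        hours = (timestamp - start) / 3600
        return round(hours * self.RATE_PER_HOUR, 2)
```

Because the ledger is keyed by bank ID rather than by station, a bank borrowed at one station can be returned at any other, as described above.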
1. We would like to design an Android app that allows users to locate power bank stations, unlock a power bank by scanning the QR code on a station, and check the status of the power bank they borrowed. Our current plan for this app is React-Native + Flask + MongoDB.
2. We need to program a microcontroller in a power bank station as the main interface. The microcontroller should be connected to the internet and communicate with our central server to handle unlocking and returning processes.
Criterion for Success:
A good interaction between users and our power bank sharing system can be achieved. Users can borrow a power bank in a short time by simply opening the mobile app and scanning a QR code. The returning process should be equally smooth to create a good user experience. The power banks should be stored and kept correctly and securely when not checked out.
|52||Modular Autonomous Home Light
|Chi Zhang||Jing Jiang||design_document1.pdf
|## Group Members
Cary Chai (caryzc2), Samuel Darmamulia (sid2), Makomborero Tizora (mtizor2)
Some modern buildings have motion detectors installed which are connected to a room’s circuitry and can shut off the lights and power in a room when no one is occupying it. However, currently, there is no modular solution that can be used with older buildings without having to open up the walls and rewire the internal circuitry.
We will have a sensor unit which will detect the occupancy of a room and communicate with a modular, external unit which can be implemented on manual light switches to automatically turn off and on lights without needing to rewire a building's circuitry. This way, typical families can afford to have motion detected lighting installed without needing to hire an electrician to install motion sensors.
There will be one infrared sensor at the entrance of the room; when someone passes by it within certain parameters, the sensor will increment its count by one. When someone leaves, it will decrement the count by one. If the current count is zero, a separate unit attached to the manual switch will flip the light switch off after a certain amount of time, and flip it back on when someone enters.
These two devices will communicate with each other through Bluetooth. There would also be a method to calibrate the infrared sensor, so it would be easy to implement in all rooms with old light switches.
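The counting logic on the sensor unit can be sketched as follows (Python for illustration; the real version would run on the ATMEGA16U2, and the switch-off delay timer is omitted here for brevity):

```python
class RoomOccupancy:
    """Tracks entries/exits reported by the PIR sensor and decides the switch state."""

    def __init__(self):
        self.count = 0
        self.light_on = False

    def person_entered(self):
        self.count += 1
        self.light_on = True                 # someone is in the room: light on

    def person_left(self):
        self.count = max(0, self.count - 1)  # never go below zero on a miscount
        if self.count == 0:
            self.light_on = False            # room empty: signal the servo to switch off
```

The `light_on` state is what would be sent over Bluetooth to the servo unit on the wall switch.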
In addition, we want to have an integrated phone app which will be able to communicate with the units in order to allow the user to turn off and on the lights from anywhere within the house and get a full IoT light experience.
## Components List
- MCU: We will be using an ATMEGA16U2 microcontroller. The microcontroller will be responsible for communicating with the infrared sensor, bluetooth module, and servo motor.
- Infrared Sensor: We will be using the HC-SR501 PIR Sensor. This sensor will be in charge of determining if people have entered into the room.
- Bluetooth transmitter/receiver: We will be using the Bluetooth HC-05 module.
- Power supply: We will be using two AA batteries and a battery case.
- Motor: We will be using a servo motor.
## Criterion for success
- Successful detection of occupants' entry into and exit from the room using infrared sensors
- Storage on sensor unit of number of occupants in the room and the state of the light switch
- Lightweight servo switch that attaches to currently existing light switches.
- Bluetooth communication between MCU sensor and MCU switch units
- App to control the light switch from anywhere in the room
## Links for Parts
- Servo Motor:
- Bluetooth: (x2) https://www.amazon.com/HiLetgo-Wireless-Bluetooth-Transceiver-Arduino/dp/B071YJG8DR
- Sensor(Motion Sensor):
- Battery cases:
|53||Smart Electronic Component Organizer
|William Zhang||Arne Fliflet||design_document1.pdf
|## Team Members:
Kaiwen Zhao (kaiwenz2) | Yihao Deng (ydeng29) | Canlin Zhang (canlinz2)
As EE students, most of us have stored many electronic components such as resistors, capacitors, and MOSFETs. Traditionally, we store these components in storage organizers: large cabinets with many transparent plastic drawers. A small organizer may have as few as 20 drawers, while a larger one can have up to a hundred. A big problem is that people usually cannot immediately locate the components they want; they have to look into the transparent boxes or read the tags one by one, wasting a lot of time.
## Solution Overview:
We propose a solution to this problem: a logger with indicators that helps people store and find components. It would also include a mechanical design to push drawers out from the back. People would use the logger either to assign a certain drawer to a certain component or to command the automated pusher to push out the drawer containing the component they need. The logger would have a simple LCD screen and buttons (labeled 0 to 9 plus r, c, l, ic, value, number, enter, eject, and clean). Users would log new components and find logged components using the screen and buttons. To find a certain component, the user would use the buttons to specify it. For example, if the user types r, 0603, value, and 200 and presses enter, the indicator (an LED) of the drawer previously registered for this component would light. If the user then presses the eject button, the specified drawer would be pushed out from the back.
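The registry behind that button sequence is essentially a dictionary keyed by the typed fields. A minimal Python sketch follows (the key fields mirror the r/0603/value/200 example above; on the actual device this would run on the ARM chip):

```python
class ComponentOrganizer:
    """Maps (type, package, value) keys entered on the keypad to drawer indices."""

    def __init__(self):
        self.registry = {}

    def register(self, part_type, package, value, drawer):
        """Log a new component: assign a drawer to the typed key."""
        self.registry[(part_type, package, value)] = drawer

    def find(self, part_type, package, value):
        """Return the drawer whose LED should light (and which 'eject'
        would push out), or None if nothing was registered."""
        return self.registry.get((part_type, package, value))
```

The LED-array and pusher subsystems would then translate the returned drawer index into a row/column position.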
## Solution Components:
*Logger subsystem* The logger would be composed of the LCD screen and buttons mentioned in the overview. The LCD screen will show a menu that the user navigates with the buttons to log new components and locate old ones. All the required programs would run on an ARM chip.
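To make the log/find interaction concrete, here is a minimal sketch of the registry logic the logger would need; the class and method names are our own illustration, not the team's actual firmware:

```python
# Sketch of the logger's registry: maps a component description to a drawer.
# Names and data layout are illustrative assumptions, not the real firmware.

class ComponentRegistry:
    def __init__(self):
        self.drawers = {}  # (type, package, value) -> drawer index

    def log(self, ctype, package, value, drawer):
        """Assign a drawer to a component, e.g. log('r', '0603', '200', 12)."""
        self.drawers[(ctype, package, value)] = drawer

    def find(self, ctype, package, value):
        """Return the drawer whose LED should light, or None if never logged."""
        return self.drawers.get((ctype, package, value))

registry = ComponentRegistry()
registry.log('r', '0603', '200', 12)  # user types: r, 0603, value, 200, enter
```

Pressing eject would then hand `find(...)`'s result to the pusher subsystem to select which drawer to push out.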
*Mechanical pusher subsystem* The pusher would be located at the back of the organizer box. It would push out a certain drawer specified by the user. The pusher would be a simple structure resembling a robotic arm with two degrees of freedom, able to reach all drawers.
*Indicator array subsystem* There would be a 2D array of LEDs to indicate the drawers specified by the user.
*Power subsystem* The power subsystem would be composed of regulators and MOSFETs to drive both the digital circuitry (the logger subsystem and ARM chip) and the mechanical pusher subsystem.
## Criterion for Success:
Users are able to register new components into the system
Users are able to locate components with buttons, LCD screen and indicators
Mechanical structure would be able to push out any drawers
|54||Music Discovery Band
|Stephanie Jaster||Rakesh Kumar||design_document1.pdf
|Michael Faitz (mfaitz2), Nitin Jaison (jaison2), Vignesh Srivatsan (vsrvtsn2)
Music Discovery Band
Problem - We want to simplify the seemingly random task of discovering new music for Spotify users. Many people do not know how to go about finding new music because they don’t know if they will like it in the moment. There currently exists no technology that can provide song suggestions from the entire Spotify library by keeping track of a person’s physical activity levels, environment, and even mood. We hope to use this information to match users to songs that they would be more likely to listen to while they undergo any situation of varying intensity. For example, we would match fast-paced, vibrant songs for periods when the user does physical activity, or calmer, softer songs to accompany the user doing leisurely activities around the home.
Solution Overview - We propose a wearable wristband that can sense a user's activity level and environment and send this information to a mobile application that determines ideal music to listen to. When determining what type of music might best accompany a given situation, we must first determine the different motivating factors for listening to music. We mentioned earlier that we aim to match songs of high intensity to the user when they undergo physical stress on the body, like during exercise. For this reason, the wristband will include both a heartbeat sensor and an accelerometer in order to keep track of the user's physical stats over an extended period of time and suggest music of similar intensity. Another motivating factor is the user's environment, as the type of music they want to listen to might change in different surroundings. For this reason, we also intend to include an ambient sound sensor that can detect noise-level changes in the user's environment to provide further information to the song-selection process. Finally, a person's mood has a significant impact on the music they listen to, and we aim to account for that by implementing physical buttons on the device that the user can press to indicate different moods, which will in turn add a "filter" to the type of music that will be played to more closely match the user's mood. A Bluetooth transmitter will be used to connect the wristband to the user's phone, where information regarding the user's heart rate, acceleration, noise level, and mood will be used to queue up songs to listen to. The app will use this information to find songs that match the user's situation through attributes such as tempo, loudness, and energy. By using the entire Spotify library, we give users the ability to discover new music, filtering it to find songs that correspond with the information from the wristband.
We also understand that different people's heart rates mean different things, so we will set baseline resting, light-activity, and rigorous-activity heart rates when the user first uses the app. We can do this by asking the user to rest, do a light jog, and run, collecting their heart rate for each activity.
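As a rough illustration, the calibration could reduce to comparing a live sample against midpoints between the three baselines; the function name and thresholding scheme below are assumptions for the sketch, not a validated model:

```python
# Map a live heart-rate sample to an activity band using per-user baselines
# collected during first-time setup (resting, light jog, run).

def classify_activity(hr, resting, light, rigorous):
    """Return the activity band for a heart-rate sample, splitting the
    ranges at the midpoints between the calibrated baselines."""
    if hr < (resting + light) / 2:
        return "resting"
    if hr < (light + rigorous) / 2:
        return "light"
    return "rigorous"
```

The app could then request songs whose energy attribute matches the returned band.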
[Subsystem #1] : Sensors
- Since the goal of the device is to capture as many aspects of the individual’s life as possible to make an informed decision about music, there are several sensors that we wish to implement in the wristband. The general layout places the PCB and a heart-rate sensor on the underside of the wrist, combining the flat zone needed for the PCB with the location the heart-rate sensor needs to be in for the user. This will clearly indicate to the user that the wristband is on correctly while also simplifying our layout. We also wish to include an ambient noise monitor in the design to tell us about the user’s current situation, whether they are at home or in a noisier public environment. Another sensor we want to add is an accelerometer, which can read the user’s current speed and use that to consider the type of music they may want to listen to. If a person is running and moving fast, they may want more upbeat music, whereas sitting down at the computer or to read would dictate a more peaceful, calm song choice. Finally, we want to implement a selection of moods so that the user can further filter the song discovery depending on how they are feeling at the time. These inputs will take the form of small buttons on the wristband that indicate moods such as “Relaxed”, “Working Hard”, and “Frustrated” and factor them into our selection. Along with these buttons will be options for stopping the song, skipping the song, and volume control.
- [Subsystem #2] Power:
- The wristband will need some compact power supply to make all other subsystems function. For this a lithium battery should be sufficient due to its relatively low size and considerable power output.
- [Subsystem #3] - Software:
- There will be a mobile application that will be connected to a user’s Spotify account and the wristband through the phone’s bluetooth capability. After setting the baseline heart rates when the app is first used, it will receive heart rate, acceleration, outside sound, and mood information from the wristband. An algorithm will then be used to filter through Spotify’s music library and match songs using Spotify’s audio analysis features to find ones that match the activity level, environment, and mood of the user. Before the current song ends the app will use the updated information from the wristband to queue up a new song.
- [Subsystem #4] - Control:
- The control for this wristband can be handled via a microprocessor with the PCB and will process information collected from the various sensors and user inputs and send the processed data discussed in the software subsystem to a bluetooth transmitting device that will be connected to the users phone.
Criterion for Success -
- Wristband with accelerometer, heartbeat sensor, sound sensor, and mood buttons that can accurately track information as it pertains to the user’s activity level, surrounding environment, and mood.
- The wristband should also be able to transmit the information it collects from its sensors and buttons to the smartphone app through a Bluetooth transmitter
- The smartphone app must then take this information and use it to automatically queue up and play an “appropriate” song from Spotify’s library. A song is deemed appropriate by an algorithm we will implement that considers all of the factors listed above, based on the information transmitted by the wristband, and queues up a song that should match the user’s activity level and mood.
|55||Sea Slug Simulator
|Ruhao Xia||Rakesh Kumar||design_document1.pdf
|Sea Slug Simulator
We are required to build a prototype robot that simulates a sea slug’s behavior in the presence of several different stimuli, such as its enemies and prey.
In this project, the sea slug is abstracted as an organism with the ability to move and sensitivity to the presence of certain stimuli. For demonstration purposes, the sea slug is represented by a robot equipped with two types of sensors. Its natural habitat is represented by a special testing environment with a marked boundary containing two types of stimuli that correspond to the sea slug’s food and enemies.
Motion Module: The slug is represented by a Roomba robot, and Roomba has built-in wheels and motors.
Power Module: The robot system is built on Roomba. Roomba is powered by its own rechargeable battery, and our own controller circuit will be powered by battery.
The sensors are installed at the front of the sea slug simulator.
We will use IR sensors to detect the distance between the slug and its surrounding objects.
We will also use IR sensors to detect the heat signatures of prey and predators.
We will use color sensors to detect the types of food the slug likes and does not like.
A Raspberry Pi will serve as the processing unit.
It will take data collected from all the sensors and calculate what the slug would do based on an algorithm. Based on the results, the slug will either run from predators, go for prey, or keep wandering around.
It will also be used to guide Roomba’s movement.
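The decision algorithm can be summarized by a simple priority rule, sketched below; the function and state names are our own shorthand for the behavior described above:

```python
# Priority rule for the slug's behavior: safety comes first, so the slug
# flees a predator even when prey is present (matching the criterion that
# the simulator makes the safest choice in complex environments).

def decide(sees_predator, sees_prey):
    """Return the slug's next action given current sensor readings."""
    if sees_predator:
        return "flee"
    if sees_prey:
        return "approach"
    return "wander"
```

On the robot, the Raspberry Pi would evaluate this rule each control cycle and translate the result into Roomba drive commands.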
Circuit Module: This module consists of simple circuits that integrate the sensors together and connect the Raspberry Pi to the original robot. The circuit module must also function within the environment that resembles the natural habitat.
Testing Module: This module will consist of a larger environment containing certain components that act as stimuli.
Criterion for success:
The sea slug simulator will react to the stimuli in the testing environment as a real sea slug does.
The sea slug simulator will make the safest choice in complex environments; e.g., it will choose to run away when its favorite food and a predator appear in the same area.
|Ruhao Xia||Jonathon Schuh||design_document1.pdf
|Kevin Kovathana [kjk3], John Miller [johnm3], and Collin Haney [cdhaney2]
Problem/Objective - The Hip Hop Xpress is a bus that will travel to different communities throughout the US to educate people about both music and technology. We want people of all ages to come together and become a part of the bus and their community. Adding an easy-to-understand, eye-catching, interactive device that anyone could pick up and use would drastically help reach this goal.
Solution Overview - We will design our own flat board, similar to a drum board, that would sit on top of a table. The board would have a wide variety of different sounds to select and combine in a loop to create personal beats. It would be portable in order to be set up outside the bus, so people can use our board without having to get on the bus. The board’s instruments would be organized into several, color-coded sections.
[Subsystem #1] - Audio Output: Our device will have AUX capabilities in order to be used in settings with unique sound systems.
[Subsystem #2] - Display: The device will have a simple display panel to indicate the current settings. This includes volume level and beats per minute (BPM) count. The OLED or LCD panel will display the content provided by the PCBA.
[Subsystem #3] - PCBA: Based on which button the user presses, our PCBA will detect the exact instrument and pitch and send the desired sound signal to the audio output. When the user turns the volume knob, the PCBA will adjust the level on the display and also on the speaker itself. When the user turns the BPM knob, it will change the display accordingly. The created beats that are running in a loop will be stored in the memory. There will also be a clear button that erases the previously created beats which will allow the user to start over.
[Subsystem #4] - Power: Our device will be powered from a standard 120 V AC wall outlet. A three-prong outlet cord will be connected to our PCBA to distribute power throughout the device.
[Subsystem #5] - User Input: Each instrument-based section would contain several different buttons. Each button would be a push-button switch, which serves as our sensor and our way to send user input data to the PCBA. The different buttons would correspond to different pitches of that instrument. The board would also include features such as power and clear buttons and BPM and volume knobs. This board could be operated by a single person but also supports multiple users.
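As an illustration of the loop memory described in Subsystem #3, the stored beat pattern could be as simple as a fixed number of slots, each holding the set of sounds triggered on that beat; the class and names below are a sketch under our own assumptions, not the actual firmware:

```python
# Sketch of a four-beat loop: each slot holds the sounds to play on that beat.
# Pressing an instrument button adds its sound to the current slot; the clear
# button empties every slot so the user can start over.

class BeatLoop:
    def __init__(self, beats=4):
        self.pattern = [set() for _ in range(beats)]

    def add(self, beat, sound):
        """Record that `sound` should play on beat index `beat`."""
        self.pattern[beat].add(sound)

    def clear(self):
        """Erase the previously created beats (the clear button)."""
        for slot in self.pattern:
            slot.clear()

loop = BeatLoop()
loop.add(0, "kick")
loop.add(2, "snare")
```

During playback, the PCBA would step through `pattern` at the BPM set by the knob, mixing each slot's sounds into the audio output.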
Criterion for Success -
- When buttons are pressed, the corresponding sound is played through a speaker.
- It is so user friendly that a kid would be able to understand how it works without reading directions.
- The display presents the accurate values to the user and aids the creative process.
- Single or multiple users are able to press buttons to add a variety of instrument sounds to a four beat loop in real time. After creation, users will be able to listen to their finished music. The device should encourage collaboration, and more importantly, bring the community closer together by demonstrating their creativity in music and technology.
|57||Device to track laundry machines and availability
|Dhruv Mathur||Arne Fliflet||design_document1.pdf
|Problem: Not all apartments come with in-unit washing machines, so there is usually a communal washing machine either within the building or in a nearby building, which requires you to carry your dirty laundry there. The problem is that tenants do not know whether a machine is available, and there isn’t an easy way to track a machine’s availability other than staying in the room and waiting for the current user to finish their laundry. This also applies to students in dorms such as those at our own university.
Solution: Our solution to this problem would be an app that is connected to sensors on each machine to show if one is available for use. We would also implement a queue system in which you can put yourself in line for use of the washing machine which would make it so you do not have to wait in the room for the machine.
The way the queue works is as follows. People sign up to use a washer or a dryer, but not any specific machine, since we don’t want 5 people in line for washer 1 when washer 2 is completely open. In order to prevent line cutting, only the next person in line is notified when a machine opens. In order to prevent the line getting backed up by someone who isn’t using their laundry privileges, we are implementing two solutions. First, we will add a grace period of 15 minutes so that if you don’t change out your laundry within 15 minutes, the next person in line gains your privileges. Second, if something comes up and you can’t do your laundry right now, we will add an option to leave the queue immediately and allow the next person in line to use the machine.
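The queue rules above could be sketched as follows; the 15-minute grace period comes from the description, while the class and method names are hypothetical:

```python
# Sketch of the laundry queue: users sign up for a machine type (not a
# specific machine), only the next person is notified when one opens, and
# a missed 15-minute grace period passes the turn to the next person.

from collections import deque

GRACE_MINUTES = 15  # grace period from the overview

class LaundryQueue:
    def __init__(self):
        self.queue = deque()
        self.notified = None  # (user, time_notified) for the current turn

    def join(self, user):
        self.queue.append(user)

    def leave(self, user):
        """Let a user drop out immediately so the next person can go."""
        if user in self.queue:
            self.queue.remove(user)

    def machine_opened(self, now):
        """Notify only the next person in line, to prevent line cutting."""
        if self.queue:
            self.notified = (self.queue.popleft(), now)
        return self.notified

    def check_grace(self, now):
        """If the notified user missed the grace period, pass the turn on."""
        if self.notified and now - self.notified[1] > GRACE_MINUTES:
            return self.machine_opened(now)
        return self.notified
```

The app backend would call `machine_opened` when the vibration and pressure sensors both report the machine free, and poll `check_grace` to enforce the timeout.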
Subsystem #1: Washing machine/Dryer vibration sensor. This subsystem will use a vibration sensor to track the current status of the washing machine or dryer (in use or not in use). We thought about using a heat sensor, but since not all wash and dry cycles are hot, this wouldn’t always work.
Subsystem #2: Pressure Sensor: Even if a washing machine or dryer is finished, there still may be clothes inside. To make sure that the machine is truly ready for the next user, we are going to use a pressure sensor to check if the clothes are still present or not.
Subsystem #3: Locking mechanism: Once the washing machine is available for the next person to use, there would be a lock on the door which would take in a password or combination that is randomly generated and sent only to the next user to open and use the machine.
Subsystem #4: This subsystem will be in charge of relaying information provided by the washing machine status sensors to an app using Bluetooth. An Arduino with a Bluetooth communication module can be used for this purpose.
Subsystem #5: The app that will track washing machine status and allow users to sign up for a queue to use the washing machines. A washing machine/dryer is listed as available only if the following two criteria are met. Firstly, the machine must be totally stopped, which would indicate that the washing or drying is done. The app would receive this information from the vibration sensor. Secondly, the machine has to be empty, meaning that the washed/dried clothes have been removed. Our pressure sensor will communicate this information.
Criterion for Success: The system is able to accurately track statuses of washing machines.
System is able to provide status information of washing machines to an app.
Users of the app are able to sign up for a queue to use the washing machines.
The device locks the machine and unlocks once the generated code is entered.
|58||Smart Trash Can
Ying Ming Lee
|Ruhao Xia||Joohyung Kim||design_document1.pdf
|Project Members: Didrick Manahan (firstname.lastname@example.org), Syed Ali (email@example.com), Michael Chang (firstname.lastname@example.org), Ying Ming Lee (email@example.com)
*4 person group in discussion with Professor Fliflet
When I was younger, my grandma lived with us, and she had a lot of medical problems, the main one being arthritis; her body would ache with every movement. She stayed on the first floor because of the pain of moving, so that she didn’t have to climb any stairs. I recall always having to bring her the trash can whenever she needed it, because she didn’t want to get out of bed. This was a constant issue: she needed the trash can a lot for various reasons, and I was always the one bringing it. We wanted to take this problem and create a real-world solution that might come in handy for elderly people.
Nowadays automation is seen everywhere to make life more convenient: floor-cleaning robots (Roomba), self-driving cars, etc. We believe a smart trash can capable of coming to where you are located may ease the burden on elderly or disabled people of having to get up and walk in order to throw something away. Our project would be limited to one floor, as the trash can wouldn’t be able to climb stairs. It would be summoned through an app that we would make, which connects to the trash can to provide the user’s location and monitors the capacity of the trash can through sensors, so the user can see how full the trash can is in the app. We would also like to include a motion sensor so that the user can simply wave their hand to open the trash can.
- Main chassis for electronics
- Attachable bin for garbage can
- Hidden Wheels (similar to Roomba) that can operate on any floor type (hardwood, carpet, tile, etc.)
- Motor Mounts
- Microcontroller (Raspberry Pi or Arduino) for statistical computing and decision making based on data collected from sensors (considering library compatibility, built-in peripherals, input/output latency, etc.)
- Classification/Object Avoidance Algorithm (possibly with Real-Time OS capability) to detect surrounding environment
- PIR sensor with Fresnel Lens for more accurate detection of hand (garbage lid automatically opens)
- LIDAR/Ultrasonic for sensing possible obstacles/objects on the path from original destination to target destination (possibly with stereo camera)
- NFC sensor used for close range accuracy
- IR sensors for distance detection of original position (If using a charging dock, IR emitters on the dock to determine the position of charging dock)
- *Possibly considering Bosch BNO055 sensor to resolve any issues concerning orientation/navigation
- Power Supply: Battery (possibly charging dock similar to Roomba)
- Battery Management System
- Mobile Application that user can use to signal trash can to arrive at target location
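At a high level, the summon cycle the components above support amounts to a small state machine; the state names here are our own shorthand, not part of the team's design:

```python
# Sketch of the trash can's summon cycle: idle at its dock, drive to the
# user when summoned, wait while they use it, then return to the dock.

class TrashCanController:
    def __init__(self):
        self.state = "IDLE"

    def summon(self):              # user taps "summon" in the app
        if self.state == "IDLE":
            self.state = "DRIVING_TO_USER"

    def arrived_at_user(self):     # navigation reports target reached
        if self.state == "DRIVING_TO_USER":
            self.state = "WAITING"

    def dismissed(self):           # user is done with the trash can
        if self.state == "WAITING":
            self.state = "RETURNING"

    def arrived_home(self):        # IR sensors locate the original position
        if self.state == "RETURNING":
            self.state = "IDLE"
```

The obstacle-avoidance algorithm would run continuously in the two driving states, while the PIR hand-wave lid control is independent of this cycle.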
Criterion for Success:
- The trash can is able to traverse from its original location to the target location of the user, and then back to the original location, while successfully avoiding obstacles in its path.
- The trash can must be able to properly communicate information to the smartphone app.
- The trash can must be able to successfully open and close based on motion from the user.
|59||Autonomous Indoor Food Delivery Robot
||Belen Castellote Lopez
Ignacio Ampuero Gonzalez
|Weihang Liang||Joohyung Kim||design_document1.docx
|**1. Team Members:** Nishqa Sharma (nsharm13), Belen Castellote Lopez (cbelen2), Ignacio Ampuero Gonzalez (ignacio8)
We want to create a robot that can pick up food from an indoor location, such as The Daily Byte Cafe in the ECE Building, and deliver it to any room in the ECE Building. We want to create this project because it would make it very convenient for ECE faculty and students to get food within the ECE Building: they would not have to leave their belongings unattended, disrupt their concentration or workflow, or stand in the queue at the Daily Byte only to find that their favorite item was sold out! We are also interested in this project because its functionality can be expanded (beyond ECE 445) to deliver textbooks, documents, office supplies, and anything else desired by scaling the size of the robot. Although food delivery robots exist in the industry at companies such as DoorDash, most of them only function outdoors, not indoors. And although some hotels use indoor robots to deliver guest supplies, most of them are either line-following or track-following robots. So this project is unique because it will be an autonomous indoor robot, which no one in the industry is really building or deploying at the moment.
**3. Solution Overview**
Our design will have 3 components: the robot body and power subsystem, the robot control unit (which will contain the PCB, the camera, the Jetson Nano GPU, and the wifi card), and a laptop.
**(a) The robot body and power subsystem:** will have base dimensions of 1' x 1' and a height of 0.5', mounted on 4 wheels (2 on each of 2 parallel sides). Each wheel will be turned by a 370 brushless motor, and all four motors will be connected in parallel to a 12 V battery through a voltage regulator.
**(b) The control unit:** the only sensor we will be using is the Kinect Xbox camera, to perform AprilTag recognition and visual SLAM. The video stream generated by the camera will be processed by the Jetson Nano, a small GPU-enabled computer, which will use the M.2 wifi card to transmit the required data to the laptop. The laptop will run our core programs (explained in part (c)) and send signals back over wifi, which will be used to control the electronic components of the PCB that navigate the robot. Since the components of the control unit require a much smaller voltage than the motors and wheels, we will use a 5 V battery for this subsection, along with a voltage regulator.
**(c) The laptop:** The laptop will contain the user interface, which will be written in C++/Python and will be used to place an order from the Daily Byte Cafe menu and to provide the room number in the ECEB to deliver to. The laptop will use the data sent by the GPU over wifi to control the robot, using the following software sub-components:
**(i) Localization and Mapping:** this will be done with the help of AprilTags, which will be stuck on the walls at specific locations in the ECE Building and detected by the camera; we will make use of the apriltags_ros libraries to perform bundle calibration and video-stream tag detection. We will also make use of visual SLAM, using the slam_toolbox ROS library.
**(ii) Obstacle Avoidance and Path Planning:** We will be using "global_planner" for global path planning, "teb_local_planner" for optimizing the robot's trajectory with respect to trajectory execution time, separation from obstacles, and compliance with kinodynamic constraints at runtime, and "locomotor", which provides a mechanism for controlling what happens when the global and local planners succeed and fail.
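The ROS packages above do far more, but the core idea behind grid-based global planning can be illustrated with a breadth-first search over an occupancy grid; this is a simplification for intuition, not the actual global_planner implementation:

```python
# Breadth-first search over an occupancy grid (0 = free, 1 = obstacle):
# the simplest form of the shortest-path search a global planner performs.

from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal avoiding
    obstacles, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through parents
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

In the real system, the occupancy grid would come from the SLAM map, and the local planner would smooth this coarse path against the robot's kinodynamic constraints.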
**4. Criteria for Success:**
The project will be considered successful if:
(a) The robot navigates around the ECE Building, avoiding the obstacles in its path, and reaches the Daily Byte successfully
(b) The robot navigates around the ECE Building, avoiding the obstacles in its path, and delivers the food correctly
(c) The robot navigates around the ECE Building, avoiding the obstacles in its path, and returns to a designated "rest station" in the ECE building lobby.
|60||Self Cleaning Table - Revised
|Ruhao Xia||Arne Fliflet||design_document1.pdf
|Anders Cox: ajcox2, Armando Terrones: armando2, Kevin Thompson: krthmps2
Customers expect a restaurant to be clean enough for safe food consumption and limit the spread of disease in public places. However, studies* show that the tabletop can be one of the least sanitary parts of a restaurant, often containing more germs than the toilet seat. Although restaurants are required to wipe a table after each use with disinfectant, they often use the same rag all day, effectively spreading the germs instead of killing them. Employees also tend to miss certain parts of the table, if not forgetting the table completely. This leads to the spread of sickness and unhappy customers.
*2017 quote on Today show, Dr. Charles Gerba, Microbiologist, University of Arizona
# Solution Overview
We propose to design a table which cleans itself after each use. This design is specifically for rectangular tables, where the legs are in the center and people are seated on the two longer sides. The cleaning system is a single bar which spans the table and holds the cleaning tools. The bar is held up by a pair of rails, which are mounted on the underside of the table. These rails sit almost flush against the underside of the table, allowing just enough room for the ‘handles’ of the cleaning system to hold on to them. Next to one of these rails is a gear track, which allows the system to move via a motor and an aptly sized gear. A single motor should be enough to move our entire system, because we will use ball bearings to reduce friction against the guard rails, and we will keep the system itself as lightweight as possible. When not in action, the tools will be removed from the surface of the table and hidden inside a small case on one of the unused ends of the table. In this way, we minimize the impact of our design on both the tablespace and the legroom.
A sketch of this design is here:
A passive IR motion sensor mounted on the underside of the table will be responsible for detecting patrons. If people leave the table for sufficiently long, then we will automatically begin cleaning. As the cleaning system begins moving, the tools are initially raised slightly and the sprayers are off. IR break beam sensors are mounted at the front of the cleaning system, and will detect any large objects left on the table. If anything is found, then the cleaning cycle will be halted and the tools retracted. A warning light will signal that the table was unable to complete its cleaning cycle, and the system will wait for someone to come clear the table. If no people or objects are detected as the bar traverses the entire length of table, then we will lower the tools to begin spraying down and wiping off the table, moving back towards the housing. More break beam sensors on the back side of the system will also double check that the table is still clear, and cause the system to halt if something is found.
The housing at the end of the table will contain a drip pan for any drippings off of the cleaning tools. There will also be a container for disinfectant spray, and a small pump to send it to the cleaning system. The control system will lie either inside or right next to the housing, on the underside of the table. This will connect to the sensors, as well as the speaker and status LEDs on the top of the housing.
## Novelty of Idea
- Automation guarantees consistency in the cleanliness of table
- Relieves employees of work
- Speedy automated cycle for fast usage
- Scalable to many sizes of table
- Cleans directly after use, making table easier to clean
- Only requires employee bussing for large objects or major spills
- Minimal regular maintenance of emptying drip pans when full and occasionally replacing the drying cloth
- Low competition since this is a novel idea
- Main competition is cleaning robots that operate similarly to a Roomba
- Issues with cleaning robots that our idea solves
- Long cleaning cycle
- Must be removed after use
- Typically too large for small tables
# Solution Components
## Power Subsystem:
- Convert wall power to usable power for controller, sensor, cleaning arm, and notification subsystems.
## Cleaning Arm Subsystem:
- Use a track and motors to move the arm down the table in order to clean and sanitize it. The arm tentatively consists of sprayers for cleaning solution, a squeegee, and a drying cloth, arranged so they are used in that order during the cleaning cycle
- Will hide in housing when not in use, not affecting the usable width of the table
- Gear track will be flat against the underside of the tables
- Guide tracks will be almost flush against the table, taking up no more than 2 inches of legroom from the patrons.
## Sensor subsystem:
- Infrared break beam sensor to detect cutlery or other objects on table
- Passive infrared sensor to detect whether people are present, and to detect when patrons leave so the cleaning process can start.
- Start button that patrons can press to start the cleaning process. If it is not pressed, then, similar to automatic toilets, the table will clean itself once it senses that patrons have left.
## Control subsystem
- We will use an ATMega328 microcontroller for the control logic
- It will use sensor data to determine if cleaning is safe and no people/cutlery are present.
- Will also control arms through their motion.
- Controls the notification system.
- Keeps track of the internal finite state machine.
## Notification Subsystem
LEDs to indicate the status of the table - low disinfectant, unable to clean, etc.
Small speaker to alert staff when entering warning state
## High level Cleaning Cycle Process
1. Check if people are present: if no, move to 2; if yes, return to 1.
2. Wait a short amount of time and move to 3.
3. Check if people are present: if no, move to 4; if yes, return to 1.
4. Move the cleaning arm away from the housing, across the table, scanning for large objects. If an object is encountered, move to 5; if not, begin the cleaning cycle at 6.
5. Return home and show a status saying the table needs objects removed. Move to 1.
6. Set the arm in the cleaning start position.
7. Lower the arm to cleaning height.
8. Start the sprayers and begin moving the arm toward home.
9. As the arm moves back to home, sense for large objects in the way.
10. If an object is encountered, stop the arm and show an error status on the table; if not, continue cleaning.
11. Once back at the housing, return to the sensing state.
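The cycle above maps naturally onto the finite state machine the control subsystem tracks; a simplified transition function (state names are our own shorthand, not the team's firmware) might look like:

```python
# Simplified transition function for the self-cleaning table's cycle:
# SENSING (waiting for patrons to leave) -> SCANNING (arm sweeps out,
# checking for objects) -> CLEANING (spray/squeegee on the way home),
# with ERROR raised whenever an object blocks the arm.

def next_state(state, people_present, object_detected):
    if state == "SENSING":
        return "SENSING" if people_present else "SCANNING"
    if state == "SCANNING":
        if people_present:
            return "SENSING"          # patrons came back; abort
        return "ERROR" if object_detected else "CLEANING"
    if state == "CLEANING":
        return "ERROR" if object_detected else "DONE"
    if state in ("ERROR", "DONE"):
        return "SENSING"              # staff cleared table / cycle complete
    raise ValueError(f"unknown state {state!r}")
```

The ATMega328 would evaluate this on each timer tick using the PIR and break-beam readings, driving the motor and notification subsystems from the resulting state.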
# Criterion for Success
- Completely clean and sanitize table
- Detect objects on the table and halt cleaning cycle if objects are present
- Detect people and do not cycle if people are present.
- Show table status as ready for use or other service needed.
- No impact to the usability of the table for customers.
|61||Hip Hop Xpress: Power Management System for Converted School Bus
|Ruhao Xia||Jonathon Schuh||design_document1.pdf
|Antonio Rivera (amr2), Eros Garcia (egarci90), Anabel Rivera (anabelr2)
**Hip Hop Xpress: Power Management System for Converted School Bus**
**Problem** - Dr. Patterson pitched his project, the Hip Hop Xpress bus, in class. This converted school bus will serve as a mobile educational platform to teach children about STEM through hip hop and music. It will contain DJ equipment, such as stereos and mixing tables, and various LEDs and other electronics throughout the inside and outside of the bus. In order to run the equipment while the bus is parked and the engine is off, we will need to implement a battery reserve. The bus is intended to be as flashy as possible, but it should also be as green as possible. To this end, the bus’s core electronics should be battery-driven and work for an extended period of time without running the engine.
**Solution Overview**
Due to the scope of this project, a large portion of this project will be bought off-the-shelf. We will be focusing on creating a custom Battery Management System and Data Management subsystem.
We will be designing a battery pack from lead-acid batteries due to their robustness and because weight is not a concern for this project. The pack will be managed by a custom battery management system (BMS) PCB. This BMS will be able to charge the batteries from either the solar array, which we will help install on the roof of the bus, or an AC outlet. The BMS will also perform the basic tasks of monitoring the voltage, current, and temperature of the batteries. We will use voltage probes to directly measure the voltage of the batteries. To measure current, we are considering either attaching a hall-effect current sensor or building a current-sensing circuit. For temperature, we will place temperature sensors at critical points on the batteries.
The data management system will use all of these sensors to provide a clear picture of the condition of the batteries. This information can be used for applications such as emergency shutoffs or battery modeling. The battery modeling is especially useful as it can then be sent to a display, such as an LED display connected to a Raspberry Pi, to provide valuable information about the batteries. We can display information such as the battery percentage remaining, outgoing power from the batteries, incoming power to the batteries, and the remaining time that the bank can supply the current power draw. The power data can be stored and transmitted for further analysis.
# Solution Components
## Data Management
This subsystem will clean and transmit sensor data for battery diagnostics, as well as save sensor data for analytics. The signals will be digital and will feed into 3 microcontrollers, such as Arduino Megas, each of which has a large number (54) of digital pins. This subsystem must process the sensor data to remove some noise. We will need to take into account the delay that will arise from processing: too much delay can affect the controls and arrive too late to be useful for shut-off purposes.
By our power-usage estimates, we will likely need 18 batteries to provide the correct amount of power for this purpose. Therefore, there will be 54 sensors feeding into each of 3 Arduino Megas (162 total signals), and these 3 microcontrollers will be feeding into another controller capable of fast computations, such as a Raspberry Pi.
The Raspberry Pi will communicate with the battery diagnostic subsystem. It will also take a snapshot reading every 30 minutes of all the sensor data and power usage data from the solar panels after post-processing and save it, likely as a .csv file for its ease-of-use. Readings can be displayed on a monitor and show information about energy generated from the solar panels as well as an estimate of how much time the batteries can continue to be used. We are exploring transmitting this data to an app, so quick diagnostic information can be viewed from outside the vehicle.
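The periodic snapshot step on the Raspberry Pi could look something like this. This is a hedged sketch: the 30-minute interval comes from the text, but the field names and the idea of passing a readings dict are illustrative assumptions.

```python
import csv
import io

SNAPSHOT_INTERVAL_S = 30 * 60  # snapshot every 30 minutes, per the design


def write_snapshot(readings, fileobj):
    """Append one row of post-processed sensor data as CSV.

    `readings` is a dict such as {"timestamp": 0, "battery_01_voltage": 12.6}.
    A header row is written only when the file is new (empty).
    """
    writer = csv.DictWriter(fileobj, fieldnames=sorted(readings))
    if fileobj.tell() == 0:          # new file: write the header first
        writer.writeheader()
    writer.writerow(readings)


# Demonstrate with an in-memory file (the Pi would use a real .csv on disk).
buf = io.StringIO()
write_snapshot({"timestamp": 0, "battery_01_voltage": 12.6}, buf)
```

Keeping snapshots as flat CSV rows makes the later analytics and "time remaining" estimates easy to compute offline.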
## Battery Management System:
This system would be split up into four different subsystems: input power, output power, battery diagnostics, and power flow system.
## Input power subsystem:
This subsystem would be able to take incoming power from various sources. The three primary input methods would be the bus alternator, the mounted solar panels, and a grid connection. This subsystem would then standardize the power to a stable 12V line, which would feed into the power flow subsystem.
## Output power system:
This subsystem would take in the 12V line and convert it to whatever the bus will need. Currently, we are looking at creating an AC line to power the majority of the electrical on the bus. Depending on how many DC components end up being needed on the bus, then we can add an additional DC line for those devices.
## Battery diagnostics subsystem:
This subsystem would be in charge of handling all of the raw sensor data and making general decisions for the power flow system. It would consolidate sensor information and, using triple redundancy, determine which values are true. In other words, we will have three each of temperature, voltage, and current probes on each battery. This subsystem would also be able to sense if a single sensor in a group is faulty; it will then flag the sensor to be replaced by the user. It would also be in charge of decisions for the power flow subsystem. This subsystem will have three copies to allow for triple redundancy. (link: https://en.wikipedia.org/wiki/Triple_modular_redundancy)
## Redundancy subsystem:
This system will be made to be as reliable as possible. This system would manage the Battery diagnostics subsystems and detect errors in the system. Due to the triple redundancy in the Battery diagnostics subsystems, it will be able to both detect errors and determine which of the subsystems is faulty. It would accomplish this via polling and use the majority rule to determine correctness.
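The majority-rule polling described above can be sketched as a simple vote over three redundant readings. This is illustrative only; the relative tolerance value is our assumption and would be tuned per sensor type.

```python
def vote(a, b, c, tolerance=0.05):
    """Majority-vote three redundant sensor readings.

    Returns (value, faulty_index), where faulty_index identifies a reading
    that disagrees with the other two, or None if all three agree.
    Two readings "agree" when within `tolerance` (relative) of each other.
    """
    def close(x, y):
        return abs(x - y) <= tolerance * max(abs(x), abs(y), 1e-9)

    if close(a, b):
        return ((a + b) / 2, None if close(b, c) else 2)
    if close(a, c):
        return ((a + c) / 2, 1)
    if close(b, c):
        return ((b + c) / 2, 0)
    raise ValueError("no two sensors agree; replace this sensor group")
```

The same voting logic applies one level up, where the redundancy subsystem polls the three battery diagnostics copies and flags the outlier.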
## Power flow subsystem:
This subsystem controls the power flow to and from the batteries. When incoming power exceeds the load, the excess power will be used to charge the batteries. When the outgoing load exceeds the incoming power, it will reverse the direction of power flow, from the batteries to the rest of the bus. This subsystem is also responsible for individually charging, discharging, connecting, and disconnecting each battery, which should increase the safety of the charge/discharge cycle. It will also have the ability to disconnect batteries from the circuit for safety purposes.
# Criterion for Success
Our demo would be split into two major tests based on which system we are testing: one small-scale test that can be run in the lab, and another larger test in the bus.
To test our PCB, we would run lab tests that force responses depending on the test. We can start by running the system under perfect conditions and show that it can reliably charge and discharge the batteries. We would then test its ability to charge the batteries using excess incoming power while a load is connected. The last test under perfect conditions would be to draw from the batteries while supplementing them with any incoming power. During these tests, the voltage and current readings on the oscilloscope should be stable and clean, and no components should overheat. We will check the thermals using a laser thermometer.
We would then simulate errors in the system to show that the system will continue to function. We will simulate an error in the data subsystem to show that the redundancy subsystem will function. It should be able to determine the correct information from the three lines of data and flag the faulty battery diagnostics subsystem. Along with this, we can have the battery diagnostics subsystem flag faulty sensors by simulating sensor failure.
We would ensure that each of the above subsystems is working using oscilloscopes and other lab equipment as necessary.
The larger, and arguably more fun, test would be a demo using the entire bus. We would show that our system is capable of running the bus without issue, with the bus running lights, speakers, etc. The system should display the necessary information and run similarly to the first round of tests in the lab.
|62||Guitar Learning Tool
|Jonathan Hoff||Jonathon Schuh||design_document1.pdf
|Dillon McNulty, Kyle Gibbs, Oumar Soumare | dillonm2, kylepg2, osouma2
GUITAR LEARNING TOOL
Problem: Current methods for learning to play instruments are too difficult and frustrating for many people.
Solution Overview: By utilizing technology to its highest potential, we can create a tool to immediately teach a user any song they desire at any speed, and process real-time feedback about the user’s performance for improvement.
Subsystem 1: Music Sheet Analysis - This would be a script capable of processing a music sheet (either as PDF or PNG, to be decided) into an array of data, corresponding to the notes that should be played at a given time interval.
N/A, this will be software-based
Subsystem 2: Arduino Playback - Once the music data has been processed, we would send that out to the Arduino. The Arduino would be connected to a series of LED strips on the fretboard of a guitar, and light up on the fret to be played at that time interval. Tempo adjustment would be allowed and necessary so that the user can play at slow speeds and gradually increase their performance. Further complexity could include sectional playback, where the user can only play a small portion of the song to practice further.
Arduino Board or Similar
WS2812B Individually Addressable LED Strip
Alphanumeric Display for Song Name and Tempo
Buttons to Control Song Playback
Speaker for Song Playback? (Maybe)
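The tempo adjustment in this subsystem could be handled by mapping each note's beat position to a wall-clock timestamp before driving the LEDs. This is a sketch; the `(beat, string, fret)` event format is our assumption about what the music sheet analysis would output.

```python
def schedule_events(events, bpm):
    """Convert (beat, string, fret) note events into (ms, string, fret)
    timestamps at the requested tempo.

    Lowering `bpm` stretches the schedule, letting the learner practice
    the same song slowly and gradually increase the tempo.
    """
    ms_per_beat = 60_000.0 / bpm
    return [(round(beat * ms_per_beat), string, fret)
            for beat, string, fret in events]
```

Sectional playback then becomes a matter of slicing the event list between two beat positions before scheduling it.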
Subsystem 3: Recording Input - While the frets are being lit up, the user shall attempt to play the notes as they come. Some interface from the guitar back to the computer or Arduino would be necessary to detect what notes the user is playing and if they are correct or not.
MIDI Audio Interface to send guitar’s digital signal to the computer
Software to process signal
Subsystem 4: Performance Analysis - This would provide the user with metrics to analyze their accuracy and to record improvement over time. The data recorded from the previous subsystem would be processed and sent back to the user, so they can see what areas of the song they played well or could improve upon more.
N/A, also software-based
Criterion for Success:
Music sheet of any song can be processed for playback in a comprehensive way on the guitar.
The song can be played at any tempo within a certain range to allow the user to practice at a slow pace.
The guitar-to-computer interface can accurately recognize what notes are being played by the guitar.
The performance metric provides meaningful, visual feedback that improves the user’s learning experience.
|63||Hardware Security Module (With ability to persist symmetric keys)
|Evan Widloski||Rakesh Kumar||design_document1.pdf
|Problem: The Trusted Platform Module (TPM) is an onboard cryptoprocessor/hardware security module on most machines, and it offers a cheap way to persist asymmetric keys and perform other cryptographic operations on board with FIPS 140-2 Level 2 security. The problem is that if we want to store symmetric keys (such as AES-256 keys), we cannot store them long-term with the given Windows API (CNG: Cryptography Next Generation).
Solution Overview: My group plans to create our own small-scale and affordable hardware security module with an API that allows for persisting symmetric keys on Windows machines that satisfies the conditions for FIPS 140-2 Level 2 (or 3) security. We can then use the keys to encrypt bytes sent to the board. We plan to have a few subsystems working together which includes memory for our key storage, an encryption system, a decryption system, and interfaces with a computer.
[Memory Unit] - A large memory unit that can store multiple AES-256 keys. It will allow a key to be loaded at a given offset and then selected for use in encryption/decryption. A separate, smaller memory unit holds the single currently selected key, which is the one used in the cipher.
[AES Encryption Unit] - Encrypts a message using the selected stored key which is currently loaded. Will implement AES-256 encryption using logic gates.
[AES Decryption Unit] - Decrypts a message using the selected stored key which is currently loaded. Same as Encryption Unit.
[USB Connector] - This allows us to connect our device to a computer and comes loaded with an API which allows the user to send keys to the device and messages to be encrypted/decrypted.
[Tamper Evident Detector] - In order to protect the stored keys to FIPS 140-2 Level 2+ standards, we will have a subsystem solely dedicated to detecting tampering with the device. If tampering is detected, all of the stored keys must be wiped. We will likely solve this with a pressure sensor arranged so that the only way the pressure is released is by opening the box. There are a few other pieces we are still looking to add so that there is absolutely no way to open the box and see the hardware without wiping the keys.
In order to be effective we must implement the key storage system, the ability to load keys, and the tamper evident detectors. These are the most basic requirements for a hardware security module. We have hopes to add in more functionalities, more specifically the ability to encrypt/decrypt a message and an effective random number generator all in hardware. The tamper evidence detectors are the most important piece because otherwise our module is no more secure than software.
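The memory unit's load-at-offset, select, and tamper-wipe behavior might be modeled like this. The 32-byte key size follows from AES-256 (256 bits); the slot count and method names are assumptions for illustration, not the actual hardware interface.

```python
KEY_SIZE = 32  # AES-256 keys are 256 bits = 32 bytes


class KeyStore:
    """Sketch of the key-storage memory unit: many slots, one active key."""

    def __init__(self, slots=8):
        self._slots = [None] * slots
        self._active = None          # the single key currently in the cipher

    def load(self, offset, key):
        """Load a key into the slot at `offset`."""
        if len(key) != KEY_SIZE:
            raise ValueError("expected a 32-byte AES-256 key")
        self._slots[offset] = bytes(key)

    def select(self, offset):
        """Copy the key at `offset` into the cipher's key register."""
        if self._slots[offset] is None:
            raise KeyError("no key at that offset")
        self._active = self._slots[offset]

    def wipe(self):
        """Tamper response: destroy all stored keys."""
        self._slots = [None] * len(self._slots)
        self._active = None

    @property
    def active_key(self):
        return self._active
```

In the real device this logic would live in hardware, with `wipe()` wired directly to the tamper-evident detector so it cannot be bypassed by software.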
|64||Ѡand: GESTURE CONTROLLED SYSTEM FOR SMART HOME
|Shuai Tang||Joohyung Kim||design_document1.pdf
|**Problem statement:** The lack of gesture-controlled devices points to a niche in the market that hasn't been tackled. Gestures and body language have the potential to serve as intuitive ways to interact with technology, and we aim to tackle this problem by creating a fully featured gesture-controlled system for a smart home.
**Solution overview:** We are making a system of a hub and a smart band which specializes in detecting gestures to control various devices. Our band would send input data from inertial measurement unit sensors to the hub which will use our algorithm to detect the gesture. Upon detecting the gesture, it would signal the appropriate device to perform the action signaled by the gesture.
To avoid false positives, our system would have a signature gesture that would signal our hub to start looking for gestures.
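The signature-gesture gating could work roughly like this on the hub. This is a sketch under assumptions: the listening-window length and "one command per arming" policy are our choices, not part of the proposal.

```python
class GestureGate:
    """Ignore gestures until the signature gesture arrives, then listen
    for a short window; this limits false positives from everyday motion."""

    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self._armed_until = None

    def on_gesture(self, name, now):
        """Return the gesture to forward to a device, or None if ignored."""
        if name == "signature":
            self._armed_until = now + self.window_s   # arm the hub
            return None
        if self._armed_until is not None and now <= self._armed_until:
            self._armed_until = None                  # one command per arming
            return name                               # forward to devices
        return None                                   # not armed: ignore
```

A variant could keep the window open for multiple commands; the single-command version shown here is the more conservative false-positive tradeoff.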
**Wrist Band:** The band will act as the detector of all gestures performed by the user. It will be designed using a flexible PCB with IMU sensors (accelerometer, gyroscope, and magnetometer) that will send raw data to a hub. The wrist band will also contain an IR emitter to allow detection of which device is being pointed at.
**Hub:** The hub will serve as the center of our ecosystem by being able to recognize and interpret the incoming gestures from the bands and be able to control other devices.
**Device Connections:** Our device will need to connect with smart devices through wifi/ bluetooth and through Infrared sensors.
**Criterion for Success:** For this project to be an effective solution to controlling devices in a smart home, there are a few key abilities that we will need to have:
1. To control home devices with gesture control
2. Compute and recognize gestures by the band and interact with smart devices
3. Competitive pricing to Google Home/ Amazon echo
|65||Electric Paintbrush Cleaner
|Madison Hedlund||Rakesh Kumar||design_document1.pdf
|Group Members :[Luis Bibian], [Yael Legaria]
Net IDs: [bibian2], [legaria2]
Problem: When painting, most people like to keep a cup or bucket of water around to clean their brushes in, but this can get very messy, especially when you're cleaning the brushes with your fingers. Eventually, the water gets dirty, you get all wet, your paintbrushes aren't clean, and your masterpiece gets ruined. Now, what was supposed to be a relaxing hobby has become a stressful situation.
Solution Overview: An electric paintbrush cleaner that perfectly cleans your paintbrush every time. All you have to do is insert your brush. Think of it as an electric pencil sharpener, but instead of sharpening pencils that are inserted, it cleans paintbrushes.
Ideally we want to be able to plug the device into a wall so we will need to use an AC-DC converter with several available output voltages to power the system.
-A clean water tank with a small water pump to send water to the cleaning mechanism.
-A sensory system that senses when a paintbrush is inserted which tells the machine to start/stop operation.
In order to sense the motion of the paintbrush being inserted to begin the cleaning process, we can use a PIR sensor that will be placed inside of the device that will sense when the paintbrush is inserted. This signal would be processed by a microcontroller, and in order to avoid the PIR sensor continually triggering once the paintbrush has been inserted due to any additional movements, we can set a cleaning cycle duration where the PIR sensor data will be ignored until the cleaning cycle has officially ended.
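The PIR lockout described above could be sketched as a small controller. The cycle duration here is an assumed placeholder; the real value would be tuned to the cleaning mechanism.

```python
CYCLE_S = 30  # assumed cleaning-cycle duration in seconds


class CleanerController:
    """Start a cleaning cycle on PIR motion, ignoring further PIR
    triggers until the cycle has officially ended."""

    def __init__(self, cycle_s=CYCLE_S):
        self.cycle_s = cycle_s
        self._busy_until = -1.0

    def on_pir(self, now):
        """Return True if this trigger starts a new cleaning cycle."""
        if now < self._busy_until:
            return False              # mid-cycle movement: ignore
        self._busy_until = now + self.cycle_s
        return True
```

This keeps the brush's small movements during cleaning from re-triggering the cycle, exactly the failure mode the paragraph above describes.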
-A cleaning mechanism such as spinning cleaning brushes that will work with the water to clean the brush.
The spinning brushes used to clean the paintbrush can be driven by a DC gear motor. The brush cleaners will need to spin parallel to the direction of the bristles in order to avoid damaging the paintbrush. In order to not interfere with the water coming from the clean water reservoir, we can have the motorized brush cleaning mechanism initially positioned further back within the device housing so that the paintbrush can first be rinsed by the water coming from the water pump. The brush cleaning mechanism can then move forward to position the cleaning brushes above the paintbrush, and for this we would have to build a belt drive system to move the cleaning mechanism back and forth.
-A drain and disposable tank to collect the dirty water used during the cleaning operation.
Criterion for Success: The main goal of this project will be to create a machine capable of sufficiently cleaning a paintbrush that has recently been used. This means that if you were to touch the brush to paper after the cleaning operation, you should see no color. This should all be done without harming the paintbrush, at least no differently than you would when cleaning it with your fingers, and it should happen automatically, with the person only having to insert and hold his or her paintbrush.
Overall, the group will need to design an appropriate power distribution layout for the machine, create a working sensory system, waterproof the circuitry, design a harmless cleaning mechanism, properly set up a start/stop water pump system, and design the structure where all the components will be placed to be efficient and visually appealing for the user.
|66||Automatic pill dispenser
|Chi Zhang||Joohyung Kim||design_document1.pdf
|**This project is pitched by Jinal Shah.**
Team: Qingyu Li (qingyul2), Wennan Zhai (wennanz2), Shengyu Ge(shengyu3)
Taking medications as instructed and punctually can be a hard task for patients who need many kinds of pills, especially for senior people with memory problems. Given the number of different types of medicine, it is often difficult and troublesome for patients to keep track of the correct dose and consumption time for each medicine. It is also very common for older people to need reminders or notifications to take their pills. Smartphone alarms can be helpful in this situation, but many people don't use smartphones or are not comfortable with them.
# SOLUTION OVERVIEW
We want to design an automatic pill dispenser that can alert the user to take medicine on time and automatically dispense the correct type and dose of the pills. It will also show the instructions for those pills on a screen. To make the dispenser user-friendly, we plan to develop a mobile application for registering/keeping the information of the medicines and setting dose and time. It can also set off alarms at the set time. Ideally, there will be different profiles for different users for easier family usage.
The dispenser will work in the following steps:
1. The user puts different types of pills into unique containers in the dispenser and registers the medicine information such as pill name, dose and time to take, expiration date, etc. in the mobile app.
2. The dispenser will keep track of time (with a real-time clock) and the medicine information. At the set time, it will automatically choose the correct types and numbers of pills according to the instructions previously set, put them into the tray, and display the instructions on the screen. An alarm will go off both at the dispenser and on the phone/smart home devices.
3. In other situations, such as when medicine has expired or needs a refill, the dispenser will display a notification and alert the users on their phones.
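Step 2's time check might reduce to something like this in the dispenser firmware. This is a sketch; the schedule format (minutes-since-midnight keyed to container/count pairs) is our assumption about how the app's settings would be stored.

```python
def pills_due(schedule, now_minutes):
    """Return the (container, count) pairs due at the current time.

    `schedule` maps a time in minutes-since-midnight (compared against the
    real-time clock) to a list of (container_index, pill_count) entries
    configured via the mobile app.
    """
    return schedule.get(now_minutes, [])


# Example schedule: 08:00 and 20:00 doses.
schedule = {8 * 60: [(0, 1), (2, 2)],   # 1 pill from bin 0, 2 pills from bin 2
            20 * 60: [(1, 1)]}          # 1 pill from bin 1
```

The same lookup, run once per minute, would also be the natural place to trigger the dispenser's alarm and the phone notification.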
# SOLUTION COMPONENTS
- Subsystem1: Mechanical system
The mechanical system will be designed by a Mechanical engineering senior design team and the dispenser itself would include motors and gears for dispensing pills. This mechanical system will eventually be integrated with the software and hardware we develop to perform the functionality.
- Subsystem2: Electrical dispenser control/notification system
The control system will be responsible for choosing the correct containers and pill counts, displaying notifications, and generating the correct signals for the mechanical system. The dispenser is designed to be used at home rather than carried around, so an external AC/DC adapter will suffice.
- Subsystem3: Bluetooth/WIFI connection system
There will be a Bluetooth/WIFI module inside the dispenser that can communicate with the mobile app.
- Subsystem4: Mobile app
The mobile app should be simple and easy to use. It needs to properly connect and send medicine information to the dispenser after the user setup. Will add Alexa/Google Home support if possible.
# CRITERION FOR SUCCESS
It will dispense the right pills at the right time via the instructions set by users.
Notifications and instructions will show up on the screen and mobile app.
|67||ChessExpress: The Voice-Controlled, Automatic Chessboard
|Megan Roller||Rakesh Kumar||design_document2.pdf
Adithya Rajan (adithya2), Dean Biskup (dbiskup2)
There are many people who would like to play chess with players online but would prefer not to stare at a screen and use online chess interfaces, instead preferring a physical board. Some people may also want to play on a physical board but cannot physically interact with the board and its pieces due to disability.
# Solution Overview
We propose an automatic chessboard that can move the pieces automatically using an electromagnet and stepper motors, with WiFi capability that can allow it to connect with another board anywhere in the world. The board will also have speech recognition capabilities that allow a player to move pieces on the board with simple commands such as “A5 to D2.” This will be our primary method of input into the board. We do not plan on implementing more complex speech recognition, such as “knight to e5” or “a2… no, sorry, a3 to d3”, in the interest of keeping things feasible.
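Once speech recognition yields text like "A5 to D2", only a tiny parser is needed to turn it into board coordinates. This is a sketch under assumptions: mapping column letters to x and row digits to y (0-indexed) is our convention, not necessarily the board's.

```python
import re

# "A5 to D2": a column letter a-h, a row digit 1-8, then a destination square.
_MOVE = re.compile(r"^([a-hA-H])([1-8])\s+to\s+([a-hA-H])([1-8])$")


def parse_move(text):
    """Parse 'A5 to D2' into ((x1, y1), (x2, y2)) 0-indexed coordinates,
    or return None if the utterance is not a valid move command."""
    m = _MOVE.match(text.strip())
    if not m:
        return None
    c1, r1, c2, r2 = m.groups()
    to_xy = lambda col, row: (ord(col.lower()) - ord("a"), int(row) - 1)
    return (to_xy(c1, r1), to_xy(c2, r2))
```

Rejecting anything outside this strict grammar matches the stated decision to skip complex utterances like "knight to e5" or self-corrections.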
# Solution Components
## Existing Design
This project is a continuation of a project from ECE 395. As the project exists currently, it has a physical board (⅛” acrylic) with an electromagnet running underneath the board that is able to move a piece from one square to another. The board has no knowledge of the game “chess”, and only moves things from one (x,y) position to another. Also, the movement needs to be made more consistent, as the magnet tends to "drop" pieces while moving in the current iteration of the design, with no error detection systems. Additionally, there are currently no WiFi or microphone/speech recognition subsystems on the board, and the project currently requires a PC to do all of the logic before sending very basic commands to a microcontroller controlling the motors. We would like to change the microprocessor to a beefier one so that our solution can be self-contained and does not require an external PC connection. We also need to create a PCB incorporating all the components, as right now the basic circuit is just on a breadboard.
## Movement Subsystem
The movement subsystem involves motors and electromagnets. There are two stepper motors that move the electromagnet in the Y direction, and one that moves it in the X direction. The electromagnet will move to position itself under the piece that needs to be moved. Once it is in position, the electromagnet will be powered up and will attract the piece towards it. Then it will drag the piece on the board to its desired X-Y position, before de-energizing.
## Voice Command Subsystem
The voice-command subsystem will consist of at least a microphone. It will take in speech signals and provide them to the processor, which will do basic speech recognition, or send it out to an online library/speech recognition service through the WiFi subsystem for processing.
## WiFi Subsystem
This system allows the board to connect to the internet (using an ESP8266 or ESP32 board), enabling functionality to play with other people around the world or send speech signals out to an external speech processing service.
# Criterion for Success
This chessboard will allow for basic chess games using voice control, over the internet. The piece movement will be consistent (no pieces will fail to move to their destination). The board will also know the rules of chess as to prevent illegal moves from being performed.
# Commercial Solutions
There is a commercial solution that is very similar to ours in that it is an automatically moving chessboard that allows for online play ([SquareOff](https://squareoffnow.com/)). Where our project differs from SquareOff is the capability for voice commands, allowing players to play without physically interacting with the board.
|Chi Zhang||Arne Fliflet||design_document1.pdf
Sometimes when we want to practice PingPong, we need to find a partner to practice with us, but what if our friends are busy? In this case, we plan to design a PingPong ball firing system that will detect our position and launch the ball toward us.
We plan to design an autonomous PingPong ball launching system that will use a Bluetooth remote controller to find our location and launch the ball at a set frequency. There are some PingPong ball launchers on the market, but they are all stationary and require the shooting direction to be changed manually. Besides, for a better experience, our device supports different modes such as a random-direction mode and an acceleration mode.
In this part, we will design a machine to launch the ball. The mechanical system will connect with our control unit and use a motor to shoot the ball at a specific speed and angle once it receives the shot signal.
The first part of the mechanical system will be the launching part. We will use a motor to move a mechanical arm, so when the arm hits the ball, the ball will be launched. By changing the speed of the motor, we can adjust the speed of the ball.
The second part of the mechanical system rotates the machine so that it can launch the ball in different directions.
The first part of the control unit will be a physical controller with different buttons to control the different modes we designed into the mechanical (launching) system. We will use Bluetooth in this control unit subsystem to remotely control the launching system so that the user can easily adjust the speed, launch direction, etc. Depending on the schedule, we may buy an off-the-shelf Arduino Bluetooth receiver and transmitter, or we will build our own receiver and transmitter.
We would like to list all the possible buttons here:
- Start/stop launching the ball
- Increase/decrease the frequency of launching the ball
- Increase/decrease the speed of the ball
- Adjust the launching direction (increase/decrease the angle)
The second part of the control unit will be a control circuit to control the mechanical (launching) system so that it can change to a different mode. We will use the microcontroller to adjust the voltage of the motor so that we can adjust the speed of the launching system, and the microcontroller will also adjust the firing direction.
We would like to list all the possible modes here:
- Stationary launch: firing the ball in the same direction
- Random launch: the launch system will shoot the ball in a random direction
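The two modes could reduce to a direction-selection function on the microcontroller. This is a sketch; the ±30-degree sweep range is an assumption, not a measured limit of the rotating base.

```python
import random


def next_angle(mode, current_angle, rng=random):
    """Pick the firing angle for the next shot.

    'stationary' keeps firing at the current angle; 'random' picks a new
    angle in an assumed +/-30 degree sweep about center (0 degrees).
    """
    if mode == "stationary":
        return current_angle
    if mode == "random":
        return rng.uniform(-30.0, 30.0)
    raise ValueError(f"unknown mode: {mode}")
```

Additional modes (such as the acceleration mode mentioned earlier) would slot in as new branches that also vary the shot speed or frequency.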
**PCB design part:**
Overall, we would have two major PCBs.
The first PCB is on the mechanical system; we will integrate the microcontroller, the Bluetooth receiver, and the power circuit, which controls the speed and frequency of the motor.
The second PCB is designed for our remote controller. We will integrate the Bluetooth module with the different buttons on the PCB, such as the direction button and mode button. The Bluetooth module will send signals to the receiver attached to the microcontroller.
**Criterion of Success:**
The launch system can detect our direction.
The launch machine can be controlled by a remote controller.
The machine supports different modes.
We could design an app or another user device to set the launch frequency, speed, and angle. We could also design a method to analyze the user's performance and give some feedback.
|69||Posture-sensing Smart Chair
|Jonathan Hoff||Joohyung Kim||design_document2.pdf
The problem that we are hoping to address in our senior design project is poor posture when sitting in a chair, particularly in the workplace. Back problems due to slouching have reached record levels as people spend more and more time sitting in cubicles, hunched over a computer for hours upon hours every single day. We think we would especially benefit from this project because all three of us have had experience dealing with kyphotic posture. Brian has personally tried to correct his posture through physical therapy, Geonil's friends and family often told him, "You're slouching, you should stop using the computer so much," and Steven had anterior pelvic tilt in the past. All three of us use computers all day and night, and there is nothing to keep us in check throughout the day. By using our product, we hope to make people more aware of the way they are sitting and allow them to track their progress as they try to break the habit! Although there is a similar project from Spring 2018, we are really passionate about this project and wish to continue with this chair idea.
We propose a solution to this problem by creating a smart chair that analyzes an individual's sitting posture using a combination of range and pressure sensors. This chair will read data off two systems -- the back posture and the positioning of the seat. Since we want the back to be as straight as possible, we will have the range sensors adjusted accordingly to match the distance to an individual's back. In addition, we will have the pressure sensors detect the user's position on the seat. The chair will combine both sets of data to determine whether to blink green or red LEDs on the chair. As a minimum viable product, the user will be alerted when they begin to slouch (via LED, vibration, beep, etc.), but we also hope to incorporate some sort of application that would allow them to track their progress over time.
- Sensors: We plan to use ultrasonic sensors to measure distances between the chair and several points in one’s back. We also plan to use an array of pressure sensors in the seat bottom and back.
- Transmitter: Sensor data will be transmitted to computer for processing via USB (or potentially bluetooth time permitting)
- Power: We are considering a few options to power the sensors/other external components: drawing power through USB, regular AC power, or potentially a rechargeable battery if feasible.
- Software: We plan to receive data from the sensors and send them to a computer software to see if the user’s back is slouched. Once the software analyzes the data, it will send a signal back to the LEDs in the chair. These LEDs will determine whether or not the corresponding position of the sensor is in a good range using green or red.
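The slouch check in the software might start as a simple per-sensor comparison against the calibrated baseline. This is a sketch; the tolerance value is an assumption and would actually come from the per-user calibration step.

```python
def check_posture(distances_cm, baseline_cm, tolerance_cm=3.0):
    """Compare each ultrasonic reading against the user's calibrated
    baseline. Returns a list of booleans, one per sensor (True = that
    point of the back is in range, so its LED should be green)."""
    return [abs(d - b) <= tolerance_cm
            for d, b in zip(distances_cm, baseline_cm)]


def is_slouching(distances_cm, baseline_cm, tolerance_cm=3.0):
    """Flag bad posture when any sensor is out of its calibrated range."""
    return not all(check_posture(distances_cm, baseline_cm, tolerance_cm))
```

The per-sensor boolean list maps directly onto the chair's LEDs, while the overall flag drives the alert (LED, vibration, or beep).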
Criterion for Success:
- Product is able to successfully determine between good and bad posture.
- The user is able to see if his or her posture is acceptable through the LEDs on the chair.
- The user is able to visualize his or her posture data through a web interface.
- Stretch goals: Wireless (Bluetooth and battery for power), vibration for alerting the user in addition to LEDs
Since our project is similar to a previous project from Spring 2018, we propose to analyze and determine an individual's sitting posture differently. Our chair has two subsystems that gather data together. The back system will have a vertically aligned array of HC-SR04 ultrasonic sensors to map out an individual's back. The seating-position system will have a grid of pressure sensors to determine positioning. Since everyone has a different physique, there will be a calibration system for accurate measuring. With the data we receive from both systems, we can record the data and display it visually through a web app to show gradual progression. The chair will also be programmed to take incremental snapshots of a user's current sitting position and save them in a database for viewing. While the previous project's main purpose was to notify the user in real time whether they are in a good or bad position, our project records the information so users can improve over the long run.
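The calibration pass mentioned above could be as simple as averaging a short window of readings while the user holds a good posture. The sketch below assumes three sensors and a handful of samples; both numbers are placeholders, not design values.

```python
# Minimal sketch of the per-user calibration pass: average a short window
# of "sit up straight" samples from each ultrasonic sensor to form the
# baseline that later readings are compared against. Sample counts and
# values here are illustrative.

def calibrate(sample_rows):
    """sample_rows: list of per-sensor reading lists taken while the user
    holds a good posture. Returns the per-sensor mean as the baseline."""
    n = len(sample_rows)
    sensors = len(sample_rows[0])
    return [sum(row[i] for row in sample_rows) / n for i in range(sensors)]

samples = [[10, 12, 15], [11, 13, 15], [9, 11, 15]]
print(calibrate(samples))  # -> [10.0, 12.0, 15.0]
```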
|70||Podcast Analyzer with AI for Ringer (Sponsored)
|Shuai Tang||Rakesh Kumar||design_document1.pdf
|This project pitch was given by Prof. Patel in the 2/4 lecture. We have met with Prof. Patel about the details of this project and decided on the following.
Names: Elliot Couvignou (esc2), Sai Rajesh (srajesh2), Bhuvan Radj (br3)
For context, Ringr is an app that allows for high-quality recording of multiple clients regardless of their location and compiles the audio into one recording that sounds like they are in the same room. Their current problem is that analyzing the content of each recorded podcast and picking out the 'good parts' is obviously too time-consuming to do manually, so an AI is needed. With this AI they would like to feed in any raw recordings and, from that, get an audio recording of only the desired parts. They want this feature eventually implemented in their app with a reasonable run-time. What counts as 'good' parts is still left for us to finalize when we meet with Ringr for the first time later this week.
Because this is a software-only project, we will go over the general flow of the AI from input to output and break that down into components. The overall idea of the AI is to break the audio input into small pieces, reconstruct phonemes into words, and those into word meaning, etc., until the AI is aware of the recording's semantics and can pick apart the sections that the user wants. At the moment we plan to create the AI model(s) and save them so we don't need to train a new model for each input analysis. This should greatly improve performance, making it usable as an app feature.
Solution Components: Listed in order from closest to Input (top) to farthest (bot)
Word Recognition: From the input we need to recognize words by breaking down audio into phonemes and discern the types through formant filters. From there we can combine our phonemes together to form words. Some extra touch up is needed here to make sure phoneme words are spelled correctly. From this we should now have a transcript of the input audio which is much easier to work with.
Semantic Recognition: We need to understand what the people are saying in order to know what's 'good'. This area is covered by ECE 448, so we hope to use similar methods here, such as word2vec or n-gram HMMs. This is probably the most challenging and intricate part, as this is a difficult topic to get right in AI.
'Good Part' Recognition: Now that we know what the audio is talking about, we can look for portions of audio that closely resemble what the user wants and use those pieces in our resulting data. We compile each good slice into one recording and return this as our result.
The type of AI model we hope to use is some combination of RNNs and classifiers, as we found these to be the most successful models in recent years. We want this model saved on the app, but we are aware that its size might grow too large and require remote computing (as Google does).
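The final compilation step described above can be sketched independently of the model: assuming an upstream classifier has already scored each fixed-length audio segment for relevance (the scores and the 0.5 threshold below are illustrative, not outputs of any real model), the slicing reduces to grouping consecutive above-threshold segments.

```python
# Sketch of the "good part" compilation step, assuming an upstream model
# has already produced a relevance score per fixed-length audio segment.
# Scores and the 0.5 threshold are illustrative assumptions.

def select_segments(scores, threshold=0.5):
    """Return (start, end) index ranges of consecutive segments whose
    relevance score clears the threshold."""
    ranges, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            ranges.append((start, i))
            start = None
    if start is not None:
        ranges.append((start, len(scores)))
    return ranges

scores = [0.1, 0.7, 0.8, 0.2, 0.9, 0.9, 0.3]
print(select_segments(scores))  # -> [(1, 3), (4, 6)]
```

The returned index ranges would then be mapped back to timestamps and concatenated into the output audio.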
Criterion For Success:
Our model works if it can correctly slice out unwanted segments and keep relevant segments of the recording data. This feature should be able to run on both mobile and desktop in a reasonable amount of time given the input length. If our model ends up using any libraries/API’s, then it still needs to be economical to use (i.e no paid services/licenses).
|71||Power Rack Occupancy System
|Vassily Petrov||Arne Fliflet||design_document1.pdf
Power racks are often the most important piece of equipment at a gym for an individual’s work out. Due to the popularity of the equipment, it is pretty rare to find an empty rack at either CRCE or the ARC at most convenient workout times. Busy students who can only go to the gym on a tight time schedule are often met with the frustration of just standing around waiting for a rack to open up with no established method for who gets the next available rack. As it stands, there is no system in place to monitor the occupancy of the power racks, no system for being notified when there are empty power racks, and no way of determining how long someone’s been using a power rack.
# Solution Overview
We propose a grid of sensor systems that rests on the bar holders of the power racks in a gym to detect occupancy of the power rack, how long the exercise has been performed and provides an online interface for viewing this information for all the equipped power racks at the gym. By making this information available to students, our system can provide the following quality of life improvements to the gym-going experience:
- Gauge how many racks are available for use
- Be politely notified when they have been using the power rack for an inordinate amount of time
- Have access to a method of being notified when there are available power racks
We believe that by designing a system that can be installed on any power rack with minimal hardware modification that is connected to a central processing unit with an arbitrary amount of connections, modern gyms would find this as a realistic service to implement.
# Solution Components
Our system will consist of three primary components: a processing unit (likely a single-board computer or microcontroller) to aggregate the data and act as a data source for the web server, a sensor system to collect data on how the power rack is being used, and a web front end that allows individuals to interact with the data and subscribe to a notification system for empty power racks.
## Processing Unit
This subsystem must be able to collect data from each of the sensor-equipped power racks (possibly wirelessly over WiFi), process and organize it, and send it to our web database (likely Google's Firebase).
## Sensor System
This subsystem must be durable enough to survive on the power rack, able to provide information to the individual using the power rack, able to transmit data to the central processing unit, and able to detect if the power rack is being used. We therefore divide the subsystem along three system specifications.
### Detecting rack occupation
We will be using infrared sensors to detect human movement, an accelerometer to detect bar movement, and pressure sensors for detecting rack-use and weight loads on the rack.
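One way to fuse the three sensor types above into a single occupancy decision is a simple majority vote, which guards against any one noisy sensor flipping the rack's state. The two-of-three rule below is an assumption for illustration, not the final detection algorithm.

```python
# Hedged sketch of the occupancy decision, fusing the three sensor types
# named above. The two-of-three voting rule is an assumption: requiring
# agreement from at least two sensors guards against a single noisy
# reading flipping the rack state.

def rack_occupied(ir_motion, bar_moving, weight_on_rack):
    """Each argument is a bool from one sensor subsystem; the rack is
    considered occupied when at least two of the three agree."""
    return (ir_motion + bar_moving + weight_on_rack) >= 2

print(rack_occupied(True, False, True))   # lifter resting between sets -> True
print(rack_occupied(False, False, True))  # plates left on an empty rack -> False
```

A real implementation would likely also debounce these readings over a time window before reporting a state change.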
### Human interface system
We’d like to be able to provide visual alerts to the individual using the power rack such as how long they’ve been using the power rack. This will likely utilize an LCD with light indicators to denote time information.
### Transmission system
This system must be able to poll the sensors for detecting rack occupation and send it to the central processing unit. There is a good deal of flexibility in the communication protocol that will be used but WiFi seems most likely.
## Web framework
We would like to provide individuals with the ability to access our data regardless of mobile platform, so we believe a web page is the best choice for users to interface with the system. This web system must allow individuals to view the power rack data for a given gym and allow users to enlist into the notification system. When the system detects an empty rack, all users within the notification system will be notified. Likewise, they are able to remove themselves anytime. The notification system can be implemented with SMS using APIs like Twilio.
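The subscribe/notify flow described above can be sketched with the SMS delivery stubbed out as a callable (in practice this would wrap a service like Twilio, as mentioned). The class and method names here are illustrative, not part of any existing codebase.

```python
# Sketch of the empty-rack notification flow. The SMS send itself (e.g.
# via Twilio) is passed in as a callable so the subscribe/notify logic
# stays self-contained; all names here are illustrative.

class RackNotifier:
    def __init__(self, send_sms):
        self.subscribers = set()
        self.send_sms = send_sms          # e.g. a wrapper around an SMS API

    def subscribe(self, phone):
        self.subscribers.add(phone)

    def unsubscribe(self, phone):
        self.subscribers.discard(phone)   # users can leave at any time

    def on_rack_state(self, rack_id, occupied):
        if not occupied:                  # a rack just opened up
            for phone in sorted(self.subscribers):
                self.send_sms(phone, f"Rack {rack_id} is now free")

sent = []
notifier = RackNotifier(lambda phone, msg: sent.append((phone, msg)))
notifier.subscribe("+15551234567")
notifier.on_rack_state(3, occupied=False)
print(sent)  # -> [('+15551234567', 'Rack 3 is now free')]
```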
# Criterion for success
- Successfully transmit power rack usage data from multiple power racks to be displayed on a web page.
- Implement a working notification system to notify users of empty racks
- Accurately identify when a power rack is in use and how long it has been in use
|72||MUSIC DETECTING LIGHT SYSTEM
||Alfredo Sanchez Sanchez
|Ruhao Xia||Jing Jiang||design_document1.pdf
|**Problem**: Home parties often involve playing music. To make parties more exciting and engaging, a light system can be added that changes according to the genre of the music.
Proposed Solution: A system that automatically detects the sounds in a room, filters out the music being played, identifies its genre, and controls the lights' color, etc., to match the music.
-**Detection Subsystem**: Sound sensors with an embedded DSP algorithm to detect and recognize whether music is being played in the environment, and send the recorded music to the control part. We are currently thinking about starting with available DSP algorithms for the music-detection part. Some research we are looking into includes "Realtime Chord Recognition of Musical Sound: a System Using Common Lisp Music" by Takuya Fujishima, "Efficient Pitch Detection Techniques for Interactive Music" by Patricio de la Cuadra, and "Low-complexity Music Detection Algorithm and System" by Yang Gao.
-**Control Subsystem**: Compare the received data to the music pre-stored in a database, recognize the music's characteristics, and control the light patterns accordingly.
-**Light Subsystem**: Color-changing LED light array. Behaves as a normal fluorescent lamp when no music is detected. The music-sync LED strip lights on the market are designed primarily for specific uses like parties: they need to be powered and manually set up before each use, and the room lights must be turned off, which is inconvenient. Our proposed project is more of an intrinsic setting for the room. It's supposed to actively detect music in the environment, and when not in use it behaves like the fluorescent lamp we use every day. For a personal-use example, if the user wants to play some comforting music and have a matching lighting condition, the system will automatically switch to gradually changing warm lights. When hosting parties, the lights can form more complex patterns with "enthusiastic" colors.
-**Power Subsystem**: Power distribution from the wall outlet to all the other subsystems.
**Additional Features**: Time permitting, we would like to add additional features to the control subsystem, including a user interface through which the user can switch between party and personal modes. In party mode, the system will behave more "enthusiastically", using warmer colors and changing color patterns with respect to the music's rhythm, pace, and genre. In personal mode, the system will use a single color and change more gradually.
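A core building block behind the pitch- and chord-detection algorithms cited above is measuring how much energy a block of audio samples carries at a target frequency. A minimal, pure-Python sketch of that building block (not the full detection system) is the Goertzel filter; the sample rate and test tone below are illustrative.

```python
import math

# Minimal pure-Python Goertzel filter: measures the energy a block of
# samples carries at one target frequency. This is only a building block
# for the detection algorithms cited above, not the full system; the
# 8 kHz sample rate and 440 Hz test tone are illustrative.

def goertzel_power(samples, target_hz, sample_rate):
    k = round(len(samples) * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

rate, n = 8000, 800
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(n)]
# Energy at 440 Hz should dwarf energy at an unrelated frequency:
print(goertzel_power(tone, 440, rate) > 100 * goertzel_power(tone, 1000, rate))
```

Scanning a bank of such filters across musically relevant frequencies is one inexpensive route toward the chromagram-style features used in the chord-recognition paper above.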
|73||Sola Gratia Farm
||Cesar Martin Cachero
Jose Antonio Leiva Orcoyen
|Ruomu Hao||Jonathon Schuh||design_document1.pdf
|Background: Sola Gratia Farm is a community-based farm dedicated to producing locally-grown, high-quality, natural produce. Responding to the Gospel, the farm is committed to helping those who lack adequate food resources by donating a minimum of ten percent of its produce to support regional hunger programs.
Problem: Currently Sola Gratia Farm pulls power from the church located to the south of it, so the entire farm is powered off a breaker connected to the church's. Sola Gratia wants to add a third cooler, but it would overload the amperage of the Ameren line that connects the grid to the church. Adding another line would put them on the grid as their own service line, but they would prefer not to do this, as they would have to pay for their own electricity (instead of the church), and it could potentially be expensive. Our task is to create a plan that, once they have more funding, will allow them to add the third cooler without overloading the line.
Solution: The line currently has a capacity of 75 A, and we found that running everything reaches 34-35 A; but when a third cooler is added and everything is turned on, the load momentarily exceeds 75 A, overloading the line. To reduce this peak during startup, we plan to use a Variable Frequency Drive to avoid ramping up the power demand too quickly. We also plan to add solar panels to reduce the power drawn from the line, and to include batteries to store the extra energy for use during the night or for eventual sale.
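A back-of-envelope check shows why the soft start matters. The third cooler's 8 A running draw and the 6x / 1.5x starting-current multipliers below are illustrative assumptions (roughly 6x running current is typical inrush for an across-the-line motor start, and a VFD can ramp the motor so starting current stays near running current); the measured 35 A base load and 75 A capacity come from the description above.

```python
# Back-of-envelope check of the overload scenario. The cooler's 8 A
# running draw and the 6x / 1.5x start multipliers are assumptions:
# ~6x running current is a typical across-the-line motor inrush, while
# a VFD ramp can hold starting current near 1.5x running current.

LINE_CAPACITY_A = 75.0
BASE_LOAD_A = 35.0      # measured with everything currently running
COOLER_RUN_A = 8.0      # assumed running draw of the new cooler

across_the_line_peak = BASE_LOAD_A + 6.0 * COOLER_RUN_A   # 83.0 A
vfd_soft_start_peak = BASE_LOAD_A + 1.5 * COOLER_RUN_A    # 47.0 A

print(across_the_line_peak > LINE_CAPACITY_A)  # True: momentary overload
print(vfd_soft_start_peak < LINE_CAPACITY_A)   # True: within capacity
```

Under these assumed numbers, only the startup transient exceeds the line rating, which is exactly what the VFD addresses.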
Criterion of success: Develop a theoretical design of the current electrical system that fulfills all the previous requirements and allows a third cooler to be added once economic capital becomes available.
|74||ECE OpenLab Automated Checkout II
||Dhruv Mathur||Jonathon Schuh|
|Part of Project 12|