|1||Vacuum Tube Amplifier
Qichen Jin, NetID: qjin4 (firstname.lastname@example.org)
Bingqian Ye, NetID: bye3 (email@example.com)
Introduction and motive
Our goal is to tackle the amplifier portion of an audio system. There are two basic types of amplifiers: solid-state and vacuum-tube. Since this project spans only three or four months, and a solid-state design would involve too much detail on operational amplifiers and dozens of high-order feedback loops, we decided to build a vacuum tube amplifier; it is less complicated and more achievable within the short deadline. From a market perspective, there is no budget vacuum tube amplifier available in the US, whereas several low-cost solid-state amplifiers do their job quite well (e.g., the Onkyo M-5010). To test the performance of our amplifier, we bought a pair of inexpensive full-range bookshelf drivers. Since this is our first time working with vacuum tube circuits, we will mainly follow classical circuit designs. If we have more time, we will study more complicated vacuum tube circuits such as push-pull and/or class-AB amplifiers. Our plans for the vacuum tube amplifier are detailed below:
Class: Class-A amplifier. Reason: This is the simplest form of amplifier. Although it consumes more power and delivers less output than class-B or class-AB, it can achieve lower THD and is friendlier for us to design and build. Our designated speakers are bookshelf speakers, so high power is not needed, and we avoid the complications of operating in the cut-off region.
Implementation: Single-ended. Reason: A class-A amplifier can technically be implemented as either single-ended or push-pull, but for design simplicity and our limited budget, we will use a single-ended topology.
Choice of tubes: 6J1 ×2 as preamp, 6P1 ×2 as power amp. Reason: The 6J1 and 6P1 are inexpensive tubes that are widely available from Russia and China; they have typical I-V curves and manageable voltage requirements (~250 V anode, ~6.3 V filament). In addition, many successful commercial designs use the 6J1 and 6P1 in their amplification circuits.
Wattage: 5 - 10 W per channel into the 4 Ohm speakers, 100 - 300 mW per channel into ~300 Ohm headphones. Depending on the actual quality of the tubes and the feedback factor, and to avoid self-oscillation (an unstable system), we may adjust the power.
Frequency response: 60 Hz - 18 kHz minimum.
Signal-to-noise ratio (SNR): ≥ 70 dB; the background noise should be negligible compared to the program material.
Total harmonic distortion (THD): ≤ 3% @ 1 kHz.
Stereo isolation: ≥ 50 dB @ 1 kHz. The left and right channels should let the listener perceive sounds coming from different distances and angles, rather than just from two speakers.
Clarity: good clarity and separation of frequencies; for a symphony recording, at least three different instruments should be distinguishable at the same time.
Speaker sensitivity: 87 ± 3 dB.
Speaker impedance: 4 Ohms.
Speaker wattage: 15 W maximum.
Speaker frequency response: 50 Hz - 20 kHz.
Headphone sensitivity: 90 dB.
Headphone impedance: 300 ohms.
Headphone wattage: 100 mW.
Headphone frequency response: 50 Hz - 20 kHz.
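As a sanity check on these wattage targets, the required output swing follows from P = V_rms² / R. The sketch below (function name and structure are ours, not part of the proposal) computes the RMS voltage, RMS current, and peak voltage the output stage must deliver:

```python
import math

def output_requirements(power_w, load_ohms):
    """RMS voltage/current and peak voltage needed to deliver power_w into load_ohms."""
    v_rms = math.sqrt(power_w * load_ohms)   # P = V^2 / R  ->  V = sqrt(P * R)
    i_rms = v_rms / load_ohms                # Ohm's law
    v_peak = v_rms * math.sqrt(2)            # sine-wave crest factor
    return v_rms, i_rms, v_peak

# 10 W into the 4-ohm speakers (upper end of the spec)
speaker = output_requirements(10, 4)
# 300 mW into the ~300-ohm headphones
headphone = output_requirements(0.3, 300)
```

The speaker case needs only about 6.3 V RMS but roughly 1.6 A RMS, while the headphone case needs nearly 9.5 V RMS at a few tens of milliamps, which is why the output transformer ratios for the two loads differ so much.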
|2||Robotic Caricature Artist
| We want to make a robotic “caricature artist” consisting of a 2D pen plotter, made from an end-effector (with a pen), some string, and stepper motors, mounted vertically onto an easel. A computer equipped with a camera would capture our subject, vectorize the image, and pass it to our plotter, which would draw the image onto a piece of paper.
This project can be broken into modules which can be designed (or procured) and tested independently: 1) a software module that uses basic image processing operations to apply a cartoon effect to our image 2) a software module that converts a raster image to a vectorized format 3) a software module that converts a vectorized image to G-code instructions 4) and finally, a 2-D plotter that receives G-code instructions.
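Module 3 (vectorized image to G-code) is mostly bookkeeping once the strokes exist. A minimal sketch, assuming each stroke is a polyline of (x, y) points in millimeters and that pen up/down is modeled as Z moves (a real plotter might instead toggle a servo; the feed rates here are illustrative):

```python
def polyline_to_gcode(stroke, feed=1500):
    """Emit G-code for one vectorized stroke (a list of (x, y) points in mm)."""
    x0, y0 = stroke[0]
    lines = ["G0 Z5",                        # pen up for the travel move
             f"G0 X{x0:.2f} Y{y0:.2f}",      # rapid to the stroke start
             "G1 Z0 F500"]                   # pen down
    for x, y in stroke[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    lines.append("G0 Z5")                    # pen up when the stroke ends
    return lines

gcode = polyline_to_gcode([(0, 0), (10, 0), (10, 10)])
```

Concatenating the output of this function over every stroke in the vectorized image yields the full drawing program for the plotter.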
This project is appropriate for a senior design project because it combines many aspects of the ECE curriculum such as circuit design, mathematics, and algorithms. This project is unique, because we are combining many concepts into a single product -- there exist 2D pen-plotters and software capable of vectorizing images, but the two have never been combined into a single device that achieves our goal of emulating a human caricature artist.
|3||Bone Conduction Lock
|A lock that is unlocked using vibrations conducted through the bones in the user’s hand. The user wears a wristband containing a haptic motor. The haptic motor generates a vibration signal that acts as the "key" to the lock. When the user touches their finger to the lock, the signal is transmitted through the user’s hand and is received at the lock. If the lock receives the correct "key", then it unlocks.|
|4||Electronic Automatic Transmission for Bicycle
Sometimes bikers may not know which gear is the optimal one to select. A bicycle changes gears by mechanically pulling or releasing a steel cable, so we could automate gear changing by hooking a servo motor up to the gear cable. We would calculate the optimal gear for the current conditions using several sensors: two hall effect sensors, one sensing cadence from the pedal crank and the other sensing overall speed from the wheel; we could also use pressure sensors on the pedals to determine how hard the biker is pedaling. These sensors should be sufficient to detect different terrains, since a rider tends to go slower and pedal slower uphill, and go faster and pedal faster downhill. With all this information from the sensors, we can determine the optimal gear electronically. We plan to handle shifting of the rear derailleur; if we have more time, we may consider modifying the front as well.
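The gear-selection logic described above can be sketched in a few lines. This is only an illustration of the idea, not the proposal's actual algorithm: the wheel circumference, gear ratios, and target cadence below are placeholder values we chose, and real hall-sensor input would arrive as pulse timings rather than a clean pulses-per-second figure.

```python
def wheel_speed_kmh(pulses_per_sec, wheel_circumference_m=2.1):
    """One hall pulse per wheel revolution -> ground speed in km/h."""
    return pulses_per_sec * wheel_circumference_m * 3.6

def pick_gear(speed_kmh, gear_ratios, wheel_circumference_m=2.1,
              target_cadence_rpm=80):
    """Choose the rear gear whose ratio brings cadence closest to the target.
    gear_ratios[i] = wheel revolutions per crank revolution in gear i."""
    wheel_rpm = (speed_kmh * 1000 / 60) / wheel_circumference_m
    return min(range(len(gear_ratios)),
               key=lambda i: abs(wheel_rpm / gear_ratios[i] - target_cadence_rpm))

# Example: at 20 km/h with these illustrative ratios, gear index 2 keeps
# cadence near 80 rpm.
gear = pick_gear(20, [1.0, 1.5, 2.0, 2.5, 3.0])
```

Comparing cadence against a target band like this is also a natural place to fold in the pedal-pressure reading, e.g. by lowering the target cadence when the rider is pushing hard.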
Besides shifting automatically, we plan to add a manual mode to our project as well. With manual mode activated, the rider can override the automatic system and select the gear on their own.
We found that another group did electronic bicycle shifting in Spring 2016, but they didn't have an automatic function or a sensor set-up like ours. Commercially, both SRAM and SHIMANO have electronic shifting products, but these integrate the servo motor inside the derailleurs and carry a price tag over $1000, so only professionals or wealthy enthusiasts can get their hands on them. Since our system could potentially serve as an add-on for any bicycle with gears, it would be much cheaper.
|5||Facilitated Instrument Learning
|Members: Jiajun Xu (jxu74), Christopher Chen (cwchen4), Theodore Lao (tlao2)
Musicians must spend a substantial amount of time learning the positions of chords/notes on new instruments they are interested in learning. Facilitated instrument learning will allow someone to sing, hum, or play another instrument and the currently played notes will be mapped onto the new instrument in real time. This allows a beginner, with little musical background, to sing a melody they wish to play and learn it on a new instrument and also allows a professional, with an extensive background on musical theory and other instruments, to compose music on new instruments.
The acoustic input is recorded through an analog MEMS microphone and the signal will go through an ADC. This will connect to a DSP chip which handles the frequency analysis. This will connect to a microprocessor which controls the LEDs on the piano keys.
The DSP chip analyzes the notes currently being struck by filtering and removing the harmonics from the spectrum. The frequencies of the notes will correspond to positions on an instrument which can be indicated by LEDs.
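Once the DSP has isolated a fundamental frequency, mapping it to a key (and hence an LED) is a small equal-temperament calculation. A sketch of that mapping, assuming A4 = 440 Hz (function and constant names are ours):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz, a4=440.0):
    """Map a detected fundamental frequency to the nearest equal-tempered note."""
    semitones = round(12 * math.log2(freq_hz / a4))  # offset from A4
    midi = 69 + semitones                            # MIDI note number of A4 is 69
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# freq_to_note(261.63) identifies middle C ("C4"); the microprocessor would
# light the LED on the corresponding piano key.
```

The `midi % 12` index selects the key within an octave, which is exactly the lookup the LED controller needs.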
A 9V battery connected to voltage regulator will supply the necessary supply voltages to the components.
Other products which help people learn instruments are:
This $2000 product is limited because notes must be pre-selected on the piano for students to learn from later. With the proposed project, users produce the melodies with their voice or an instrument and the notes on the destination instrument will be shown in real time.
This piano learning subscription service is limited to select songs the company has pre-transcribed for its users, so customers are unable to write songs with melodies they have come up with.
|6||Automatic Trumpet Tuner
|A large problem in any musician’s life is tuning. The tuning of an instrument changes every single time you take it out of the case, and even while playing. This can be detrimental at times, especially on longer notes where you can definitely tell whether one member of a band is out of tune or not.
Our project goal is to create an automatic tuner that is self-contained (battery operated), can be placed on the tuning slide, will automatically adjust the tuning based on the note you are playing, and has a simple enough interface to give the musician feedback on whether they are sharp, flat, or in-tune while the auto-tuner is turned on.
Major challenges to be overcome are interference from other sound sources, power delivery to the system using only batteries, using current and previously captured data to enhance the tuner’s accuracy, and researching motors (possibly a linear actuator) that will have enough force to push the slide in and out while still having a low power requirement.
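The sharp/flat/in-tune feedback reduces to computing the cents deviation from the nearest note and mapping its sign to a slide direction (lengthening the tubing lowers the pitch). A sketch under those assumptions; the 5-cent dead band and function names are our own illustrative choices:

```python
import math

def cents_off(freq_hz, a4=440.0):
    """Cents deviation from the nearest equal-tempered note (+ = sharp, - = flat)."""
    semis = 12 * math.log2(freq_hz / a4)
    return 100 * (semis - round(semis))

def slide_correction(freq_hz):
    c = cents_off(freq_hz)
    if abs(c) < 5:                     # dead band: close enough to in tune
        return "in-tune"
    # Sharp -> extend the tuning slide (more tubing, lower pitch); flat -> retract.
    return "extend slide" if c > 0 else "retract slide"
```

The same cents value could drive both the actuator step size and the simple sharp/flat indicator for the musician.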
As far as we know, a product like this has not yet been created for a trumpet, and could easily be adapted for other brass instruments that face similar issues.
|7||Electronic Toilet Paper Dispenser and Tracker
|Group Members (Name - NetID):
Kevin Wang - klwang4
William Rick - wrick2
Title: Electronic Toilet Paper Dispenser and Tracker
IDEA post link: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=22549
Living with multiple roommates, one is often thinking, "I just bought toilet paper last week. How are we out already?" Our project is an electronic toilet paper dispenser and tracker that can reduce waste. Each roommate will have their own ID, with multiple ways to sign in to the toilet paper dispenser: RFID card, username and pin button input, and possibly NFC on their phone. Once signed in, we will use an infrared or ultrasound sensor to detect a wave of the hand, which will dispense one “serving” of toilet paper. (The serving size can be adjusted to accommodate different usage levels.) This will be accomplished using a geared DC motor, continuous rotation servo motor, or stepper motor, in a mechanism similar to that of an automatic paper towel dispenser. This will take experimentation to determine the most accurate method. Each roommate's usage will then be saved and displayed on an LCD screen along with other statistics and options.
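The per-roommate tracking described above is simple bookkeeping on the microcontroller. A sketch of the data model (class and field names are ours; the default serving size is an arbitrary placeholder):

```python
class DispenserLog:
    """Track per-roommate toilet-paper servings; serving size adjustable per user."""
    def __init__(self, default_serving_sheets=4):
        self.default = default_serving_sheets
        self.usage = {}      # user id -> total sheets dispensed
        self.serving = {}    # user id -> personal serving size, if customized

    def dispense(self, user):
        """Record one hand-wave dispense for the signed-in user; return sheet count."""
        sheets = self.serving.get(user, self.default)
        self.usage[user] = self.usage.get(user, 0) + sheets
        return sheets

    def stats(self):
        """Usage totals sorted highest first, ready for the LCD display."""
        return dict(sorted(self.usage.items(), key=lambda kv: -kv[1]))
```

On the AVR this would live in a small fixed-size table rather than dictionaries, but the logic is the same.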
Additional optional components may include a piezo beeper for sound alerts (an alarm when someone has taken way too much toilet paper in one sitting) and some LED’s to flash for different signals, such as when the toilet paper is low. Another feature we will add is alerting a user if the toilet paper is out at the moment when they sit down on the toilet. This can be done with an additional ultrasound or IR sensor to detect when a person has sat down or is near the toilet.
We intend to prototype using an Arduino, but then move to a PCB with an Atmel AVR ATMEGA* family microcontroller for a more permanent installation. The PCB will also contain the motor driving circuit and DC power regulation. The device will be powered by battery or DC power (6-12 V).
Few solutions currently exist for toilet paper dispensing, and none are widespread. The closest product is the automatic paper towel dispenser found in public restrooms, which dispenses on a hand wave but does not track usage in any way, let alone usage by individual users.
|8||Laser Tag Glove
Keng Yan Lim
|Our group wants to make a laser tag glove. The idea stems from childhood games where you pretend your hand is a gun. The index finger will emit the laser while the thumb acts as a trigger. We plan on using a contact sensor so the laser fires every time you press your thumb against your index finger, with a buzzer sound confirming the shot. The remaining three fingers will have flex sensors, which will ensure the hand has to be in a "finger gun" shape in order to work properly. Also, we will create a vest with 4 sensors in order to detect when each player is hit. At the moment we are planning on implementing the game for two players.
For record keeping, we plan on giving each player's glove an LCD display. The display will show statistics such as kills, deaths, and time remaining in the round. Player one will have 4 buttons to set the amount of lives and time the game will start with. If you are the last person standing or have the most lives once the timer runs out, you win the round and the LCD display will show that. We plan on making this battery powered, with the batteries located on the vest. In order to sync the players, we are planning to use WiFi.
|9||Package Anti-Theft System
|The basic idea of our project is an anti-theft system that protects packages from Amazon and other online retailers from being stolen. Instead of locking the package in a box, which is expensive and hard to use, we would use a weight- and alarm-based deterrence system that detects when a package has been taken (by the removal of its weight) and sets off a very loud alarm to scare away thieves, similar to a car alarm. To take the package legitimately, the system would have to be "unlocked" by either a phone or a PIN code. This project includes developing a durable chassis to protect the package, weight and lift sensors with corresponding code, a solar cell/battery system for ease of use (similar to a calculator that doesn't need a charger), WiFi capabilities, and an app that reports whether your package has arrived or been stolen and can shut off the alarm.
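The weight-removal detection needs some filtering so that wind, rain, or sensor noise does not trip the alarm. A minimal sketch of one approach, requiring several consecutive low readings before triggering (function name and thresholds are our own illustrative choices):

```python
def alarm_should_trigger(weight_samples_g, armed_weight_g,
                         drop_threshold_g=100, consecutive=5):
    """Trigger only after `consecutive` readings in a row show the weight gone,
    filtering out momentary dips from wind or sensor noise."""
    below = 0
    for w in weight_samples_g:
        below = below + 1 if w < armed_weight_g - drop_threshold_g else 0
        if below >= consecutive:
            return True
    return False
```

The armed weight would be captured when the delivery is detected, and the same sample stream could feed the app's arrived/stolen status.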
The partners would be me (John Simonaitis, simonts2), Joe Bianco (jbianco2), and John Graft (graft2).
|10||Assistive Technology for Patients with Medical Face Blindness
|Yuchen He TA||other
|Prosopagnosia is a neurological condition characterized by the inability to recognize faces. After talking a few times with an ECE professor who has this condition, I'd like to work on developing a prototype for a minimally intrusive assistive technology designed to help normalize social interactions. The basic idea is to create one piece of wearable tech that captures image data to be sent to a smartphone. The phone can handle facial recognition from a user-managed database and provide the needed information to the user. For example, an ear-mounted camera with a subtle activation button might send a photo to the phone, which will identify the largest face in the image and transmit the information to the user through a second piece of wearable tech.
Building on projects FA15_30 and FA12_17, the second wearable is a wristband with a screen, with their WiFi replaced by Bluetooth if we keep the phone app. We would also need the button I/O for camera activation, plus battery and charging for both the camera and wristband prototypes. In addition, the wristband would buzz once if the face is not in the database, so the user can immediately transition into introductions. A different buzz pattern would indicate that the face was identified and the information transmitted to the screen.
On the software side, we weren't too interested in duplicating the work already done by so many other labs, and were hoping to just use API calls to any of these http://blog.rapidapi.com/2017/11/10/top-10-facial-recognition-apis-of-2017/ taking advantage of cellular networks and the cloud. This would then free us up to do the more interesting software work of creating the UI to manage the database, and give us time to work on more interesting hardware (like the second piece of wearable tech) which could be more important for this class.
|11||Noninvasive PoC Anemia Detection Device
Anemia is a condition that affects nearly 2 billion people, according to the WHO. Anemia is an entirely preventable disease, and once detected, the patient can take corrective action to restore their iron levels to a healthy state. According to Miller et al., the probability that you are affected by anemia increases five-fold in underdeveloped geographies. Current non-invasive POC detection methods can be relatively expensive and are difficult to move from place to place, which makes them all the more inaccessible to the geographies that need them most. We propose to build a more portable and cost-effective non-invasive anemia detection method by combining image- and spectroscopy-based detection in a wearable device that can be taken to regions without adequate medical facilities and used to help diagnose this preventable disease.
The device we build will be required to provide an accurate binary diagnosis of anemia based on both the oxygen level from a fingertip pulse oximeter and the hemoglobin level based on RGB heuristics given by the pallor of the conjunctiva. Data collection hardware will include a low-resolution camera for detecting conjunctival pallor and wide-band photodiodes for pulse oximetry measurements. The two detection methods will be encapsulated in a single, wearable fingertip device that delivers at least 9 correct diagnoses out of 10. This will be accompanied by a wristband that carries the power, processing, and diagnosis-indication subsystems. The device will deliver all 10 diagnoses on a single charge, and be able to deliver diagnoses even while charging.
The minimum viable product will deliver two complete detection systems for data capture, a processing system for data analysis and detection, a power system for delivering the required capacity and charging needs, and a diagnosis indicator to relay the results to the testing administrator.
The total cost of the assembled product should be less than $50.
Detection System Design
Pulse oximetry non-invasively estimates the concentrations of both Hb and HbO2 by measuring the absorption coefficients at two separate wavelengths. We intend to use at least two wide-band photodiodes, each with a filter for either red (660 nm) or near-infrared (940 nm), activated by two distinct light sources at red and near-infrared that illuminate the tip of the finger at a 50% duty cycle. The light that perfuses the tissue is detected by the photodiode array, and the resulting waveform is offloaded to the processing subsystem, which uses knowledge of which light source is currently active alongside the incoming waveform to compute the ratio of AC to DC components in the detected waveform. This ratio is taken at both wavelengths, and the ratio of these ratios is used alongside a lookup table to compute an estimate of the percent saturation of O2 in the blood.
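The ratio-of-ratios step can be sketched as follows. The linear map below is the commonly cited first-order empirical approximation, not the calibrated lookup table the proposal describes; a real device would calibrate its own curve:

```python
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """Ratio-of-ratios SpO2 estimate from the red and near-IR photodiode waveforms.
    The 110 - 25*R line is a rough empirical approximation (illustrative only);
    production oximeters replace it with a device-specific calibration table."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    spo2 = 110 - 25 * r
    return max(0.0, min(100.0, spo2))   # clamp to a physical percentage
```

The AC components come from the pulsatile arterial blood and the DC components from tissue and venous blood, which is why normalizing each wavelength by its own DC level cancels out skin tone and sensor gain.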
The second method of detecting anemia is to look at conjunctival pallor. The conjunctiva is the mucous membrane that covers the front of your eye and lines the under-eyelid. For many patients with anemia, the conjunctiva is distinctly pale and lacks redness. A healthy patient has a distinctly red conjunctiva. A diagnosis for anemia can be made accurately when conjunctival pallor is examined and then combined with other methods of detecting Hb levels, such as the pulse oximetry method described above.
|12||Remote Gesture Controlled DJ Console
|At a party, the DJ is the one responsible for supplying everyone with an endless stream of entertaining music. But we all know he deeply wants to join the party! So we'll build a remote gesture-controlled DJ console that every DJ can take into the action.
Formal description: Our system comprises two parts,
1. A compact device that straps to one’s hand and collects gesture information. The gesture can be used to navigate a playlist, change various effects, manipulate voice recorded from a microphone etc.
2. A phone app that implements the various signal processing functions and outputs the music. The app is driven by gesture data from the embedded device.
To send gesture data from the device to the app, we use the Bluetooth Low Energy protocol. The embedded device will contain a battery, an accelerometer, a gyroscope, a magnetometer, a barometer, a microphone, and a few buttons for testing. It fuses sensor data to estimate the pose of the hand, in the form of orientation and change in height. The microphone can detect one-shot events such as snapping a finger. We will define a custom protocol to stream these events along with continuously changing gesture data to the phone, which will use these data to perform signal processing tasks. In addition, the phone will record the user's voice through a microphone and mix it into the final audio. The microchip on the embedded device will need to be reasonably powerful to perform sensor fusion while simultaneously monitoring the microphone for the characteristic sound of a finger snap.
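The orientation part of the sensor fusion is often done with a complementary filter, which is cheap enough for a Cortex-M3. A one-axis sketch (the function name and the 0.98 blend factor are illustrative assumptions, not the proposal's chosen algorithm):

```python
def complementary_filter(angle_deg, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """One update step for a single axis: integrate the gyro (smooth but drifts)
    and pull the estimate toward the accelerometer angle (noisy but drift-free)."""
    return alpha * (angle_deg + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg
```

Run at the IMU sample rate, the gyro term tracks fast wrist motions while the small accelerometer weight slowly corrects the accumulated drift; a Madgwick or Kalman filter on the MPU-9250's full 9-axis data is the heavier-weight alternative.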
Here is a non-exhaustive list of functionalities we propose:
1. Vocoder to change the texture of one's voice
2. Pitch shifting
3. Looping at the snap of a finger (background beatboxing?)
4. Switching background music, or advancing to the next item in a playlist
5. Wah-wah effect
6. One-shot sound effects (laugh track, etc.)
For the sensors we plan to use the MPU-9250, and for the microcontroller we plan to use the LPC1768 Cortex-M3 chip. Of course there will be relevant ADC and regulator circuits for the microphone and battery as well.
|13||IR Tracking NERF Sentry Gun
||Christian Ryan Alvaro
|This project aims to create a deployable NERF sentry gun, putting an ECE spin on a common toy blaster. While sentry guns have been done before, what sets this project apart is two-fold. First, its tracking system relies on infrared, versus most other systems that rely on webcams and OpenCV. This leads to the second point: portability. Because of its lighter tracking hardware requirements (specifically, not needing an entire computer), the system should be redeployable at will.
In terms of hardware, we plan on using an ATMega microcontroller as the brain of the project. For sensing, it'll use an IR receiver from a Wiimote and interface with the microcontroller whenever strong IR beacons are sensed. The microcontroller would command servo motors in a pan-tilt configuration to aim, all before spinning up the flywheel motors and dart-pushing motor via transistors connected to the pins of the microcontroller. We'll need to design a control system to balance its responsiveness with its precision, since fast acquisition may result in overshoot and a missed target from its own momentum, while slower movements may result in missing a shot at the target.
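The responsiveness-versus-overshoot trade-off above is the classic gain-tuning problem. One simple starting point is a proportional controller with a slew limit, sketched below (gains and names are illustrative assumptions, not the project's tuned values):

```python
def pan_step(current_deg, target_deg, kp=0.4, max_step_deg=5.0):
    """Proportional aim update with a slew limit: large errors move at the
    capped rate (fast acquisition), small errors shrink geometrically
    (settling without overshoot)."""
    error = target_deg - current_deg
    step = max(-max_step_deg, min(max_step_deg, kp * error))
    return current_deg + step
```

With kp < 1 each update reduces the residual error, so the servo cannot overshoot the target from its own command stream; raising `max_step_deg` trades some of that safety for faster slewing, which is exactly the balance the paragraph describes.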
|14||Interactive Climbing Holds
|The primary goal of this project is to introduce interactive climbing holds to improve end-user experience for climbers and climbing gyms.
Climbing gyms normally have set "routes", which are a way of varying the difficulty and complexity of climbing a wall. These routes only allow certain rocks to be used when climbing the wall, and these rocks are normally denoted by their color or some attached colored tape. One of the main problems in climbing gyms is that there is a high density of holds, which often vary in color or have multiple colored tapes on each one, making it difficult to determine all the valid rocks in the current route being attempted. Furthermore, climbing gyms often replace old routes with new routes every once in a while, which may be frustrating to some climbers who are still trying to finish past routes. Gyms also don't interact with every climber; thus, new routes may not cater to the core audience of the gym.
Our project goals are two-fold. Firstly, we want to allow climbers to easily identify and select from available routes through a web interface, which will light up the relevant route and time the user as they traverse the route. Secondly, the data from climbing times and climber ids can help the climbing gym identify popular routes, analyze data to create routes that most closely match climber needs, and set routes conveniently by scanning rocks.
Create an interface for non-gym staff to set routes
Create an interface for rating routes based on difficulty and user feedback
Create analytics on routes as feedback for the climbing gym
Each rock will contain a microcontroller, some communication module (WiFi?), an RGB LED, a battery, and some low-power RFID receiver.
The rocks interface with a small passive RFID chip attached to a low-profile wristband worn by the climber. The primary purpose of the wristband is to identify the climber and communicate with the server about the climber's progress throughout the route.
A server will communicate with all the rocks and host the web application through which the climbers and gym control the array of rocks on the wall.
1. Climber receives wristband and browses available routes to climb, filtered by difficulty.
2. After choosing an unoccupied route, the route lights up in its specified color.
3. Upon reaching the starting position (wristband in contact with designated starting hold), the timer begins.
4. Upon finishing the route (wristband in contact with designated end hold), the climber's total time is stored in the server and a leaderboard is updated to show best times.
5. Data collected from climbers is used by the gym owners to determine the difficulty, frequency, and style of new routes.
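Steps 3 and 4 above reduce to a small state machine on the server, driven by RFID contact events. A sketch of that logic (class and method names are ours, not part of the proposal):

```python
class RouteSession:
    """Track one climber's attempt from the designated start hold to the end hold."""
    def __init__(self, start_hold, end_hold):
        self.start_hold, self.end_hold = start_hold, end_hold
        self.t_start = None

    def on_contact(self, hold_id, timestamp):
        """Feed wristband-to-hold contact events; returns the total time in
        seconds when the route is finished, otherwise None."""
        if hold_id == self.start_hold:
            self.t_start = timestamp          # touching the start (re)starts the timer
        elif hold_id == self.end_hold and self.t_start is not None:
            return timestamp - self.t_start   # finished: report elapsed time
        return None
```

The returned elapsed time is what would be written to the leaderboard, and the stream of intermediate contacts is the raw data for the gym's route analytics.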
|15||Survivor Identification and Retrieval Robot
|The maze solving robot would attempt to solve mazes in a static environment and implement a learning algorithm to improve performance. It would have to detect obstacles and navigate around them to search for and identify the goal position. It could be extended to retrieving an object somewhere in the environment and return it to the start position. It is a proof of concept for search and rescue operations for autonomous learning systems. We would like to have it be able to quickly learn in a variety of different layouts.
Due to the computational complexity of the image processing algorithms, we would use a Raspberry Pi for algorithmic implementation, but create a circuit/PCB for robotic control.
For the item retrieval and maze solving robots, we would need to implement object recognition capable of recognizing a specific set of objects under non-static lighting conditions.
We need to be able to identify the walls/objects of the environment the robot is operating in. We will use laser/sonar sensors in combination with visual data.
It needs to be able to recognize obstacles and understand the possibilities of navigating around them.
Boundary Space Recognition/SLAM
We need to constrain these robots to work in a closed environment and therefore need a method for determining the position of the robot with respect to the boundary.
We could also use some image processing feature to identify the boundary with markers or physical barriers.
For the item retrieving and maze solving robots we would need to be able to manipulate the objects in question
We decided that a high-degree-of-freedom robotic manipulator was out of the question and would prefer a simple claw/clamp, or suction/magnetic pull, to interact with objects. We feel that picking up arbitrarily sized objects would be beyond the appropriate complexity of the project, so we would constrain the objects to those that work easily with the manipulator.
We would likely build a circuit to automate the control that drives the motors or even moves the robot from point A to point B
We will need some sort of path planning algorithm to explore the environment
We would speak to the appropriate resources about how to implement these algorithms
Prof Girish Choudary
Prof Steve LaValle
In order to improve the performance of the robot with successive iterations navigating the maze, we will need to implement a reinforcement algorithm.
Prof Girish Choudary
Hardware - for much of the hardware component, Yuchen suggested that we speak to the machine shop about fabrication at least in terms of robotic design.
Motor Control Boards - we would design this circuitry to control the motors by linking the battery and the control inputs. As an extension of complexity in this area, we would design a circuit that, given a target input and the current state of the robot, drives the robot to that location. This would allow us to include a microprocessor on the designed board and increase its functional capabilities. (DESIGN)
Raspberry Pi - for high level control of robot and algorithms (USE)
Sensors - for this we need camera(s); whether we do monocular or binocular vision would be an issue to discuss. We could also use a laser rangefinder/lidar package to do SLAM for the obstacle-detecting and -avoiding robots. We could use sonar as well for distance sensing. We would need an IR camera or sensor for the human-identifying robot. We could use pressure sensors or a scale to detect that the robot has correctly picked up or put down the objects in question. We would also need a sensor to check whether the robot is stuck and burning out its motors.
Since robust image-processing identification of objects in the environment is not the focus of this project, we would likely constrain lighting conditions to standard, well-lit levels.
We are also not designing a robot that can climb over obstacles, since complex dynamics are not the focus of the project. We would constrain the environment to a flat, drivable surface. Obstacles would be moved around.
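The learning component described above could start from plain tabular Q-learning on a discretized map. The sketch below runs on a tiny grid stand-in for the maze (grid size, rewards, and hyperparameters are illustrative assumptions, not the project's design):

```python
import random

def q_learn_maze(walls, start, goal, size=4, episodes=500,
                 alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a small grid maze. States are (row, col) cells;
    actions are the four compass moves. Returns the greedy path found."""
    rng = random.Random(seed)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    q = {}
    def Q(s, a): return q.get((s, a), 0.0)

    for _ in range(episodes):
        s = start
        for _ in range(100):
            # epsilon-greedy exploration
            a = rng.choice(actions) if rng.random() < eps else \
                max(actions, key=lambda m: Q(s, m))
            nxt = (s[0] + a[0], s[1] + a[1])
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                nxt, r = s, -1.0          # bumped a wall: stay put, penalty
            elif nxt == goal:
                r = 10.0                  # reached the goal
            else:
                r = -0.1                  # step cost encourages short paths
            best_next = max(Q(nxt, m) for m in actions)
            q[(s, a)] = Q(s, a) + alpha * (r + gamma * best_next - Q(s, a))
            if nxt == goal:
                break
            s = nxt

    # greedy rollout to extract the learned path
    path, s = [start], start
    for _ in range(50):
        a = max(actions, key=lambda m: Q(s, m))
        nxt = (s[0] + a[0], s[1] + a[1])
        if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
            s = nxt
            path.append(s)
        if s == goal:
            break
    return path
```

On the real robot the state would come from the SLAM/boundary-recognition subsystem rather than a known grid, but the same update rule applies once the environment is discretized.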
|16||Lug-n-GO Rideable Luggage
|We plan to design a carry-on sized bag that doubles as a motorized rideable scooter. Instead of dragging your heavy luggage around the airport or across campus, step on the platform and ride it for a quick and convenient commute.
Rideable luggage is not necessarily a new idea. The Micro-Kickboard is a carry-on with a built-in platform to ride as a manual scooter. The Modobag is currently the only motorized luggage on the market, featuring a design that allows the rider to sit on the bag. Products like the Micro-Kickboard lack the convenience of electric motors to take the strain off the user, while stricter TSA guidelines and a staggering price point make the Modobag a less viable option.
Essentially, we want to smash these ideas together while conforming to TSA restrictions and keeping the price of the product in a much more reasonable range. The motors of the Lug-n-GO will be powered by removable lithium-ion batteries. Additional features include a manual mode, where the user can pedal forward and charge the batteries; charging ports for USB devices; and perhaps a fingerprint scanner to prevent a person from riding away with someone else's luggage.
Our design will consist of an off-the-shelf DC motor capable of a max speed of 10 mph. We will design our own motor controller system such that the user can squeeze the right lever to go and squeeze the left lever to brake. We will use lithium-ion batteries that can easily be removed by the user. Our luggage design will also include a charging dock for the user to charge a phone. To do this, we will design voltage regulators that can adjust the voltage of the lithium-ion batteries to produce an acceptable voltage for charging a phone.
|17||Fast Low Cost Swarm Robots
|The project is to create a fleet of low-cost robots capable of moving quickly in a coordinated fashion. Each robot will be relatively small, smaller than a human fist, and the area of movement would be roughly the size of a table. Our goal would be to build 16 of them. Since the cost of each robot is multiplied by the size of the fleet, making a design that allows the robots to coordinate without using too many sensors is highly beneficial.
Currently, swarm robots are either too slow or too expensive. A prime example would be the Zooids project from Stanford, where each robot uses a high-quality IR sensor to decode its coordinates beamed from an expensive 3000 Hz projector. This both limits the granularity of the positioning system and increases cost.
Our method for coordinating the robots would be to use a 1080p webcam mounted overhead and machine vision to identify the robots' locations and orientations. The vision system would then send the current locations, orientations, and destination locations to the robots over WiFi. From there, the robots would use this information to move toward their destinations. The vision system will likely run on a laptop and use a router to send the information over WiFi.
Each robot would be equipped with an ESP8285 SoC (which integrates a microcontroller with a WiFi chip), an antenna, motor controllers, two stepper motors for precise movement, and a battery with charging circuitry. The shell of the robot and the PCB would both be designed by us. The robot would also feature some variety of vision target on top to help the camera identify the robot and its location.
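As a rough sketch of the pose computation, suppose the vision target is two colored dots (one front, one back) whose pixel centroids the machine-vision step has already found; the function name and coordinates below are our own placeholders, not a committed design:

```python
import math

def robot_pose(front, back):
    """Estimate a robot's image position and heading from the pixel
    centroids of its two vision-target dots, given as (x, y) tuples."""
    center = ((front[0] + back[0]) / 2.0, (front[1] + back[1]) / 2.0)
    # Heading points from the back dot toward the front dot.
    heading = math.atan2(front[1] - back[1], front[0] - back[0])
    return center, heading

# A robot whose front dot is directly to the right of its back dot
# faces along +x, i.e. heading 0.
center, heading = robot_pose(front=(110, 50), back=(90, 50))
```

The control loop on each ESP8285 would then only need its current pose and a destination to compute wheel commands.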
|18||Butter Passing Robot
||Yu Jie Hsiao
Yu Jie Hsiao -- yujiejh2
Yuxiang Sun -- sun76
Yuchen He -- he44
Title: Butter Passing Robot
We want to make a butter passing robot, which can find the butter on the table and bring it back to a certain location. The robot will be able to move on its own, avoid other obstacles, and find the butter on the table.
We believe our project is appropriate for ECE 445 because it utilizes many aspects of the knowledge we acquired here. For the hardware platform, we intend to build a vehicle similar to the one from ECE 110. It will have two motors (e.g. ROB-11696), a thermal sensor (e.g. SEN-10988) to detect motor temperature, an infrared sensor to detect irrelevant (non-butter) obstacles, and a microcontroller (e.g. ATmega328) to take all the sensor readings and output PWM signals accordingly to drive the motors. We intend to place all these circuits onto a PCB.
Aside from these sensors, we also intend to place a camera module (e.g. OV7725) and a wireless module (e.g. ESP8266) on our vehicle. The camera will take pictures of the environment the robot faces, and the pictures will be transferred via WiFi to a laptop, where we intend to run some Python scripts to detect butter in the photos. We realize the time complexity of the code is crucial. However, since our target has a relatively distinct color and relatively fixed size, we think it’s possible to simplify the recognition process.
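Because the butter has a distinct color, the Python detection script could be as simple as a color threshold plus a centroid; a minimal sketch (the RGB ranges here are illustrative placeholders, not calibrated values):

```python
def find_butter(image, lo=(180, 150, 0), hi=(255, 230, 120)):
    """Return the centroid (x, y) of pixels whose RGB falls inside a
    'butter yellow' range, or None if too few pixels match.
    `image` is a list of rows of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if (lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1]
                    and lo[2] <= b <= hi[2]):
                xs.append(x)
                ys.append(y)
    if len(xs) < 4:  # reject isolated specks of noise
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny 4x4 demo frame: black except a 2x2 yellow patch.
demo = [[(0, 0, 0)] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        demo[y][x] = (220, 200, 60)
centroid = find_butter(demo)
```

The centroid's horizontal offset from the image center would then steer the vehicle toward the butter.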
If we have enough time, we plan to implement basic speech recognition as add-on functions. The basic function would be for the robot to start operating once it hears a finger snap. We can use an audio sensor and some code on the microcontroller to realize that. For further improvements, we found some existing Arduino Modules that can recognize a fixed set of voice commands. With that, we can make the robot operate only under certain commands.
We found two similar projects by searching “Butter Passing Robot” on Google. The main advantage our project will have is lower cost. Both of those projects were built on existing robotic platforms, which cost $70 and $170 respectively. Since we intend to build our hardware platform ourselves and focus on the core functionality of “passing butter”, our project will cost considerably less than the existing ones.
|19||SCARA Drawing Robot
|Members: Bingzhe Wei (bwei6) Tianhao Chi (tchi3) Chenghao Duan (cduan2)
Title: SCARA Drawing Robot
We propose to develop a drawing robot based on a SCARA robot arm (https://www.youtube.com/watch?v=vKD20BTkXhk). The overall system processing flow will be as follows:
1. User inputs image to image-processing program on PC.
2. Image processing with program on PC:
a. Style transfer via deep neural networks
b. Clustering of similar colors
c. Pixel fill algorithm to convert to vector strokes
3. Vectors sent to microcontroller program via USB or similar.
4. Microcontroller program does inverse kinematics and commands motors as necessary.
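Step 4's inverse kinematics for a 2-link planar (SCARA-style) arm reduces to the standard law-of-cosines solution; a minimal sketch (elbow-down solution only):

```python
import math

def scara_ik(x, y, l1, l2):
    """Joint angles (radians) that place a 2-link planar arm's end
    effector at (x, y); l1, l2 are the link lengths. Elbow-down
    solution. Raises ValueError if the target is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

The microcontroller would run this per stroke waypoint and convert the angle deltas into stepper pulse counts.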
The combination of the SCARA design and stepper motors will enable a very stable and fast drawing platform, while the proposed image processing algorithms enable multiple, arbitrary styles and provide high-quality visual effects. We expect the steppers will reduce the need for closed-loop control.
The proposed circuit will contain a USB-to-Serial converter IC, an ATmega644 microcontroller, stepper motor controllers, and optical phototransistors for feedback control, as well as associated support circuitry. We chose the ATmega644 for its 4K of RAM, double that of the ATmega328 found in regular Arduinos, to ensure enough capacity for inverse kinematics. Power will be supplied via a standard wall plug adapter that outputs 12V DC.
Our team also has access to GPUs for training deep neural networks.
Our team members have taken the following courses:
ECE 470 - Robotics
ECE 486 - Control Systems
SE 423 - Mechatronics
ECE 515 - Control Theory and Design
ECE 547 - Topics in Image Processing - Deep Learning
CS 598 PS - Machine Learning For Signal Processing
|We hope to design a physical dynamic keyboard that can reconfigure into different layouts (e.g. AZERTY, QWERTZ, Dvorak, Colemak, Maltron, etc.). We have considered embedding standard keys with LEDs in order to produce an 8x8 set of pixels. By doing this we can alter the layout of the keyboard without removing the keys. This allows the user to modify and assign the keyboard layout that is most suitable for them. They can have multiple layouts set and thus switch between the layouts they need with ease. We also want these layouts to be programmable by the user through hardware, and the keyboard to have memory so it can retain its various layouts even when transferred between computers.
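As a sketch of the firmware's layout storage (all names and the key-index mapping below are hypothetical), switching layouts would amount to pushing a different legend table to the per-key LEDs:

```python
# Hypothetical legend tables: key position index -> legend to show on
# that key's LED matrix (only the first few keys of the top row shown).
LAYOUTS = {
    "qwerty": {0: "Q", 1: "W", 2: "E", 3: "R", 4: "T", 5: "Y"},
    "dvorak": {0: "'", 1: ",", 2: ".", 3: "P", 4: "Y", 5: "F"},
}

def switch_layout(name, layouts=LAYOUTS):
    """Return the legend table the firmware would push to the per-key
    LEDs; stored in on-board memory so the active layout survives
    moving the keyboard between computers."""
    if name not in layouts:
        raise KeyError("unknown layout: " + name)
    return layouts[name]
```

A user-programmed layout would simply be another entry in this table, written over USB and persisted to the keyboard's memory.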
We believe this could be useful because certain languages are difficult to lay out on a standard keyboard. We also believe it could help industries that rely on certain computer programs (e.g. Photoshop, Sony Vegas video editing software, animation studios, etc.) by programming the keys to perform actions that are commonly used in that industry and/or program.
We are considering using a USB Type-C controller instead of the standard USB 3.0 and are currently looking into whether there will be any issue obtaining keycap designs. We hope to use the Cherry MX design because it is important that the keyboard provide tactile feedback. We will further investigate keyboard types, how to obtain LED arrays, and how to set up the keycaps.
To clarify, this proposal is significantly different from the original idea post made: we will not be having moving keys. Instead, we will implement the variable keyboard configurations via the LEDs embedded in the keycaps. Further, we believe our idea is significantly different from the Fall 2017 semester's group 19 project because we will be creating an entire fully programmable keyboard whose keys are individually reconfigurable via the hardware. Please let us know if there are any other questions that you'd like answered.
Jeevitesh Juneja: jjuneja2
Nigel Haran: ndharan2
|21||VACANT PARKING DETECTOR 2.0
|Inspired by the vacancy indicators in modern parking structures and project 39 from Fall 2017, we want to design and implement an occupancy detection system for outdoor parking.
The system would consist of four modules:
- Detection module: ultrasonic proximity detectors (similar to the parking sensors mounted on a car's bumper). Each detector has its own emitter and receiver. When the wave hits a nearby object, its reflection is recorded. With a suitable wave intensity threshold selected, the detector can tell whether a car is parked in its duty range. Detectors would be mounted on parking meters or on support stands on the ground.
- Control module: central management system. It would keep track of spot occupancy by constantly communicating with the detectors under its control. Inter-device communication is based on WLAN. In addition, the control module is responsible for notifying users (drivers or parking enforcement) of the parking occupancy information.
- Notification module: parking assistant application. We plan to write a mobile application for our detection system, sharing the updated occupancy information upon inquiry. The control module would push the detector feedback into online data storage. When a user starts an inquiry, the application fetches the data and displays it to the user.
- Power module: power support for our detection system. We plan to use rechargeable solar cells to power the detection module. But for the power-intensive control module, we may need to use extra power from a wall plug for demo purposes.
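The detection module's decision reduces to converting round-trip echo time into distance and comparing against a threshold; a minimal sketch (the 1.5 m threshold is an assumed value that would be tuned per installation):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_to_distance(echo_time_s):
    """One-way distance (m) from an ultrasonic round-trip echo time."""
    return echo_time_s * SPEED_OF_SOUND / 2.0

def spot_occupied(echo_time_s, threshold_m=1.5):
    """A reflection closer than the threshold is taken to mean a car
    is parked in the detector's duty range."""
    return echo_to_distance(echo_time_s) < threshold_m
```

Each detector would report only the boolean result over WLAN, keeping the per-spot traffic to the control module tiny.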
Team members: Qingtao Hu (qhu13), Jiahe Liu (jliu143), Zeyu Zhang (zzhan127)
|22||Real-Time Free Throw Feedback Device
||Joseph Vande Vusse
|A basketball free throw cannot yet be analyzed in real time by an individual practicing alone. One can try different things to attain different results (or even record their attempts), but this process is slow and unscientific. We would like to change this.
Our free throw feedback device would alert the user to their issues in real time based on a history of their made shots. In the training phase, it would gather data via 2-3 lower and upper body sensors to determine the average forces applied by various parts of an individual’s body in a successful shot. The running averages would be calculated by transferring the data from each sensor to a computer after each attempt. Then, in the testing phase, the machine would either present an acknowledgement of a made shot, or constructive criticism to improve next time. The criticism would be based on which of these 2-3 sensors displayed the largest deviation from its average during the training phase. An example message from the machine might read, “More legs next time!”.
The hardware would essentially be the sensors (likely an array of accelerometers) fed to a microcontroller with several UART ports (for simultaneous data transfer) and power circuits to power each of them. The microcontroller would then transfer the data to the computer where a script would perform the higher level functions (running average, training vs. testing, and feedback).
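The script's running-average and feedback steps might look like the following sketch (sensor names and the relative-deviation criterion are our own illustrative choices):

```python
def update_averages(averages, counts, shot):
    """Fold one successful training shot (dict: sensor name -> peak
    force) into per-sensor running averages, incrementally."""
    for sensor, value in shot.items():
        counts[sensor] = counts.get(sensor, 0) + 1
        prev = averages.get(sensor, 0.0)
        averages[sensor] = prev + (value - prev) / counts[sensor]

def worst_sensor(averages, attempt):
    """Name the sensor that deviated most (relatively) from its
    trained average -- the basis of the coaching message."""
    return max(attempt,
               key=lambda s: abs(attempt[s] - averages[s]) / averages[s])
```

A message like "More legs next time!" would then just be a lookup keyed on `worst_sensor`'s result.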
Our project is an innovation in that it combines existing technologies (sensors/microcontroller/computer) with our own data compilation and transfer for a rewarding user experience. A potential competitor is the ESPN series “Sport Science”, which analyzes performance in various sports, including basketball. It appears to be largely reactive, while our device is proactive in that it helps an individual in real time.
|23||Full Movement Gaming Mouse
|The project I would like to work on is a computer mouse that allows the thumb of the mouse hand to control a joystick-like device. Some similar devices exist, but they use a true joystick that has an awkward feel for directions. My focus is on providing better directional feel for the mouse thumb. The mouse will be built largely from scratch by integrating buttons, a positioning laser, the joystick, and USB communication, all with an FPGA.
The mouse will be powered through USB, the casing will be 3D printed when the product starts coming together more. The current team is 2 electrical engineers.
The joystick is a slider for analog in-game left/right movement; the slider on the mouse will be roughly vertical, where thumb down is left (or mappable to anything) and thumb up is right. This slider is combined with a rocking switch which has a central position as well as pushed forward, pushed backward, and ideally pushed 2x forward, for 4 digital positions. The project will likely only have 3 positions for simplicity and ease of part selection.
Potential parts list to give a rough idea of the design:
Slider: Mouser Part # 652-PTA15432010CIB10 , OR 312-2045F-A100K OR 688-RS15H113CA05
USB Jack and cord, for example Mouser # 474-BOB-12700
The rocker switch is ideally of (ON)-OFF-(ON) type; there are a few candidates, also from Mouser. I have also seen several multi (4+) position slider switches, but these seem less desirable for the feel.
FPGA - not selected yet, however I have experience in Vivado with the Artix 7 chip from ECE 437 using an Opal Kelly dev board. Not sure what is allowed here, i.e. whether it is OK to start with a basic board and integrate a board of our own design into the project. One possibility: https://www.xilinx.com/products/boards-and-kits/1-f3zdrn.html
Otherwise my requirements for the FPGA are probably not too difficult: it must be able to receive a few discrete signals from the rocker switch as well as an analog value from the slider. I will need to confirm that the slider's voltage levels are compatible with the FPGA. The FPGA will also need to be at least USB 2.0 capable (as far as speed).
Optical sensing for mouse can be done from scratch or preferably with a board such as https://www.tindie.com/products/jkicklighter/adns-9800-laser-motion-sensor/ where all that is needed is SPI communication to FPGA.
|24||Machine Learning Enabled Wearable Stethoscope
|There have been many recent advances in using machine learning methods to detect and identify abnormalities in heart beat and lung breath audio. We are proposing a wearable system (to be worn around the chest) which will record audio and analyze it, looking for abnormalities in the heart beats or lung sounds. This would provide a significant improvement in care for people at risk of heart or lung issues because it would be equivalent to having the attention of a doctor at all times. Use cases include firefighters (who have high rates of heart defects, and are also at risk of smoke inhalation on the job), hospital patients coming out of heart or lung surgery, or people who have a history of heart or lung issues.
The sensor would comprise a sensitive microphone (and associated DSP circuitry), which would pick up the sounds from the heart or lungs. This would then be sent to the next sub-unit, which would process the audio. Depending on the complexity of either implementation (and the time and resource limitations), we could either use a microprocessor to implement a k-NN or a CNN algorithm to identify the sounds, or process the audio through a hardware implementation of the k-NN or CNN.
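As a sketch of the k-NN option (assuming audio frames have already been reduced to small feature vectors; the feature extraction itself is not shown, and the labels are illustrative):

```python
import math

def knn_classify(train, query, k=3):
    """Minimal k-NN: `train` is a list of (feature_vector, label)
    pairs; returns the majority label among the k nearest neighbours
    by Euclidean distance."""
    dist = lambda a, b: math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

This is compact enough that a microprocessor implementation is plausible; the CNN route would trade that simplicity for better accuracy on raw audio.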
Once a worrisome sound is identified, it is communicated to the relevant party. In the case of hospital patients, it would be communicated to the doctor and nurses station. In the case of at home care, it would be communicated to doctor and emergency services. Finally in the case of emergency personnel (firefighters) it would be communicated to the captain and other emergency personnel. The communication would be implemented through RF. This has many advantages; it can be integrated with existing medical pager system, and since it only communicates when there is an issue it does not always need to be activated (and in the use cases it would only need to activate a handful of times), saving in power requirements.
|25||LED and Spectroscopy System (for Detecting Aflatoxin in corn)
|Introduction: (idea from Prof. Hart)
Aflatoxins are toxic compounds found in some grains. They have distinctive optical properties under ultraviolet (UV) light: B-group aflatoxins exhibit blue fluorescence, while the G-group exhibits yellow-green fluorescence. We would like to design a reproducible prototype LED and spectroscopy system that can detect aflatoxins in corn kernels.
When a kernel is dropped into a tube, the first LED will be turned on. As the kernel emits fluorescence toward the photodiode, the current through the diode will change; this signal will be detected and the current data sent to the laptop. The current signal represents the light intensity. The first LED will then turn off, and the second through sixth LEDs will repeat this process.
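The one-LED-at-a-time sequencing described above amounts to a simple loop; a sketch (in practice the timing and the DAQ read would be driven from LabView, so `read_photodiode` below is just a stand-in):

```python
def run_cycle(read_photodiode, num_leds=6):
    """One measurement cycle: light each LED in turn and record the
    photodiode's reading (a proxy for fluorescence intensity) while
    that LED is on."""
    readings = []
    for led in range(num_leds):
        # LED `led` on -> kernel fluoresces -> photodiode current changes
        readings.append((led, read_photodiode(led)))
        # LED `led` off before the next one is switched on
    return readings
```

The resulting six (LED, intensity) pairs per kernel are what the DAQ would stream to the laptop for classification.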
1. Printed circuit board:
A ‘start’ signal to start the cycle (turn on the first LED)
Balancing circuit for 6 LEDs
Interface with the Data Acquisition tool (DAQ) which can be connected to LabView
The DAQ collects data (representing the light intensity) from the photodiodes and sends it to the laptop
2. Graphic User Interface (LabView)
Auto on-off system for 6 LEDs based on timing: Each of the LEDs will be switched on one at a time. When the neighboring photodiode detects a kernel, the reading of the spectrometer is triggered for data collection.
Calibration Control: The brightness can be adjusted on the GUI.
Pulsing of LED: The frequency of the LEDs can be adjusted to a frequency needed.
Tools to be used for this project:
Spectroscope (glass tube, LEDs, detector), Eagle, microcontroller, Printed Circuit Board, LabView, Data Acquisition (DAQ), laptop
|26||ROBOTIC WAITER FOR RESTAURANTS
Jun Pun Wong
|We want to build a robot that can handle orders & deliver food in restaurants. Patrons would have an alert mechanism (a button) to call the waiter (our robot). The kitchen would have an internal transmission network (between robot, kitchen, and tables) that would receive this request, and the robot would then be dispatched to assist the customer. Patrons would also be able to place orders using the robot (via its LED screen). The restaurant staff would also be notified of the various orders, which they would dispatch through the robot later on.
For the navigation, we have talked to a TA and the machine shop for advice. The TA suggested that for this one-semester project, we could use fixed locations for our tables and map them to the microcontroller on the robot to program where it should go. Greg from the machine shop also suggested what wheels and motors could be used to build the robot. Currently, we are looking at a 4-wheel design with 2 wheels driving the movement and 2 that just follow (they help support the load on the robot). We also talked with Greg about how to get the distance moved by the robot. He recommended that we calculate the distance moved using data from encoders attached to the wheels of our robot.
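The encoder-based distance calculation Greg recommended is just revolutions times wheel circumference; a minimal sketch:

```python
import math

def distance_travelled(encoder_ticks, ticks_per_rev, wheel_diameter_m):
    """Distance covered by one wheel: encoder revolutions times wheel
    circumference (pi * diameter)."""
    return (encoder_ticks / ticks_per_rev) * math.pi * wheel_diameter_m
```

Averaging the two driving wheels' values would give the robot's forward travel, which the microcontroller compares against the pre-mapped table locations.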
|Our group proposes to create a robot drummer which would respond to human voice "beatboxing" input, via a conventional dynamic microphone, and translate the input into the corresponding drum hit performance. For example, if the human user issues a bass-kick voice sound, the robot will recognize it and strike the bass drum; and likewise for the hi-hat/snare and clap. Our design will minimally cover 3 different drum hit types (bass hit, snare hit, clap hit), and respond with minimal latency.
This would involve amplifying the analog signal (as dynamic mics drive fairly low gain signals), which would be sampled by a dsPIC33F DSP/MCU (or comparable chipset), and processed for trigger event recognition. This entails applying Short-Time Fourier Transform analysis to provide spectral content data to our event detection algorithm (i.e. recognizing the "control" signal from the human user). The MCU functionality of the dsPIC33F would be used for relaying the trigger commands to the actuator circuits controlling the robot.
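As a toy version of one frame of the spectral classification (the real event detection on the dsPIC33F would use full STFT analysis; the 300 Hz split frequency is an assumed placeholder, and only two of the three hit types are distinguished here):

```python
import numpy as np

def classify_hit(frame, rate=8000):
    """Windowed FFT of one analysis frame, then compare spectral
    energy below and above a 300 Hz split: a bass-kick voicing is
    low-heavy, a snare/hi-hat voicing is high-heavy."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    low = spectrum[freqs < 300.0].sum()
    high = spectrum[freqs >= 300.0].sum()
    return "bass" if low > high else "snare"

t = np.arange(512) / 8000.0  # one 64 ms frame at 8 kHz
```

On the dsPIC33F the same comparison would run on fixed-point FFT output, with a third band or template added to separate the clap.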
The robot in question would be small; about the size of a ventriloquist dummy. The "drum set" would be scaled accordingly (think pots and pans, like a child would play with). Actuators would likely be based on solenoids, as opposed to motors.
Beyond these minimal capabilities, we would add analog prefiltering of the input audio signal, and amplification of the drum hits, as bonus features if the development and implementation process goes better than expected.
|28||Wireless Power System for John Deere
Miguel Jimenez Aparicio
|John Deere proposed an open-ended project to develop a power system to replace the hydraulics used in power transfer for their vehicles. They want to replace the need for tubing because it can break or cut in extreme applications. This system would potentially see use in heavy machinery such as logging vehicles. They seek a proof of concept for an alternative method of power transfer without the use of wires, preferably with some ball joint to allow motion.
Our proposed solution is to create a power system that utilizes resonant inductive coupling to transfer power wirelessly through a ball joint. This ball joint will be made of magnetic material to aid the magnetic field that will be key in transferring power. At the input and output of the system, we would implement power converters and their respective control systems, connecting them at the ball joint. Further additions can be made to improve the efficiency and functionality of the system, but the basis of the idea is a power system using a ball joint.
Research in wireless power transfer is relatively recent, and we are now seeing it used in applications such as phone and electric car chargers. Both of those examples, however, only utilize inductive charging. In our project, we will attempt to implement resonant inductive coupling to increase the range of operation of the joint. Furthermore, we will need to interface with the mechanical engineering group in order to design the mechanics surrounding the ball joint rather than rely solely on electronics. This project will focus on the unique application of a ball joint and resonant inductive coupling to create a proof of concept for wireless power transfer in relevant applications.
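Resonant coupling requires tuning both sides of the joint to the same resonant frequency, f0 = 1 / (2π√(LC)); the component values in the example below are illustrative, not our chosen design values:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """f0 = 1 / (2*pi*sqrt(L*C)) -- the frequency both coupled coils
    must be tuned to for resonant inductive coupling."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# e.g. a 10 uH coil with a 100 nF tank capacitor resonates near 159 kHz
f0 = resonant_frequency(10e-6, 100e-9)
```

The power converters' switching frequency and control loops would then be designed around this tuned frequency.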
|29||Solar Water Filtration and Vending System
|Most rural areas don’t have a consistent source of water, forcing people to walk long distances to get it. One of our members’ non-profit organizations (Solar Chapter) recently built a solar pumping system, which solves this problem. However, this water requires further treatment to be drinkable. To address this problem, we’ve decided to build a solar distillation system.
Solar panels will charge a lead-acid battery (affordable), which will power the vending system. A charge controller will be required to obtain the desired output from a fluctuating input. An additional sun tracking system using photoresistors may be implemented to improve performance. We will then implement a water filtration system: water will first flow through solar distillation to improve its clarity and clean it to a high degree of purity, then flow to another tank with purifying layers of fibre membrane, activated charcoal, and sand. To monitor our machine, we will use an ultrasonic sensor to determine the amount of water within the storage and distillation tanks. This will be connected to sluice gates and the distillation setup, serving as a control system. The water quality will also be monitored using pH and turbidity sensors, which will help evaluate the system’s performance. We will then implement a vending system, which dispenses a fixed amount of water for money - this will also utilize sluice gates. As a reach goal, we will consider ways to transmit this sensor data to provide remote monitoring, enabling maintenance.
In our conversations with course staff, we were informed that this project was sufficiently complex. In fact, this is a subset of our original plan to develop a full filtration, monitoring and vending system. We are willing to adjust our goals to achieve adequate complexity - for instance, including the data transmission as an expectation as opposed to a reach goal if required. As socially-conscious engineering students, we intend to put a version of our final product into active use in rural Indonesia and similar environments.
|30||Refrigerator Food Contamination Detection using Electronic Nose
Siddharth Muralidaran (firstname.lastname@example.org)
Simran Patil (email@example.com)
Agnivah Poddar (firstname.lastname@example.org)
Refrigerator Food Contamination Detection using Electronic Nose
Food poisoning is a serious problem that affects thousands of people every year. The pathogen Salmonella, along with Listeria and Toxoplasma, is implicated in 1500 deaths every year out of approximately 5000 total deaths reported in the United States. The World Health Organization (WHO) reports that salmonellosis, caused by Salmonella spp., is the most frequently reported foodborne disease worldwide. Spoiled food must be detected early in order to prevent disease. Contaminated food is usually detected by odor, which is composed of molecules of specific sizes and shapes, each with a corresponding receptor in the human nose. The brain identifies the smell associated with a particular molecule when signaled by the receptor. An electronic nose is an array of sensors that imitates this biological functionality.
The main goal of the project is to build an electronic nose that can detect food contamination inside the refrigerator before the human nose can, and notify the user through a UI attached to the refrigerator’s external wall. Concentrations of certain gases like acetone, ethanol, ammonia (NH3), hydrogen sulfide (H2S), etc. increase because of rotten food and can thus be detected by the sensor array, which is the heart of the design. The following sensors are commercially available and can be used to detect certain chemicals, process the data, and help with categorization. Looking at the datasheets of these sensors, the working temperature range is around -10 to 45 °C, which works well with our refrigerator’s internal conditions.
TGS 2611 - Methane
TGS 2611 - Methane
TGS 2602 - Hydrogen sulfide
TGS 800 - Fumes from food, alcohol, odor
TGS 822 - Alcohol, organic solvents
TGS 4160 - Carbon dioxide
SHT 11 - Relative humidity and temperature
Additionally, we plan to incorporate features of an existing smart refrigerator in this adapter. This includes a barcode scanner to scan in packaged food to be added to the inventory in the refrigerator or feed in data about vegetables and fruits. This would also help in detecting spoilage of packaged food, which otherwise would not be detectable by the electronic nose.
Primary goal: The proof of concept exists in the form of multiple white papers. Our aim is to use the findings from these papers and implement a prototype that works in practical conditions like a refrigerator.
|31||Enhanced Beverage Coaster
|We want to design a beverage coaster that can detect when drinks are low and capture data on drinking habits. We believe our project will be useful for restaurant workers because they will be able to spot near-empty glasses more easily (especially in a dark environment), and they will be able to track how much of each beverage is usually consumed by customers.
Our enhanced beverage coaster will detect the amount of beverage left in the glass by using a pressure/weight sensor and will light up an LED in a certain color to make it easier for restaurant workers to spot near-empty glasses. We will also track the amount of beverage consumed by the customer using the pressure/weight sensor. Our coaster will also have a wireless chip that will relay data to a central location (probably a phone or computer for this prototype). We plan on using a coin cell battery to power our project.
We believe this project is challenging because it needs to be not too expensive, low power, and have the ability to transfer data wirelessly. Also, detecting the decrease in the amount of beverage in a glass can be tricky due to the glass being picked up in uncertain intervals by the customer.
-Pressure sensor working
-Able to detect a near empty glass
-Relay amount of beverage drunk by customer somewhere (probably phone)
-All of the above working
-Have a central module to transfer information to
-Able to select type of beverage before serving customer
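The "glass picked up at uncertain intervals" problem mentioned above could be handled by ignoring weight samples taken while the glass is off the coaster; a sketch (all thresholds are assumed placeholder values):

```python
# All thresholds are assumed placeholder values (grams).
GLASS_OFF_COASTER_G = 50   # anything lighter: glass was picked up
EMPTY_GLASS_G = 200        # tare weight of the glass itself
NEAR_EMPTY_G = 30          # remaining beverage that counts as "low"

def coaster_state(weight_g):
    """Classify one weight-sensor reading. Samples taken while the
    glass is off the coaster are flagged as 'lifted' and ignored,
    rather than misread as an empty glass."""
    if weight_g < GLASS_OFF_COASTER_G:
        return "lifted"
    beverage_g = weight_g - EMPTY_GLASS_G
    return "near_empty" if beverage_g < NEAR_EMPTY_G else "ok"
```

The "near_empty" state is what would drive the LED color change and the wireless notification.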
|A traditional Nintendo Entertainment System creates 8-bit game sounds using an Audio Processing Unit known as the RP2A03/RP2A07 chips. The sound composition of tunes that are played by the NES and systems of that era primarily consists of square and triangle waves meant to be output on an analog speaker. Instead of using an analog speaker as our sound output medium, we would like to use the electrical discharge of a Tesla Coil.
Our overall project goal is to create a Tesla Coil that uses solid state devices and is able to modulate its discharge frequency in accordance with the register contents of the NES APU, so that the sound emitted by the electrical discharge matches the sound being output by the APU.
The way we would get the contents of the NES APU in real time is through an open-source emulator. One such emulator that could work is FakeNES. We would run a modified version of FakeNES on a Raspberry Pi and change the software sound module so that it sends sound register contents to the GPIO module. Then we will design another circuit to read the contents of the GPIO module and convert that digital signal into the corresponding sound wave to be emitted by the discharge of the Tesla Coil. The discharge sounds can be controlled by properly interrupting the switching circuit that drives the coil's primary side.
As far as safety is concerned, we will be building the coil at such a scale where the discharge is not large enough to pose a problem.
One major problem I can see us having to overcome in this project is combining the multiple sound channels, so that they can be output on a single coil. The way we will overcome this issue is by playing all of the channels out of the coil in a round-robin format. That way each channel can contribute to the air vibration that we interpret as sound simultaneously. We would make the round-robin switching of channels occur at such a high frequency that the attenuation of sound between the switching is not significant enough to affect the sound.
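The round-robin mixing described above is simple sample interleaving; a sketch with plain sample lists standing in for the APU channel streams:

```python
def round_robin(channels, num_samples):
    """Interleave the channels' sample streams one sample at a time --
    the round-robin scheme for driving all APU channels through a
    single coil. `channels` is a list of equal-length sample lists."""
    out = []
    for i in range(num_samples):
        for channel in channels:
            out.append(channel[i])
    return out
```

With the slot rate far above audio frequencies, the ear averages adjacent slots and hears all channels at once.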
While musical Tesla coils do exist, none seek to model the APU output of the NES directly. In addition, there exist no Tesla Coil drivers that seek to modulate the triangle wave of the NES's APU; most musical Tesla Coils are only designed to output square waves. We will achieve the triangle wave output by feeding the square waves from our switching circuit into an integrator and feeding the output of the integrator into the coil's primary.
|33||Gesture-based light design system
|We propose using a glove to control light design on a stage, with practical uses in musical performances and theatre shows. The purpose of this project is to accomplish simpler goals in relation to these areas rather than running a full production. This could be controlling a spotlight on an actor, or, in a musical production, it could allow a unique control over the lights that few, if any, groups out there have. This means the focus would be more on flash and performance, or simplicity in the case of theatre, than on large-scale utility.
There will be a limitation to keep the complexity within the scope of this course: the light designer cannot walk freely in relation to the lights. The designer must remain behind or in front of the lights; think of it as the light designer directing the lights from on stage or from off stage. With this limitation, we will not have to solve the problem of keeping track of where the designer is. With the designer and lights both in fixed locations, we will still have a control unit with all the lights in the system connected to it. All lights in the system will be servo motor based. The designer will wear a single glove that communicates with the control unit via Bluetooth.
The glove will have its own microcontroller, a Bluetooth transmitter, one flex resistor in the pointer finger, four buttons, and four LEDs, and will be powered by a 9-volt battery. The designer will select which lights to control via the buttons, and the glove will indicate which lights are currently selected via the LEDs. Further, the designer will control which direction the currently selected lights point by pointing his/her finger and gesturing in the direction he/she wants the lights to point. When the finger is not fully extended (flex resistor not active), the lights will not move.
|34||Music-Visualization and Motion-Controlled LED Cube
Hieu Tri Huynh, NetID: hthuynh2 (email@example.com)
Islam Kadri, NetID: ikadri2 (firstname.lastname@example.org)
Zihan Yan, NetID: zyan9 (email@example.com)
Link to discussion: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=22725
Our project’s main inspiration came from a video about an art piece called Kinetic Rain at Singapore’s Changi Airport (https://www.youtube.com/watch?v=jhP9n6WvVfQ). Instead of bronze droplets, we’d like to use a cube of LEDs to achieve the same effect, with additional features.
Our project goal is to build an LED cube of size 10×10×10 with two features. Every 10 LEDs will share one wire, and the wires will be supported by a board at the bottom. The user can switch between the two features/modes using a button on the board.
Feature 1. Music Visualization. The device will have microphones attached to listen to sound. The sound will be analyzed and four values will be extracted and used: frequency, amplitude, angle of arrival, and beats per minute (BPM). The LED colors and configuration will adjust based on these values.
Frequency: Frequency will be used to control the color of the LED. To extract the frequency from the sound, we will use the short-time Fourier transform (STFT) algorithm.
Amplitude: Amplitude will be used to control the brightness of the LEDs.
Angle of Arrival: This value will be used to control the orientation of the animation of LEDs. In order to detect the angle of arrival in a 2D plane (0-360 degrees horizontal plane), we will use 3 microphones, and the Generalized Cross Correlation – Phase Transform (GCC-PHAT).
Beats per Minute (BPM): This value will be used to control the speed of the movement of the animation.
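To make the two trickier extraction steps above concrete, here is a minimal Python sketch of dominant-frequency estimation on one analysis frame and of GCC-PHAT delay estimation between a microphone pair; the sample rate and frame size are illustrative, not final design choices:

```python
import numpy as np

def dominant_frequency(frame, fs):
    """Strongest frequency in one windowed analysis frame (one STFT column)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.fft.rfftfreq(len(frame), 1 / fs)[np.argmax(spectrum)]

def gcc_phat(sig, ref, fs):
    """Delay (s) of `sig` relative to `ref` via GCC-PHAT; pairwise delays
    between the 3 microphones then give the angle of arrival."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cross /= np.abs(cross) + 1e-12        # PHAT: keep only phase information
    cc = np.fft.irfft(cross, n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:
        shift -= n                         # fold wrapped lags to negative delays
    return shift / fs
```

Sliding `dominant_frequency` over successive overlapping frames is exactly the STFT-based extraction described above; the PHAT weighting makes the correlation peak sharp even in reverberant rooms.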
Feature 2. 3D Snake game. We would like to implement a 3D Snake game that a user can play on this LED device. The Snake will be controlled by the user's hand motions.
Display: We will turn off all the LEDs except the Snake (initially a small length of LEDs) and the fruit (one LED with different color) to create the movement of the Snake. The length of the snake will grow larger as the user captures more fruit.
Hand Motion Detection: We will create a pad with 4 proximity sensors on the board. The user will move his or her hand above the pad, and we will use the sensor values to detect the motion. For example, if the user moves a hand from left to right, the sensor on the left will change its value before the sensor on the right. Based on those differences in the values of the 4 sensors, we will be able to detect the motion of the user's hand in 6 different directions (Up, Down, Left, Right, Outward, Inward).
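The horizontal-direction logic can be sketched as follows; the sensor names and timings are hypothetical, and Up/Down (which would use all four readings rising or falling together) is omitted:

```python
def classify_motion(first_activation):
    """Classify horizontal hand motion from the order in which the pad's
    proximity sensors first detect the hand.

    `first_activation` maps a sensor position ('left', 'right', 'near',
    'far') to the time (s) its reading first crossed a threshold.
    """
    lr = first_activation['right'] - first_activation['left']
    nf = first_activation['far'] - first_activation['near']
    # The axis with the larger spread in trigger times is the motion axis.
    if abs(lr) >= abs(nf):
        return 'Right' if lr > 0 else 'Left'
    return 'Outward' if nf > 0 else 'Inward'
```

The key design choice is using trigger *order* rather than absolute sensor values, which makes the classification insensitive to hand height above the pad.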
We are thinking of using either a Raspberry Pi or an Arduino for the controller unit. We will design the circuit for the LEDs.
Our project is innovative and unique because it serves as an aesthetic piece, like the Kinetic Rain project, with the additional use of sound as an input to affect the LEDs' color and shape. A few existing products can visualize music using a sound's frequency, but none of them extract all 4 of the previously mentioned values (frequency, amplitude, angle of arrival, and BPM) to influence the LEDs. Therefore, by extracting all 4 values listed above, we will be able to create a more visually appealing and accurate device.
Furthermore, by implementing the 3D Snake game, we will fully utilize the LEDs to increase the entertainment factor of the device, encourage the user to interact with it, and give it a more hands-on use.
|35||Acoustic Motion Tracking
|Yuchen He TA||other
Sean Nachnani (nachnan2)
Kevin Chun (hchun8)
The project idea is to use sound rather than video as a means of motion recognition. Current smart devices are limited to only using natural language processing to interpret a user's needs. We want to expand upon this further and allow devices to perform commands using simple gestures.
The current idea is to create a 4-input microphone array with an ADC that allows at least a 48 kHz sample rate, and to use a speaker that can reproduce sounds up to at least 24 kHz. We will start by sending pseudo-random pulses across a large bandwidth and correlating the sent signal with the received input from the microphones. Given time, we will switch to using FMCW (frequency-modulated continuous-wave) radar as the basis for this approach. This will allow us to achieve accurate distance and velocity measurements and potentially transmit in the inaudible range.
I have spent the last month prototyping this device using a Raspberry Pi and a speaker array. I've gotten the pseudo-random pulse approach to work, coding all the signal processing in Python, mainly with the PyAudio and SciPy libraries. The prototype is currently sampling at 44.1 kHz and using a speaker that can play up to 20 kHz. I was able to achieve accurate measurements within the range of a normal living room (about the size of a smaller classroom in ECEB).
We plan on building the microphone array using 4 MEMS microphones and appropriate ADCs to sample up to 48 kHz. This will allow us to play sounds up to 24 kHz, which will give us enough bandwidth for accurate measurements. We'll also use a microcontroller (most likely a Raspberry Pi) to sample from these microphones and perform the necessary DSP. The system will be designed to be plugged into a regular power outlet.
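The core of the pulse approach is a matched filter: correlate the microphone signal against the known pseudo-random pulse and convert the lag of the correlation peak into a round-trip time. The numbers below are synthetic, not measurements from the prototype:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, room temperature

def estimate_distance(mic, pulse, fs):
    """One-way distance to a reflector, from the lag of the correlation
    peak between the mic signal and the transmitted pulse."""
    corr = np.correlate(mic, pulse, mode='valid')
    delay_samples = int(np.argmax(np.abs(corr)))
    return delay_samples / fs * SPEED_OF_SOUND / 2

# Synthetic check: a +/-1 pseudo-random pulse echoing back 280 samples
# later (about 1 m away) at the planned 48 kHz sample rate.
fs = 48_000
rng = np.random.default_rng(1)
pulse = np.sign(rng.standard_normal(256))
mic = np.zeros(4096)
mic[280:280 + 256] += pulse
mic += 0.1 * rng.standard_normal(mic.size)   # measurement noise
```

At 48 kHz, one sample of lag corresponds to roughly 3.6 mm of one-way distance, which sets the resolution floor of this method before interpolation.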
Related Research Papers:
CAT: High-Precision Acoustic Motion Tracking http://www.cs.utexas.edu/~wmao/resources/papers/cat.pdf
FingerIO: Using Active Sonar for Fine-Grained Finger Tracking https://fingerio.cs.washington.edu/fingerio.pdf
|36||BIOAQUARIUM: WATER SENSING WITH INDICATOR FOR SUSTAINABLE FARMING
Shannon Kuruc (kuruc2)
Emily Wang (eswang3)
Tony Xiao (tsxiao2)
Inspired by sustainable, low-cost farming efforts in Kenya (http://livingpositivekenya.org/chicken-project/), we consulted with Professor Brian Lilly in order to tackle the issue of sustainable fish farming in Kenya. The intention is to create a low-cost tank monitoring kit to aid in the monitoring and efficient running of a sustainable small-scale aquaponic farm.
We are looking to sense the pH, ammonia, nitrite, nitrate, and/or oxygen levels (or some combination thereof) of a small-scale fish farming tank. Currently, this farm has no electronic monitoring equipment. Our goal is to interface an LED display, a solar panel with a small battery for power, and the necessary sensors to improve the efficiency of the small-scale farm. Our design would help the farmers maintain multiple tanks and improve their profit margins. The ultimate goal of the project is to create a very low-cost kit for fish farmers.
We are proposing to use a pH sensor (conductivity-sensing), a dissolved-oxygen sensor, a microcontroller, a battery pack for when the sun goes down, solar panels, and controls for the pre-existing pump (which acts as an oxygenator). The battery would power the system for about 4 hours overnight; we estimate a 60 Wh battery will be used.
As a potential stretch goal, Professor Lilly is interested in gathering the sensor data and uploading it to a server so he can track the tanks while here in the United States. We may implement this extra feature with Arduino.
Link to discussion post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=23069
|37||Wireless Laptop Charging System
|With the advent of wireless charging products for low-powered devices (phones, tablets, etc.), we wonder if we could charge higher-powered devices like laptops by combining multiple such chargers. Laptops are very common in class due to their note-taking efficiency. However, the economical laptops preferred by students have low battery life, which forces students to rely on their chargers. The prevalence of these laptops causes an excessive amount of cable traffic. We believe that our project will help reduce cable traffic and thus create a more organized classroom.
What makes our project unique is that we are expanding on the concept of wireless inductive charging by connecting multiple low-power wireless receivers to create a wireless adapter that plugs into your laptop's power jack.
Based on our research, there is only one such product on the market, made by Dell; it retails for $200 and works only with one laptop, also produced by Dell. Their system pairs a laptop with an internal inductive charging receiver with a transmitter pad.
In our project, we are trading convenience for universality; instead of requiring the purchase of a new laptop for access to wireless charging, you would only need to buy the external adapter and the corresponding transmitter. Our product will target two different markets: academic organizations and individuals. The Qi 1.1 transmitters would be implanted in classroom tables and our receivers will replace the charging blocks.
What we will completely design and build:
4 x Receiver coils
4 x AC to DC Converters -> includes rectifier, filter and regulator circuits.
1 x DC to DC converter-> filter, regulator circuits
1 x Feedback Circuit for DC to DC converter-> includes Error Generator and PI controller
Design thought process:
The charging pad (receiver) will be completely designed by us. It will consist of 4 coils that we will build ourselves.
Our coils will be designed according to the electrical requirements of our AC-DC converter output. Each coil should cover at least 75% of the 5 W Qi transmitter so that we achieve acceptable efficiency and coupling. Following the WPC (Wireless Power Consortium) standard, we will experiment with the number of turns, the coil dimensions, and the gap between them to produce a satisfactory coil that is as small as possible.
Each coil will be connected to its own AC-DC converter. Each AC-DC converter will consist of a full-wave rectifier, a filter, and a regulator to output our target of 5 V at 5 W. Our 4 AC-DC converters will be connected in series to supply 20 V to our self-designed DC-DC converter. This DC-DC converter will step the 20 V down to output 12 V at 3.33 A DC for powering our laptop.
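A quick power-budget check on this chain, under ideal lossless assumptions, shows that the four 5 W receivers bound the deliverable output power, and therefore the current available at 12 V:

```python
# Ideal (lossless) power-budget check for the 4-coil receiver chain.
# All numbers come from the design above; no losses are modeled.
n_coils = 4
p_per_coil_w = 5.0            # each Qi 1.1 low-power transmitter
v_rect = 5.0                  # each AC-DC stage's regulated output
v_bus = n_coils * v_rect      # series stack feeding the DC-DC stage
v_out = 12.0                  # laptop input voltage

p_in_w = n_coils * p_per_coil_w   # 20 W total available
duty = v_out / v_bus              # ideal buck duty cycle: 0.6
i_out_max = p_in_w / v_out        # current ceiling at 12 V: about 1.67 A
```

Even before conversion losses, four 5 W transmitters cap the output near 1.67 A at 12 V, so sustaining the full 3.33 A of a typical laptop adapter would require more transmitters or a higher-power Qi profile; otherwise the laptop simply charges more slowly.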
In order to receive consistent power output from the DC-DC converter, we will implement a feedback system that regulates the output voltage at the laptop jack. The feedback system will include an error generator, a proportional-integral (PI) circuit, and a comparator that adjusts the gate-drive input to maintain a steady output.
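The behavior of that loop can be sketched digitally; the gains, the clamp limits, and the crude first-order converter model below are placeholders for illustration, since the real loop will be implemented in analog circuitry:

```python
class PIController:
    """Discrete PI loop standing in for the analog error generator and
    PI circuit; gains and the plant model below are placeholder values
    for illustration, not tuned hardware values."""

    def __init__(self, kp, ki, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        return min(self.out_max, max(self.out_min, duty))  # clamp gate duty

# Crude first-order stand-in for the buck stage settling on 12 V from a
# 20 V bus: steady state requires duty = 12/20 = 0.6, which the
# integrator term finds automatically.
pi = PIController(kp=0.05, ki=2.0, dt=0.001)
v = 0.0
for _ in range(5000):
    duty = pi.update(12.0, v)
    v += 0.05 * (20.0 * duty - v)
```

The proportional term reacts to load steps while the integral term removes steady-state error, which is exactly the role of the error generator plus PI stage in the analog design.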
We’ll power 4 Qi 1.1 transmitters independently to generate an electromagnetic field for each receiver coil, positioned to line up with the coils in our charging pad. We want our project to be compatible with standard on-the-market transmitters, so we will not be designing the transmitter ourselves.
Product in Market:
Dell Wireless Charging Mat - PM30W17:
|38||Automatic Ball Borrowing System
|At the ARC of our school, we can only borrow basketballs or other balls from staff. As a result, we want to build a machine to dispense those balls. By scanning their IDs, students can get those balls from the machine. Also, they can return the balls to the machine by putting back the balls and scanning their IDs again.
We will use RFID cards as student IDs, and the machine will have two buttons: “borrow” and “return.” Inside the machine, balls sit in a channel: a ball comes out at the beginning of the channel, and people return balls at the end of it. At the bottom of the channel, pressure sensors measure the weight of the balls, so we can tell whether a student has actually returned a ball. An LCD will display the number of balls remaining in the channel. A motor in the machine will push a ball out, and the channel has a slope at the exit to keep balls from rolling out on their own. In addition, an IR sensor at the exit will stop the motor when it detects a ball, ensuring that exactly one ball leaves the channel. If a person returns something other than a ball, the pressure sensor will detect the discrepancy and sound an alarm.
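The weight-check and single-ball logic can be sketched as follows; the ball weight and tolerance are placeholder values for a baseball-sized ball:

```python
BALL_WEIGHT = 0.145   # kg, roughly a baseball; placeholder value
TOLERANCE = 0.02      # kg of acceptable sensor noise; placeholder

def check_channel(total_weight):
    """Infer the ball count from the channel's pressure sensors and flag
    foreign objects whose weight is not a whole number of balls.

    Returns (ball_count, alarm)."""
    n = round(total_weight / BALL_WEIGHT)
    alarm = abs(total_weight - n * BALL_WEIGHT) > TOLERANCE
    return n, alarm

def motor_command(ir_beam_broken, balls_remaining):
    """Run the push motor only while no ball has reached the exit IR
    sensor and balls remain, so exactly one ball leaves per request."""
    return balls_remaining > 0 and not ir_beam_broken
```

Checking the total weight against an integer multiple of the ball weight is what lets the same sensors handle both the inventory count and the foreign-object alarm.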
To keep the project's scale manageable, we will build a machine that takes golf balls or baseballs. The same design could be scaled up to handle basketballs in a real deployment.
|39||Photocell Music Board based on Eli Fieldsteel’s Project Pitch
|Our project involves creating an improved version of Eli Fieldsteel’s prototype music board. The music board consists of an array of 256 photoresistors connected via USB to a computer. The computer runs a program written in the Supercollider programming language to collect and interpret data from the music board. Each photoresistor detects the intensity of light shining on it. When a drop in light intensity on a photoresistor is detected, the computer plays a note. The music board is capable of playing any combination of notes simultaneously.
The improved music board will feature modular photoresistor boards and execute internal component failure checks. 256 photoresistors will be placed on 16 identical PCBs with 16 photoresistors on each board. If a photoresistor fails, a single PCB can be replaced easily without affecting the rest of the music board.
To add to Eli’s original design, we will also implement:
A 16x16 LED display board that will mirror the hand motions to provide a matching visual for demonstration purposes.
An algorithm to smooth the data to account for the effects of inconsistent light sources, including interference from spotlights and low-light environments.
A user interface to switch between multiple instrument sounds and adjust board characteristics (pitch, volume, sensitivity, calibration).
A generic design that can use different types of sensors (touch sensors, flex sensors, distance sensors, color sensors).
A small, handheld, self-contained version with battery power.
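The smoothing item above could be as simple as an exponential moving average per cell, with note onsets detected as drops below a slowly adapting baseline; the smoothing factor and drop fraction here are placeholder tuning values:

```python
def smooth(samples, alpha=0.2):
    """Exponential moving average over one photoresistor's readings;
    `alpha` is a placeholder smoothing factor to be tuned on stage."""
    ema = samples[0]
    out = []
    for s in samples:
        ema = alpha * s + (1 - alpha) * ema
        out.append(ema)
    return out

def note_on(reading, baseline, drop_fraction=0.4):
    """Trigger a note when a cell's light level drops well below its
    slowly adapting baseline, so absolute brightness does not matter."""
    return reading < (1 - drop_fraction) * baseline
```

Comparing against a per-cell baseline rather than a fixed threshold is what makes the board usable under both spotlights and low-light conditions without recalibration.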
|40||Remote Area Clearance Device (RACE)
|People drop small items such as earrings, needles, etc. These can sometimes be hard to spot with the naked eye or can land in hard-to-reach places. We want to build upon the ECE 110 project and build a car that can detect metal and pick the object up. The car will have an autonomous mode and a manual mode. In manual mode, it will be controlled remotely by the user over Bluetooth. This car, with its metal detection circuit, has additional applications outside the home as well: it could serve as a low-cost alternative for finding landmines in war-torn regions. Despite the United States having the world’s largest army, IEDs and mines still pose significant difficulties for the Army with regard to engineering operations and maneuver support. A Department of Defense lab has shown strong interest in this project and has offered to provide support to our team in the form of robots, processors, sensors, etc.
They have offered to allow us to use one of their “mini-bots” which we may use instead of the ECE 110 car.
We will use the chassis and the motor drivers from the ECE 110 class. We will build a metal detection circuit, with the detecting coil mounted at the front of the car, facing downward. When metal is detected, the car will back up a step and use TTL logic to sweep the area with a small vacuum to pick up the object. We will use TTL chips to implement the navigation logic and integrate Bluetooth so that the car can send and receive signals. We will write the software that lets the user drive the car from a laptop and control the vacuum.
In autonomous mode, the car will be able to navigate itself (only in a fixed, chosen room). We will supply prior information such as the dimensions of the room and the location of the door of the ECE 445 lab. The car will have a fixed base position, and Bluetooth beacons around the room will act as markers for recalibrating its position. The car will be equipped with wheel encoders, a compass, and accelerometers. We want to give the user the ability to pick a spot where an object was dropped (such as desk 5), and the car will travel there from the base and search for the metal object near that desk.
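Between beacon recalibrations, the encoder-plus-compass position estimate is simple dead reckoning; the encoder resolution and wheel size below are assumptions, not measured values from the ECE 110 chassis:

```python
import math

# Placeholder geometry: encoder resolution and wheel circumference are
# assumptions, not measured values from the actual chassis.
TICKS_PER_REV = 360
WHEEL_CIRCUMFERENCE_M = 0.2

def dead_reckon(x, y, ticks_left, ticks_right, heading_rad):
    """Update the car's position estimate from wheel-encoder ticks and
    the compass heading since the last update."""
    avg_ticks = (ticks_left + ticks_right) / 2
    distance = avg_ticks / TICKS_PER_REV * WHEEL_CIRCUMFERENCE_M
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))
```

Dead-reckoning error grows with distance traveled, which is why the Bluetooth beacons are needed as periodic absolute references.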
Our base goal is to implement the metal detection circuit along with the manual operation mode of the car. Our reach goal is to implement the autonomous mode of operation.
|41||Wind turbine phone charger
|Group members: Charles Hummel chummel2
Emre Ercikti ercikti2
Sachin Reddy ssreddy2
A small wind turbine with a total height of half a meter will be used to generate AC power which will be converted to DC power. The DC power will charge (ideally) a removable battery pack that would have an output for charging a phone.
Solar phone chargers are great but they do not work when there is no direct sunlight. This turbine will fill that gap.
For this project we will:
1. Create power electronics that will convert the AC power to DC.
a) This circuitry will also include protection measures such as short-circuit protection
2. Implement sensors that detect wind speed and direction, plus software that will suggest the optimal turbine orientation.
3. Using those sensors, implement a cut-off mechanism to keep the generator and/or power electronics from being damaged by excessive wind speeds.
4. To scale the project's difficulty, we would design our own battery pack using SLAs (sealed lead-acid cells) and implement a battery management system as well.
The mounting system will be a removable strap that would let the user secure the turbine as well as holes in the base that pegs could fit through to secure to the ground if need be.
|42||Open source and cheap radiosonde
|We would like to make a cheap, open-source weather radiosonde. Currently, radiosondes are launched at 92 different locations 365 days a year and cost upwards of $250 each. When they drop back to the surface, radiosondes are rarely recovered and reused.
We would like to develop a prototype that offers locating functionality. The radiosonde will consist of temperature sensors, a barometer, a humidity sensor, and a transmitter and receiver with a quarter-wave antenna. In addition to transmitting pressure, temperature, and humidity readings, the prototype will offer the option to add an additional sensor to measure research data, such as carbon dioxide levels, to determine air quality and track variations.
|43||Real-Time Sound Visualization
|We plan to design a sound visualization system that uses a pitch detector to detect pitch and outputs musical notation on a screen. Furthermore, we are going to store the melody and mimic a piano sound on chips.
1. Detect Pitch.
We plan to build a hardware pitch detector that detects sound in real time at a 10 kHz sampling rate, with an LED to indicate when it is on or off. Autocorrelation analysis, center clipping, and infinite peak clipping will be used to build the detector.
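A software model of the detection step, using the 10 kHz rate above, can be sketched as follows; the clipping fraction and pitch range are placeholder choices, and soft center clipping stands in here for the full center-plus-infinite-peak clipping chain:

```python
import numpy as np

def center_clip(x, fraction=0.3):
    """Center clipping: zero out low-level samples to suppress formant
    structure before autocorrelation (the fraction is a placeholder)."""
    threshold = fraction * np.max(np.abs(x))
    return np.where(np.abs(x) > threshold, x - np.sign(x) * threshold, 0.0)

def detect_pitch(x, fs=10_000, fmin=80.0, fmax=1000.0):
    """Pick the pitch as the autocorrelation peak inside the lag range
    corresponding to plausible musical notes."""
    x = center_clip(np.asarray(x, dtype=float))
    corr = np.correlate(x, x, mode='full')[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return fs / lag
```

Restricting the search to the 80-1000 Hz lag window avoids the trivial zero-lag peak and keeps octave errors in check.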
2. Output music notation in real time.
Once a note has been detected, it will appear on the screen at the right position, and the previous notes in the notation will shift right, so the display looks like flowing music. The screen will be mounted on a board together with the detector so that we can display the sound in real time.
3. Store the melody.
We are going to store the detected pitches of the melody in registers for future replay.
4. Mimic instruments sound.
We will use instrument sound packages (e.g., guitar, piano, and violin) to replay the melody on an Arduino. The mimicking will not be in real time and is only for replay mode.
|44||Electronic Sound Generator
|Teammates: Kedong Shao (kshao5)
Jeremy Hutnak (hutnak2)
Parikshit Kapadia (pkapadi2)
|We would like to build a relatively inexpensive analog synthesizer (commercial units range from $200 to $4000) that is simple to use for those who would like to create interesting effects with their music. Analog synthesizers can become very large and complex to work with when creating the sounds wanted. We would like to create one that is more intuitive to use, with a simple manual switch-and-dial control scheme.
We will need to build a power supply for the system; we are considering battery power so the unit can be easily transported, but we may have to build a power converter to use standard wall power. We will need to design and build two oscillator circuits: a voltage-controlled oscillator (VCO) and a low-frequency oscillator (LFO). We will need a low-pass filter circuit and an envelope generator circuit to modulate and filter the signal. We will need to build two amplifier circuits: a voltage-controlled amplifier (VCA) to boost the signal's amplitude, and another for the audio output. For the audio output we will also allow an external amplifier (a typical instrument amplifier) to be plugged in, bypassing the built-in output. We will also design and build a white-noise generator to produce sounds like wind.
The bulk of the work on the project will be designing, building, and interconnecting the circuits all together. We will also have to layout a PCB design and make some kind of container to hold the analog synthesizer.
Some challenges we will face are figuring out how to map voltages to different notes, as well as making sure the circuits all work together properly. Some of the group has no experience working with music or analog signal processing, so we will have to research together and understand how each circuit works and how the circuits affect each other to produce different effects.
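For the note-mapping challenge, one option is the 1 V/octave convention common among analog synthesizers; a sketch, where the 0 V reference note is an arbitrary placeholder choice:

```python
def note_to_voltage(midi_note, base_note=33, base_voltage=0.0):
    """Map a MIDI note number to a control voltage using the 1 V/octave
    convention (12 semitones per volt). The reference (MIDI 33 = A1 at
    0 V) is an arbitrary placeholder."""
    return base_voltage + (midi_note - base_note) / 12.0
```

Under this scheme each semitone adds 1/12 V, so an octave jump lands exactly 1 V higher, and the VCO only needs an exponential voltage-to-frequency response to stay in tune across its range.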
|45||Assistive Digital Piano
Jae Young Kwak
|PROBLEM: Learning to play piano over a long duration can be tedious, boring (for some), and above all very expensive. The average cost of piano lessons in the United States is between $15 and $40 for a 30-minute lesson. However, not taking lessons has potential drawbacks as well: wrong fingering, playing too fast or too slow, no performance metrics or evaluators to rate your improvement, improper key tapping and pressure (which affects and changes the melody of the music), and not practicing enough. In addition, young children commonly lose motivation to play piano, either due to differences with their piano teachers or because regular practice just is not fun.
PROPOSAL: With our project, the assistive digital piano, we aim to reduce the requirement for professional guidance in piano learning, at least at the beginner level, and aim to develop methods to make piano practicing more fun for younger kids.
INNOVATIONS: Many learning tools exist as toys for kids to play around with. What will set our project apart is that the keyboard will light up with the correct key to press in the sequence of the song in a specific color, and the player will wear gloves with color coordinated fingers that correspond to the key that is supposed to be played. Otherwise, the sound from the key will not play. Another feature that will make our keyboard unique is that we will use the note, dynamic, and rhythm data of the played song to save the player's performance and create computer visualizations to see whether the player is playing too soft, too loud, or off rhythm (when played alongside a built-in metronome).
SENSORS: Color sensor info: https://www.adafruit.com/product/1334, https://www.businesswire.com/news/home/20061017005039/en/Avago-Technologies-Introduces-Industrys-Smallest-Digital-Color. In addition to the color sensor, we will have shock impact sensors measuring key dynamics. Digital pianos often use velocity measurements, finding the speed at which a key is pressed to determine the volume. We will attempt to use both methods.
HARDWARE: To put this piano together, we will have to be cognizant of 1. How each individual key will act as a pressure sensitive switch that produces a signal for the speaker, 2. How each key will use a color sensor to detect which finger is pressing the key, 3. How the circuit will determine, from a file input, which key should be played next by which finger and prevent any other sounds from being played by keys and the song from moving on until those keys have been pressed, and 4. How the data will be recorded.
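The gating logic in item 3 (a key sounds only when it is the next key in the song and is pressed by the correct glove finger) can be sketched as follows; the key names and colors are illustrative:

```python
def play_song(events, expected_sequence):
    """Step through a song: a key sounds, and the song advances, only
    when the pressed key AND the detected glove-finger color match the
    next (key, color) pair from the loaded song file.

    `events` is the stream of (key, color) presses; returns the indices
    of presses that were allowed to sound."""
    accepted, i = [], 0
    for n, event in enumerate(events):
        if i < len(expected_sequence) and event == expected_sequence[i]:
            accepted.append(n)
            i += 1
    return accepted
```

Holding the song position until the correct (key, finger) pair arrives is what enforces both correct notes and correct fingering without any teacher present.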
SOFTWARE: From the data collected by the hardware, we will design software to measure rhythmic accuracy (using a metronome to determine speed and beat accuracy) and errors in key and finger placement, and to evaluate the learner's performance.
Since building a full 88 key digital piano is an extreme endeavor, we will focus on a proof of concept build that consists of just a few fully constructed keys.
|46||Cat Selective Automated Food Dispenser
|Yuchen He TA||other
People with two or more cats often have one cat eating far too much food, leaving the other pet starved. The chubby cat tends to eat all the food before the other one gets to it. It is also tedious to control how much food each cat gets.
Solution and features:
We aim to solve these problems by designing an automated system that controls how much food is given to each cat. The system stops the cats from eating each other's food and controls how much food the cats get and at what times they are fed.
A desired quantity (weight) of food can be entered. The dispenser will have a flap door that opens and closes based on feedback from a weight sensor placed under the area/bowl where the food is dispensed. The times at which food is dispensed when a cat approaches can also be set, to control when each cat eats.
The food container will have IR sensors/LEDs to indicate the remaining food level to the owner.
Methods to detect which cat is approaching:
RFID detection: most existing dispensers/gates use RFID, so we are concerned about intellectual property issues with this approach.
Cat collar color detection: The part of the collar on the cat's back will be distinctly colored and will be detected by a downward-facing camera. The camera will take a picture as the cat approaches the device; a motion sensor can be used to activate the camera.
A sliding door to close the access to the food bowl if the wrong cat tries to eat the food.
All the above functionality can additionally be controlled by the owner of the cats using an attached screen.
The device will be powered through the wall socket and we will implement a voltage regulator to appropriately power our PCB.
|I am a member of the U of I Rowing team, and an issue we often have is an inability to see our output on the water. When we row indoors, the machines give us real-time feedback on our output and let us know if we’re on track. I would like to develop a device that takes the force output of the oar on the water and relays this information to the user's phone.
While devices do exist that track the distance traveled and pace of the boat, these readings are based only on GPS. While useful, this does not solve the problem of finding an individual's output in a boat with more than one person.
The oar passes through an oarlock before going into the water. All of the force that moves the boat forward is applied at the pivot point of the oarlock. I figured that placing some kind of pressure sensor between the oar and the lock would capture the input force.
As for other sensors, our group has discussed including gyroscopes to capture the cadence of the boat and a GPS unit to track distance and speed.
|48||Bike Navigation Assistant
The motivation behind this project is that when riding around campus or in a large, unfamiliar city, it can be hard to keep Google Maps (or a similar app) open while riding, which results in frequent stops. Our project aims to counter this issue.
Our idea is to create a smart bike that helps you navigate to your destination and makes sure you are riding in the correct direction. For this, we would have LEDs on the bike handles that light up based on the direction you need to turn. The rider would have an application open on their phone that integrates Google Maps and sends directions to a microcontroller on the bike via Bluetooth. In addition to this basic feature, we plan to implement a few other features, plus some potential additions time permitting, as part of our project.
-> LED lights on the bike handle that blink in the direction that you need to turn
-> Increased frequency of the blinking light as you get closer to the turn
-> Turn indicators/blinkers that light up automatically when you turn
-> A speedometer on the bike
Additional Features (time permitting)
->Alerts when you get close to another vehicle
->Headlight that turns on automatically when it gets dark and adjusts the brightness according to the surrounding brightness
->Vibration on the bike handle before turns during the day because the LEDs might not get your attention
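The "blink faster as the turn approaches" behavior from the feature list can be sketched as a simple distance-to-interval mapping; all the constants here are placeholder tuning values, not tested on a bike:

```python
def blink_interval(distance_m, far=100.0, near=5.0,
                   max_interval=1.0, min_interval=0.1):
    """Blink faster as the rider approaches the turn: the interval
    shrinks linearly from `max_interval` at `far` meters down to
    `min_interval` at `near` meters. All constants are placeholder
    tuning values."""
    if distance_m >= far:
        return max_interval
    if distance_m <= near:
        return min_interval
    frac = (distance_m - near) / (far - near)
    return min_interval + frac * (max_interval - min_interval)
```

Clamping at both ends keeps the LED steady when the turn is far away and at its fastest, most attention-grabbing rate just before the turn.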
|49||Neonatal Vitals Monitoring and Phototherapy Device
|Jaundice is the number 1 reason newborns are readmitted to hospitals worldwide. 5-10% of newborn mortality worldwide is due to jaundice, and every year over 6 million babies with severe jaundice do not receive adequate treatment [2,1]. Phototherapy is a known treatment for jaundice: it works by emitting blue light over the patient’s skin and, through photo-oxidation and photoisomerization, converting bilirubin molecules to a less toxic, isomeric form. Here we propose building a system that uses phototherapy to treat jaundice and takes vitals important to neonatal health (i.e., temperature, weight, and heart rate), to be applied to healthcare in developing countries.
While simplistic phototherapy systems currently exist in low-resource hospitals, an inexpensive system that treats neonatal jaundice and monitors vital signs simultaneously does not. The added vitals component lets healthcare workers (doctors and nurses) spend more time actually treating patients instead of measuring temperature, heart rate, and weight by hand. This is especially useful for hospitals in developing countries, where nurses and doctors are continuously and severely understaffed.
The inspiration for this project comes from one of our team members visiting a developing nation (Nicaragua), working in the pediatrics wing of rural hospitals (mainly in Somoto and Granada) and analyzing their needs through interviews and observations. Several doctors and nurses said that a device like this would significantly aid them during neonatal care, especially if we are successful in making it cost-efficient as planned.
The device will appear in a box shape with overhead lighting; moreover, the device can be best visualized as a mobile incubator. This device can essentially be broken down into 3 major components:
* Phototherapy device (for jaundice treatment)
* Vitals monitoring system (anklet with pulse oximeter, temperature sensor, weight measuring system)
* Temperature regulation (heated mattress + feedback loop)
For the purposes of creating this device for low-resource settings, the phototherapy device will simply be a panel of blue LEDs (preferably in the 450-465 nm wavelength range). Exposing the patient’s skin to blue light is a simple, cost-friendly treatment for jaundice and a long-proven concept. This panel of LEDs will sit above the infant, facing toward the infant. The efficacy of this device will be validated by comparing the total irradiance (light intensity) of the LED panel to currently existing phototherapy systems [1,2].
Vitals Monitoring System:
The vitals monitoring system consists of a pulse oximeter, a temperature sensor, and a force sensor to monitor the baby’s weight. These sensors will ideally be attached to the baby’s fingers and ankle. We will build the pulse oximeter ourselves: it will measure the reflectance of red and infrared LED light off the infant's skin, and from this signal we will extrapolate the baby’s pulse. The temperature sensors will monitor the baby’s temperature. We will automate the task of weighing the baby via the force sensor, using the trend data to inform doctors when a baby is potentially in critical condition (due to water loss, etc.). Pulse, temperature, and weight data will be displayed on a basic external hex display. These sensors will be controlled and programmed by a standard microcontroller unit.
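Extracting a pulse rate from the reflectance waveform can be prototyped with a simple threshold-and-refractory peak counter; the synthetic waveform and parameters below are illustrative, and a real PPG signal would need band-pass filtering first:

```python
import numpy as np

def heart_rate_bpm(ppg, fs, min_gap_s=0.25):
    """Estimate pulse rate by counting peaks of the reflectance (PPG)
    waveform above its mean, with a refractory gap so one heartbeat is
    not counted twice. A deliberately simple sketch: a real PPG signal
    would need band-pass filtering first."""
    threshold = np.mean(ppg)
    min_gap = int(min_gap_s * fs)
    beats, last = 0, -min_gap
    for i in range(1, len(ppg) - 1):
        is_peak = ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]
        if is_peak and ppg[i] > threshold and i - last >= min_gap:
            beats += 1
            last = i
    return 60.0 * beats * fs / len(ppg)
```

The refractory gap corresponds to a maximum plausible heart rate (240 BPM at 0.25 s), which rejects the double peaks that dicrotic notches can otherwise cause.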
A large part of neonatal incubation involves regulating the temperature of the environment the baby is in. The accepted range of temperatures for an ideal condition for these infants is between 33-37 degrees Celsius. Temperature sensors will be placed near the bottom of the bassinet structure to measure the ambient temperature of the environment. Using a feedback control loop, the heating component within this designed “mattress” will either turn on or off. The design that we’re planning to implement for the actual heating component is a simple system using wire, carbon heater tape, resistors, and a power supply.
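The on/off feedback loop described above amounts to bang-bang control with hysteresis over the 33-37 °C band from our spec. A minimal sketch (illustrative, not firmware):

```python
def heater_command(temp_c, heater_on, low=33.0, high=37.0):
    """Bang-bang heater control with hysteresis: on below the low
    setpoint, off above the high setpoint, and hold the previous
    state in between to avoid rapid relay cycling."""
    if temp_c < low:
        return True   # too cold: heating element on
    if temp_c > high:
        return False  # too warm: heating element off
    return heater_on  # inside the band: keep current state

# Each iteration simulates one ambient-temperature reading
state = False
for reading in (32.0, 34.0, 37.5, 35.0):
    state = heater_command(reading, state)
    print(reading, "->", "ON" if state else "OFF")
```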
|50||Home Energy Administration Tool
Jae Min Song
Jee Haeng Yoo
|How much do you pay for electricity at home? Have you ever thought of saving energy? But do you even know how much energy is consumed by each appliance?
In order to use electricity efficiently, I came up with the idea of developing an administrative tool that not only controls power, but also keeps track of each device's energy consumption. I am trying to implement this on a simplified house model for this project. The tool will enable the user to (1) remotely control power at home through a web application on a smartphone and (2) monitor detailed information (time, amount, and usage pattern) on the energy consumption of electronics and lighting at home through charts, also on the web application. I expect this tool to make people aware of their usage and help them cut down their electricity bills.
I am planning to construct a house model with snap-in receptacles so that I can plug in actual electronic devices that use 120V AC. I would also need components to measure power dissipation of each receptacle.
There will be a custom-designed PCB that runs all components in the house model. The PCB will also include an ESP32, which allows hosting a web server, and a microcontroller to feed information to the web server so that the user can remotely control power and monitor energy consumption.
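To measure per-receptacle consumption, the usual approach is to sample voltage and current together and average their product over a whole number of AC cycles. A minimal sketch of that calculation, with made-up 120 V / 60 Hz numbers:

```python
import math

def real_power(volts, amps):
    """Average real power (W) from simultaneously sampled voltage and
    current taken over a whole number of AC cycles."""
    return sum(v * i for v, i in zip(volts, amps)) / len(volts)

# Made-up samples: one 60 Hz cycle of a 120 Vrms (170 Vpeak) line
# feeding a purely resistive, in-phase 2 A-peak load.
n = 1000
v_s = [170 * math.sin(2 * math.pi * k / n) for k in range(n)]
i_s = [2 * math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(real_power(v_s, i_s)))  # 170 W = Vpeak * Ipeak / 2
```

On the real board the samples would come from an ADC reading a current transformer and a voltage divider, but the arithmetic is the same.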
|51||Rolling Pet Toy
|Yuchen He TA||other
|Owners often feel bad leaving their pets at home for long periods of time because they get bored, may be more destructive to items around the house, and do not get enough exercise throughout the day. This is a problem, especially for puppies, because if they do not get enough exercise they have a harder time with training and obedience.
Our vision for a solution to this problem is a small, durable, sphere-shaped toy to engage the dog throughout the day. There would be a rechargeable battery and power converter for the power supply. The control system would incorporate user settings from an app via Bluetooth relaying which times of the day the owner would like the dog to be active. To make the toy more interactive, custom infrared motion detectors on all sides of the ball would activate movement of the ball when the dog comes near it. The sounds from a custom speaker/amplifier/filter system could also attract the pet to the toy (through doorbell or other tones) at certain times set by the app.
Once the pet is interested, the toy will roll away, avoiding walls and objects using feedback from four mounted IR sensors around the edge of the ball.
The motion of the toy will be controlled by two wheels that will be screwed into the sides of the machined ball. Each wheel attaches through an axle to its own stepper motor, independent of the other side. The wheels will be able to turn freely, but their attachment to the sphere will cause the sphere to roll. This will allow the system to perform turns and correct itself if need be. Stepper motors were chosen so that the angle of rotation can be measured without a position sensor, for use in correction feedback loops. The design of the locomotion of the sphere was based on this project: https://www.youtube.com/watch?v=LmvUkbdXNbM
The rest of the control and power supply components would be attached to the central axis through the middle of the ball, held in a box that swings freely as the ball rolls to act as a counterweight.
As a reach goal, a camera module could be incorporated so the owner can see how often the pet was engaged during the day.
The difficulty in this project comes in the effort to design the controls PCB, and package it in a small and durable way so the toy can fit under furniture. We also must have low power consumption to make the play time last longer so the dog can be exercised all day.
|52||Air steering wheel to control a robot car
|We want to design an "air steering wheel" to control the turning of a robot car. The user wears a specially designed glove on one hand and rotates his/her hand to control the turning of the car. The glove is connected to an encoder fixed at the user's elbow, which barely moves when the hand rotates. The encoder will detect the angle of rotation of the glove and send signals to the microprocessor on the car. The microprocessor will process the signals and control the motors of the car accordingly. The turning angle of the hand is proportional to the turning angle of the car, and the two are highly synchronized.
The electrical system on the user's arm is detached from the rest of the systems.
A potential modification is to add an additional gesture control mode. When the user's palm is flattened out, the car will drive. If the hand clenches into a fist, the car will stop.
Wireless Communications (within 10m):
In order to achieve the wireless communication between the system on the arm and the system on the car, we plan to install:
- An Arduino microprocessor on each system
- One HC-05 Bluetooth Module on the system on the arm, as the Master
- Another HC-05 Bluetooth Module on the system on the car, as the Slave
- We will use batteries to power both systems.
For angle detection, we plan to use:
- A rotary encoder around the elbow to detect the angle of rotation of the hand
- A mechanical system linking the rotation of the glove to the input of the encoder
In order to achieve the design modifications, we plan to use:
- Flex sensors on fingers of the glove
- We plan to build our own robot car.
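As a sketch of the proportional steering described above, the encoder angle can be mapped to a left/right motor speed difference. The base speed, units, and 90-degree limit here are illustrative, not final design values:

```python
def wheel_speeds(angle_deg, base=100, max_angle=90):
    """Map the glove's rotation angle (from the elbow encoder) to
    (left, right) motor speeds. Positive angles slow the right wheel
    to steer right; the angle is clamped to +/- max_angle."""
    a = max(-max_angle, min(max_angle, angle_deg)) / max_angle
    left = base * (1 + a)
    right = base * (1 - a)
    return left, right

print(wheel_speeds(0))    # (100.0, 100.0) - straight
print(wheel_speeds(45))   # (150.0, 50.0)  - right turn
print(wheel_speeds(-90))  # (0.0, 200.0)   - hard left
```

On the car, these values would feed the motor driver after arriving over the HC-05 Bluetooth link.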
|53||Portable Bluetooth Amp for Home Speakers
|Anthony Pham - anpham2
Nicholas Jew - njew2
Austin Palanca - palanca2
The Idea -
Our project looks to create a battery-powered Bluetooth amplifier for regular everyday home speakers. While there are Bluetooth speakers on the market, those who have bookshelf or tower speakers in their home setup can repurpose them on the go by using this device. For speakers with banana plugs, you can easily unplug your speakers and connect this device to use on the go.
The Inspiration -
I thought of this idea when I was going to dance practice and we used an $80 Bluetooth speaker with 2x10 W woofers. Although it is small, the downside is that there is not a big enough cabinet to reproduce low-end sound. Also, most Bluetooth speakers I have listened to from Bose, JBL, and Harman Kardon have very muddy mids or highs because they do not use tweeters.
“Portability” comes to mind when developing this idea. However, for this case, portability just entails that we can move the device without hassle and leave it in place once it is setup. For example, carrying the speaker to the gym, park, or dance practice room.
So I thought: my speakers at home are pretty nice, but I'd have to remove my receiver, which requires me to disconnect my entire setup. Since it is easy to unplug and replug banana plugs, I thought it would be interesting to make a portable amp that supports these connections. Of course, for older speakers without banana plugs, the user can remove the banana plugs and connect the cable directly to the speaker clips.
The device consists of a bluetooth chip, amp/dac chip, charging controller, battery, charging port, banana plug female to pcb, and the chassis.
The bluetooth chip we are looking into is the CC2564MODN or the CC2564MODA from TI. The difference between the two is that one has an integrated chip antenna and the other allows us to use our own antenna, for example a pcb printed antenna. However, this selection requires more research depending on the chassis.
The amp/DAC that we are looking for is one that can operate at 20~40 W @ 8 ohm mono with an I2S (Inter-IC Sound) bus to communicate with the Bluetooth chip. We would use the Bluetooth chip as the master clock and send its clock to the DAC as a slave. We are also looking at a Class D amplifier due to its better efficiency than Class AB amplifiers. Although this causes more distortion, the distortion should fall above audible frequency ranges. We are looking to ask TI if we can get 5 evaluation modules for their next-gen series chip, as they are currently in pre-production, but in the meantime we will purchase something like a TAS5731PHP, as we don't know the turnaround time/cost for a new chip.
The Battery -
The battery needs to have a decent capacity and maximum output power, while also not being too heavy to carry around. Small sealed lead acid batteries (SLA) typically found in uninterrupted power supplies (UPS) would work, although they are usually very heavy for their capacity. Instead, we would use lithium batteries, which have good energy density and output power while also being lightweight. We will also integrate charging of the battery into the device using a lithium charging circuit, like TI’s BQ24616 chip.
The Housing -
For the chassis, we are looking at having a plastic housing for the device to reduce capturing noise. However, we do notice that although class D amps are efficient, there is still heat that needs to be dissipated. Assuming 94% efficiency according to Texas Instruments’ TPA3244 Amp chip, we can expect about 2.4 watts of heat from the chip itself. If using the PCB as a heatsink is not sufficient with passive cooling (slits through the chassis), we can look at creating a metal chassis with an external bluetooth antenna, or have a low rpm fan to move airflow inside of the chassis.
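To sanity-check the thermal estimate above: the heat is just electrical input power minus audio output power. At the quoted ~94% efficiency the exact figure depends on the output power assumed; for example, at the 40 W end of our target range:

```python
def amp_heat_watts(p_out, efficiency):
    """Heat dissipated by the amp stage: electrical input power
    (output divided by efficiency) minus the audio output power."""
    return p_out / efficiency - p_out

# ~94% efficiency is the TPA3244 figure quoted above; 40 W output
# is the top of our 20~40 W target range.
print(round(amp_heat_watts(40, 0.94), 2))  # 2.55 W
```

So the ~2.4 W figure in the text is the right order of magnitude; the chassis venting should be sized for roughly 2-3 W of continuous dissipation.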
|54||LED Rubik's Cube
|Michael Rupp netid: mrupp2
Meghan LeMay netid: mmlemay2
For this project we will be making an LED Rubik's Cube. Each sticker on the outside of the Rubik's Cube will be replaced by an LED to signal the color.
1. Get multicolor LEDs to represent the sides of the Cube.
2. Implement a reset such that the LEDs will revert to a solved Rubik's Cube state.
3. Implementation of six rotation sensors to check for rotations of the Cube
4. Determine an algorithm that can coach the user to solve the LED Rubik's Cube, then implement the algorithm in our Cube.
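To track the cube state in firmware, each face can be stored as a 3x3 grid of color indices; a quarter-turn of a face is then a simple array rotation. This is an illustrative sketch only (a full cube move would also cycle the adjacent edge rows of the four neighboring faces, omitted here):

```python
def rotate_face_cw(face):
    """Rotate one 3x3 face of LED color indices 90 degrees clockwise:
    new[r][c] = old[2-c][r]."""
    return [[face[2 - c][r] for c in range(3)] for r in range(3)]

face = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(rotate_face_cw(face))  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```

The reset feature (goal 2) would just rewrite all six grids with their solved single-color values; the coaching algorithm (goal 4) would search over sequences of such rotations.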
This project should be considered for Senior Design because it is multi-faceted, combining circuit design and a complex puzzle algorithm. There are similar products on the market; however, they are touch-screen based and the cube cannot be manually rotated. The other cubes also do not implement a solving algorithm.
|55||Solar Powered Rechargeable Battery Pack with Controllable Voltage Output
Zhuohang Cheng (zcheng14)
Zihao Zhang (zzhng130)
This project involves a rechargeable battery pack with the following features:
1. Will have a solar panel as a power source, which can constantly provide DC power to the battery pack when the user is doing outdoor activities.
2. Can also be charged by the power grid with 110V.
3. Can discharge AC or DC power with adjustable voltage level, from 5V to 25V DC, and from 20V to 120V AC.
4. Will have battery management system to indicate the voltage level, output current and charge/discharge condition.
Today's rechargeable battery packs are not user-friendly. Most of them can only output one voltage level, so users need to buy an extra voltage converter to fit their device. We would like our battery pack to output an adjustable voltage level that could suit any of our users' devices. Also, it can provide constant power in the daytime without the worry of running out of battery.
For this project, we need to build a storage battery pack using several parallel-connected rechargeable batteries. Converters and inverters will be needed to regulate the input and output voltage. A portable box with surface-mounted solar panel provides the flexibility of outdoor activities. To monitor and control the discharging process, a microcontroller with display screen will be necessary for user interaction.
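For the adjustable DC output, a buck converter is the usual regulating stage; in the ideal continuous-conduction case its duty cycle is just D = Vout/Vin. The 30 V pack voltage below is an assumed number for illustration (the pack must sit above the highest DC setting):

```python
def buck_duty(v_out, v_in):
    """Ideal buck-converter duty cycle D = Vout / Vin (continuous
    conduction, lossless). The pack voltage must exceed the target."""
    if not 0 < v_out <= v_in:
        raise ValueError("buck output must be between 0 and Vin")
    return v_out / v_in

# Example: a hypothetical 30 V pack stepped down to user-selected levels
for target in (5, 12, 25):
    print(target, "V -> D =", round(buck_duty(target, 30.0), 3))
```

The microcontroller would adjust this duty cycle (via PWM to the converter's switch, with feedback from the output) to hit whatever level the user dials in; the AC output would need a separate inverter stage.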
|56||Conductive Fabric Gesture-Control Sleeve
|Project name: Conductive Fabric Gesture-Control Sleeve
Team members: Guneev Lamba (glamba2), Mrunmayi Deshmukh (mdeshmu2), Stephanie Wang (swang166)
Introduction: Our project idea is inspired by a couple things. First is our curiosity in wearable tech. Second is Jacquard, a project by Google and Levi’s (https://atap.google.com/jacquard/).
Our project idea is to integrate the gesture control into a fabric sleeve/wristband. Inspired by Jacquard, this wristband will have a capacitive or resistive touch sensor system designed on fabric using a conductive thread pattern that can detect simple gestures. These would be communicated through an RF module to a receiving end that would be able to perform certain actions depending on the gesture pattern. The goal is to develop a product whose functionalities and ease-of-use can be naturally integrated into daily life.
Target use case: This product can be potentially used by bikers to control simple functions on their smartphones like music and phone calling.
Gestures to implement: 1) Swipe up, 2) Swipe down, 3) Single tap, 4) Double tap
Our touch sensor system will be required to distinguish between the above gestures. We will also implement an activation/deactivation feature, which can be used to turn off the sleeve when not in use.
Components:
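One simple way to separate single taps from double taps is by timestamp spacing: two touches within a short window collapse into one double-tap gesture. The 400 ms window below is a placeholder to be tuned on the real sensor:

```python
def classify_taps(tap_times, double_window=0.4):
    """Collapse a sequence of raw tap timestamps (seconds) into
    'single'/'double' gestures: two taps closer together than the
    window count as one double tap."""
    gestures = []
    i = 0
    while i < len(tap_times):
        is_double = (i + 1 < len(tap_times)
                     and tap_times[i + 1] - tap_times[i] < double_window)
        if is_double:
            gestures.append("double")
            i += 2  # consume both taps of the pair
        else:
            gestures.append("single")
            i += 1
    return gestures

print(classify_taps([0.0, 0.2, 1.5, 3.0, 3.1]))
# ['double', 'single', 'double']
```

Swipes would be handled separately by tracking which thread electrodes fire in sequence; only the tap/double-tap timing logic is sketched here.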
1) Touch sensor system (capacitive or resistive) designed on fabric using conductive thread,
2) RF module - Bluetooth or Wifi,
3) Flex PCB to integrate with the touch sensor system
4) Control module - Microcontroller & PCB
Resources: We have consulted with Skot Wiedmann who has experience with touch sensing systems and can be a great resource. He has also offered to be our mentor for this project. All three team members are EE’s with backgrounds in signal processing and power systems, and have experience with other applications of conductive thread.
|57||Wireless MIDI Controller Glove
|Glove that creates MIDI signals which can be processed by hardware or software to play/modify MIDI music
Uses flex sensors in fingers to add effects on a linear scale
-Bend finger past threshold to trigger effect
-Further bending of fingers will alter the effect linearly
Uses accelerometer to track tilt of hand to adjust other effects
-Could control volume with up/down tilt and pan position with left/right tilt
-Will smooth signal from sensor as well as have a tilt threshold to prevent unintended changes
Sensors will be connected to an MCU that encodes the MIDI signal. A MIDI message is broken into 3 bytes: a status byte that identifies the message type and channel, followed by two data bytes, which correspond to notes and effects.
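For reference, the standard 3-byte channel messages the glove would emit look like this (Note On uses status 0x90, Control Change uses 0xB0; controller 7 is channel volume, which the tilt axis could drive):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message: status byte (0x90 | channel)
    followed by note number and velocity, each 0-127."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def control_change(channel, controller, value):
    """Build a 3-byte Control Change message (status 0xB0 | channel)."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

print(note_on(0, 60, 100).hex())        # '903c64' - middle C, channel 1
print(control_change(0, 7, 64).hex())   # 'b00740' - volume to mid-scale
```

A flex sensor crossing its threshold would emit one Control Change, and further bending would stream updated values for the same controller number.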
Will work with a computer DAW (Digital Audio Workstation) or sequencer by sending MIDI signals over USB or a MIDI cable. We plan on adding Bluetooth integration in order to use the glove wirelessly, if time permits.
Power will come from 5 V USB initially when physically connected, and from a battery if Bluetooth integration can be achieved.
What makes our project unique? There are other motion tracking devices and gloves out there but they don’t use flex sensors to control effects like we intend or accelerometers for tilt control. We also aim to achieve low latency for quicker effects which is not available from any other similar device.
Pablo Corral Vila
Bad posture is a serious issue that is prevalent in society. It feels natural to hunch over when doing work, studying, or even playing games. Along with this, bad posture is exhibited while sitting. Since these bad poses may happen subconsciously, it may be helpful to know when bad posture is being used.
Our project idea is to build a couch/chair that provides orthopedic feedback to the user. We spend a great portion of our day sitting down whether it be for work or leisure time. We figured that we might as well create some sort of system that informs the user of posture habits and potential orthopedic hazards. Our idea is in its preliminary stages, so we are open to further developing our scope for the project. The chair will be embedded with pressure sensors on its seat and its back. The pressure readings from the glutes, lumbar and upper back will be used to evaluate the posture of the user. Along with this, we can mount a range sensor on the back of the chair to measure the distance between the user and the back since slouching tends to increase that distance. Our project is an innovation, and the most similar thing we could find is a product called Axia Smart Chair. It is a chair that provides the user “direct feedback by means of a vibration signal in the seat cushion and a personal App”.
|59||Bluetooth Controlled Ouija Board
|Luke Staunton - stauntn2
Oluwatobi Ijose - okijose2
My partner and I are trying to develop a Ouija board magic trick, where someone can control the position of the Ouija board planchette. The planchette would contain hidden pieces of metal, and the board itself would have an electromagnet inside that is repositioned by two moving rails. This electromagnet would be able to stay hidden while moving the planchette "magically". The electromagnet will need to have its power controlled by a BJT circuit in order to provide the needed current. The rails would each be controlled by linear feedback control systems using IR displacement sensors. These rail control systems and the magnet BJT circuit would be controlled by a microcontroller connected to a Bluetooth receiver (all on a PCB we design).
The receiver (we plan to use the HC-05 Bluetooth Module) would connect to an iPhone app that we would develop, which would send commands to the controller telling it to turn on the magnet and move up, down, left, or right, or to turn off the magnet. As of now we are imagining it being powered from an electrical outlet as opposed to batteries, for simplicity.
We would love to hear feedback on the idea, especially related to Bluetooth experiences people have had. We are currently considering the HC-05 Bluetooth Module as our means of BT control and researching microcontrollers (possibly the MSP430G2 MCU) to use in the final design. But we plan on using an Arduino for the development phase.
|60||Automated Tea Brewing Thermos
|Our project for this class will be an automatic tea-making thermos. This thermos will have two different mechanisms to control tea brewing. One controls the steeping temperature of the tea you would like to brew: using a simple switch or a Bluetooth-connected device, we can choose between two different steeping temperatures, since there are only two major temperature points used in brewing the common teas*. In addition, there will be another control for switching between weak, medium, and strong tea. These features are built referencing a guide for tea*.
To achieve these basic features, an RTD temperature sensor will be added between the outer and inner walls of the thermos to check whether the steeping temperature has been reached. There is no need to regulate this temperature either, since most steeping temperatures come with a wide enough range that thermal-loss considerations are not needed. In addition, a timer will be pre-set, based on user input, for how long to steep the tea to reach the desired strength. To automate this process, these sensor inputs will be conveyed to a motor that will raise and lower the teabag into the water like an anchor. The motor will attach to the teabag string with a clip to accommodate variations in tea bags.
The microcontroller, PCB, and motor will be mounted onto/into the handle to keep water and heat away from the electronics. With this centralized location on the side of the thermos, we plan on building a waterproof enclosure using rubber gaskets and epoxy to allow partial submersion under water for hand washing. This design also takes mobility into consideration and allows you to carry the mug wherever you go. We are going to add a heating unit to the bottom of the thermos, which will contain a pair of nickel-chromium wires serving as heating coils. Their heating output (voltage across the coils) will be determined by the microcontroller.
Once the tea and strength have been selected, by either the switches or your Bluetooth-connected device, the thermos will turn on, heat up to the desired temperature, then shut off its heating coil and begin brewing. Once it has finished brewing, the controller will provide an audible ding for the user. In addition, the user will be able to follow three LED lights to see what stage of the brewing process the tea is in (heating up, steeping, and finished). Lastly, the user can view a temperature readout display to see the current temperature in the thermos. All of this will run on a rechargeable Li-ion battery, which will also be mounted onto the side of the thermos.
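The heat/steep/finish sequence above amounts to a small state machine keyed off the RTD reading and the steep timer. An illustrative sketch (the 85 °C target and 180 s steep time are placeholders standing in for the tea/strength presets):

```python
def brew_step(state, temp_c, elapsed_s, target_c=85.0, steep_s=180):
    """One tick of the brewing state machine.
    States: 'heating' (coil on, bag raised), 'steeping' (coil off,
    bag lowered, timer running), 'finished' (bag raised, ding)."""
    if state == "heating" and temp_c >= target_c:
        return "steeping"   # reached temperature: lower the teabag
    if state == "steeping" and elapsed_s >= steep_s:
        return "finished"   # timer expired: raise the bag, sound ding
    return state

s = "heating"
s = brew_step(s, 70.0, 0)    # still below target -> keep heating
s = brew_step(s, 86.0, 0)    # target reached -> steeping
s = brew_step(s, 84.0, 200)  # steep timer done -> finished
print(s)  # finished
```

Each state also maps directly to one of the three status LEDs described above.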
|61||Beverage Coaster with Sensing Capabilities
|We would like to build a beverage coaster that has weight sensing capabilities and the ability to transmit the data to some sort of central node. The use case for this idea is to transmit the data to a centralized location, allowing restaurant owners/servers to keep track of how much of a drink has been consumed thus far and how much is left to be consumed. This creates opportunities for analytics to be done on this data set for the restaurant owners to learn/function more optimally; whether it be by optimizing the frequency of service or any other avenue.
The challenges for this idea are getting both the sensor and RFID microchip incorporated into a usable coaster, not only in terms of appearance, but also size. Besides the challenge of scale, we need a reliable power source that can power the RFID microchip, weight sensor & logic board.
In terms of functionality, a challenge we may face comes in the form of the data we receive and how to process it. Since various cups have different weights and densities, figuring out a method to accurately detect, through our weight sensor, the volume of liquid (or lack thereof) can be challenging.
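One workable approach is to tare per cup: capture the weight when an empty cup is first set down and the first full reading, then report the remaining fraction relative to those. A minimal sketch with made-up gram values:

```python
def liquid_fraction(reading_g, cup_tare_g, full_mass_g):
    """Estimate the fraction of drink remaining from the coaster's
    current weight reading. cup_tare_g is the empty-cup weight and
    full_mass_g is the first full reading, both captured per cup."""
    full = full_mass_g - cup_tare_g
    if full <= 0:
        return 0.0
    liquid = reading_g - cup_tare_g
    return max(0.0, min(1.0, liquid / full))  # clamp sensor noise

# 300 g empty glass, 650 g when full, 475 g now -> half remaining
print(liquid_fraction(475, 300, 650))  # 0.5
```

This sidesteps cup-to-cup variation: only differences in readings matter, so the absolute cup weight never needs to be known in advance.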
Our baseline expectation of this project will be to have a functional pressure sensor that streams data readings at a reasonable (based on use case) frequency to a hub that allows for some level of analytics/wiser decision making. Assuming we successfully build this out, we would like to add a button feature that can serve as a waiter/waitress caller system.
|62||Autonomous Pothole Detection and Cataloging for Bikes
|Title: Autonomous Pothole Detection and Cataloging for Bikers
Link to original idea thread: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=23005
Potholes are an issue that plagues cities all over the country and around the world. While damaging to cars, potholes can also be particularly dangerous, even fatal, for bikers, and can lead to millions of dollars in lawsuits for a city if not patched.
This product would attach onto a bike's handlebars and utilize both computer vision techniques and accelerometer/ultrasonic sensor data to detect potholes and automatically record their location (via GPS) to a database that can be used by cities to determine problem areas. In addition, there would be a button that a biker could press to add a pothole if it isn't automatically detected. There would also be a local copy of the database on the device, which would enable the device to alert the user if they were approaching an area with reported potholes. An armband that the user wears would issue both a haptic and an audible alert.
Current embedded computer vision techniques have been tested at around 70-80% accuracy in broad daylight conditions, so we cannot entirely rely upon them. However, computer vision does provide a means of cataloging potholes that bikers do not run over, and it can even serve as an early warning system if we are able to detect a pothole early enough. The accelerometer and ultrasonic sensors can serve to catalog potholes at night and hard-to-see potholes during the day, and in general have much better accuracy. In addition, we can try to catalog the potholes based upon their severity, which we would estimate from our sensor data. We may need two MCUs: one dedicated to image processing and another handling everything else.
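The accelerometer path can start as a simple spike detector with a severity bucket; the g-thresholds here are illustrative placeholders that would be tuned against real ride data:

```python
def detect_pothole(z_accel_g, threshold_g=2.5):
    """Flag a pothole strike when the vertical acceleration spike
    exceeds a threshold, and bucket a rough severity from the spike
    size. Returns None for normal riding vibration."""
    spike = abs(z_accel_g)
    if spike < threshold_g:
        return None
    return "minor" if spike < 4.0 else "severe"

print(detect_pothole(1.1))   # None - normal road vibration
print(detect_pothole(3.0))   # minor
print(detect_pothole(5.5))   # severe
```

A confirmed hit would then be stamped with the current GPS fix and written to the local database; the user button would insert an entry the same way.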
There are somewhat similar products out there for cars (no commercial products from what I can tell, only devices created for research), but no such products for bikes and bike paths. A pothole can be much more devastating for a biker than for a vehicle driver, so we believe this is a good problem to focus on.
|63||Educational Development Board for RoboThink
|Project Members are:
Anthony Shvets : shvets2
Zhe Tang: zhetang2
The purpose and focus of this project will be to develop a control board for a flexible robotics building system. The system as a whole would allow young students to easily create their own robotic designs outfitted with servo motors, DC motors, and sensors, while also being easily programmable using color (where unique colors would result in unique actions).
An example I would like to bring up is of a student that builds a robot with DC motors controlling the wheels for movement, a servo motor for a claw at the front of the robot and a few color sensors to detect a drawn path and spots near the path to issue an action. This robot would start traveling straight and its color sensor detects the color blue (let’s say this means turn left) and the robot turns left roughly 90 degrees. Then it continues until the sensor detects orange (let’s say this means close servo) and the robot grips an object in front of it, turns around and then continues traveling until it detects another actionable color or a stop condition, which could be a unique color or combination of colors across multiple color sensors.
The benefit this provides young kids is the chance to work with programmable robots from a very early age, thanks to the easy-to-understand, color-based coding system, and also to be able to adapt those robots to different shapes and functions.
Again, the focus of the project will be on the control board, not the development of the plastics and motors used in the robots, as those are already developed. The control board will be able to control DC motors and servo motors, and will take input from at least 4 color sensors (2 for path detection and ~2 for action detection).
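The color-to-action scheme from the example can be modeled as a lookup table in the control board's firmware. The color names and actions below are illustrative, taken from the example above rather than any fixed RoboThink specification:

```python
# Hypothetical color-to-action table matching the example robot:
# blue turns left, orange closes the claw, red stops.
ACTIONS = {
    "blue":   "turn_left",
    "orange": "close_claw",
    "red":    "stop",
}

def handle_color(color, default="drive_forward"):
    """Look up the action for the color a path sensor reports;
    unknown colors keep the robot driving forward."""
    return ACTIONS.get(color, default)

print(handle_color("blue"))   # turn_left
print(handle_color("green"))  # drive_forward (no mapping defined)
```

On the real board the keys would be quantized RGB readings from the color sensors rather than names, but the dispatch structure is the same, and remapping colors is just editing the table.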
|64||Virtual Grand Piano
|We are proposing to build an electronic system that behaves just like a grand piano without there being any physical object to receive key presses. We are planning to build the entire piano with 88 keys with sustain and touch pedals to authentically reflect the characteristics of a grand piano.
[Idea Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=25719]
To isolate the location and motion characteristics of a key press, we plan to use multiple camera modules facing the player's hands; the player's fingertips would be equipped with reflective material, and accelerometers attached to each finger would provide information about the touch force. The input from each of the camera modules would be processed in real time using an FPGA and relayed to an audio synthesizer that would play the note on a speaker with the appropriate pitch and amplitude. We are planning to build the audio synthesizer to control aspects of the produced sound, but only once we are done implementing the controller module.
We believe this would be a challenging project for senior design due to the complexities involved in processing and isolating each of the user’s finger locations in three dimensions in real-time and incorporating readings from the wireless accelerometers and sustain pedals. We have not encountered a virtual piano implementation similar to ours that uses camera sensors and accelerometers to isolate the user’s hand movements.
|65||Bike Safety Sensor
|One common problem when biking on campus is that you can't always see what is going on behind you. Sometimes it can be dangerous to turn your head around and take your focus off of what is in front of you. We plan on creating a device that can alert you when other bikes or cars are coming up from behind you, and want to pass. There is a product online that can do this with radar, but it is very expensive at around 300 dollars. It could be made cheaper using another type of sensor.
The device will be a belt with either ultrasonic or LiDAR sensors attached to it. The sensors will be used to measure distance and alert the wearer when an object sits in a blind spot or approaches too quickly. The belt will vibrate on the side where the object is, so the person is alerted. This device will probably need to be battery powered, with a long battery life.
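With an ultrasonic sensor, distance comes from the echo's round-trip time, and an alert can fire on either proximity or closing speed. The 2 m and 3 m/s limits below are illustrative tuning values, not final design numbers:

```python
def echo_distance_m(echo_time_s, speed_of_sound=343.0):
    """Distance from an ultrasonic ping: the sound travels out and
    back, so halve the round-trip time (343 m/s at ~20 C)."""
    return speed_of_sound * echo_time_s / 2

def approach_alert(d_now_m, d_prev_m, dt_s, near_m=2.0, fast_mps=3.0):
    """Vibrate when an object is inside the blind-spot range or
    closing faster than fast_mps between two readings."""
    closing_speed = (d_prev_m - d_now_m) / dt_s
    return d_now_m < near_m or closing_speed > fast_mps

print(round(echo_distance_m(0.01), 3))  # 1.715 m for a 10 ms echo
print(approach_alert(5.0, 6.0, 0.2))    # True - closing at 5 m/s
```

The side on which the triggering sensor sits would then select which vibration motor fires.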
Yingquan Yu, NetID: yyu47 (firstname.lastname@example.org)
Huiyuan Liu, NetID: hliu88 (email@example.com)
Introduction and Motives:
Leaving your stuff in public temporarily, but you don’t want other people to look at or touch it? The Alarming Coverage has you covered! Simply cover your stuff, arm your blanket through your phone, and you are ready to leave. If anyone moves anything, it will alert the people nearby with sound and send you a notification through your phone.
Design and component:
The Alarming Coverage consists of two parts: the control board and the blanket itself.
The main functionality of the board is to alert people around it and notify the user when the blanket is moved. The board is attached to the blanket and contains a microcontroller, a Bluetooth module, a buzzer, and a battery, and it collects and processes all the signals from the sensors. People can arm the blanket through buttons on the board or through Bluetooth on the phone.
The main functionality of the blanket is to detect malicious movement by other people. For detection, we may use a combination of (but not limited to) IR proximity sensors, flex sensors, and photosensors. We plan to use flex sensors to detect changes in the shape of the blanket, and IR proximity sensors and photosensors to detect changes in surrounding object distance and light. To make the blanket work more stably and reliably, we may add magnets or suction pads to the edge of the blanket to mitigate environmental effects such as wind or fans.
The blanket can be controlled remotely from a mobile phone through an app, connected by Bluetooth (or potentially through Wi-Fi). Using the app, the user can view the status (the shape) of the blanket based on data from the sensors.
Original Title: Security Blanket
After talking with our TA, we think the current title describes our project more properly. The blanket is mainly used to draw other people's attention when the sensors and controllers on the blanket detect an anomalous move against it or the stuff under it.
|67||Trail Mix Dispenser
|Trail Mix Dispenser
Original thread: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=25864
Sometimes you don't have time to make a snack. Sometimes you just want trail mix.
We would like to build a fully customizable trail mix solution that’s completely controlled through your smartphone. The dispenser will be able to deliver a wide range of customizable and pre-made trail mix recipes in a bag, stapled and ready to go.
Individual ingredients would be stored in separate bins that would be brought to a dispensing position to be poured into a mixing bowl. Once the desired weights and proportions are achieved, the ingredients are then mixed for a more homogeneous mix. We are also considering mixing the ingredients simultaneously to eliminate the need for the mixer.
We would then pull a paper bag from a tray using a vacuum, inflate it, and fill it. We will then fold the top of the bag over and staple the bag to seal it.
This video is the inspiration for filling and sealing the bags: https://www.youtube.com/watch?v=lIwsmAEzgTc&feature=youtu.be
We would have one container for each ingredient, in a dispenser similar to the cereal dispensers in dining halls. We plan on having four such containers in total.
They will look similar to these: https://secure.img2-ag.wfcdn.com/im/37019053/resize-h800%5Ecompr-r85/4588/45889352/Double+2+Container+Cereal+Dispenser.jpg
Each container module would contain a stepper motor that dispenses the ingredient into a bowl fitted with a weight sensor. Once the appropriate weights have been reached, the ingredients will be mixed with some sort of actuator. We will use small fins to achieve smaller dispensing increments. We will also begin by dispensing quickly and slow down as we approach the target weight to minimize error. In addition, we will set design constraints on the error by mass to ensure that the final product is of sufficient quality. Finally, the completed trail mix will be dropped into a bag and sealed by an electric stapler. We could also potentially incorporate weight sensors in the individual container modules and alert the user via the app when they are low on an ingredient.
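The "dispense quickly, then slow down near the target" loop could look like the sketch below. The motor and scale interfaces here are placeholder functions, not a real driver API, and the step sizes, slowdown margin, and tolerance are illustrative numbers.

```python
# Sketch of the dispensing control loop: run the stepper in coarse increments
# while far from the target weight, switch to fine increments near it, and
# stop once the scale reads within tolerance. Parameter values are guesses.

COARSE_STEP = 20          # motor steps per iteration while far from target
FINE_STEP = 2             # motor steps per iteration near the target
SLOWDOWN_MARGIN_G = 10.0  # switch to fine steps within this many grams
TOLERANCE_G = 0.5         # acceptable error by mass

def dispense(target_g, read_weight, step_motor):
    """Dispense until the scale reads target_g within TOLERANCE_G; return the final weight."""
    while True:
        remaining = target_g - read_weight()
        if remaining <= TOLERANCE_G:
            return read_weight()
        steps = FINE_STEP if remaining <= SLOWDOWN_MARGIN_G else COARSE_STEP
        step_motor(steps)
```

Tuning `SLOWDOWN_MARGIN_G` against the mass dispensed per step is what actually bounds the overshoot for heavy ingredients like nuts versus light ones like coconut flakes.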
Overall we would require stepper motors with encoders, vacuum pumps, blower pumps, weight sensors, limit switches, an actuator to mix the ingredients, a microcontroller with wifi connectivity, a stepper motor driver, and a stapler.
|68||Educational Smart Breadboard
Mostafa Elkabir (Elkabir2)
When kids or even college students first learn circuits, they almost always meet breadboards. But anyone who has used a breadboard finds it very hard to debug. There is no good way to check whether wires and gates are working properly, and once there are too many connections on the board - whether physical wires or programmed connections - it is hard to see where they are messed up.
We would like to make debugging easier for educational uses of breadboards, so that students can focus on crucial debugging skills and circuit logic rather than the pain of tracing wires.
One way we could make things easier is by lighting each row of pins at the same voltage with an LED of some color, assigning the same color to any rows connected to each other. This provides visual cues for how the circuit operates; in case a wire is broken, it also lets us see the effect of the break.
The second way to help students is by having each pin display its output value on a small LED illuminating at the bottom of the user breadboard. One could extend this idea so that the right side of a mini-screen prints the logic function of an output pin in terms of the main inputs of the circuit, with labels assigned via user instructions. (An example of a logic function is f = AB + not(BC), where A, B, and C are the main inputs of the circuit and f is the output of the pin we are examining.) This requires testing individual chips based on user-given information about which pins are inputs and which are outputs.
Chip testing is done by placing a relay/switch between each actual pin and the wire/user-side breadboard, with the switches turned off while the processor is testing chips. We then drive all possible configurations of the input pins to generate a truth table for each output pin, which lets the processor write down the logic function of each output pin in terms of the chip's input pins.
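The exhaustive-configuration step could be sketched as below: enumerate every input vector, read the output, and emit a sum-of-products expression with one product term per row where the output is 1. `apply_inputs` stands in for the relay/switch hardware and is a plain function here; the naming convention matches the f = AB + not(BC) example above.

```python
from itertools import product

# Sketch of truth-table-based chip testing: drive every input combination,
# record the output, and build a sum-of-products logic function. The
# hardware interface is replaced by a plain callback for illustration.

def derive_logic_function(names, apply_inputs):
    """Return a sum-of-products expression for one output pin.

    names: labels of the chip's input pins, e.g. ["A", "B"].
    apply_inputs: callback that drives one input vector and reads the output.
    """
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if apply_inputs(bits):  # output is 1 for this input vector
            literals = [n if b else "not(%s)" % n for n, b in zip(names, bits)]
            terms.append("".join(literals))
    return " + ".join(terms) if terms else "0"
```

For an XOR gate this yields "not(A)B + Anot(B)"; a later minimization pass (e.g., Quine-McCluskey) could shorten the expression before it is printed to the mini-screen.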
On the left side of the mini-screen, we print the logic function of each chip in terms of the chip's own inputs, not the main inputs of the circuit. This lets students use the mini-screen to see what the individual gate does regardless of its wiring connections (left side of the screen), and what the gate's output pins should logically evaluate to in terms of the main inputs of the circuit (right side).
Because of the size limitation of the breadboard, we can only print a subset of pins to the mini-screen, so the user has to specify which pin they wish to see.
The third way is to protect students from high-voltage and high-current scenarios that could burn the breadboard or hurt them. This is done with relays that cut off the affected connection and wire of the breadboard in such circumstances.
The fourth way is to alert users, on another mini-screen, when two different voltage sources are connected to the same voltage line. This can be done easily, since the processor has access to the voltage of each pin; when conflicting voltages are detected on the same line, the processor can cut off the connection that ties the two pins together.
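A minimal sketch of this check, assuming the processor already groups pins into connected lines (nets) and can read a voltage for each pin; the net names, readings, and tolerance are illustrative:

```python
# Sketch of the voltage-conflict check: for each connected line, compare the
# voltages detected at its pins and flag the line if they disagree by more
# than a tolerance, indicating two different sources driving the same line.

TOLERANCE_V = 0.2  # measurement tolerance before a mismatch counts as a conflict

def find_conflicts(nets):
    """nets maps a net name to a list of measured pin voltages; return conflicted net names."""
    conflicts = []
    for name, volts in nets.items():
        if max(volts) - min(volts) > TOLERANCE_V:
            conflicts.append(name)
    return conflicts
```

Each name returned would be shown on the alert mini-screen, after which the processor opens the relay for that line as described above.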
The processor will be either an Arduino or a Raspberry Pi, connected to every pin except redundant pins that share the same voltage, so that it receives all relevant information. The connections are made at the bottom of the user breadboard, so the processing unit does not clutter the user interface.
|69||Face Identification Door Lock
There are situations when you are back home from the supermarket with bags in both hands, or holding food so that you cannot free a hand to reach the door key in your pocket, or when you are locked out and helpless. We will develop a facial recognition door lock that frees both of your hands and avoids lock-out situations.
We will design our own PCB that holds an image sensor (OV7725), a microphone, a voltage regulator, a controller and DSP processor (an ATMega328P, or separate microcontroller and DSP chips), and a USB port; the board will output a PWM signal to the motor driver, which drives the motor inside the lock.
We will use a PYNQ board to run the identification model. It takes the DSP output from our PCB and sends the identification result back to the controller.
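The decision step on the PYNQ side could work along these lines. This is a hedged sketch, not the actual model: it assumes the identification model emits a face embedding vector, and that enrollment data, the cosine-similarity metric, and the unlock threshold are all design choices still to be made.

```python
import math

# Hypothetical sketch of the unlock decision: compare a face embedding from
# the model against enrolled residents' embeddings and unlock only when the
# best cosine similarity clears a threshold. All values are illustrative.

UNLOCK_THRESHOLD = 0.8  # assumed similarity cutoff; must be tuned on real data

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def should_unlock(query, enrolled):
    """enrolled maps a resident's name to their stored embedding."""
    best = max(cosine(query, emb) for emb in enrolled.values())
    return best >= UNLOCK_THRESHOLD
```

The threshold trades off false accepts (a stranger unlocking the door) against false rejects (a resident locked out), so it should be chosen conservatively for a door lock.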