|1||Dynamic Legged Robot
|David Hanley||Casey Smith||design_document1.pdf
|We plan to create a dynamic robot with one to two legs stabilized in one or two dimensions in order to demonstrate jumping and forward/backward walking. This project will demonstrate the feasibility of inexpensive walking robots and provide the starting point for a novel quadrupedal robot. We will write a hybrid position-force task space controller for each leg. We will use a modified version of the ODrive open source motor controller to control the torque of the joints. The joints will be driven with high torque off-the-shelf brushless DC motors. We will use high precision magnetic encoders such as the AS5048A to read the angles of each joint. The inverse dynamics calculations and system controller will run on a TI F28335 processor.
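To make the control approach concrete, here is a minimal sketch (Python/NumPy, with made-up link lengths and gains) of a Jacobian-transpose hybrid position-force law for a single two-joint planar leg. This is only an illustration of the math; the actual controller would run on the F28335 and include the full inverse dynamics terms.

```python
import numpy as np

# Assumed link lengths (m) and task-space gains; placeholders, not final values.
L1, L2 = 0.20, 0.20
KP = np.diag([400.0, 400.0])   # task-space stiffness
KD = np.diag([20.0, 20.0])     # task-space damping

def foot_position(q):
    """Planar forward kinematics of the 2-DOF leg (hip angle q[0], knee angle q[1])."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Geometric Jacobian of the foot position with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def joint_torques(q, dq, x_des, dx_des, f_des):
    """Hybrid law: PD on foot position plus a feedforward contact force,
    mapped to joint torques through the Jacobian transpose."""
    x = foot_position(q)
    J = jacobian(q)
    dx = J @ dq
    f = KP @ (x_des - x) + KD @ (dx_des - dx) + f_des
    return J.T @ f

# Example: hold the foot 0.30 m below the hip while pushing down with 30 N.
tau = joint_torques(q=np.array([-0.6, 1.2]), dq=np.zeros(2),
                    x_des=np.array([0.0, -0.30]), dx_des=np.zeros(2),
                    f_des=np.array([0.0, -30.0]))
print(tau)
```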
We feel that this project appropriately brings together knowledge from our previous coursework as well as our extracurricular, research, and professional experiences. It allows each of us to apply our strengths to an exciting and novel project. We plan to use the legs, software, and simulation that we develop in this class to create a fully functional quadruped in the future and release our work so that others can build off of it. This project will be very time intensive, but we are passionate about it and confident that we are up for the challenge.
While dynamically stable quadrupeds exist (Boston Dynamics' Spot Mini, Unitree's Laikago, Ghost Robotics' Vision, etc.), all of these robots use custom motors and/or proprietary control algorithms, which are not conducive to broader legged-robotics development. With a well-documented, affordable quadruped platform, we believe more engineers will be motivated and able to contribute to the development of legged robotics.
More specifics detailed here:
|2||Midi Sequencer with Linear Motorized Potentiometers
|Christopher Horn||Casey Smith||other4.pdf
|Nathan Zychal (nzycha2)
Devin Alexander (dbalexa2)
Martin Lamping (mdl3)
Skot Wiedmann offered to mentor us.
A sequencer that provides musicians with a new, fast way to prototype melodies and chords. User inputs will control the positions of sixteen potentiometers. Each potentiometer's position and voltage (read by an ADC) correspond to a frequency (output as MIDI data). Positions will be quantized so that each potentiometer physically moves to the position of the nearest frequency corresponding to a note. Another quantization parameter could allow for the selection of notes in a certain key (e.g., C Major, F Minor, Chromatic, etc.). These quantization parameters give additional user feedback to compose the melody.
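As an illustration of the quantization step, here is a minimal Python sketch that snaps a fader's ADC reading to the nearest note in a selected key. The ADC resolution, playable note range, and scale tables are assumptions for illustration, not final design values.

```python
# Pitch classes for a few selectable keys (semitone offsets within an octave).
SCALES = {
    "C Major":   [0, 2, 4, 5, 7, 9, 11],
    "F Minor":   [5, 7, 8, 10, 0, 1, 3],
    "Chromatic": list(range(12)),
}

ADC_MAX = 1023                 # assumed 10-bit ADC
NOTE_LOW, NOTE_HIGH = 36, 84   # assumed playable MIDI range (C2..C6)

def quantize(adc_value, scale="C Major"):
    """Map a raw fader reading to the nearest allowed MIDI note in the key."""
    raw = NOTE_LOW + (adc_value / ADC_MAX) * (NOTE_HIGH - NOTE_LOW)
    allowed = SCALES[scale]
    candidates = [n for n in range(NOTE_LOW, NOTE_HIGH + 1) if n % 12 in allowed]
    return min(candidates, key=lambda n: abs(n - raw))

print(quantize(512))             # middle of the fader's travel
print(quantize(512, "F Minor"))
```

The same lookup can be run in reverse to compute the fader position the motor should seek once a note has been quantized.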
Our project is an innovation to a pre-existing idea. We will add motorized potentiometers to control sequences of notes that can be played either sequentially or together as chords. Currently the market does not offer a similarly configured device.
The project will include, but is not limited to, 16 motorized linear potentiometers. Rotary encoders and push buttons will set initial pitches, set root notes of a key, and set the scale. Relevant setting information will be shown on an LCD display. Each motor will be paired with a motor controller (dual H-bridge design). The sequencer will need a specific supply voltage (depending on the logic); additionally, each motor will need to be supplied with 12-15 V.
Sound will be output when the MIDI output of the sequencer is fed to either a hardware synthesizer's MIDI input or a software synthesizer in a DAW (Digital Audio Workstation). The clock frequency of the sequencer could be set locally or by an external device. When not generating its own clock signal, the sequencer would have to be synced with any external devices or DAWs via a MIDI input to maintain the same tempo and avoid synchronization issues.
|3||Standalone Steering Wheel for Solar Racing Vehicle
|Christopher Horn||Jing Jiang||proposal1.docx
|Illini Solar Car|
Illini Solar Car (ISC) annually competes in week-long endurance races, where we race on national highways for about 1800 miles in a caravan with a lead vehicle and a chase vehicle. These competitions require a rapid design and testing cycle, which necessitates a more barebones design with a mix of features that cannot realistically be tested completely before a race. Safely and effectively racing a solar vehicle requires the driver to have immediate access to not only detailed vehicle system information, but also the right information at the right time. In the past, when this information has been lacking, we have incurred large delays during competition as the driver could not safely continue without access to vehicle performance-related information. The goal of this project is to eliminate those unnecessary stops by providing the driver with all potentially needed information so they can continue to a scheduled stop.
Our electrical system is an 8-part distributed system, and the current steering wheel is a hardware-controlled slave that only displays basic information to the driver (speed, battery temp., total current draw). Our project is a complete redesign of both steering wheel hardware and software as a completely independent vehicle system capable of displaying detailed, customized information to the driver, as well as functioning as a diagnostic tool that gives the driver more immediate situational awareness of potential malfunctions in the vehicle systems. This should result in a more useful steering wheel and will also decouple its development from the main vehicle computer, allowing it to be updated more easily and safely.
The hardware portion of this project will have a redesigned steering wheel PCB that can interface with the vehicle CANBUS system via our custom CAN API. We will implement an LED display with dynamic display of real-time system data for the driver. This screen will be on a separate PCB to allow it to be replaced separately as race environments tend to significantly reduce screen lifespan. We will also implement paddle controls for the first time, which will require picking and characterizing an appropriate sensor, and implementing it into our control scheme. This will require extensive planning and coordination with the ISC Mechanical Team to ensure all hardware can work within the physical constraints of the steering wheel.
The software portion of this project will require an overhaul of the vehicle firmware setup. On the steering wheel, we will need to implement drivers for the new sensors, create a display driver including a navigable menu and pop-up capability, and design and implement a standard for the driver to trigger actions via the steering wheel. In addition, the main computer's firmware will need to be configured to receive steering input via CAN. Because the steering wheel is removable, it is important that the main computer knows what to do when the steering wheel is removed.
|4||Motorized Imaging System for Plant Root Research
|Kyle Michal||Jing Jiang||design_document1.pdf
|This proposal covers a motorized imaging system that takes photos of a tall plant's roots by transporting a camera down a standard, transparent observation tube into the soil, taking multiple pictures from the bottom up, and outputting a panoramic image of the entire root system, which is used for scientific research in agriculture. The imager device is composed of a base station on the ground resembling a hoist and a suspended camera placed into the observation tube; the camera itself has motors that center it laterally in the tube so it always takes images facing up. A central control server controls, manages, and collects images from a fleet of imagers, and also presents a GUI to the user with live progress and diagnostics data from each imager.
This project is done in collaboration with the College of ACES and SoyFACE Farm (the collaboration has been confirmed). The SoyFACE Farm contains a corn research facility, where an observation tube going 5 ft deep is installed next to each of the more than 1000 corn plants. Each week, researchers collect a panoramic image of the roots of each plant to assess its health. The imaging process currently uses a bulky camera mounted on a 5 ft-long rigid stick. The operator mounts the base of the stick on the observation tube at a fixed mounting point and inserts the stick deep into the tube. The camera is connected to an equally bulky control box consisting of a laptop, a large car battery, and the camera's control circuitry crammed inside a Pelican case. The camera depth can be read from a ruler on the stick, which the operator needs to enter into the laptop before invoking the "Start" command. The laptop then verbally instructs the operator to pull the camera up centimeter by centimeter at a set interval (usually 1 second), taking a picture at each instruction until the camera is completely out. Any non-compliance with the verbal instructions will ruin the image set and require a restart. The set of images is then fed into an external program to be stitched into a panoramic image. The same time-consuming and strength-demanding exercise is carried out for each of the 1000+ corn plants each week, and the research group wants an automated solution.
## Overview of Improvements
The solution calls for building a system that is able to traverse down the tube, taking all the required images and stitching the images without human intervention except for mounting it on observation tubes. Furthermore, the system should be inexpensive and scalable such that multiple imaging appliances can operate at the same time in different tubes, while centrally managed by an operator. This also requires the solution to be cost-effective, small and easy to mass-produce.
The larger system can be split into two components, a Management Platform (MP) and an Imaging Appliance (IA). The IA can be further split into the Camera Assembly and the Base Station. The following discussion about the components is centered around the block diagram.
The main logic of the IA runs on an ESP32 SoC inside the camera assembly. It accepts control signals from the operator via the MP, through an MQTT connection over its built-in Wi-Fi. It also has a serial (RS-485) link to the control board in the Base Station, in order to receive battery-condition data and transmit signals to control the hoist motor, which in turn sets the physical depth of the camera in the tube. The Camera Assembly also has an alignment stepper motor with wheels that centers the camera view in the tube, since the camera could have shifted during the up/down motion. This shift is sensed by the built-in gyroscope and corrected by stepping the alignment motor. The ESP32 SoC is connected to an OV2640 camera that captures a 2-megapixel image at each step. It is fixed-focus since the distance from the camera to the plant root is known. When imaging, the camera first travels up by a fixed distance (~2 cm) so that the view overlaps with the previous one, then takes an image. It then compresses the image into JPEG and transmits it to the MP.
The Base Station of the IA is placed above ground, preferably mounted on top of the observation tube. It contains a battery to power the entire IA and a stepper motor that hoists the camera assembly in the tube. The camera assembly sends control signals in terms of distance to move, and the ATmega328 MCU in the Base Station translates them into angular motion before commanding the stepper motor. It also has an endstop switch to indicate the home (Z=0) point. The MCU is also connected to a battery charging/monitoring IC that can charge the battery and report the battery level to the camera assembly, so that the assembly can home itself and refuse to image when the battery level drops too low.
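To make the depth commands concrete, here is a short sketch of the distance-to-steps conversion the Base Station MCU would perform. The spool diameter, steps per revolution, and microstepping factor below are assumptions, not final mechanical parameters.

```python
import math

SPOOL_DIAMETER_MM = 30.0   # assumed hoist spool diameter
STEPS_PER_REV = 200        # typical 1.8-degree stepper
MICROSTEPS = 16            # assumed driver microstepping setting

def mm_to_steps(distance_mm):
    """Convert a requested vertical travel of the camera into motor steps."""
    mm_per_rev = math.pi * SPOOL_DIAMETER_MM
    return round(distance_mm / mm_per_rev * STEPS_PER_REV * MICROSTEPS)

# One ~2 cm imaging increment:
print(mm_to_steps(20))   # ~679 steps with the values assumed above
```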
The Management Platform (MP) runs on a physical machine of any platform (such as an x86 Linux server) and optionally also acts as the Wi-Fi AP for each IA to connect to. The heart of the MP is an MQTT broker that collects telemetry and images from, and emits control signals to, one or more IAs. Upon receiving images, it processes the image set and stores a panoramic image in its image storage. Upon receiving telemetry, it stores the telemetry in a volatile database for tracking. It exposes a Web UI to frontend users, so that users can view and download the images as well as monitor and control the IAs. In use, the operator enters the name of the image (matching the current date and the ID of the observation tube) and presses "Start", starting the automatic imaging sequence of the selected IA. The operator then monitors the Z depth of the camera in real time as an indicator of imaging progress. When done, the operator may download the already-stitched panoramic image.
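A minimal sketch of the MP's MQTT side using Paho-MQTT (1.x-style callbacks): it subscribes to per-IA telemetry and image topics and publishes a start command. The topic names and payload conventions here are placeholders, not the final protocol.

```python
import json
import os
import paho.mqtt.client as mqtt

BROKER = "localhost"   # MP host running the Eclipse Mosquitto broker
os.makedirs("images", exist_ok=True)

def on_connect(client, userdata, flags, rc):
    client.subscribe("ia/+/telemetry")   # depth, battery, alignment, ...
    client.subscribe("ia/+/image")       # JPEG slices from the camera

def on_message(client, userdata, msg):
    ia_id = msg.topic.split("/")[1]
    if msg.topic.endswith("telemetry"):
        data = json.loads(msg.payload)
        print(f"IA {ia_id}: depth={data.get('z_cm')} cm, batt={data.get('batt_pct')}%")
    else:
        # Store the slice; a separate worker would stitch the panorama later.
        with open(f"images/{ia_id}_{userdata['count']:04d}.jpg", "wb") as f:
            f.write(msg.payload)
        userdata["count"] += 1

client = mqtt.Client(userdata={"count": 0})
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)

# Kick off an imaging run on IA #3, named by tube ID and date.
client.publish("ia/3/cmd", json.dumps({"action": "start", "name": "tube042_2019-02-14"}))
client.loop_forever()
```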
## Originality and Use of External Solutions
We propose that all mechanical parts of the base station, the casing of the camera, and the motion mechanisms will be custom designed. The circuit boards of the entire imager will be custom made and assembled. According to the researchers, there is no known off-the-shelf solution to this imaging problem and no alternative to the original camera-on-stick approach.
Of course, considering the complexity of the circuitry and the infeasibility of assembly in the lab environment (soldering of a CMOS sensor, etc.), the supporting circuitry of the camera and the accelerometer will be purchased as modules. The controlling software relies on several open-source projects used as libraries to provide APIs to established protocols, including Eclipse Mosquitto for the MQTT broker, Paho-MQTT for the MQTT client implementation, and Flask for the HTTP framework. The main control logic and camera-handling code running on the ESP32 SoC uses Espressif's ESP-IDF SDK/HAL suite, and, if necessary, the motor and battery manager board uses the Arduino SDK for its convenient stepper implementation. The interface of the OV2640 to the ESP32 is known, and a library will be used wherever appropriate.
## Criterion for Success
The overall effectiveness of the project can be assessed in three aspects: Functionality, Repeatability and Effectiveness.
Functionality can be measured by having an imager device take a panoramic picture from a real plant, and checking the following. First, a valid panorama should be returned, and each image comprising the panorama must not be yawed more than +/-15 degrees from each other to certify the lateral motion compensation. Second, the central management system should report real-time progress from the imager at all times.
Repeatability can be measured by comparing two consecutive images taken from the same plant, and there should not be significant differences in the geometry of the images and features.
Effectiveness can be measured by having the entire imager cost less than $200, without a complex manufacturing procedure that consumes more than 2 man-hours in assembly.
Success of the project can be certified if the above criteria are met.
|Kyle Michal||Michael Oelze||design_document1.pdf
|Team Members: Rushik Desai (rhdesai3), Benjamin Du (bldu2), Kyle Rogers (krroger2)
We will be working on the Passive Radar project idea proposed by Professor Levchenko.
General Description: Radar technology has existed for many decades. Primarily, radars have been implemented as active transceiver systems. Passive radars are a supplement to active radars; their benefits are that they do not transmit and that they are low-powered devices. Current commercial passive radar applications and services are expensive, while hobbyist designs vary widely in cost and performance.
Solution: We propose to create an affordable (<$100) and accurate (<10m), community-based, passive radar system consisting of a network of 4+ receivers connected to a central server.
The complexity of this project can be broken down into three main challenges:
1. Creating a valid time stamp for a correctly received transponder signal.
2. The integration of different interfaces and systems: power over ethernet, RF front end, GPS.
3. Network communication between the receivers and a central server, which uses trilateration in order to compute the aircraft’s position.
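For challenge 3, here is a minimal 2-D sketch of the position solve the central server could perform from the receivers' timestamps (using time-of-arrival differences relative to one reference receiver). The receiver coordinates and timestamps below are made-up test values, not part of our design.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Assumed receiver positions (m) in a local east/north frame.
receivers = np.array([[0.0, 0.0], [8000.0, 0.0], [0.0, 9000.0], [7000.0, 8000.0]])

def residuals(pos, toas):
    """Difference between measured and predicted arrival-time differences
    (all relative to receiver 0, so the unknown transmit time cancels out)."""
    dists = np.linalg.norm(receivers - pos, axis=1)
    predicted = (dists - dists[0]) / C
    measured = toas - toas[0]
    return (predicted - measured)[1:]

def locate(toas, guess=(3000.0, 3000.0)):
    return least_squares(residuals, guess, args=(np.asarray(toas),)).x

# Synthetic test: aircraft at (5000, 6000) m.
true_pos = np.array([5000.0, 6000.0])
toas = np.linalg.norm(receivers - true_pos, axis=1) / C
print(locate(toas))   # should recover roughly (5000, 6000)
```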
RF Front End (Receives transponder signal, decodes it, converts it to digital data)
GPS (GPSDO for time granularity/accuracy, time stamp generator)
Microcontroller (processes GPS time data and verifies transponder data before sending it to the central server)
POE (provides power and allows for central server communication)
Network communication (uses a hosted website to compute position as well as display data)
More specifications/details about the project can be found in our original post:
|6||Automated Specialized Coffee Machine
Sachin Kasyap Parsa
|Channing Philbrick||Michael Oelze||design_document2.pdf
Many individuals prefer specialized coffee, such as French press, Aeropress, or pour over. We propose to automate Aeropress brewing to reduce the effort involved in its preparation.
# Solution Overview
We propose making an automated Aeropress coffee maker, which will be done via three subsystems: boiling the water, grinding the beans, and extracting the coffee.
# Temperature-Controlled Water Boiler Subsystem
To extract the coffee we will need hot (or boiling) water. We will create a variable temperature control system that allows the user to define a temperature between 175 °F and 212 °F in 5-degree increments, set with digital logic controlled by a knob and measured using a food-grade digital thermometer. Controls will shut the system off when the temperature reaches the threshold.
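A minimal sketch of the setpoint and shutoff logic, assuming hypothetical read_temperature_f() and set_heater() driver functions; the hysteresis band is an assumption. It snaps the knob selection to the 5-degree grid and holds the water near the setpoint.

```python
import time

T_MIN, T_MAX, STEP = 175, 212, 5   # user-selectable range, degrees F
HYSTERESIS = 2                     # assumed on/off band, degrees F

def snap_setpoint(requested_f):
    """Clamp the knob selection to the allowed range and 5-degree increments.
    (The 212 F boiling endpoint would be handled as a special case.)"""
    clamped = max(T_MIN, min(T_MAX, requested_f))
    return T_MIN + STEP * round((clamped - T_MIN) / STEP)

def control_loop(setpoint_f, read_temperature_f, set_heater):
    """Bang-bang control: heat until the setpoint is reached, then hold."""
    while True:
        temp = read_temperature_f()
        if temp >= setpoint_f:
            set_heater(False)                  # shutoff threshold reached
        elif temp < setpoint_f - HYSTERESIS:
            set_heater(True)
        time.sleep(0.5)

print(snap_setpoint(183))   # -> 185
```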
# Coffee Beans Grinder Subsystem
To grind the coffee we will use a motorized hand grinder, which will apply torque at the handle. Prior to usage, the user will insert their desired amount of coffee beans into the grinder, to be ground when the process is started.
# Hot Water Extraction Subsystem
The coffee grounds and hot water are to be added inside the Aeropress and extracted. This subsystem will have a user-defined press time to help them get their ideal cup of coffee. There will be a pressure sensor.
# Criterion for Success
Our solution can accurately produce a cup of coffee, which has been heated up and processed for the defined temperature and time. As a reach goal, we will add functionality to produce different sizes of cups or add a voice interface (e.g., Alexa or Google Assistant).
While there are many existing coffee machines on the market, our solution will be focused on making high-quality Aeropress coffee. We seek to create a low-cost product that can produce coffee with the desired settings.
|Mengze Sha||Arne Fliflet||design_document1.pdf
|Problem: During the summer, when we have a long drive or go for a picnic, we usually bring a box full of ice and put our drinks in it. The problem is that the ice will not last long in the hot summer, especially under the sun, and when we go for a picnic we can't easily find a gas station to get more ice.
Solution Overview: We want to build a solar-powered cooler so our drinks stay cool much longer. The big hot summer sun will not be a disadvantage to us: our drinks will not get warm, but will actually be kept cool for a longer duration of time.
Battery to store excess power and prolong device life
The solar panel will be the main power source to the system and it will charge the battery. Both battery and solar panel can be used to power the system.
Voltage Regulator to supply constant voltage and current to the thermoelectric cooler
Charge controller/microcontroller to protect the battery from overcharging and prolong battery life; also used to collect data from the temperature sensor and display the temperature on our 9-segment display.
To control the temperature, we decided to use a thermoelectric cooling system over a compressor-based cooling system, since the cooler will be moved very often, and a thermoelectric cooling system has advantages such as small size, no moving parts or circulating liquid, very long life, and invulnerability to leaks.
The temperature sensor will be used to detect the internal temperature of the cooler.
Temperature Display to show the current temperature within the cooler.
Criterion for Success
The cooler will have to be able to safely maintain a temperature 20 °C below ambient and keep food and drinks fresh (target under 10 °C). The solar panel must be able to charge the battery safely. The system should be able to simultaneously charge and cool the space. When solar power is not available, the system can be powered by a (non-toxic) battery. The overall rig should be light enough to carry to a picnic.
|8||Wirelessly Synchronizing LED Mickey Mouse Ears
|Hershel Rege||Casey Smith||design_document1.pdf
We want to create a set of affordable and entertaining light up Mickey Mouse ears that families can use to create their own mini light show. Users will be able to choose from a variety of colors and light patterns to customize their experience. Disney has incorporated a similar concept using Glow with the Show/Made with Magic ears that interact with the nightly performances in several of the parks. We feel that we can explore this concept from a new angle. Within the parks, the ears are controlled by IR emitters. We would like to use the design as an opportunity to learn about wireless communication systems, specifically a wireless ad hoc network, as well as basic mobile app design.
Our solution is to create a design for LED Mickey Mouse ears that can be controlled and synchronized within a group. The ears, lined with lights, will be mounted on a headband and contain the necessary circuitry. This will include a WiFi component for communication between the ears and an app. The app, when used on a cell phone, will allow the user to control the color of the lights, or choose from a set of lighting patterns and effects. Overall, the solution is intended to make a family's experience at a Disney Park more enjoyable, or recreate that magic anywhere.
Hardware – The first area of focus for the project is the thing people will notice first, the lights, as well as how to power them. We plan to wrap each ear in strips of LEDs like the ALITOVE WS2812B. We estimate that a headband with four-inch-diameter ears will need about two feet of light strips, costing around fifteen dollars per headband in lights. These lights offer a high density of LEDs at low power consumption to provide an aesthetically pleasing experience without draining batteries too fast. On that note, the lights and components in the headband will be powered by AA batteries hidden in the ears along with the rest of the circuitry. The ears will be made of a frosted plastic that the lights will illuminate from the inside; the plastic will also be used to mount components like the battery pack and controller board.
Mobile App – We intend to design a simple Android app that is capable of changing the lights. Users will be able to choose from a variety of solid colors and preset lighting effects such as a rainbow gradient or rippling colors. It will be possible to control a single headband, or unite several headbands within a group. The leader of the group will be able to add and delete member headbands, and control all of the headbands within the group, provided that they are in range of the controller.
Controller and network – Each set of ears will contain a Wifi chip capable of both transmitting and receiving. When they are within range, they will be able to form a wireless ad hoc network with each other and the controlling cell phone. The payload of the packets will contain data that specifies the colors and patterns for the desired lighting effects, which can be done because the LEDs are individually addressable. The phone will communicate with the master headband, which will then relay the information to the other headbands within the group so that they are all synchronized.
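A minimal sketch of what the lighting-command payload relayed between headbands might look like, using a fixed-size packed structure. The field layout and pattern IDs are assumptions for illustration, not a finalized packet format.

```python
import struct

# <group id, sequence number, pattern id, R, G, B, brightness>
PACKET_FMT = "<HIBBBBB"   # little-endian: uint16, uint32, then 5 x uint8

PATTERNS = {"solid": 0, "rainbow": 1, "ripple": 2}

def make_packet(group_id, seq, pattern, r, g, b, brightness=255):
    return struct.pack(PACKET_FMT, group_id, seq, PATTERNS[pattern], r, g, b, brightness)

def parse_packet(data):
    group_id, seq, pattern, r, g, b, brightness = struct.unpack(PACKET_FMT, data)
    return {"group": group_id, "seq": seq, "pattern": pattern,
            "color": (r, g, b), "brightness": brightness}

# The phone sends this to the master headband, which rebroadcasts it unchanged;
# the sequence number lets member headbands drop duplicates they already applied.
pkt = make_packet(group_id=7, seq=42, pattern="ripple", r=255, g=0, b=128)
print(len(pkt), parse_packet(pkt))
```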
Criterion for Success:
A successful design would be one that is both functional and comfortable. The headband should be lightweight and the weight should be balanced between both ears. All of the circuitry will be contained within the ears, so no extra wires or components are required. It should be waterproof to some extent, because it rains often at some of the Disney parks. The ears should also be controllable via a mobile app and form their own closed wifi network.
Team members: Ian Napp (iannapp2) and Kaitlin Vlasaty (vlasaty2)
|Anthony Caton||Michael Oelze||design_document2.pdf
|Team Members: Daniel Kalinin (dak2), Jacob Taylor (jataylo3), Daniel Zhang (dzhang54)
Ryan Corey’s research group does research where sounds are played to a human test subject from a wide variety of angles, typically 5 degrees apart. The lab they currently use is quite small, so using a moving speaker is not possible. It is also undesirable for the human test subject to move during the tests.
Create a turntable that a human can stand on and that can be rotated using a computer. The user will use the computer to specify when each rotation should occur and how far it should go. The software should allow the research group to have the rotations occur at specified times. The turntable should be silent when not moving so as not to interfere with the audio testing. The turntable will be able to hold up to 300 pounds.
The turntable will take in 120 VAC as its power source. This voltage will be converted to power a microcontroller and a stepper motor. The microcontroller will expose a software interface that provides the control system and a place to input commands specifying when to spin the turntable and how far each rotation should be.
The motor will be mounted laterally to the input shaft, as the motor itself should never directly bear the load from the human. It will be driven by a sprocket (think of how the pedals of a bicycle drive the rear wheel). If the shaft is well balanced and rides smoothly on bearings, a weak motor can rotate the table with ease. Changing the gear ratio between the motor and the shaft allows torque and angle accuracy to be traded off.
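A short worked example of that trade-off, assuming a 1.8-degree stepper, 1/16 microstepping, and a hypothetical 15:1 sprocket reduction between the motor and the table shaft:

```python
STEP_ANGLE_DEG = 1.8   # full-step angle of the motor
MICROSTEPS = 16        # assumed driver setting
GEAR_RATIO = 15.0      # assumed motor revolutions per table revolution

def table_resolution_deg():
    """Smallest table rotation per microstep."""
    return STEP_ANGLE_DEG / MICROSTEPS / GEAR_RATIO

def steps_for_rotation(table_deg):
    """Microsteps the driver must issue for a requested table rotation."""
    return round(table_deg / table_resolution_deg())

print(table_resolution_deg())    # 0.0075 degrees per microstep
print(steps_for_rotation(5.0))   # 667 microsteps for a 5-degree move
```

The same ratio multiplies the motor torque seen at the table shaft (minus transmission losses), which is why a relatively weak motor suffices.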
The turntable mount will be made from plywood, as it should hold the person safely and absorb sound reflections. The turntable base will be made from plywood as well, with an intermediate metal mount to the input shaft. The top will be carpeted for aesthetic and acoustic reasons.
We will build a UI for the turntable that allows it to spin clockwise and counter-clockwise, as well as start and stop on command. On the interface, we also want the user to be able to program a rotation routine with customized steps. We will also incorporate Alexa into the project so that the user can rotate the turntable with voice commands.
Power Unit: Will take in standard 120 V AC from a wall plug and convert it into the voltages needed by the other subsystems. We will use a transformer to step the voltage down closer to the motor and microcontroller voltages, followed by a full-wave rectifier with buck converters to produce the DC voltages for the motor and microcontroller.
Microcontroller Unit: Will probably settle on an ATmega328P (the same one the Arduino Uno uses, so it has a lot of support). Will need a way to program the controller, probably a USB-to-TTL programmer PCB or some other programmer. Building one ourselves is plausible but nonessential.
Motor Driver: Use a ULN2003 IC, which is an array of NPN Darlington transistors, to drive the motor coils from the microcontroller outputs.
|10||Solar-Powered Streetlights with Doppler Control
|Anthony Caton||Arne Fliflet||design_document1.pdf
|Group Members: Corey Weil (cweil2), Josh Song (jssong3), Brian Keegan (bekeega2)
According to the Department of Energy, there are an estimated 26.5 million streetlights in the United States, consuming the amount of electricity equivalent to 1.9 million households and generating greenhouse gas emissions equivalent to that of 2.6 million cars. While approximately half of these are owned and operated by the public sector, nearly all are paid for with public dollars, to the tune of more than $2 billion in annual energy costs alone.
The hours during which motor vehicles are required to have their headlights on (called lighting-up time, 30 minutes after sunset to 30 minutes before sunrise) are the peak hours for streetlights, but as road traffic slows, there is no need for streetlights to operate at their peak intensity. In many cases, there isn't a need for every streetlight along a specific stretch of road to be turned on.
To reduce the costs associated with street lights we propose:
Building a down-scaled solar-powered street lamp with built-in battery storage
Controlling the light intensity based on Doppler radar vehicle detection
Solar-powered streetlights reduce both the initial setup cost and the cost over time. This makes for an economic advantage while also preventing safety issues associated with blackouts. Vehicle/pedestrian detection from Doppler radar or LIDAR speed detection (as used by law enforcement) will control intensity. Lights can either remain at a very low base intensity or be off until a vehicle is detected; the low-intensity option provides more safety for pedestrians, who still benefit from having all streetlights on.
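A minimal sketch of how detection events could drive the lamp intensity, assuming a hypothetical radar_detects() input and a PWM-style set_intensity() output; the hold time and intensity levels are placeholders, not chosen values.

```python
import time

BASE_INTENSITY = 0.10   # dim standby level (0 = off, 1 = full)
FULL_INTENSITY = 1.00
HOLD_SECONDS = 20       # assumed time to stay bright after the last detection

def run(radar_detects, set_intensity):
    """Raise the lamp to full brightness on detection, fall back after a hold time."""
    last_detection = float("-inf")
    while True:
        if radar_detects():
            last_detection = time.monotonic()
        if time.monotonic() - last_detection < HOLD_SECONDS:
            set_intensity(FULL_INTENSITY)
        else:
            set_intensity(BASE_INTENSITY)
        time.sleep(0.1)
```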
For more discussion between our team and Anthony Caton, please see: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30348
Doppler radar or LIDAR sensors to detect oncoming vehicles or pedestrians
DC-DC conversion from solar to battery storage
Criterion for Success:
Our solar-powered streetlight will have a battery storage system and will be able to detect oncoming vehicles in order to control light intensity.
|11||Modules for Safe Power Distribution in an Electric Vehicle
|Christopher Horn||Casey Smith||design_document1.pdf
|Illini Solar Car|
|Background: The solar electric cars built by the Illini Solar Car team compete across continents and require stability as well as efficiency. The Power Distribution System (PDS) on these electric cars controls the connections to the high voltage (~100V) bus and includes the low voltage (12V) bus, but does not control the low voltage connections at present.
Motivation/Purpose: The idea is to significantly increase the efficiency of the car by being able to switch each connection on the +12V bus on or off individually in response to signals from the driver or other PCBs on the car. The main work for this project is to 1) design a new 12V bus board with a sufficient number of appropriately sized connectors, 2) implement the switching and communication hardware on the 12V bus board, and 3) establish communication between the PDS control board and the 12V bus board.
- Attach FETs to at least half of the connections on the 12V bus board.
- PDS control can switch each of the FETs independently of each other.
- Additional I2C connection on PDS control board to allow communication with 12V bus board
- Current-monitoring module on each of the connections from the 12V bus to determine which connected boards consume more power than others
- PDS control board firmware switches 12V outputs in response to CAN bus messages.
- PDS control board firmware sends 12V output status (current, on or off) to CAN bus.
- PDS control board firmware stability tested, running over 4 hours with no switching abnormalities
- Update firmware on the PDS control PCB, so that it can request and receive the ADC differential voltage information from the precharge PCB over I2C instead of using a timer to assume precharge is completed
- Precise voltage monitoring on 12V bus, reported to PDS control board and to CAN bus
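A minimal sketch of one possible CAN payload layout for the 12V output switching commands and status reports listed above; the message IDs, field widths, and scaling are illustrative assumptions, not the team's actual CAN API.

```python
import struct

SWITCH_CMD_ID = 0x310   # assumed: PDS control -> 12V bus board
STATUS_ID     = 0x311   # assumed: per-output status back onto the CAN bus

def encode_switch_command(output_states):
    """Pack up to 16 output on/off states into a 2-byte bitmap."""
    bitmap = 0
    for i, on in enumerate(output_states):
        if on:
            bitmap |= 1 << i
    return struct.pack("<H", bitmap)

def encode_status(output_index, is_on, current_ma):
    """One status frame per output: index, state, and measured current in mA."""
    return struct.pack("<BBH", output_index, 1 if is_on else 0, current_ma)

def decode_status(data):
    idx, state, current_ma = struct.unpack("<BBH", data)
    return {"output": idx, "on": bool(state), "current_mA": current_ma}

frame = encode_status(3, True, 420)
print(decode_status(frame))
```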
|12||Shoe Sorting Robot
|Zhen Qin||Jing Jiang||design_document1.pdf
|Have you ever tripped over a shoe in the doorway? Disorganized shoes can be a mess, but sometimes people just don't have time to organize them after taking their shoes off. Therefore, our group aims to create a robot that helps people sort shoes and saves them time. So far there is no such robot on the market, so we believe this project is novel and will be useful for keeping the house clean.
In our project, we will connect a CMOS camera module, a robot arm, a load-cell-amplifier ADC weight sensor, and a power control to our own microcontroller. Our robot will be built around the scenario that shoes are scattered on a 60x60 cm entrance mat with a rectangular shoe organizer next to it. The camera will be held above the mat at a certain height to capture an image of the whole mat. The 6-degree-of-freedom robot arm will be attached to a mobile car that rests next to the shelf when there are no shoes on the mat. After the camera detects two or more individual objects (at least one pair of shoes), the car will start to move and the robot will organize the shoes.
Our robot will distinguish each shoe by color and weight and place each pair in the organizer starting from the leftmost corner. From the camera we can know the size (in pixels) of each object; if it lies in a certain range (from a size 5 shoe to a size 9, for example), we label it a "shoe". Then we use the RGB value to assign the color of the shoe. Among all the white shoes, for example, we can decide whether two shoes are a pair based on their size and weight. As a result, the shoes will be arranged in pairs, and pairs of similar color will be placed next to each other.
Considering situations where objects other than shoes may appear on the mat, we will implement color filtering and weight filtering. If the size of an object appears to be too large or too small, we simply ignore it. If an object has a similar size but the robot cannot find a matching paired object, the robot will still grab the object (an umbrella, for example) and put it on the shelf at the end in order to keep the doorway clean.
The shoes sorting algorithm is described as follows:
1. The camera located above the mat scans all the objects (shoes) and marks down each object.
2. Calculate the coordinates of the center of each object with respect to the world frame, where we build the world frame around the robot's center.
3. Assign shoes with the same color a specific number according to RGB value.
4. The robot arm randomly picks a shoe of a certain color, for example white, and places it in position #1 on the shelf. At the same time, the weight of that shoe is measured by the weight sensor connected to the gripper and saved.
5. The robot arm then looks for the other white shoes and picks them up one by one. If a white shoe has the same weight as a shoe that is already on the shelf, the two shoes are considered a pair and are placed together. If the white shoe has a different weight, we place it in the next open position on the shelf. The process continues until all the white shoes are ordered on the shelf. After the white shoes have been sorted, the robot arm proceeds to sort the remaining shoes according to their colors.
We think that even if no two shoes are exactly the same weight, two shoes in a pair will be very similar. Our robot arm is able to lift at least 500 grams, and we decided to allow an offset of +/-5%: if two shoes have approximately the same weight, we assign them as a pair.
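A minimal sketch of that pairing rule: shoes are first grouped by their assigned color number, then matched when their weights agree within the +/-5% tolerance (the values below are made-up test data).

```python
WEIGHT_TOLERANCE = 0.05   # +/-5% relative difference counts as "same weight"

def same_weight(w1, w2):
    return abs(w1 - w2) <= WEIGHT_TOLERANCE * max(w1, w2)

def pair_shoes(shoes):
    """shoes: list of (color_id, weight_g). Returns (pairs, leftovers)."""
    pairs, leftovers = [], []
    by_color = {}
    for shoe in shoes:
        by_color.setdefault(shoe[0], []).append(shoe)
    for color, group in by_color.items():
        unmatched = list(group)
        while unmatched:
            shoe = unmatched.pop(0)
            match = next((s for s in unmatched if same_weight(shoe[1], s[1])), None)
            if match:
                unmatched.remove(match)
                pairs.append((shoe, match))
            else:
                leftovers.append(shoe)   # placed on the shelf alone at the end
    return pairs, leftovers

# Example: two white shoes (color 0) of ~350 g and one black boot (color 1).
print(pair_shoes([(0, 352), (1, 390), (0, 348)]))
```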
The end effector of the robot arm is indeed a consideration for our project; we are planning to order a few different types of end effectors to see which one grabs a shoe best. Our group has measured the weight of 5 different pairs of shoes, including sneakers and boots, and the heaviest boot is less than 400 g, while most robot arms on the market can lift about 500 g. Weight may therefore not be our biggest concern; the bigger question is where on the shoe the end effector should grab so that the shoe won't fall on the robot's way back to the shelf. We believe we will need a lot of trials to come up with a good solution.
|14||Master Bus Processor
|Zhen Qin||Casey Smith||design_document1.pdf
We will design a Master Bus Processor (MBP) for music production in home studios. The MBP will use a hybrid analog/digital approach to provide both the desirable non-linearities of analog processing and the flexibility of digital control. Our design will be less costly than other audio bus processors so that it is more accessible to our target market of home studio owners. The MBP will be unique in its low cost as well as in its incorporation of a digital hardware control system. This allows for more flexibility and more intuitive controls when compared to other products on the market.
Our design would contain a core functionality with scalability in added functionality. It would be designed to fit in a 2U rack-mount enclosure with distinct boards for digital and analog circuits to allow for easier unit testing and to account for digital/analog interference.
The audio processing signal chain would be composed of analog processing 'blocks', like steps in the signal chain.
The basic analog blocks we would integrate are:
EQ with shelf/bell modes
Saturation with symmetrical/asymmetrical modes
Each block’s multiple modes would be controlled by a digital circuit to allow for intuitive mode selection.
The digital circuit will be responsible for:
Analog block sequence
DSP feedback and monitoring of each analog block (REACH GOAL)
The digital circuit will entail a series of buttons to allow the user to easily select which analog block to control and another button to allow the user to scroll between different modes and presets. Another button will allow the user to control sequence of the analog blocks. An LCD display will be used to give the user feedback of the current state of the system when scrolling and selecting particular modes.
added DSP functionality such as monitoring of the analog functions
Replace Arduino boards for DSP with custom digital control boards using ATmega328 microcontrollers (the same as the Arduino board)
Rack mounted enclosure/marketable design
We will qualify the success of the project by how closely its processing performance matches the design intent. Since audio 'quality’ can be highly subjective, we will rely on objective metrics such as Gain Reduction (GR [dB]), Total Harmonic Distortion (THD [%]), and Noise [V] to qualify the analog processing blocks. The digital controls will be qualified by their ability to actuate the correct analog blocks consistently without causing disruptions to the signal chain or interference. Additionally, the hardware user interface will be qualified by ease of use and intuitiveness.
|15||Theremixer - Theremin DJ Controller
|Zhen Qin||Michael Oelze||design_document1.pdf
|Motivation: This project is motivated by our curiosity in unique musical instruments such as the theremin and our desire to use them in a novel way. We thought it would be a fun challenge to use a functioning analog theremin’s output not as a standalone instrument, but as a controller to manipulate sounds and graphics that a theremin cannot produce on its own.
We will create a theremin DJ mixer that provides users with a unique way to express their musical creativity. By operating the analog theremin controller the same way you would play a normal theremin, users can modify and mix preloaded songs. Our solution also incorporates a connected display which provides a visualization of current audio levels and the oscillating wave created by the theremin.
We plan to construct an analog theremin and use the audio output to control graphics and music. Our theremin will be built on a custom PCB that will include space for necessary hardware components, such as a Teensy 3.2 microprocessor. We will also take into consideration the placement of parts detailed in Bob Moog's Etherwave diagram (which is made completely with through-hole components) to minimize stray capacitances and unwanted interference between parts. We will use the Teensy 3.2 to transmit the audio output from the theremin to a Raspberry Pi, which will analyze the output and display the visualization on a VGA/HDMI monitor. The monitor will show current audio levels and the user created oscillating waveforms.
Criteria For Success:
Our project consists of three interdependent parts. In hardware, we will be successful if we can build a functional theremin that creates an audio output. Our HW/SW interface will be considered successful if we are able to transmit the audio output correctly to our software interface. The software interface should be considered successful if we are able to properly mix songs and create a visualization from the output of the theremin.
Original Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30980
|16||TRAFFIC CONTROL SMART SYSTEM
||Maria Pilar Galainena Marin
|Dongwei Shi||Michael Oelze||design_document1.pdf
|At busy intersections you often see traffic controllers directing traffic using hand and arm gestures while holding large wands that are lit up. The issue with this is that the gestures can often be unclear (especially since traffic control gestures are not even taught in driving school) and this causes confusion at the intersection which poses a threat to both the traffic controller and other drivers. On top of this, the wands can be very heavy and tend to fatigue the controller after hours of constant use.
Our solution is to give the traffic controllers smart gloves that are lit by LEDs to replace the cumbersome traffic wands. These smart gloves will be able to toggle the LED colors between red and green (denoting stop and go) by closing and opening the fingers. This is a lightweight alternative to traffic wands that we also believe will be easier to understand than the gestures currently being used. With these gloves the controller will not need to move nearly as much as when they use traffic wands so they are fatigued less. In addition to the gloves we will provide an LED panel that is attached to the chest and the back of the officer which will display either STOP or GO to make it very clear to the drivers the intention of the traffic controller. This feature should ensure that the driver the controller is facing knows what to do and removes any ambiguity.
All the LEDs will be in the form of LED strips to ensure flexibility for the controller so that they are not a burden to wear. These strips are also easily programmable and will be controlled by microcontrollers such as the ATmega328: one on the chest and one for each hand. Each of these microcontrollers will also have a wireless transceiver such as the NRF24L01 so that button presses from the gloves can control the LED panel on the body. We will detect the fingers closing and opening using conductive fabric placed between each finger, so that when they are all connected to close the circuit, the LEDs on the glove turn red. The chest panel will be controlled by a button on the side of the index finger so that it can easily be pressed with one hand; the button will toggle the panel between the stop and go displays.
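A minimal behavioral sketch of the glove logic, with hypothetical fingers_closed(), set_glove_color(), button_pressed(), and radio_send() stand-ins for the conductive-fabric input, LED strip, toggle button, and NRF24L01 link:

```python
import time

RED, GREEN = (255, 0, 0), (0, 255, 0)

def glove_loop(fingers_closed, set_glove_color, button_pressed, radio_send):
    panel_state = "GO"
    prev_button = False
    while True:
        # Closed fist -> red (stop), open hand -> green (go) on the glove LEDs.
        set_glove_color(RED if fingers_closed() else GREEN)

        # Rising edge on the index-finger button toggles the chest/back panel.
        pressed = button_pressed()
        if pressed and not prev_button:
            panel_state = "STOP" if panel_state == "GO" else "GO"
            radio_send(panel_state)   # NRF24L01 message to the panel MCU
        prev_button = pressed
        time.sleep(0.02)              # crude 20 ms poll / debounce interval
```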
The gloves will be powered by lithium-polymer rechargeable batteries because they are lightweight and compact and because the gloves do not need very high power, whereas the chest panel will need lithium-ion rechargeable batteries for higher power since it has many more LEDs.
There are other gloves made specifically for traffic control on the market, but they are simply gloves with very reflective material on the outside so that they are more noticeable. This does not solve the issue we are trying to address with our solution and on top of this, there are not really any other solutions to our problem on the market.
|17||Automatic Parking Monitoring and Assistance for city of Champaign
|John Kan||Jing Jiang||other1.pdf
The city of Champaign misses out on a lot of parking fees when drivers forget to pay, intentionally do not pay, or park in spaces they are not allowed to park in. Many of these drivers won't be ticketed because each car has to be checked manually. Also, drivers might park in reserved spots by accident because they missed the sign or it is too dark outside to read it.
We propose to build a smart ‘meter’ that can recognize the car plate numbers and automatically associate the plate with a violation (if one is committed), and charge a fee to that car plate. Individuals can set up accounts (linked to a license plate) to automatically be charged for parking fees, so it removes the need to carry around coins. If someone attempts to park with an empty balance, or in a spot they’re not allowed to park in, University Parking will be notified of the violation. Our smart meter would also feature an LED light that changes to different colors to indicate if a violation has been committed (e.g. Red can indicate an empty balance, time limit exceeded, too close to the meter/other car).
-------In the hardware part, for the smart meter, we would mainly handle electronic parts like a timer for calculating the fee, a camera for taking pictures of the plate, LED lights for signaling, several sensors for car detection, Wi-Fi for network communication between the meters and the server, a camera flash for taking pictures at night, an LCD screen displaying the time remaining, and a proximity sensor to help cars park.
-------In the software part, we will build a server to hold user accounts and receive updates from the parking meters, and we shall work on applications like driver registration and a plate-number recognition system.
For meters, we are using some basic electronic parts. So, the cost should be under $50 each.
|18||Electronically Enhanced Blind Probing Cane
|Anthony Caton||Arne Fliflet||design_document1.pdf
|Names: Christian Reyes, Angela Park, Yu Xiao Zhang
NetIDs: creyes32, aspark4, xyzhang2
Title: Electronically Enhanced Blind Probing Cane
References: Prof. Viktor Gruev, Skot Wiedmann (Electronics Service Shop)
Description: Currently, blind walking canes (also known as 'probing canes' and 'white canes') are a practical solution for assisting the visually impaired, allowing them to better understand the environment that surrounds them. By grazing the end against horizontal surfaces and tapping against vertical ones, the user is able to determine, through differing textures, what the terrain is like ahead of them. While effective in its current form, the traditional cane is still unable to track a rapidly changing environment, alert the user to abrupt obstructions, or convey the size of an object. One project solution posed in FA18, titled "ProxiPole", was successful in detecting objects at a large range; however, its spatial resolution was very broad and would be inefficient in environments where multiple objects were present. Additionally, their design did not take into consideration the size of an object in front of the user, as it was meant to completely replace the original intended function of a probing cane with sensors. For our project, we want to maintain the original functionality of the probing cane but also enhance the design using sensors and haptic feedback so the user can gain a better understanding of how close a particular object is and where it is located.
For the sensors, we would like to use a number of LIDAR sensors. These LIDAR sensors will be placed vertically along the cane such that when the cane is held, each will have an associated height with respect to the object being scanned in front of it. With this sensor information, we plan to relay the signals to a microcontroller which would process them and determine an appropriate haptic feedback for the user through a wearable that exhibits vibration. This vibration would be in the form of a wearable bracelet for the user and based on the distance of the closest sensor, varying vibration intensities will be given. In addition to varying intensities, unlike last semester’s project we would like to have better size recognition of the object which would be indicated by certain vibration patterns. For example, if we choose to use four LIDAR sensors, if only one detects an object within its range then there will be a single pulse with an associated intensity. If two sensors detect an object then there will be two pulses and so on. Based on these vibration patterns the user can identify how tall an object is in front of them and can decide whether or not it is passable. This design can also be used for identifying stairs.
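A minimal sketch of the sensor-to-haptics mapping described above, assuming four LIDAR readings in centimeters; the range limits and intensity scaling are placeholders, not chosen design values.

```python
MAX_RANGE_CM = 200   # assumed detection window per sensor
MIN_RANGE_CM = 20

def haptic_pattern(readings_cm):
    """Return (pulse_count, intensity 0..1) from the stacked LIDAR readings.
    Pulse count = number of sensors seeing an object; intensity grows as the
    nearest detected object gets closer."""
    hits = [r for r in readings_cm if MIN_RANGE_CM <= r <= MAX_RANGE_CM]
    if not hits:
        return 0, 0.0
    nearest = min(hits)
    intensity = 1.0 - (nearest - MIN_RANGE_CM) / (MAX_RANGE_CM - MIN_RANGE_CM)
    return len(hits), round(intensity, 2)

# Example: only the two lowest sensors see something (e.g., a low step) ~60 cm away.
print(haptic_pattern([60, 75, 500, 500]))   # -> (2, 0.78)
```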
We would also like to be able to improve upon the spatial resolution that is detected by making a much tighter operating window for object detection in order to solve the issue of constant haptic feedback that the previous project had. By changing the design of the sensor placement so that they are “stacked” vertically instead of a horizontal fan array that the previous group had, there will be a much finer spatial resolution.
We noticed that the FA18 group powered their walking stick by plugging it into a wall outlet. To improve mobility, we plan to implement a rechargeable Li-ion battery in the handle of the walking stick that we will connect using a T-clasp wire connector. This will allow the user to easily remove the battery and recharge it.
We plan to have the overall construction of the cane made of either fiberglass or aluminum, as that is the current standard for basic blind walking canes. The major subsystems will be the detection sensors, the microcontroller for processing, the PCBs, and the haptic feedback.
For our group’s reach goals, we want to try to implement a wireless feature that interfaces the bracelet to the walking cane in order to reduce the wiring and therefore weight/size of the cane. In order to accomplish this, we have considered using Bluetooth modules on the bracelet and the cane itself. Another reach goal we considered was to implement IMUs in the case that the user is holding the stick in such a way that the sensors need to be properly oriented to be straight and not angled too far up or down.
|19||FOAM PRESSURE-SENSOR BASED CONTROL METHOD FOR CONTROLLING PROSTHETIC HANDS
|Amr Martini||Michael Oelze||design_document4.pdf
|*We are collaborating with PSYONIC Inc. (http://www.psyonic.co/#ourstory), and they have agreed to give technical support and cover the cost of extra PCB orders (separate from the course's timeline). One of our team members is a former member of PSYONIC Inc.
Nowadays, prosthetic hands are commonly controlled by the electromyographic (EMG) method, which evaluates the electrical activity produced by skeletal muscles. However, the traditional EMG method is not accurate enough, because the measurements of that electrical signal suffer from high noise levels. In addition, due to the physical layout and high cost of EMG sensors, the number of sensors is insufficient to acquire enough data to track muscle movements precisely.
In this senior design project, our goal is to develop a foam pressure-sensor based method as an alternative to the EMG method for controlling prosthetic hands. This includes the design of a PCB carrying the electrode array with its corresponding communication peripherals and the programming of the communication protocol. The pressure-sensor method is more accurate, less noisy, and cheaper, and preliminary research (1) shows promising results for the method.
In response to the question posted by professor Oelze, we have assembled some initial technical details.
Our project will consist of at most 10 pressure sensor modules and a master device. Each pressure sensor is based on a resistive working principle in which the interface resistivity between two surfaces changes according to the applied load. We will use metal traces on the PCB as electrodes and conductive foam as the sensor material. When a load is applied, the resistance between the electrodes changes, and we can use the resistance change to sense the pressure change.
Each pressure sensor module/PCB, as Professor Oelze suggested, will now contain 32 pressure sensors. However, the exact number of sensors may change because of, for example, time, PCB, or microcontroller limitations. Each pressure sensor module will also have a microcontroller (an STM32, in order to interface with PSYONIC; our team has worked with it before) to acquire and transmit the data. The analog voltage signals from the pressure sensors will be digitized by the microcontroller's embedded ADC. Since most microcontrollers don't have 32 ADC channels, we are going to use analog signal selectors (MUXes) to select between data from different sensors.
The data from each module will be transmitted to the master device using a communication protocol. In each full scan of the sensors, assuming each digitized pressure sensor reading is 16 bits (12-bit data represented as a 16-bit number), 320*16 = 5120 bits of data will be transmitted to the master. We currently plan to use the Controller Area Network (CAN) protocol for communication between the pressure sensor modules and the master device. The CAN protocol can transmit as fast as 1 Mbit/s with high fault tolerance, so each transmission can be completed in about 5 ms. In addition, SPI and I2C are also feasible choices for the communication protocol, and both have their advantages and disadvantages.
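A short worked check of that transmission-time estimate, including approximate classic CAN framing overhead (the exact figure depends on bit stuffing):

```python
SENSORS = 10 * 32          # 10 modules x 32 sensors
BITS_PER_READING = 16      # 12-bit sample stored in 16 bits
PAYLOAD_BITS = SENSORS * BITS_PER_READING          # 5120 bits per full scan

CAN_BITRATE = 1_000_000    # 1 Mbit/s
DATA_BITS_PER_FRAME = 64   # classic CAN carries at most 8 data bytes
FRAME_BITS_APPROX = 130    # ~ data + arbitration/CRC/ACK/IFS + typical stuffing

frames = -(-PAYLOAD_BITS // DATA_BITS_PER_FRAME)   # ceiling division -> 80 frames
payload_only_ms = PAYLOAD_BITS / CAN_BITRATE * 1e3
with_overhead_ms = frames * FRAME_BITS_APPROX / CAN_BITRATE * 1e3

print(frames, round(payload_only_ms, 2), round(with_overhead_ms, 2))
# -> 80 frames, ~5.12 ms of raw payload, ~10.4 ms including framing overhead
```

So the ~5 ms figure is the payload-only lower bound; with framing overhead a full scan is closer to 10 ms, i.e., roughly a 100 Hz update rate.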
The master device will mainly receive CAN messages containing pressure sensor readings and process those data. On the master device, we are going to evaluate the data and classify them into commands to the hand.
Although the project has many components, its cost won't be high. We are going to use electronic packaging foam as the sensor material (already tested; it meets our requirements), which is basically free. The cost mostly falls on the PCBs, which we can get 10 pieces of for $5 from PCBWay, and the microcontrollers. We expect the cost of the components to be reasonable.
Answers to additional questions from Professor/TA:
Q: How are we going to do the demo?
A: We will perform the demo on our own arm, here’s our plan:
1. The most basic goal for the demo is to build a heat map of pressure for the muscle movements of different hand motions. We will classify those patterns into the corresponding hand motions.
2. If things go well, we plan to control a real prosthetic hand borrowed from Psyonic Inc. With the method we developed, we hope to have the prosthetic hand mimic the movement of our hand.
Q: How do we know from the demo that the solution applies to people without a hand?
A: Previous research (1) has shown that the method can achieve similar accuracy on people both with and without a hand using the same system, and our demo can prove its effectiveness on people with hands.
Q: Muscle and tendon motions are much different when people do not have a hand. How do we account for that?
A: Even without tendons, the muscle will still bulge (1). As long as there exists a correlation between groups of hand motions and the collected data, amputees will be able to control their specific prosthetic hand.
Even though our idea is inspired by the paper (1), we are doing more than what the authors did by designing our own PCB and developing the corresponding software and communication protocols. In addition, in that paper the researchers demonstrated their method using computer simulation, whereas we are going to attempt to demo by controlling a real prosthetic hand.
(1) Castellini, C.; Kõiva, R.; Pasluosta, C.; Viegas, C.; Eskofier, B.M. Tactile Myography: An Off-Line Assessment of Able-Bodied Subjects and One Upper-Limb Amputee. Technologies 2018, 6, 38.
project idea: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30680
rejected project: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31316
|20||Safe And Sound: A Precision First Base Umpire (updated)
|Thomas Furlong||Arne Fliflet||design_document2.pdf
|(updated to include use of both RFID and pressure sensors)
Anyone familiar with baseball is aware that all umpires -- regardless of experience -- are prone to human error. There is a lot of precision required to make a correct safe or out call, and an incorrect call can jeopardize the entire game. Our group will solve this issue by designing a base that “knows” with perfect precision whether a runner is safe or out.
Safe And Sound will utilize a smart base, shoe and glove system to correctly call a runner as safe or out based on feedback from RFID and pressure sensors. Two RFID scanners and two pressure sensors will be located in the base (one for the runner and one for the baseman). RFID cards/chips will be placed in the shoes of both the runner and baseman, and pressure sensors will be placed in the baseman's glove. Either player's foot will be detected on the base when both their corresponding RFID reader and base pressure sensor are triggered from contact made with the base. When the runner’s foot is detected on the base, Safe and Sound will utilize a two factor verification process to make the correct call. The baseman’s foot must already be touching the base, and a “catch” event must have been detected from the pressure sensors in the baseman’s glove. If both events have happened, a red LED will turn on indicating the runner is out. Otherwise, a green LED will indicate the runner is safe.
All pressure and RFID sensors in the glove and base will communicate with a microcontroller (e.g., Raspberry Pi, Arduino) to indicate that an event has occurred at that base. It will compare the timestamps of each event and make the correct judgement based on the information it receives, sending the call decision back to the base. Upon receiving the decision, the base will light up the proper LED to call the runner either safe or out.
The base will be able to distinguish between the runner's and the baseman's feet by utilizing its built-in RFID readers. One reader will only accept a signal from the runner, and the other will only accept a signal from the baseman.
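As a rough illustration of the two-factor logic above, here is a minimal Python sketch of the timestamp comparison; the event names and data structure are placeholder assumptions rather than our final firmware:

```python
# Minimal sketch of the safe/out decision logic described above.
# Event names and the event store are placeholders for illustration only.
import time

events = {}  # latest timestamp observed for each event type

def record_event(name):
    """Store the time at which a sensor event was observed."""
    events[name] = time.time()

def make_call():
    """Return 'OUT' if the baseman's foot was on the base and a catch was
    detected before the runner's foot touched the base; otherwise 'SAFE'."""
    runner = events.get("runner_on_base")
    baseman = events.get("baseman_on_base")
    catch = events.get("glove_catch")
    if runner is None:
        return None  # no call until the runner reaches the base
    if baseman is not None and catch is not None and baseman < runner and catch < runner:
        return "OUT"   # light the red LED
    return "SAFE"      # light the green LED

# Example: baseman set, catch happens, then runner arrives -> OUT
record_event("baseman_on_base")
record_event("glove_catch")
record_event("runner_on_base")
print(make_call())
```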
So far, we think this has been our strongest idea because it would be relatively easy to test and cheap to develop. We'll be happy to address any feedback that hasn't already been brought to our attention.
Paloma Contreras Porras
|Anthony Caton||Arne Fliflet||design_document1.pdf
|There is an inconsistency in signalling for personal transportation. For bicycles there are the common arm signals, which many riders do not use and many drivers may not recognize. For other modes of transportation, such as skateboards and scooters, there is no standard. There are existing solutions, both wearable and bike-attached, that allow for light-based signalling more akin to a motor vehicle. However, many do not use movement to change signal modes, and those that do attach to a bicycle, making them impractical for other modes of transportation.
We propose using a 9DOF Inertial Measurement Unit integrated into the lower back of a vest with built-in lighting. It would track rider movements in 3D space, and in combination with manual button controls, this would make signalling more akin to a motor vehicle. The system would be controlled by a low-power microcontroller, and the vest would be powered by a rechargeable LiPoly battery, all at a price comparable to existing wearable signal systems (~$50 USD).
Unique Features of Note:
1. The automation of turning off turn signals using rider motion as well as braking
2. Forward illumination on the chest, to be street legal at night
3. 360 visibility in the form of RGB LED strips that go around the shoulders and back of the vest
4. A High visibility “Crash Mode” to keep the rider visible in case of an accident
5. Riding modes for other forms of personal transport such as skateboards (both footedness supported), scooters, and rollerskates.
1. Bluetooth connectivity to a smartphone.
2. Integration of a sound signal.
3. User customization such as different light patterns so groups can easily differentiate between each other at night.
|22||Intuitive and Ergonomic Gesture-Based Drone Controller
|Channing Philbrick||Arne Fliflet||design_document1.pdf
|Problem: Current market available RF drone controllers are not intuitive to use for the average consumer.
Solution: Design an ergonomic, gesture-based control glove that would allow novice users to quickly and easily learn to control a drone.
Gyroscope: Controlling for tilt and yaw
Buttons: Trim controls, camera/video trigger, beginner/expert mode trigger, trick button
Analog button: Power/Thrust control
Accelerometer: Kill switch indicator - Over X G motions are ignored for Y period of time.
Arduino: Signal processing to convert sensor signals to control signals
RF Subsystem: The RF subsystem would come in two parts using XBee devices. The first would be in the glove which sends the control signals to a “base station”. The “base station” would receive the signals, translate them to the drone’s control system using the original controller’s hardware, and then project those signals to the drone.
Power Subsystem: Thin, flexible rechargeable batteries on the glove and a standard AA power pack for the “base station”
Processing Subsystem: One microcontroller on the glove and one in the “base station”.
Criteria for Success: Our project would be considered successful if we are able to control a drone from a gesture-based control system with enough precision to navigate a basic obstacle course. This course would feature turns, ‘gates’ at different heights, and a landing area. If a new user (someone with limited experience flying drones) can successfully navigate the course faster with the glove than with a standard controller, and with fewer errors, then the glove will have succeeded in being more intuitive for a general user.
A reach goal would be to further develop the signals used so that the same glove can control several different types of drones rather than only one predetermined model.
|John Kan||Arne Fliflet||design_document1.pdf
|Description: Many students like to wear hats or caps. In summer, caps can ward off strong sunlight. However, this function is quite limited since we have to keep adjusting the orientation of the cap as the direction of the sunlight changes. A cap that automatically adjusts its orientation based on the direction of the sunlight is a good solution to this problem.
Uniqueness: There are no such products in the market now, nor were there any past projects on this topic.
System Build Up (Basic function)
Subsystem #1: Sensing and controlling
The sensors consist of about 5-6 light sensors around the edge of the hat plus accelerometers. The light sensors provide information about the direction and intensity of the sunlight. All the collected data are processed by the controller (TI), which calculates the optimal orientation of the shield.
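As a rough illustration, here is a minimal Python sketch of one way the controller could estimate the sun's bearing from the ring of light sensors; the sensor count, readings, and servo interface are placeholder assumptions:

```python
import math

# Minimal sketch: estimate the sun's bearing from light sensors spaced evenly
# around the brim, then command the brim servo to that angle.
# Sensor count, readings, and the servo call are placeholder assumptions.

def estimate_sun_bearing(readings):
    """readings: list of light intensities from sensors placed evenly
    around the hat (index 0 at 0 degrees). Returns a bearing in degrees."""
    n = len(readings)
    x = sum(r * math.cos(2 * math.pi * i / n) for i, r in enumerate(readings))
    y = sum(r * math.sin(2 * math.pi * i / n) for i, r in enumerate(readings))
    return math.degrees(math.atan2(y, x)) % 360

readings = [120, 300, 850, 640, 210, 90]   # example: sun roughly near sensor 2
target_angle = estimate_sun_bearing(readings)
print(f"rotate brim to {target_angle:.0f} degrees")  # replace with servo command
```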
Subsystem # 2: Mechanical movement
Actuator 1: The brim of the hat is driven by a servo motor. As the brim rotates around the edge of the hat according to the sunlight position, it provides the correct shadow over the eyes.
Subsystem # 3: Solar power system management
One essential part of our design is the power management. The user should at least be able to use the hat for a full day. However, the motor will consume a lot of power. One solution is to add solar power. On the shield of the hat, we will attach a flexible solar panel to provide the power to our mechanical system. The lithium rechargeable battery will be used to store the energy generated.
Extension for riders
We plan to integrate personal transportation signaling into the cap design. Though there are ideas about designing a vest or belt with transportation signals, our cap is controlled by head movement, which is more convenient for the rider.
An accelerometer will detect a left or right rotation of the head to signal a possible turn. A second accelerometer is used to detect vertical motion - the user nods their head to confirm the turning action.
Actuator 2: LED light strips located on the edge of our hat indicate a left or right turn.
|24||Power Board for Illini-Sat3
|Channing Philbrick||Michael Oelze||design_document1.pdf
|We intend to implement a power supply board for the Illini-Sat3 to perform, among others, the following functions:
a) Provide means for on-board battery charging, monitoring and temperature control.
b) Provide regulated dc bus power rails to support on-board instrumentation and control equipment.
c) Provide protective features for the system to maximize resiliency of the power supply in case of fault conditions; this will include a watchdog timer which can reset the controller in the event of a fault.
d) Implement the CAN-bus protocol for control aspects of the system using a Texas Instruments Hercules series TMS570LS1227. *(This component was chosen in a previous iteration of the project and, as recommended by our point of contact, will be used unless we discover a specific reason to change it.)
**Note we have been coordinating with Mr. Channing Philbrick to develop this project and have been instructed to proceed directly to the RFA process in lieu of web board discussion. Additionally, we have tentatively scheduled an appointment to clarify more specific requirements for the project.
|25||Fast Towel Disinfecting Cabinet
|Kyle Michal||Jing Jiang||design_document1.pdf
|Chris Willenborg(cwillen2), Harsh Agrawal(hagrawa3), Yujie Wang(ywang504)
According to research, a bath towel should be washed every 3-4 days. Many people do not have the time to wash theirs often enough, so the bath towel becomes a perfect place for bacteria to grow and develop. Soon enough, the bacteria grow in size and quantity, producing a pungent odor.
We propose to design a space-efficient cabinet in which people can hang one towel at a time. In the cabinet, there are going to be UV-C LEDs on both sides (front and back of the towel) of the cabinet. These UV-C LEDs will disinfect the towel in a matter of minutes.
1. Lights: Our current model uses two UVC LEDs to kill the bacteria. We are currently planning on using only two lights because the ones that we have found are quite expensive, at approximately $20 per light.
2. Carousel: We will need a means of moving the towel such that we can achieve full coverage of the towel. We will be using a carousel composed of either belts or pulleys with clips attached that the towel will hang from. We are not yet sure whether we will use a DC or AC motor to achieve the best cost and space efficiency.
3. Microcontroller: We will need a microcontroller to control the speed of our motor and hence the rotation speed of our towel. We also do not want the UVC lights to be on all of the time, because after a certain amount of time the effects of having the lights on will not justify their energy consumption. We will also want to use the microcontroller for our user interface. We currently envision a couple of options enabled by pushbuttons.
4. Chassis: With our current model we will need a chassis that is slightly taller and wider than an average towel, which is 54” x 30”. The anticipated depth is about 18” because the viewing angle of most UVC lights is 130 degrees. We will anticipate the worst case scenario for the viewing angle, which is 120 degrees. We will need to decide which motor we are using and create a mount for it that will allow us to turn the pulley and support the weight of the motor without vibrating or making too much noise. We are hoping to find a low-cost reflective material to construct the chassis out of, or to line the inside of our chassis, so that any light that misses the towel directly will reflect back and strike the towel. We will need something to securely and accurately mount the UVC lights to the front and rear walls of our chassis as well.
5. Power Source: We will need to decide which motor we are using before we can determine what size of power source to use, but we know that most UVC lights have a range between 5V and 9V.
Criterion for success:
The most common bacteria on bath towels are coliform bacteria, such as E. coli. A single E. coli is 2 microns long and about 0.5 microns in diameter. We will sample the same spots on the testing towel before and after we put it in our cabinet for 10 minutes. We will then put our samples on petri dishes and use a microscope to determine colony size. At the end, we will check how much smaller the colonies are on average compared to before the disinfection process. If the colonies at the end are approximately 50% smaller or fewer in number, we will consider the project a success.
1. Ventilation: We would like to automate a ventilation system with our microcontroller if we have sufficient time. This would include adding a fan or two at the bottom of our chassis and vents that physically open and close on top of the chassis. The vents on top would be dependent upon which cleaning state the towel is in.
2. Door Lock: The door lock would also need to be controlled by the microcontroller and would keep people from opening the door while the UVC lights or carousel are in use.
3. Self-Sanitizing Mode: Ideally we could arrange the UVC lights such that the inside of our chassis can be fully covered by the lights. We could add an additional option to the user interface to sanitize the chassis, in an attempt to keep the chassis itself clean.
More information can be found in our original post:
|26||Enkidu Bike Locker
|Mengze Sha||Michael Oelze||design_document1.pdf
|team members: Zhengyu Ji(zji5) , Shijie He(she19), Jiahao Chen(jchen237)
Problem: Bicycle theft has become the most rampant theft around campus. Despite students asking the police for help and
buying insurance for their beloved bicycles, bikes still get stolen, and nothing can stop the bad feeling of losing something cherished.
Solution Overview: An anti-theft device that will automatically lock the front tire when the back tire is not properly unlocked. The lock on the front tire can only be unlocked using facial recognition of the owner.
Modular design and hardware details:
The system mainly consists of three subsystems: the back tire lock system (the "back system" for short), the front tire lock system (the "front system"), and the facial recognition system. The back system contains circuitry used to detect whether the back lock is properly unlocked. It consists of two switches. The first switch represents the locking part of the back tire lock; when the first switch is turned on, the whole circuit still functions properly. The second switch represents the other breakable parts of the back tire lock, which means the front tire lock will lock if the second switch is turned on (i.e., the back tire is not properly unlocked). A current signal is sent through a wire to the front system when the second switch of the back system is turned on, and the front tire lock locks itself when it receives that signal.
Currently, we plan to achieve this using an electromagnet. We will design a mechanism consisting of two plugs and a spring that stores potential energy: the second plug, pushed by the spring, is constrained by the first plug, which is pulled by the electromagnet. When the signal from the back system is received, the electromagnet activates and pulls the first plug out so that the spring can push the second plug out to lock the system. The electromagnet then turns off within a very short time (around 1 second, to minimize energy consumption) so that the first plug constrains the second plug and the system stays securely locked. (This is our current plan for addressing Mr. Sha's concern that a thief might simply destroy the whole circuit to break the front lock: even with no current flowing, the system stays locked once the signal has been triggered.) An elementary hand-sketched diagram of the front system is provided here: (https://drive.google.com/file/d/1okX_1jU9o1-EYlTbFp2eDW51kRrBFcCn/view?usp=sharing)
The front system can only be unlocked by passing the facial recognition system described below. If the facial recognition check passes, a signal is sent to the front system. The electromagnet is activated again and stays on for a short time (around 5-8 seconds, to minimize energy consumption). The first plug is pulled out so that the owner can manually unlock the front tire lock by compressing the spring. Then, when the electromagnet turns off, the first plug is pushed out to constrain the second plug, and the front system is securely unlocked. We will provide a light source for nighttime use of the facial recognition system.
Criterion for Success:
The overall design should achieve the following effect: if the back system is properly unlocked (the first switch is turned on), nothing happens. If the back system is not properly unlocked (the second switch is turned on), the front system locks itself via the mechanism consisting of the electromagnet, spring, and plugs. From that point, even if the battery is disconnected from the circuitry, the front lock stays locked. If the facial recognition check is passed, the plug contained within the front system is pulled out so that the owner can manually unlock the system.
Features that could be potentially added to the project:
Solar charger that will be used to charge the battery when the bike is parked under daylight.
A proximity sensor that will be used to detect if the tire has been detached from the body of the bicycle.
Link to the posted idea: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30319
|27||Traffic Sensing Bicycle Light
|Anthony Caton||Michael Oelze||design_document1.pdf
Hundreds of cyclists are killed in accidents involving motor vehicles every year. Despite stronger awareness of cyclists and more bike-only lanes, we are looking at what more can be done to help cyclists be aware of incoming danger.
Our solution for reducing the number of incidents of cyclists being hit by vehicles is to issue warnings to the vehicles behind the bicycle based on the distance and speed of the vehicle relative to the cyclist. The device uses ultrasonic sensors to measure the distance to the vehicle and a control subsystem that uses data collected over a period of time to determine its speed, so it can decide when to trigger the related modes.
1. Multiple sensors used for measuring distance including ultrasonic sensors and doppler radar sensors
1. Using the distance of the sensed object from the sensor subsystem to calculate the distance to the vehicle. Distance readings will be timestamped at regular intervals to calculate the speed of the approaching vehicle.
2. Determining which mode of operation (normal, warning, or danger) to trigger based on rear distance and relative speed (see the sketch after this list).
3. Choosing the type of alarm according to the mode of operation.
1. Bluetooth to send data from the control system to a user’s device so users can view the safety level of riding the bicycle. Users can also use the web interface to adjust the detection parameters as they desire.
2. Internal controller (such as a Raspberry Pi) for signal processing within the control subsystem.
1. 15 V (may change later) rechargeable power supply for the sensors, a standard flashing red light, a high-intensity white strobe light, and a small horn.
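As a rough illustration of items 1-2 above, here is a minimal Python sketch of the speed estimate and mode selection; the thresholds, sampling period, and sensor interface are placeholder assumptions, not measured values:

```python
# Minimal sketch of the mode-selection logic described above. Thresholds,
# sampling period, and sensor interface are placeholder assumptions.

SAMPLE_PERIOD_S = 0.2      # how often a distance reading is timestamped

def relative_speed(prev_distance_m, curr_distance_m, dt=SAMPLE_PERIOD_S):
    """Positive value means the object is closing in on the cyclist."""
    return (prev_distance_m - curr_distance_m) / dt

def select_mode(distance_m, speed_mps):
    """Pick an operating mode from rear distance and closing speed."""
    if speed_mps <= 0.5:          # not approaching (parked car, pedestrian)
        return "normal"
    if distance_m < 5 or speed_mps > 8:
        return "danger"           # strobe + horn
    if distance_m < 15:
        return "warning"          # flashing red light
    return "normal"

print(select_mode(12.0, relative_speed(13.0, 12.0)))   # -> "warning"
```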
#Criterion for Success
The device can alert the cyclist accordingly when a vehicle is approaching. Fellow cyclists and pedestrians should not trigger the alarms, and parked cars and other objects that are not approaching the cyclist from behind should not falsely trigger the alarms either.
|28||Automatic Secure Locker
|Thomas Furlong||Jing Jiang||design_document4.pdf
|General Description of Idea:
Although many modernized apartments are already equipped with package lockers or receptionists to take care of residents’ mail and packages, there are still a lot of apartments where residents’ packages are left in the hallways. This not only occupies public space, but also increases the risk of packages being stolen. In order to solve this problem, we propose to design an automatic security locker that receives both packages and food deliveries.
Different from a traditional locker that uses a combination lock, our locker will open only when the delivery person enters the correct shipping number or password by pressing buttons on the panel. In the case of food delivery, the locker can automatically tip the driver with a preset amount of money. The locker also comes with an LCD display and speaker to instruct the delivery person on how to place the package or food into the locker. In addition, to prevent random people from forcefully accessing the locker, a security system with a camera and alarm will be triggered whenever the locker is opened without the correct password being entered.
High Level Technical Overview:
1. Locking Module:
The locker will use an electromagnetic lock controlled by a microcontroller.
2. Control Panel:
The control panel includes an LCD display, a speaker, and buttons, all controlled by a microcontroller. The controller is also responsible for sending the electrical signal to unlock the locker when the user provides the correct password/shipping number.
3. Tipping Module:
This module will be used to tip the correct amount of money to the driver. The detailed realization of this module has not been determined.
4. Security Module:
A security module with an alarm and camera will be triggered to capture the person’s face and scare them away when the locker is unexpectedly opened. The logic will be controlled by the microcontroller.
Since this project does not have any power-intensive part, batteries should be sufficient to provide power to our locker.
Link to Idea Post:
|29||Interactive Mirror Display
||Hiraal Doshi Milankumar
|Nicholas Ratajczyk||Michael Oelze||design_document1.pdf
|We plan to create an interactive mirror display which provides context sensitive information and media access to the user through a discreet and unobtrusive device. We intend to integrate gesture recognition, through the use of OpenCV, and voice commands, through the use of Alexa integration, which will allow the user to interact with the device and access further information. All of the software will be run on a Raspberry Pi and we will integrate a camera, microphone, and speakers.
Additionally, as a reach goal, we plan on incorporating machine learning to recognize specific facial features, such as blemishes and wrinkles, in order to make relevant recommendations to the user. An example of this would be to train a model to recognize unusual facial characteristics and then recommend products that are on the market to address those unusual characteristics. This would be a completely optional feature that the user can choose to turn on or off to address the controversy that this mirror is used as a “beautifier”. This mechanic could also extend beyond the realm of cosmetics to fashion and other appearance based products. For example, a model could be trained to suggest items to complement the clothes you are wearing as well as appropriate colors and patterns. This feature could also be used to improve the online retail experience by recommending products , as described above, and displaying retail websites directly.
We believe that this project will allow us to employ the skills that we have gained in our undergraduate coursework including embedded system development, computer vision and speech recognition techniques, user interface design, as well as other artificial intelligence and machine learning algorithms. We also feel that the flexibility presented by this project makes it a good fit, as we can easily shuffle features and functionalities and set reach goals to ensure the project fits within the scope of this course.
There are many examples of DIY projects for smart mirrors, however most of them are little more than informational displays. There are several similar products on the market, although none are well established and most of them are little more than large android tablets installed behind a two-way mirror. The biggest flaw in these products which we hope to address is the lack of unique functionality and intuitive controls that differentiate the mirror from any number of existing devices, such as TVs or tablets. By providing a unique interactive experience which recognizes and takes advantage of the mirror form factor we hope to make the mirror more than just a hidden display.
See our idea post for more details and discussion.
|30||Gesture controlled robot
|David Hanley||Jing Jiang||design_document1.pdf
Team members: Bofan Yang(byang28), Arvind Vijaykumar(avijayk2), Qinlun Luan(qluan3)
Problem: The application of robotics in the military as well as law enforcement has become more and more common with the inevitable advances in the technologies behind it. However, many police and army personnel in action wear heavy gloves and heavy combat equipment, making precise control of robots through traditional controllers difficult.
Solution: We propose a hand-gesture-controlled robot equipped with a metal detector for military and law enforcement applications. Hand-gesture control would make operating a robot while wearing heavy gear much easier and more intuitive, with almost no learning cost.
Main components: We intend to primarily develop the Python-based software application used by the client to interact with the robot. For hand gesture recognition, we believe that utilizing a convolutional neural network based design would be optimal since there are many resources available at our disposal. There are also resources for hand gesture recognition that purely utilize computer vision algorithms without any machine learning components involved that we would like to utilize as well.
Facilitating communication between our control center (i.e. the laptop), and the robot would require the use of a Bluetooth module for wireless data transmission. The data would be processed by our microcontroller to control the motors of the vehicle. Aside from the motors, the microcontroller would separately control the arm attached to the metal detecting unit, allowing it to move laterally.
The vehicle itself will be modeled on the cart from the ECE110 lab, with minor revisions to ensure the optimal placement of the microcontroller relative to the motors and the metal detector arm. Lastly, the metal detecting circuit itself will be designed by us on a PCB.
Final goal: The design of a robot that can be controlled through hand gestures captured by our laptop camera. The robot should at least be able to perform the following six actions based on gestures: stop moving, move straight, accelerate, decelerate, turn right and turn left, as well as the latter 5 actions in reverse. The cart must also be able to detect any metal objects placed in front of it and alert the client accordingly.
|31||Virtually Trained Self-Balancing Pole System
|Amr Martini||Jing Jiang||design_document1.pdf
|Teammates: Kishora Adimulam (adimula2), Henry Thompson (hcthomp2), Mason Ryan (mjryan5)
There is a growing use of virtual reality as a training environment for AI with applications in the real world. Game engines like Unity have even released machine learning toolkits for researchers and developers to experiment with training AI inside games and simulations. There has been past work in translating these simulation-trained models to physical systems, such as the project done by OpenAI which taught a robot to stack different colored blocks in a specific order after seeing it only once in a virtual simulation. However, there have been no past projects translating AI models trained in Unity to real physical systems.
Our project would be to create a self-balancing pole system, which would be trained as a simulation in Unity and uploaded into a physical system which then would gain the ability to balance the pole. Specifically, we would create a 3D simulation replicating all of the attributes of the physical system itself, and train the agent to learn to balance the pole using the Python API and Tensorflow. The trained Tensorflow model would then be uploaded into a Raspberry Pi, which would then use the control signals of the agent to operate motors to balance the real physical pole.
There will be four major subsystems inside the entire project:
1. Unity Engine Simulation/Training
The Unity engine will train a 3D simulation that replicates our hardware system. The design of the system is a one-dimensional track, with a motor that can travel in either direction and a pole attached to the motor by a cylindrical joint. The system will be trained to swing up the pole, which initially hangs down from the joint, and calibrate itself to keep the pole in a vertical position. We will be using the Proximal Policy Optimization algorithm to train the agent, since it has been shown by OpenAI to be one of the best performing and easiest to tune reinforcement learning algorithms. The resulting trained Tensorflow model will be used as the input to the Raspberry Pi interface.
2. Unity to Raspberry Pi Interface
This interface will take the trained Tensorflow model output by the Unity engine and create a mapping from the state (angle, angular velocity, and position) to the output (voltage applied to the motor). The Raspberry Pi will store this mapping so it can communicate with the hardware system.
3. Raspberry Pi to Hardware System Interface
The hardware system is going to need to communicate the position of the cart and the angle and angular velocity of the pole so that the Raspberry Pi can determine the output (voltage applied to the motor) to keep the pole balanced. The Raspberry Pi will output a voltage to the motor on the cart to set its velocity.
4. Hardware System
The hardware system will be a replica of our 3D simulated model, involving a movable cart on a one-dimensional platform and a pole connected to the motor by a cylindrical joint. It will take inputs from the Raspberry Pi to move the cart in order to balance the pole. The hardware system will need to be able to send its current state back to the Raspberry Pi so that it can determine the cart's velocity. The system will also contain a gyro sensor that determines the pole's angular position and velocity and relays them to the controller.
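As a rough illustration of subsystems 2-4 working together, here is a minimal Python sketch of the control loop on the Raspberry Pi; the model file name, state-reading function, and motor function are placeholder assumptions, not the final interface:

```python
# Minimal sketch of the Raspberry Pi control loop described above. The model
# path, state-reading function, and motor interface are placeholders; the real
# project would load the Tensorflow model exported from the Unity/PPO training.
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("cartpole_policy.h5")  # hypothetical file

def read_state():
    """Return [cart position, pole angle, pole angular velocity] from the
    encoder and gyro; stubbed here."""
    return np.array([0.0, 0.05, -0.1], dtype=np.float32)

def set_motor_voltage(volts):
    """Send the command to the motor driver; stubbed here."""
    print(f"motor voltage: {volts:.2f} V")

for _ in range(500):                      # ~10 s demo; a real deployment loops forever
    state = read_state().reshape(1, -1)
    volts = float(model.predict(state, verbose=0)[0][0])
    set_motor_voltage(volts)
    time.sleep(0.02)                      # ~50 Hz control loop
```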
Link to Original Post:
|32||Parents of the Future (revised)
|David Null||Arne Fliflet||design_document1.pdf
|Net ID/Name of all group members: Sasan Erfan (serfan2), Damian Komosa (dkomos2), Yarshun Jayakumar (yarshun2)
Parents tend to spend more time than they need trying to get their kids to complete their chores. Our project aims to give parents an efficient way to manage their children’s chores. It is an IoT tool that allows parents to remotely monitor whether chores have been completed by their children. By attaching sensors to the trash can, the sink, and the laundry bin and creating a local network between the sensors and the parents’ computer, chores can be easily monitored. The weight sensor would measure the weight of the load, which is the first indication that the chore needs to be done. The weight sensor will be paired with an ultrasonic sensor, which will measure distance and indicate when the specified bin is filled. The network is interfaced with a web application that allows the parents to monitor the status of the home objects and allows the children to see how they match up against each other.
What makes our project unique?
Most IoT applications focus on enhancing the capabilities of a single home peripheral, and often implement statistical tracking so that the average consumer can gain more insight into their daily lives. In our case, we use a broad IoT approach to solve a lifestyle problem as opposed to enhancing the capabilities of a single device.
There are no competitors to this specific problem. The closest competitors we have are smart faucets and smart trash cans, such as the Delta Touch20 Faucet and the Bruno smartcan, but these are created with the intention of creating convenience via IoT.
Brief Technical Overview:
Via three microcontrollers, a Linux-compatible microprocessor, weight sensors, ultrasonic sensors, and a host Linux Docker container, a local network is created tracking the status of each home utility. Linux provides a variety of networking packages that allow a client-server setup to be established. After instantiating a Docker container to run the server, the web interface connects with Docker’s bridge network to visualize the data. The microprocessor and microcontrollers will be designed onto a PCB, with the microprocessor at minimum supporting Ethernet and Linux functionality. The microcontrollers will be connected directly to the ultrasonic sensor and weight sensor for data acquisition. The design will be powered by a 9V battery, which would have the capability to power the microprocessor as well as the low-power sensor components.
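As a rough illustration of one sensor node's role in this network, here is a minimal Python sketch that posts readings to the hub over a simple HTTP endpoint; the thresholds, URL, and sensor stubs are placeholder assumptions rather than our final firmware:

```python
# Minimal sketch of one sensor node's reporting logic, assuming the hub exposes
# a simple HTTP endpoint. Thresholds, URL, and sensor reads are placeholders.
import requests

HUB_URL = "http://192.168.1.10:5000/status"   # hypothetical hub endpoint
FULL_DISTANCE_CM = 10     # ultrasonic reading when the bin is full
MIN_LOAD_GRAMS = 500      # weight at which the chore "needs doing"

def read_weight_grams():      # stub for the weight sensor
    return 742

def read_distance_cm():       # stub for the ultrasonic sensor
    return 8

def report(device_id):
    weight = read_weight_grams()
    distance = read_distance_cm()
    needs_doing = weight > MIN_LOAD_GRAMS and distance < FULL_DISTANCE_CM
    requests.post(HUB_URL, json={
        "device": device_id,
        "weight_g": weight,
        "distance_cm": distance,
        "chore_pending": needs_doing,
    })

report("laundry_bin")
```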
|33||Temperature sensor network for thermostat control
|Dongwei Shi||Jing Jiang||design_document1.pdf
Haige Chen (haigec2) , Heming Wang (hwang236), Ryan Finley (rafinle2)
Traditional thermostats collect temperature from one location. This may be insufficient in a place such as a multi-room apartment where different rooms, or different corners of the same room do not get heated/cooled evenly. While some modern HVAC systems can check for these imbalances, it’s not practical for older buildings to replace existing systems. Also, replacing the whole HVAC system could be very costly. Regardless, incidents such as forgetting to close a door or window may cause dramatic disparities in temperature - hiking heating/cooling bills if not warned early.
We seek to build a scalable temperature aggregation system as a cheaper add-on (than replacing with newest zoning HVAC) to older HVAC systems to collect and interpret temperature data across multiple rooms in any internal environment. The design would require temperature sensors, Wifi chips, and MCUs integrated on PCBs, and a central hub that gathers all the sensor data.
The user can monitor the temperatures and receive alerts through a phone app in real time. Alerts may include dramatic changes in a room's temperature and, depending on the timeframe, suggested “next steps” to help the customer fix the disparity. For example, if the timeframe has been short, it may suggest checking for open windows. If long-term, it may suggest checking for obstructions in the heating duct (a sketch of this logic appears after the subsystem list below).
We can use two types of actuators to help maintain the desired temperature. We can design a fit-all control box that users install over the HVAC controller, which can push buttons to turn the temperature setting up and down. This device would require wireless connectivity and control (e.g., a pressure sensor at the tip of the button-pushing mechanism for feedback). To ensure the box fits most standard controllers, we plan to create an exoskeleton that clips onto the controller box. Also, as a reach goal, we plan on automating the opening and closing of a standard 4in x 10in floor register to regulate airflow, an idea suggested by Prof. Jiang.
An additional reach goal may be to sense movement in a room. This could predict if someone is present and push priority to keep that room at optimal temperature. This could be done using a passive IR sensor that detects human motions.
Central hub (MCU, Wifi chip): acts as the server that gathers data from sensors and sends command to actuators. Power supplied by wall.
Sensors (temperature sensor, MCU, wifi chip): measures the temperature in a particular spot in the house and send data back to hub (sparsely, e.g. every 10 minutes). Powered by batteries.
HVAC controller actuator (fit-all case 3D printed, servo motor, wifi chip, MCU): receive command from hub and change HVAC settings by pushing buttons. The mechanical design makes sure it clamps onto all types of HVAC controller, and upon installation, the user can slide the actuators above the buttons and lock them.
Air vent actuator (motor, wifi chip, MCU): receives commands from the hub and turns the air vent using a servo.
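Here is the sketch referenced above: a minimal Python illustration of how the hub could turn short-term versus long-term temperature changes into the suggested "next steps". All thresholds and window lengths are placeholder assumptions to be tuned later:

```python
# Minimal sketch of the hub's alert logic: a large change over a short window
# suggests an open window or door, while a slow long-term drift suggests an
# obstructed duct. All thresholds are placeholder assumptions.

SHORT_WINDOW_MIN = 30
LONG_WINDOW_MIN = 12 * 60

def suggest_action(history):
    """history: list of (minutes_ago, temperature_F) samples for one room."""
    short = [t for m, t in history if m <= SHORT_WINDOW_MIN]
    long_term = [t for m, t in history if m <= LONG_WINDOW_MIN]
    if max(short) - min(short) > 5:
        return "Sudden change: check for an open window or door."
    if max(long_term) - min(long_term) > 8:
        return "Gradual drift: check for an obstructed heating duct."
    return None

history = [(0, 63), (10, 66), (20, 69), (60, 70), (600, 71)]
print(suggest_action(history))   # -> suggests checking for an open window/door
```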
Modular design/distribution of work
- Communications functionality: make sure the different devices can talk to each other correctly via wifi
- Sensor: make sure temperature sensor can read correct value
- Actuator: mechanical design; make sure motors and servos work correctly
- Control algorithm: the hub interprets the data and decides what actions to take
- Phone app: user interface for user to see real time data and change settings
Link to idea post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31443
|34||ThereminFreaks - Theremin Rhythm Game
Yhoas Olivas Hernandez
|Amr Martini||Michael Oelze||design_document1.pdf
|We are planning on creating a theremin-based rhythm game for the PC platform. The electrical component is a PC peripheral theremin where the capacitance is controlled by moving one's hands closer or farther from the plates on the theremin controller. We plan on using this capacitance to affect an oscillator's frequency and capture this frequency as a variable on the PC program using USB. Thus the theremin circuit will be connected to a USB controller which will connect to the PC. On the software side, we will make a simple driver for this controller and a video game written in C++ using OpenGL. The way the game works is there is a stream of notes coming at the player. The vertical axis represents the volume to correspond with the loop antenna on the theremin. The horizontal axis represents pitch to correspond with the straight antenna.
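As a rough illustration of the mapping from the two measured oscillator frequencies to the game's axes, here is a minimal Python sketch; the frequency ranges and normalization are placeholder assumptions, and the real game logic will live in the C++/OpenGL client:

```python
# Minimal sketch of mapping the two measured oscillator frequencies to the
# game's note-stream axes. Frequency ranges are placeholder assumptions.

PITCH_HZ_RANGE = (200.0, 2000.0)    # straight antenna (horizontal axis)
VOLUME_HZ_RANGE = (50.0, 500.0)     # loop antenna (vertical axis)

def normalize(value, lo, hi):
    """Clamp and scale a frequency into the 0..1 range used by the game."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def theremin_to_game_input(pitch_hz, volume_hz):
    return {
        "x": normalize(pitch_hz, *PITCH_HZ_RANGE),    # pitch axis
        "y": normalize(volume_hz, *VOLUME_HZ_RANGE),  # volume axis
    }

print(theremin_to_game_input(830.0, 120.0))
```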
Other rhythm games typically involve the player pressing a button or a touch screen and/or simulating a conventional rock instrument like a guitar or a drum set. Some examples are beatmania IIDX, which is a “DJ simulator” and uses a turntable and 7 rectangular buttons and DrumMania which uses an electric drum set to simulate playing a drum. What makes this project unique is therefore the fact that it uses an unconventional instrument where there is no contact between the player and the instrument. And the pitch and volume are continuous rather than being discrete like pressing a piano. A game like Rock Band has a singing mode that is somewhat similar to what we are doing, but it only takes pitch into account and not volume.
|35||Variable Speed Sump Pump
|Amr Martini||Arne Fliflet||design_document2.pdf
cnpeter2, (Carolyn Petersen)
villasr2 (Edward Villasenor)
manuelf2 (Manuel Florez)
The goal of our project is to create a variable speed sump pump. Current sump pumps on the market function just by turning a motor on when a float switch is triggered and the sump is filled.
Unique Solution: Our project aims to save energy and increase the longevity of the pump by making the process more efficient. We plan to do this by detecting how fast the water level is rising and adjusting the speed of the motor accordingly. If the water is rising quickly, the motor should run faster, and if the water is rising slowly, the motor should run slower.
## Motor/Impeller System
We plan to do this by using a DC motor with a motor controller attached to batteries for power and an arduino for programming. The goal is for the pump to pump as slowly as possible while maintaining a similar water level. Any equipment in the actual sump will obviously have to be waterproof. We plan for our sump pump to be a pedestal sump pump so the motor, motor controller and arduino will be outside of the sump, and the sensors and impeller system will have to be in the sump. We hope to buy an impeller system or use one from an existing sump pump that we will attach to our motor.
## Water Level Sensing
We have a couple of ideas for detecting the rate at which water fills the sump. One idea is using multiple float switches (potentially 3). One will be an emergency switch that flips when the sump is filled and turns the motor to its highest speed. The others will be used to calculate how fast the water is rising: each flips at a set height, and we calculate the time between the two flips. Another sensor we are considering is a water level depth sensor for an Arduino, if it works well enough. (https://www.amazon.com/gp/product/B06XHDZ3Q4/ref=ppx_yo_dt_b_asin_title_o00__o00_s00?ie=UTF8&psc=1)
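As a rough illustration of this approach, here is a minimal Python sketch of the rise-rate estimate from the two timed float switches and the resulting motor command; the switch spacing, rate limits, and PWM mapping are placeholder assumptions to be calibrated later:

```python
# Minimal sketch of estimating the fill rate from two float switches and
# mapping it to a motor duty cycle. Constants are placeholder assumptions.

SWITCH_SPACING_CM = 10.0     # vertical distance between the two switches

def rise_rate_cm_per_s(t_low_s, t_high_s):
    """Times at which the lower and upper switches flipped."""
    return SWITCH_SPACING_CM / (t_high_s - t_low_s)

def motor_duty(rate_cm_per_s, max_rate=2.0):
    """Map rise rate to a PWM duty cycle, saturating at full speed."""
    return min(1.0, max(0.2, rate_cm_per_s / max_rate))   # never below 20%

rate = rise_rate_cm_per_s(t_low_s=0.0, t_high_s=12.5)     # 0.8 cm/s
print(f"run motor at {motor_duty(rate):.0%} duty")         # -> 40%
```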
We plan for this to be a battery operated backup pump as it is most important to save energy in emergency situations where a backup would be used.
We plan to build a DC-DC converter to adjust the current and voltage from the battery to the motor controller.
# Criteria for success
The sump pump should pump water faster when the water inflow rate is higher and slower when the inflow rate is lower.
|36||Thermally Activated Display
|Hershel Rege||Casey Smith||design_document1.pdf
|Project Name: Thermally Activated Display
Team members: Santiago Puértolas (sp24) and Joey Espino (espino2)
Problem: The IQUIST wants to develop an eye-catching custom made reconfigurable display that is not a monitor.
Solution: In order to make this display, our team has decided to use thermo-chromic paper, as it creates beautiful colors when it is activated by heat. When working with heat-activated elements it is important to keep in mind that the heat needs to dissipate fast enough that the display can be reconfigured; to do this, we chose to use aluminum as the support for the thermo-chromic paper along with a heatsink to cool down the display and reconfigure it. The words will be printed pixel by pixel by using resistance wire to heat up the pixels to a temperature of around 86º, which would allow the material to be fully activated. A programmable circuit will be used to control all of the pixels for the display; each pixel is “on” when a voltage potential is applied across a resistance, heating the element. We will also make the display interactive by adding an array of ultrasonic sensors and showing an animation when a user gets close to the display.
Finally, our proof of concept will be to print out a single letter on a 6 inch x 6 inch sheet of thermo-chromic material display.
We initially posted this on the Web Board and received some questions. Here are our responses to the questions:
"How many "heat pixels" are you thinking of having?"
Depending on how the heat from each element spreads over an area, this will vary. Initially we will start with roughly ~20 heating elements for our 6 inch by 6 inch display and test how well that works. Adjustments will be made to account for the resolution of the output shown on the display.
"How large are your pixels? "
Our pixels will be resistors that heat up when power is delivered to them. How much area is covered by an activated heating element will depend on testing.
"I've never dealt with this material before and I don't know how large your pixels are, but if you are talking about 86 degrees Celsius (you only said 86 degrees) I wonder if heat may "drift" from one pixel to the next. Is this something you have already thought about?"
Yes. The temperature will most likely be around 90 degrees Fahrenheit (this is something we will have to test; the thermo-chromic material we are using is sold at different activation temperatures, so we will find one that works in an ideal range). If one heating element affects an area that is supposed to be covered by a different element, we will space them out until we get the desired output.
|37||Smart Electric Toothpaste Dispenser
|David Hanley||Jing Jiang||design_document1.pdf
The majority of toothpaste dispensers on the market are manual. The electrical ones generally dispense the same amount of toothpaste each time, which cannot be easily adjusted. Also, these dispensers do not have any interaction with the users to let them know the time and the amount dispensed. Besides, the existing devices may not be battery powered or display the battery level.
# Solution Overview
The toothpaste dispenser we design will be powered by a battery and be fully automatic. It has two modes of operation: the user can dispense a preset quantity using the user interface, or simply push a button and dispense as much as they want. Also, the device uses a screen to interact with the user: showing the time, telling the weather, and showing the settings of the dispenser. An app will also be created for the device so that everything can be controlled from a smart terminal.
## Dispensing Mechanism and Drive
A small peristaltic pump will be used to get the toothpaste out of the package, and the amount can be accurately tracked by counting the number of rotations of the pump. Such a pump can achieve the relatively high precision we need and the high pressure required to dispense the toothpaste. Because most toothpaste tubes have a similar-size thread on their neck, a 3D-printed base will be designed to fit most packaging.
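As a rough illustration, here is a minimal Python sketch of converting a requested dose into pump rotations and encoder counts; the per-revolution volume and encoder resolution are placeholder assumptions that would be calibrated on the actual pump:

```python
# Minimal sketch of translating a requested dose into pump rotations and
# encoder counts. Both constants are placeholder values to be calibrated.

ML_PER_REVOLUTION = 0.4        # assumed peristaltic pump displacement
COUNTS_PER_REVOLUTION = 360    # assumed encoder resolution

def dose_to_counts(dose_ml):
    revolutions = dose_ml / ML_PER_REVOLUTION
    return round(revolutions * COUNTS_PER_REVOLUTION)

print(dose_to_counts(1.0))   # 1 mL -> 900 encoder counts
```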
## Data Collection and Analysis
We will collect data in two areas. One is related to battery usage, such as the battery level and the amount of time the battery can last, and the other is related to toothpaste usage, such as the time since the tube was first used and the amount of toothpaste dispensed so far.
## User Interface
We will design an android app which has a control page and an analysis page. The control page is for users to define the amount of toothpaste to be dispensed or simply press the start button and stop button on the app to control the dispensing process. The analysis page is for users to view the data related to battery and toothpaste usage.
# Criterion for Success
Our final product should be able to
Dispense toothpaste with push button
Record the amount dispensed and automatically dispense a preset amount
Display all the settings and the battery condition on the screen
Have all the setting parameters adjustable through the user interface
Record the amount and time of toothpaste usage and store the data. The data should be manageable at any time using the smart terminal.
Record video upon request by the user and store the video in an SD card or USB drive.
|38||Automatic Toothpaste Dispenser
|Soumithri Bala||Jing Jiang||design_document2.pdf
|Previous idea posts:
Most of the toothpaste dispensers on the market so far are operated manually. Some of them are electrical, but they still only support squeezing out a fixed amount of toothpaste each time the user pushes a button. There are other major problems with the existing toothpaste dispensers. For example, users are unable to set the amount of toothpaste being dispensed, and there are no supporting smartphone apps to collect the products’ metrics and visualize the data for users.
# Solution Overview
We propose to implement a new automatic toothpaste dispenser supported with a smartphone app, based on the PSoC 4 BLE board, sensor programming, RFID, and Android development. The dispenser will have the following distinct features from current products:
1. Multiple tubes of toothpaste can be put into the dispensers at the same time and the dispenser can choose which toothpaste to be dispensed by identifying the RFID sticker on the toothbrush.
2. Using RFID to identify users (in detail, we put different RFID tags on different toothbrushes, e.g., for kids or adults, and the machine can identify which one is using the dispenser and dispense the specific amount of toothpaste previously set in the app).
3. The user is able to set the amount of toothpaste dispensed for each RFID tag in the app.
4. Tracking the amount used by each user and visualizing the corresponding short-term(day/week) and long-term(month/year) usage on the smartphone app.
5. Powered by the external source (wall socket). Two components require electricity: the board and the motor.
# Squeezing Mechanism
We plan to implement this function by measuring the distance between the squeezing component and the head of the tube. As the squeezer is powered by a motor, we can use an encoder to monitor the rotation of the motor to achieve this. In our blueprint, the squeezer would be a cylindrical object rolling from the tail to the head of the toothpaste tube.
Apart from the squeezer, we would also use a "buffer" to control the amount of toothpaste dispensed. Basically, toothpaste coming out of the tube will first be stored in the buffer and then dispensed onto the toothbrush. The exit of the buffer is smaller than that of the toothpaste tube. Our squeezing mechanism does not need to control the amount of toothpaste coming out of the tube; we just need to set the mechanism to a safe value, and every time the buffer (reservoir) is empty, the mechanism will push some toothpaste into the buffer. We only need to control how much toothpaste comes out of the buffer.
# Sensor Subsystems
-Ultrasonic sensor for detecting toothbrush
-RFID attached to toothbrushes. Passive RFID reader embedded in the dispenser to determine which toothbrush is being used to hold toothpaste.
# Processing Subsystems
App/Dispenser interface: Embedded Bluetooth protocol.
Mobile Database for app: WCDB (mobile database framework)
# Power Subsystems
PSoC BLE board powered by USB.
# Criterion for Success
Our final product is an automatic toothpaste dispenser that is capable of holding more than one tube of toothpaste, identifying the RFID tag on the toothbrush, and dispensing a predetermined amount and type of toothpaste once the toothbrush is placed in the dispenser. We can use a smartphone to set the amount and type of toothpaste. We can also store and calculate each user’s toothpaste usage over various intervals, and parents may use this to monitor a child’s correct usage of toothpaste.
|39||Bird Box Project
Maria Nacenta Fernandez
|Christopher Horn||Arne Fliflet||design_document1.pdf
Communication amongst animals is incredibly valuable for us to observe and analyze. Research on these auditory behaviors and responses is still lacking for birds and would be extremely valuable for enhancing our current communication technologies.
#What Constitutes Success/Objectives
Create a functional “Bird Box” system that researchers can use and adjust to perform various conditioning trials with birds
A working project would be able to sustain one full trial with the bird after receiving sound inputs from the researcher and, at the end, output an Excel sheet and a sound file for future use
The system will record 4 different responses from the bird:
Target-response: The system will reward with food
Target-miss: The system will record but not respond
Sham-response: The system will punish with lights out
Sham-miss: The system will record but not respond
For the purpose of the conditioning process, sham audio sounds would be chosen as periods in which the background sound is repeated and projected as if it were a target sound. This is to prevent the bird from repeating
The system would record each individual instance and format it into an Excel document/spreadsheet for the researcher to analyze.
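As a rough illustration, here is a minimal Python sketch of how each trial instance could be classified and appended to the researcher's spreadsheet (as CSV); the field names and file path are placeholder assumptions:

```python
# Minimal sketch of classifying one trial outcome and logging it to CSV.
# Column names and the file path are placeholders for illustration only.
import csv
from datetime import datetime

def classify(stimulus, responded):
    """stimulus is 'target' or 'sham'; responded is True if the bird pecked."""
    if stimulus == "target":
        return "target-response" if responded else "target-miss"
    return "sham-response" if responded else "sham-miss"

def log_trial(path, stimulus, responded):
    outcome = classify(stimulus, responded)
    reward = outcome == "target-response"        # dispense food
    punish = outcome == "sham-response"          # lights out
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), stimulus, responded, outcome, reward, punish]
        )

log_trial("trials.csv", "target", True)
log_trial("trials.csv", "sham", True)
```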
#Modular Design of Project -- Minimum Viable Product
AC/DC Power converter
Power supply for all hardware components
##Food Dispensing Mechanism
Silence (so no audio from trial is playing)
Consistent food distribution
Timed to allow the bird to eat the entire amount instead of having to gulp everything and return to the trial
Design follows form of 2 part process
1) While the bird is trialing, the dispenser's main chamber fills the second / output chamber up to the required quota. If the bird succeeds, the 2nd chamber releases the allocated food.
2) Then the device re-allocates some volume of seed to the 2nd chamber before reclosing
Clock cycle for time
##Bird Response Mechanism
Lightweight buttons, a beak can only do so much
Different color for Bird to distinguish the purpose
Sends data to software interface
Has a timer installed to be able to alert for time-out feature (if the bird is idle for too long)
Wired to hub that sends data to software interface
##Audio Output/Speaker System
Adafruit speaker that Mike Suggested (https://www.adafruit.com/product/1314)
Receives .wav files to play from the software interface
##Camera for observation
Probably a camera add on
Needs to be able to capture live footage so that the researcher can observe during the trial
##Light for Box/Punishment mechanic
Basic LED light
Driven by Arduino circuit
Has clock to time out
##UI for Research/Parameter input
The interface would allow researchers to submit .wav files for the audio sounds that they choose to use for the trials. This submission will also perform calculations for the researcher to hone the specific time requirements for their test. To clarify, the general structure of a test consists of a repeated background sound. To perform conditioning, the sequence would deviate from the normal background; for a specific trial “interval”, the sequence would yield
##Outputs CSV/Excel data for Researcher
##Perch for Bird to sit on during experiment
|40||Hands Free Drink Mixer
|Channing Philbrick||Arne Fliflet||design_document1.pdf
We plan to make an automated drink mixer system that mixes drinks based on pre-loaded recipes. The device would have preselected drink supplies and use pumps to make various drinks. This automation will help solve long lines at bars.
The user would scan a RFID card on a RFID sensor to provide identity, then use a controller to navigate through the mixed drinks on LCD screen and select one. The recipes will be pre-loaded onto microcontroller memory as a file. Through this user interface, our microcontroller will store who ordered which drink and keep a tab. A solo cup would be positioned on a rotating disc (using a step motor) below pouring nozzles evenly spaced out in a circular formation. Peristaltic pumps would pour out the right amount into the cup as required by the recipe. When different liquids need to be poured, the disk will rotate and position the cup below the needed liquid. The positioning will be precise through using a stepper motor, preventing movement of the disc beyond desired angular movement. The pouring will be done with high precision through using flow gauge sensors. Additional features include a warning system to identify which ingredients are getting low (using weight sensors and LEDs), sensors to verify that there is a cup in place to avoid spills, and wifi/bluetooth enabled. While using water may be a concern with electronics, our implementation does not use high pressure (minimizing potential sprays) and can be tested/demoed keeping water components a safe distance away. The machine shop confirmed our mechanical structure is easy to build.
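As a rough illustration of the pouring sequence described above, here is a minimal Python sketch; the recipe format, station spacing, and hardware functions are placeholder assumptions, with the stepper and flow-gauge details stubbed out:

```python
# Minimal sketch of the dispensing sequence: rotate the cup under each nozzle,
# then run that pump until the flow gauge reports the recipe amount.
# Recipe format, station angles, and hardware calls are placeholders.

STATION_ANGLE_DEG = 45     # nozzles evenly spaced in a circle (8 stations)

recipes = {
    "rum_and_coke": [("rum", 45), ("cola", 120)],   # (ingredient, mL)
}

def rotate_to(station_index):            # stepper command, stubbed
    print(f"rotate disc to {station_index * STATION_ANGLE_DEG} degrees")

def pump_until(ingredient, target_ml):   # pump + flow gauge loop, stubbed
    print(f"pump {target_ml} mL of {ingredient}")

def make_drink(name, stations):
    """stations maps an ingredient name to its nozzle index."""
    for ingredient, amount_ml in recipes[name]:
        rotate_to(stations[ingredient])
        pump_until(ingredient, amount_ml)

make_drink("rum_and_coke", {"rum": 0, "cola": 3})
```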
The Complexities of the project:
Pouring Mechanism with the sensors
RFID Tab Mechanism
|41||Water Contamination Detection and Alerting System for a Boat
|David Null||Jing Jiang||design_document1.pdf
|Team Members: Nelson Lao (nlao2), Junik Kim (jkim664), Samuel Hung (shung5)
Title: Water Contamination Detection and Alerting System for a Boat
Problem: This project was a pitch from the Center of Environment Restoration and Sustainable Energy. The three main functions are Data Collection & GPS, Data Transfer, and Alarm.
In order to document water contamination, a person currently has to manually boat to the affected area and take notes. The drawbacks of this method include safety concerns, lower efficiency, and lack of timeliness.
Our solution: Build a system that can remotely monitor water quality and contamination on a boat. This system will be able to automatically log data at distance intervals, determine if water is contaminated, and if it is, send a text message alert with GPS location coordinates, and sensor data. The data will be stored on an SD card and can be transferred wirelessly to a mobile device to upload to a Cloud server upon docking the boat.
- Atmega 2560 microcontroller for interfacing with sensors, SD card, GPS module, bluetooth module. Could switch to other microcontrollers or FPGA if more inputs/outputs are needed.
- Calculates distances and logs water sensor measurements at distance intervals w/ location information onto an SDcard.
- Determines if water contamination is detected.
- Takes picture of site of contamination (optional)
- Factors we want to get data on: Water hardness, pollution levels, chemical leakage in rivers.
- Main water quality sensors: pH, dissolved oxygen, water temperature, conductivity (salinity)
- Additional sensors we could incorporate: turbidity, Calcium (Ca+)/ Magnesium (Mg2+) concentrations, nitrates.
- Extra feature: Camera for image capture (optional)
Processing and Communications Subsystem:
- Bluetooth is used to connect to our system in order to transfer logged data from SDcard onto smartphone app for upload to server when boat is docked.
- Cellular GSM is used to send an SMS text alert with GPS location data if contamination is detected (see the sketch after this list).
- Our system will be powered off the boat’s 12V battery. This requires a 12V-5V voltage regulator in order to power our system and sensors.
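Here is the sketch referenced above: a minimal Python illustration of the contamination check and alert message. The thresholds and message format are placeholder assumptions; real limits would come from CERSE's water-quality requirements.

```python
# Minimal sketch of the contamination check and alert message. Thresholds and
# the GPS coordinates are placeholder assumptions; the real alert would go out
# over the cellular GSM module.

LIMITS = {"ph_low": 6.5, "ph_high": 8.5, "do_mg_l_min": 5.0, "cond_us_cm_max": 1500}

def contaminated(sample):
    return (sample["ph"] < LIMITS["ph_low"] or sample["ph"] > LIMITS["ph_high"]
            or sample["do_mg_l"] < LIMITS["do_mg_l_min"]
            or sample["cond_us_cm"] > LIMITS["cond_us_cm_max"])

def build_alert(sample, lat, lon):
    return (f"Contamination detected at ({lat:.5f}, {lon:.5f}): "
            f"pH={sample['ph']}, DO={sample['do_mg_l']} mg/L, "
            f"cond={sample['cond_us_cm']} uS/cm")

sample = {"ph": 9.1, "do_mg_l": 4.2, "cond_us_cm": 900}
if contaminated(sample):
    print(build_alert(sample, 40.11060, -88.20730))   # would be sent as an SMS
```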
Criterion for success:
- System is able to log water quality measurements.
- System is able to easily add on to a boat.
- System is able to send a text message alert when water contamination is detected.
- Optional: taking image using camera of contamination site.
Existing solutions on market:
- The Libelium Smart Water wireless platform uses cellular (3G, GPRS, WCDMA) to send data to the Cloud but does not provide an instant text-message alert like the system we propose. It also does not take an image of the contamination site, and it is best for monitoring a specific site, not for monitoring changing/multiple sites, which our proposed system does with GPS location.
We have contacted CERSE and are waiting for their feedback on our proposal.
||Ching Chieh Yang
|Dongwei Shi||Michael Oelze||design_document2.pdf
|Group: Taha Anwar (tanwar2), Junnun Safoan (safoan2), Ching Chieh Yang (cyang87)
This is a sponsored project by Petronics, a company that builds Mousr, a mouse robot that plays with cats. We aim to create an IMU-based cat collar that measures, transmits, and analyzes the cat's activity when playing with a Mousr robot. When it is detected that the cat is playing with Mousr, a camera is turned on to monitor the cat.
Cat owners do not always have time to play with their cats. The Mousr is a clever solution that accompanies your cat in your absence. However, Petronics still doesn’t have a way to check if there is a direct correlation between a cat’s overall activity and Mousr’s activity.
The Mousr unit developed by Petronics is already able to make event predictions such as ‘inactive’,’engaged’,‘needs charging’ and so on. The motivation for the cat collar is to use the data collected from it to confirm the event predictions by the Mousr in order to assess the cat’s actual engagement with Mousr. Furthermore, the collar will wake up the Mousr when the cat is within a certain vicinity. This will be instrumental for Petronics, as it will allow them to measure the effectiveness of Mousr in engaging with the cat.
The collar tracks the motion of the cat using IMU sensors. The data from the sensors is compressed from 120-200 Hz to 1 Hz using signal processing for efficient data acquisition. We will develop algorithms to determine whether the cat is inactive, walking, or engaged with Mousr using the data collected. In engaged mode, the Raspberry Pi turns on a camera to record the cat's activity, which serves to verify the results. Mousr also sends events to the Raspberry Pi, which helps us find out how the Mousr events correlate with the cat's activity. Examples of Mousr events are running in circles, zigzagging, being stuck in a corner, resting, etc.
We will be using an ESP32, a low-power system-on-chip series with Wi-Fi and Bluetooth capabilities, for communication with the Mousr as well as for uploading data from the collar over Wi-Fi. We will use the camera on the Raspberry Pi to record video data, which will serve as ground truth for the cat's activity. We will use the accelerometers and gyroscopes in the IMU to track the motion of the cat. The power supply unit would use batteries small enough to be easily integrated with a traditional cat collar and able to last 6-8 hours without charging. Therefore, the main components included in the cat collar are the ESP32, IMU, charging/power circuit, status LEDs, and other features we may add in the future.
We will design an algorithm that takes in IMU data from the collar of the cat and Mousr events as inputs to generate an output determining whether the cat is inactive, walking, engaging, etc.
Mousr events will be coming in from Python Flask framework and API that the Petronics team has built, while acceleration timestamp values come in as raw data from IMU sensors from the collar.
It is expected that inactivity will be the easiest to predict, since the IMU readings will be mostly constant. We can determine if the cat is walking by checking for spikes in the z-axis and y-axis accelerometer readings. Engagement is expected to cause rapid changes in the accelerometer readings; however, it will be a challenge to distinguish different motions during play. When engagement mode is detected, the camera turns on to record the cat's activity.
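A rough illustration of the windowed compression and a first-pass rule-based classifier. The thresholds below are invented placeholders that would have to be tuned against the camera ground truth; this is a sketch of the idea, not the algorithm the team will necessarily ship.

```python
import numpy as np

# Hypothetical thresholds (in g) - to be tuned against the recorded video ground truth.
WALK_STD, ENGAGED_STD = 0.05, 0.3


def summarize_window(accel_xyz):
    """Compress one second of raw accelerometer samples (N x 3 array) to one feature row."""
    accel_xyz = np.asarray(accel_xyz)
    return {
        "mean": accel_xyz.mean(axis=0),
        "std": accel_xyz.std(axis=0),
    }


def classify(features):
    """Rule-based first pass: inactive / walking / engaged."""
    std = features["std"]
    if std.max() < WALK_STD:
        return "inactive"        # readings mostly constant
    if std.max() < ENGAGED_STD:
        return "walking"         # moderate, periodic spikes on y/z
    return "engaged"             # rapid, large changes during play
```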
Our final representation will involve either logging the results to a CSV file or developing a front end that displays a simple chart with percentages of the cat's daily activities as well as correlations between cat activities and Mousr events.
Criteria For Success:
Our criterion of success is to design a collar that can accurately predict when the cat is inactive/sleeping, walking and engaged with the Mousr unit.
Reach goals would be to delve deeper into the activity of engagement through experimentation and provide more details about its motions during play.
|43||Gait Controlled Treadmill
|Anthony Caton||Michael Oelze||design_document1.pdf
|Members: pruiett2, (Jacob Pruiett), cllewis2 (Charles Leonard Lewis IV)
When I was at the gym, I couldn't help but notice how many people hold on to the rails of their chosen treadmill. It's common for gym patrons to feel a sense of insecurity due to the inherent nature of a machine dictating the speed of your stride. Even with the added protective features of a safety clip (used to break the circuit connection to the belt), several stop buttons, and handrails, there is always the fear of losing one's balance without being able to reach one of these features (as a quick YouTube search will reveal a plethora of treadmill mishap compilations). Furthermore, the lack of natural speed control takes a runner out of the moment by forcing them to physically reach out to a console and make changes. Even manual treadmills lack the seamless ability to transition between speeds because of the unnatural force application required to accelerate the belt; generally speaking, very light or very heavy people tend to run into complications when it comes to manually governing the belt speed.
I propose a hybrid between an automated and manual treadmill. This gait controlled treadmill would only require the user to begin walking before an on board automated system would match their speed. This would eliminate the need to apply excess force on the tread (as dictated by a purely manual treadmill) as well as the predetermination of setting a gait speed by way of a console (as in an automated treadmill).
Forecasting a considerable design challenge would be the smooth transitioning between velocities as the user changes speed. Designing a system to manage speed control may require a relatively sophisticated understanding of control theory; unless other considerations can be made.
As an added note, the previous iteration of this proposal required force and pressure sensors to detect changes in velocity and acceleration. Because those sensors have been replaced with what we suggest here, we can feasibly scale the model up or down: rather than a full-scale, human-sized treadmill, we will use something akin to a tabletop-sized model and use an RC vehicle as the test "volunteer". This eliminates the need for a human test subject and thus any associated safety concerns. In addition, it will be much easier to transport than a normal-sized treadmill and will take up much less lab space, leaving other students more room to work on their own projects. In summary, we have decided to downscale for ease of testing, for safety, and in consideration of our peers.
MOTOR CONTROL SYSTEM:
Powering the treadmill will be a small motor driven from a wall outlet through an adapter. We will then use an ATmega328 microcontroller to control the motor based on sensory input. The motor speed will be based on how fast the user on the treadmill is moving relative to the belt, and on their position relative to the center of the belt. If a user is slowing down, and thus moving slower than the belt itself, the belt will slow down to accommodate them, and it will likewise speed up if a user is moving faster than the belt. The biggest problem we face here is making sure the belt can accelerate to match a user's pace quickly, while also making the transition smooth enough that it does not cause a sudden jolt that makes the user lose balance and possibly become injured.
The user's desired acceleration/deceleration and velocity would be dictated by the change in gait speed. As the user begins a forward stride or begins to lose forward momentum, the electronic feedback PI or PID control system will detect the change in relative position and velocity. Originally we had wanted to use force input from the treadmill to gather data on the user, but after getting advice from TAs and the Mechanical Engineering department, we decided to opt for distance- and velocity-based sensory input. For the sensor itself, we want to experiment with both laser (mainly lidar) and ultrasonic sensors, testing multiple configurations and seeing which gives the best result. The reason we want to try multiple configurations is that, if mounted from the front, the sensor may not be able to gather accurate data due to the user's arms swaying in front of them, but it cannot be mounted directly behind the user for safety reasons; so we will test multiple sensors from different angles and compare detection performance against the cost of a given configuration. One such setup may include multiple sensors lining the bottom of the treadmill (near the feet of the user) that would store multiple readings of position and velocity relative to one another, thereby picking up when the user is advancing (speeding up) or lagging (slowing down).
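As one possible concrete form of that feedback loop, here is a small PI controller sketch that maps the user's position error (relative to the belt center) to a belt-speed command, with a slew-rate limit to avoid jolts. The gains and limits are placeholder values that would be tuned on the scaled-down RC-car rig, not a finished control design.

```python
class BeltSpeedPI:
    """PI controller sketch: keeps the runner near the center of the belt.

    Gains and the acceleration limit are placeholders to be tuned on the
    scaled-down test rig.
    """

    def __init__(self, kp=0.8, ki=0.2, max_accel=0.5):
        self.kp, self.ki, self.max_accel = kp, ki, max_accel
        self.integral = 0.0
        self.belt_speed = 0.0          # m/s

    def update(self, position_error_m, dt):
        # position_error_m > 0 means the user has drifted toward the front of the
        # belt (moving faster than the belt), so the belt should speed up.
        self.integral += position_error_m * dt
        target = self.kp * position_error_m + self.ki * self.integral

        # Slew-rate limit so speed changes never jolt the user.
        max_step = self.max_accel * dt
        step = max(-max_step, min(max_step, target - self.belt_speed))
        self.belt_speed += step
        return self.belt_speed         # command handed to the motor driver (e.g. PWM duty)
```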
CRITERIA FOR SUCCESS:
In order to determine that our design is successful, it will need to effectively determine the position and velocity of our RC car subject relative to the treadmill, then use that data in conjunction with the treadmill's own velocity data to make the belt move faster or slower depending on the acceleration and velocity of the user. Our control system will need to do this smoothly enough to keep the treadmill safe, but fast enough to keep pace with the user.
Reach Goals: The weight of the user can be acquired if we include a force sensor on the static side rails (surrounding tread), which can then be used to determine calories burned during a workout. In addition we will be using the velocity data from the motor in order to determine the speed of the treadmill.
Link to idea post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31807
Note: We are still looking for a third person to assist with this project. Perhaps a person with some circuit control and/or power experience; but, we're open to other experiences. email@example.com
|44||POWER-SAVING MODULAR LIGHT CONTROL SYSTEM FOR EXISTING INFRASTRUCTURE
|Amr Martini||Arne Fliflet||design_document2.pdf
Ibrahim Odeh (irodeh2)
Konrad Woo (kwoo3)
Rohan Tikmany (tikmany2)
Problem: While modern systems do exist to control lights through motion sensors for the sake of power saving, they require a total overhaul of the existing lighting infrastructure. This can be inconvenient and prohibitively expensive.
Solution: We plan to create an inexpensive modular system that can be attached to existing lighting infrastructure that will use motion detection to control the lights.
Uniqueness: Our system would require very little setup and no expertise on the technologies. Ideally, the user would simply tack on sensors to the ceiling and controllers onto the light switches and plug in a central hub.
Our system consists of 3 primary components.
1. Sensor boards that can be tacked to the ceiling to detect motion and wirelessly communicate with the hub. These sensor boards will be identifiable in groups so that the hub will be able to differentiate what parts of the room are occupied.
2. A hub that will communicate with the sensor boards and be able to control individual switches based on the received sensor data.
3. Physical hardware that will flip a light switch using a servo motor based on wireless signals from the hub.
The switch system will consist of a mini servo which will control an actuator and toggle the switch. There will be a button to manually operate the mechanism and the switch itself will still be operational. Each switch assembly contains an RF transceiver to communicate with the hub and an Atmega 328 for processing.
The sensors will utilize PIR to detect movement. They will also contain an ATmega for processing logic and an RF transceiver for communicating with the hub. The RF and PIR are expected to consume low enough power that a battery cell will suffice. The sensor boards will be preassigned unique identifiers so that the hub can differentiate which areas of a room are occupied, which is useful for larger rooms with several switches controlling several zones of lighting.
The central hub will be the brains of it all, containing the same RF transceiver to communicate with the sensors and the switch systems, and an ATmega processor for the corresponding control logic. The hub will be powered by a larger 9V battery, which should help longevity. We will use appropriate DC-DC converters to power the ATmega at the correct voltage.
Whenever a person enters a room, the sensors will communicate to the hub that they have detected motion, and the hub will command the appropriate switch mechanism to turn on the lights. After a certain timeout without motion, the hub will command the switch mechanism to turn off the lights (sketched below).
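A minimal sketch of that hub logic, assuming a 10-minute timeout and a placeholder send_switch_cmd interface for the RF link; the real code would run on the hub's ATmega.

```python
import time

TIMEOUT_S = 10 * 60            # assumed 10-minute no-motion timeout

last_motion = {}               # sensor group id -> last time motion was reported
light_on = {}                  # switch id -> current commanded state


def on_motion_report(group_id, switch_id, send_switch_cmd):
    """Called when a sensor board reports motion over RF."""
    last_motion[group_id] = time.time()
    if not light_on.get(switch_id):
        send_switch_cmd(switch_id, on=True)
        light_on[switch_id] = True


def poll_timeouts(zone_map, send_switch_cmd):
    """Run periodically on the hub: turn off any zone with no recent motion."""
    now = time.time()
    for group_id, switch_id in zone_map.items():
        idle = now - last_motion.get(group_id, 0)
        if light_on.get(switch_id) and idle > TIMEOUT_S:
            send_switch_cmd(switch_id, on=False)
            light_on[switch_id] = False
```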
|46||Low-Cost Head-Tracking Headphones
|Zhen Qin||Arne Fliflet||design_document3.pdf
The goal of this project is to design and build a small device capable of attaching to and augmenting an existing pair of over-the-ear headphones, in order to give the wearer the ability to track the orientation of the head in real time. Broadly, the goal of this project stems from the fact that normal headphones do not track head orientation; when the listener rotates their head in real-life, sensory input to the brain changes, and a perceptual experience of space occurs. When listening to music on normal headphones while rotating the head, no such perceptual experience occurs. The ability to track head orientation with headphones opens new possibilities for experiencing customized sounds in a more immersive, exciting, and realistic way.
This project idea comes from Professor Eli Fieldsteel of the music department, who would like to compose ambisonic musical work using electronic sounds.
Project Uniqueness and Hardware Complexity:
It is acknowledged that technology for head orientation-tracking headphones is fairly well developed already, as exemplified by various VR hardware-software paradigms. The goal of this project is to create a simplified, budget-friendly, non-software-specific version that others with non-specialized skills can replicate by following a simple article/manual that will follow the project.
Professor Fieldsteel’s Comments: “Appreciating that other interested members of the DIY community may not have access to the same resources as ECE 445 students, it is desirable to produce two versions of the augmented headphones: one with substantial circuitry-building work that is small and somewhat specialized with PCB and soldered connections, and another version that relies more heavily on pre-build components that are affordably available through commercial sites, as a way of making this technology more broadly available to the novice electronic music composition community.”
Ambisonic sound refers to a mathematical framework for handling true three-dimensional sound placement and positioning. The composer will be using the Ambisonic Tool Kit (ATK) running in the SuperCollider audio programming language (http://www.ambisonictoolkit.net/). Many of the generators, spatializers, and other software tools in the ATK rely on angle values in radians in order to modify the orientation of a three-dimensional sound field. In particular, the FoaRTT object (http://doc.sccode.org/Classes/FoaRTT.html) will be a focal point in creating the musical composition. The data output by the device should align with the requirements of FoaRTT and other Ambisonic UGens, if possible, e.g. providing three values for rotational angles about the x, y, and z axes.
The augmented headphones should output data at an appropriately high refresh rate, at least 30 Hz, preferably 60 Hz or even higher. The device should also be able to be calibrated so that an arbitrary angle within the horizontal plane is considered "front-facing", perhaps via a small button. The professor's current prototype can track azimuth angle in the horizontal plane with no means of calibration; the magnetometer uses compass north as an absolute reference point. Using the accelerometer to track elevation angle is a goal of the project.
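A small sketch of how the calibration button and angle output could work, assuming the sensor driver already supplies yaw/pitch/roll in radians. The mapping onto FoaRTT-style rotate/tilt/tumble angles shown here is an assumption to be checked against the ATK documentation.

```python
import math


class HeadTracker:
    """Wraps raw IMU/magnetometer angles and applies a 'front-facing' calibration offset.

    read_raw_angles() is a placeholder for whatever the sensor driver provides;
    angles are assumed to already be in radians.
    """

    def __init__(self, read_raw_angles):
        self.read_raw_angles = read_raw_angles
        self.yaw_offset = 0.0

    def calibrate(self):
        """Called when the user presses the button while facing the desired 'front'."""
        yaw, _, _ = self.read_raw_angles()
        self.yaw_offset = yaw

    def read(self):
        """Return three rotation angles in radians, with yaw re-zeroed to the calibration."""
        yaw, pitch, roll = self.read_raw_angles()
        rotate = math.atan2(math.sin(yaw - self.yaw_offset),
                            math.cos(yaw - self.yaw_offset))   # wrap to [-pi, pi]
        return rotate, pitch, roll
```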
For reference, “Audeze Mobius 3D Headphones” are one example of high-end, software-specific orientation-tracking headphones (https://www.waves.com/hardware/audeze-mobius-3d-headphones-360-ambisonics-tools) which the project seeks to re-engineer, though with great emphasis on budget-friendliness, DIY-friendliness, non-specificity of receiving software, and simplification of output data streams.
|47||CONNECTED PIEZO ELECTRIC PRESSURE SENSING SHOE
|Kyle Michal||Arne Fliflet||design_document1.pdf
|Team members: Alan Lee (alanlee2), Gerald Kozel (gjkozel2)
General Description: We want to make a piezoelectric pressure-sensing shoe. This has applications in athletic training as well as patient monitoring for specific orthopedic conditions. We're planning on making it an insert that visualizes a user's distribution of pressure across the foot. We will know it's successful if a person can view a pressure map of their foot over a specific period of time and perhaps have the app give some feedback (unequal distribution of pressure indicating poor posture or possible risk of diabetes or other conditions). Additionally, the piezoelectric system would have a backup power source, since our research indicates that it would take a lot of walking to power the entire insert. The piezoelectric power could be stored and used in addition to our secondary power source. We hope this can reduce the bulkiness of our insole.
Solution: The shoes will be powered by piezoelectric generators. The insert will include pressure and flex sensors to track how the user walks and what pressure the user is putting on which parts of the foot. Our solution will also include a mobile app interface for the user to track their walking or running technique and analyze what needs to change. For comfort, we would use very thin/flexible pressure sensors such as this one: https://www.adafruit.com/product/1075?gclid=EAIaIQobChMItpe4uv-O4AIVDdbACh17Ggm2EAQYASABEgI6jPD_BwE
These would be embedded within a soft insole along with the piezo as to minimize the effect on our users’ walking habits. We’d also use the smallest batteries possible to keep it lightweight.
The Complexities of the project:
Sensors in the shoe - flex, pressure
Power in the shoe - piezo electric crystals
Communications - bluetooth/wifi in the shoes to connect to the phone
Mobile app - app on a phone to monitor the walker's technique.
Design Considerations: The shoe's communication and sensors will need sufficient power from either the piezo or additional electrical components. In addition, we must make the insert comfortable and light enough that the user experience does not diminish. We also must take into account the mobile app, its communication features, and its UI. We hope to have real-time communication while users are walking in the shoes so we can track their dynamic foot pressure. Such a feature will be impacted by how much power the piezo can supply and by what other design choices we make so that all subsystems are powered correctly.
Foot pressure sensing
Foot flex sensing
Piezo power system
Battery power system
Shoe communication system
|48||Assistive Shogi Board
|David Hanley||Casey Smith||design_document1.pdf
|Shogi is a complex game that is hard for beginners to play without violating the rules. To make the game easier to learn, we want to design an assistive shogi board that can show players which moves with a given piece are valid.
The idea is that when a player lifts a piece, certain positions on the board will light up based on where that piece can move given the current board configuration. We will be using 81 photoresistors and 81 LEDs, one at each position on the shogi board. We will have the board layout initialized in the memory of a Raspberry Pi (probably a Pi 3 or Pi Zero) that interfaces with an array of four Atmel ATmega328P chips (each with 5 analog pins; since we need a total of 81 analog inputs, we will make use of ADC multiplexing as explained here: https://forum.arduino.cc/index.php?topic=397456.0) to read data from the photoresistors and light the respective LEDs.
On top of that, there are special rules that need to be accounted for, like piece drops/replacement; our plan is to have a touch screen interface with the Raspberry Pi to help account for such specific rules. We hope to have a custom board made at the machine shop if possible as well. We believe this board would be a great way for people inexperienced in shogi to learn the game. To account for bumping, users can check the state of the board on the touch screen and correct the game state through the interface if need be.
As a reach goal, we will have positions light up in a different color when a move would result in a "capture" of another shogi piece.
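To illustrate the valid-move computation, here is a sketch for two piece types (gold general and lance) on a 9x9 board; the full rule table, promotions, and drops would extend this, and set_led stands in for whatever LED-driving protocol the ATmega array ends up exposing.

```python
# Board: 9x9 list of lists, None for empty or (owner, piece) tuples.
# "Forward" for the player shown here is assumed to be decreasing row index.

GOLD_STEPS = [(-1, 0), (-1, -1), (-1, 1), (0, -1), (0, 1), (1, 0)]


def on_board(r, c):
    return 0 <= r < 9 and 0 <= c < 9


def gold_moves(board, r, c, owner):
    """Single-step moves of a gold general: any square not occupied by a friendly piece."""
    dests = []
    for dr, dc in GOLD_STEPS:
        nr, nc = r + dr, c + dc
        if on_board(nr, nc) and (board[nr][nc] is None or board[nr][nc][0] != owner):
            dests.append((nr, nc))
    return dests


def lance_moves(board, r, c, owner):
    """Lance slides straight forward over empty squares and may capture the first enemy hit."""
    dests = []
    nr = r - 1
    while on_board(nr, c) and board[nr][c] is None:
        dests.append((nr, c))
        nr -= 1
    if on_board(nr, c) and board[nr][c][0] != owner:
        dests.append((nr, c))
    return dests


def light_destinations(dests, set_led):
    """Drive the LED matrix; set_led(row, col, on) is a placeholder for the real protocol."""
    for r, c in dests:
        set_led(r, c, True)
```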
|49||Automated Boba Station
|John Kan||Michael Oelze||design_document1.pdf
Boba, a drink popular among millennials, has prices that are still largely dictated by the manual labor involved in making it, so shops still require many employees. Unlike coffee, making boba tea requires handling both solids (boba, etc.) and liquids (tea, syrup, milk). With the large variety of recipes, human workers are prone to mistakes. Finally, taste consistency is hard to achieve without an automated solution, sometimes leaving drinks too sweet.
## Solution Overview
This automated boba station would have multiple dispensers connected to tubes that would all drip into a cup. For simplicity, we would start off with 3 liquid dispensers, one for the tea (cold), one for the milk (cold), and one for sugar syrup (cold), all controlled by a microcontroller (maybe a Raspberry Pi). We would also have an additional dispenser that dispenses the boba (solid). This part is a little tricky because boba needs to be bathed in water and only retrieved when serving.
The software portion will include a Web UI that connects to the microcontroller to control the amount of each ingredient/the amount of time the valve is open for each dispenser. We can also record exactly how much of each ingredient is used throughout the day and, in the future, prevent overbuying ingredients and causing food waste.
The goal is to make these dispensers modular to allow for permutations of drinks using different tea and topping and other ingredients by just adding more dispensers. However, for our proof of concept, we’ll be making just 1 drink.
## Cup Platform
We’ve decided against a moving platform mentioned on the webboard due to the extra complications and cost. Instead, we’ll use a “gutter” system to let the ingredients flow/roll into a stationary position/platform. Software will be adjusted to account for delays from valve to pressure sensor.
We would have a pressure sensor built into the platform to measure liquids by the weight in the cup. This will measure the weight of the cup and will be synchronized with the dispensers to ensure the correct amount is dispensed.
## Liquid Dispensing Mechanism.
A solenoid valve will be used in conjunction with the pressure sensor in the platform. Once a certain weight is reached, the solenoid valve shuts off. A microcontroller will be used to control this flow. We would also time the release of liquid as an emergency shut-off in case the pressure sensor is not working properly.
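A small sketch of that weight-target dispensing loop with the timed emergency shut-off; read_weight_g, set_valve, and the timing constants are placeholders for the real sensor and valve-driver interfaces.

```python
import time


def dispense_liquid(read_weight_g, set_valve, target_g, max_seconds):
    """Open the solenoid valve until the scale gains target_g, or bail out on timeout.

    read_weight_g() and set_valve(open) stand in for the real pressure-sensor
    and valve-driver interfaces.
    """
    start_weight = read_weight_g()
    set_valve(open=True)
    start_time = time.time()
    try:
        while read_weight_g() - start_weight < target_g:
            if time.time() - start_time > max_seconds:
                # Emergency shut-off: the pressure sensor may have failed.
                return False
            time.sleep(0.05)
        return True
    finally:
        set_valve(open=False)   # the valve always closes, success or not
```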
## Boba Dispensing Mechanism
The hard part will be dispensing the boba (solid), because boba must be held in water to prevent it from drying out. To solve this, the boba will be held in a funnel with a bottom made of both mesh and a solid material.
To dispense the boba, we first need to slide the solid material out of the way, via a servo, to drain the water. Then we can slide the mesh layer out of the way to allow the boba itself to drop.
We would measure the amount of boba using pressure sensors in the platform. After both layers have been closed, we need to then resoak the boba, perhaps using the previous liquid dispensing mechanism.
However, we are open to other ideas and will keep exploring solutions. This will perhaps be the most time-consuming and complicated part.
Funnel \ /
Mesh layer - - -
Filled in layer ___
## Criterion for Success
Our final product should be able to:
Dispense 2-3 different liquids with a preset amount
Have parameters adjusted with a web user interface
Record and store the amount of liquids/boba dispensed, the time of usage.
## Previous Idea Posts
|50||Indoor noise monitor
|Zhen Qin||Casey Smith||design_document1.pdf
|Team Members: William Xu (wxu39), Ziyao Li (ziyaoli2), Quan Liu (quanliu2)
Project Name: Indoor noise monitor
Previous idea post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31942
Maintaining acceptable noise levels in crowded areas such as apartments has always been an issue. Some people like to live in quiet environments and are easily disturbed by their neighbors. However, most apartments don’t control the noise well. We need a product for landlords to maintain a quiet environment so that tenants will have a good living experience.
We want to create a device that can detect noise levels inside the apartment and will first give warnings (alarm and blinking light) that the guest/tenant is being too noisy, and eventually a notification to the device's administrator. This device could be used by people such as landlords who want to ensure a quiet living environment. The device will have adjustable thresholds, such as an average of 90 dB lasting for 1 minute during the daytime. Our device will differ from others because of its many adjustable parameters, such as time of day and sound threshold, as well as an advanced warning system.
Our device will have 5 different subsystems.
The first one is the noise sensor (e.g. an omnidirectional microphone).
The second one is the warning subsystem (e.g. a microcontroller that takes in the noise level in decibels, finds the average noise level over a certain adjustable period of time, and determines whether to give a warning based on the time of day and other parameters; it will also determine when to alert the landlord after a certain number of warnings, as sketched below).
The third one is the subsystem that will alert the people indoors (sound alarms or lights). The fourth one is the device that will be able to connect to a cellphone using Bluetooth (reach goal: a wireless network or something similar to send a message over a longer distance) to alert the administrator after several warnings.
The fifth one is the power subsystem that supports the energy use of the whole device.
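A rough sketch of the warning subsystem's logic, using example values (90 dB daytime / 70 dB nighttime limits, a 60-second averaging window, and an alert after 3 warnings) that would all be adjustable in the real device.

```python
from collections import deque
from datetime import datetime


class NoiseMonitor:
    """Rolling-average warning logic. All thresholds and durations are example values."""

    def __init__(self, window_s=60, day_limit_db=90, night_limit_db=70,
                 warnings_before_alert=3, sample_hz=1):
        self.samples = deque(maxlen=window_s * sample_hz)
        self.day_limit_db = day_limit_db
        self.night_limit_db = night_limit_db
        self.warnings_before_alert = warnings_before_alert
        self.warning_count = 0

    def current_limit(self, now=None):
        hour = (now or datetime.now()).hour
        return self.day_limit_db if 8 <= hour < 22 else self.night_limit_db

    def add_sample(self, db, warn, alert_admin):
        """warn() drives the local alarm/light; alert_admin() sends the Bluetooth message."""
        self.samples.append(db)
        if len(self.samples) < self.samples.maxlen:
            return                                   # not enough data for a full window yet
        if sum(self.samples) / len(self.samples) > self.current_limit():
            warn()
            self.samples.clear()
            self.warning_count += 1
            if self.warning_count >= self.warnings_before_alert:
                alert_admin()
                self.warning_count = 0
```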
|Amr Martini||Casey Smith||design_document2.pdf
|A backpack with NFC/RFID sensors embedded in each pocket. The sensors will be used to scan items when they are placed into and removed from the backpack. There will be tags on certain items, such as books, notebooks, textbooks, laptops, etc., but not on smaller items like pens or pencils. As a tagged item goes into the backpack, the scanner will recognize the tag and log it in the mobile app. Taking things in and out of the backpack is the only time the sensor needs to pick up on an item, so there is no need for the scanner to distinguish items in close proximity, as they will already be in the backpack. For untagged items, the user can choose to manually log them in the mobile app if they want. As it gets close to an event on the user's calendar, the backpack will tell you if any items for that event are missing, so as to remind you to be prepared. It will also keep track of items on a per-pocket basis, because as more things are added to a backpack, the messier it gets and the more difficult it is to find things. This information will be presented in a mobile app. The app will also be used to designate which items are needed for each calendar event and for other features like valuable-item monitoring. Whenever the user moves from their current location, the app will verify that the item is in the backpack, so users will not forget important items like a laptop.
This backpack would be a standard backpack (smaller than a hiking backpack, similar to the ones students use). As a result, the battery would ideally need to last about a day. The only power needed in the backpack is for the scanner, so the battery would not add too much weight for the user. Additionally, the scanner would only scan if enough light is detected, because the scanner should only be scanning when the backpack is open (the scanner should do nothing if the backpack is closed, since all the items should already be logged in the app).
For the mobile app, the user will need to match each tag the sensor sees to an item beforehand.
This project is good for senior design as it incorporates both circuits and software engineering. The sensor array will require orchestration and communication with the mobile app, and the app will effectively present its information to the user. As far as we have seen, there are no products that exist with these capabilities. Most “smart” backpacks focus on battery packs or laptop security.
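As a small illustration of the calendar-check feature, the app-side comparison could be as simple as a set difference between the event's required tags and the tags currently logged in the bag; the names and data layout below are assumptions, not the actual app design.

```python
def missing_items(event_items, backpack_log):
    """Return the items required for an event that are not currently in the bag.

    backpack_log maps tag id -> pocket, built up from the NFC/RFID scan events;
    event_items is the set of tag ids the user attached to the calendar event.
    """
    return sorted(set(event_items) - set(backpack_log))


# Example: the app would run this check as the next calendar event approaches.
log = {"textbook-445": "main pocket", "laptop": "sleeve"}
print(missing_items({"textbook-445", "laptop", "charger"}, log))  # -> ['charger']
```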
YoungJoo (Jay) Yoon
|Soumithri Bala||Arne Fliflet||design_document1.pdf
|The base project is a drawer that can extend and retract on command. For this project, we will install the hardware onto an existing 3-drawer dresser though we imagine a market product would be sold as a unit, not an installation, that could be in multiple product areas such as the kitchen, desks, and entertainment centers. Our target audience for the dresser is people with limited mobility who might struggle with strength and/or range of motion to open a drawer; more specifically, this term describes people with physical handicaps, people with mental disabilities, and the elderly. Our primary ethical concern is to ensure that people with limited mobility do not feel discriminated against due to the nature of this project – to that end, we will be in communication with some members of our target audience to best tune aspects like motor speed, access method, and terminology. The first module is the mechanics, which will involve one or two motors (depending on the power output per motor) controlling extension mechanisms on the left and right sides. Our considerations for an extension mechanism that can reasonably fit around a drawer are: a pulley and belt system or a rack and pinion setup. Pneumatic linear actuators were suggested and considered, but we do not plan to use them because of cost, size, load, speed, and reversal-of-motion considerations. The PCB will control how and when the drawer moves. We expect to have at least two functioning drawers using a second module, RF-transmitting buttons (on the dresser or on a remote), to activate drawer motion. A third module will direct the power: we want this prototype to plug into a wall outlet because that seems the most convenient for a dresser setup. Finally, we plan to have some form of sensor module (heat, motion, infrared) on the inside of the drawer that can sense when the drawer is blocked and reverse the retraction so as not to trap and injure a hand. Additional potential features include: opening the drawers with Alexa; opening the drawers through RFID; opening the drawers with a different signal like a hand wave; adjustable motor speeds (probably 3 different settings) for users who are not people with limited mobility; USB power outlets on the top of the dresser; more extensive safety features, pending advice from people with limited mobility; and some kind of (LED) indicator to inform the user if there is space remaining in the drawer for further storage.
David Stone davidms2
Levi Applebaum lappleb2
Jay Yoon yyoon25
|53||FPV Drone shooting game
|David Null||Arne Fliflet||design_document1.pdf
|General description :
FPV drone racing has become quite popular in recent years, and more and more people are building their own drones. Most FPV drone activities involve racing on a track or capturing videos that demand a high level of control skill. In order to help new players become familiar with flying and add more fun to FPV drone activities, I want to design a game system so that any drone carrying these chips can join an FPV drone shooting game using its pilot's FPV goggles.
The project mainly consists of two parts.
First is the shooting system. There are two subsystems:
1. Shooting command (controller-drone communication): the shooting button will be on the RF controller and will send RF signals through the antenna on the controller. The receiver will be on the drone.
2. Attack detection (drone-drone communication): each drone will send two specific RF signals, one to indicate that the drone is within attack range and the other as the shooting signal. These two signals can be received by other drones. If a drone detects the first signal, a hint is shown on the goggle screen that there is an enemy nearby, but with no specific direction. If a drone detects the second signal, it sends data to the Arduino board indicating that the drone is being attacked.
The second part is the Arduino-based game-interface chip. It receives the digital signal from the camera, combines it with the game interface, and sends the result to the drone's video transmitter, which sends the analog signal to the FPV goggles. All the chips will be powered by the power distribution board with a 1300 mAh battery.
For the game interface, it should be possible to turn the game mode on and off. With game mode off, the FPV goggles should receive exactly the images recorded by the camera. When game mode is on, game info will be shown along with the video, such as the number of remaining enemies, the number of enemies destroyed, and remaining lives. Once the lives reach 0, the shooting command will be disabled, and LEDs on the drone will indicate whether the drone is 'alive' or 'dead'.
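A minimal sketch of the per-drone game state the overlay chip could maintain; the signal names, starting values, and OSD text format are assumptions for illustration only.

```python
class DroneGameState:
    """Per-drone game bookkeeping for the on-screen overlay. Signal names are assumptions."""

    def __init__(self, lives=3, total_enemies=1):
        self.lives = lives
        self.enemies_left = total_enemies
        self.kills = 0
        self.enemy_nearby = False

    def on_rf_signal(self, kind):
        if kind == "proximity":          # another drone is within attack range
            self.enemy_nearby = True
        elif kind == "shot" and self.lives > 0:
            self.lives -= 1              # we were hit

    def on_confirmed_kill(self):
        self.kills += 1
        self.enemies_left = max(0, self.enemies_left - 1)

    def can_shoot(self):
        return self.lives > 0            # shooting disabled once 'dead'

    def osd_text(self):
        status = "ALIVE" if self.lives > 0 else "DEAD"
        hint = " ENEMY NEARBY" if self.enemy_nearby else ""
        return f"{status}  lives:{self.lives}  kills:{self.kills}  left:{self.enemies_left}{hint}"
```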
In order to demonstrate the game, I will assemble two FPV drones carrying this game system and a set of targets on the ground to test the shooting system.
|54||Soccer Team Gameplay Metrics
Yi Rui Zhao
|Dongwei Shi||Arne Fliflet||design_document1.pdf
Current smart soccer balls only measure ball speed and spin after a stationary shot. There are also GPS trackers that measure player location over the duration of a match, and apps that allow coaches to manually enter player data such as completed passes, shots on goal, completed dribbles, etc. None of this data is readily available from an automatic system; a human has to record each touch of the ball to calculate completed passes. We aim to combine all these features into one system.
# Solution Overview
We want to build a system that can measure metrics for individual players over the duration of a soccer game. The metrics we aim to gather include: passes between player A and B (by knowing when two players on the same team touch the ball consecutively), bad passes (when the next ball touch is by an opposing player), longest string of dribbles (the most consecutive touches of the ball by one player), and time of possession (continuous time until an opposing player's touch is recorded).
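As a sketch of how the backend could derive these metrics from an ordered list of touch events (the (time, player, team) event format is an assumption about the data the ball would upload):

```python
def compute_metrics(touches):
    """touches: ordered list of (time_s, player_id, team) tuples from the RFID reader."""
    passes = {}            # (from_player, to_player) -> count, same team only
    bad_passes = {}        # player -> count (next touch was by an opponent)
    longest_dribble = {}   # player -> longest run of consecutive touches
    possession = {}        # team -> seconds in possession

    run_len = 0
    for i, (t, player, team) in enumerate(touches):
        # Dribble runs: consecutive touches by the same player.
        run_len = run_len + 1 if i > 0 and touches[i - 1][1] == player else 1
        longest_dribble[player] = max(longest_dribble.get(player, 0), run_len)

        if i + 1 < len(touches):
            next_t, next_player, next_team = touches[i + 1]
            # The touching team keeps possession until an opponent touches the ball.
            possession[team] = possession.get(team, 0.0) + (next_t - t)
            if next_team == team and next_player != player:
                passes[(player, next_player)] = passes.get((player, next_player), 0) + 1
            elif next_team != team:
                bad_passes[player] = bad_passes.get(player, 0) + 1

    return passes, bad_passes, longest_dribble, possession
```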
Because we don’t want players to be carrying heavy electronics around during the game, we will be integrating all the electronics inside the ball. The only additional thing players will have is a lightweight, paper-thin RFID tag sticker in their cleats.
Instead of messing around with an actual soccer ball, we will be using a high density foam ball that can be easily manipulated to fit the necessary sensors inside.
# Solution Components
## Sensor Subsystems
To achieve this, we will use an accelerometer, a gyroscope, and an RFID reader to measure the necessary data. The accelerometer and gyroscope will tell us whether the ball is in motion and what the spin and speed of the ball are. We can achieve this by using an IMU (inertial measurement unit), which combines the two sensors into one chip and provides 2 to 9 degrees of freedom depending on the selected chip. The accelerometer should have a range of at least ±4g (a higher value would increase accuracy); the MPU-6050 could be a good option. The RFID reader, paired with RFID tags on the players, will tell us which specific player is or has been in contact with the ball. The RFID reader selection will be based on power draw, read range, and read time (<8.3 ms, based on the foot-to-ball contact time).
## Processing Subsystem
We will use a SoC microcontroller with wifi and bluetooth capabilities (such as the ESP32) to process data received by the sensors and to send it to a cloud database or a computer nearby. The microcontroller only needs enough processing power to read and send out sensor data. The actual data parsing will be done by an external computer.
## Power Subsystem
The ball cannot be wired during a game, so we will use a rechargeable lithium ion battery to power the system. We aim to have the ball last at least a quarter of a soccer match (~22.5 minutes). We will try to use the smallest battery required to meet this goal to minimize the weight of the ball. Since wireless charging is likely too complicated for this project, our charging system will be wired.
## Backend Data Processing
We will build a mobile app or web app to parse and display desired metrics for players or the team.
# Criterion for Success
Our baseline goal is to have a somewhat kickable ball that can detect and differentiate players that have been in contact with it. With that data we should be able to compute and display the individual metrics described above.
|55||Reconnaissance robot (SCD pitch)
|David Hanley||Arne Fliflet||design_document1.pdf
|Siebel Center for Design|
|Problem: When police officers need to check out suspects in an unknown location (e.g. their shelter), it can sometimes be dangerous. These shelters are usually dark and cramped for deceptive purposes. Police officers can be ambushed by suspects if they enter without full awareness of the space. A robot that helps officers do some pre-reconnaissance could significantly reduce the risk.
Solution overview: A reconnaissance robot that can move freely in such spaces and transmit back high-quality camera data. It can move through a single point of entry without getting stuck and can avoid obstacles around it.
The robot's basic model is a small car with camera(s) on top of it. The arm holding the camera could ascend, descend, or rotate to capture different views of the surroundings.
The camera will send data captured to end devices such as mobile phones or laptops.
The robot has an automatic path-finding mechanism which allows it to investigate the area without being blocked or getting stuck. Sensors allow the robot to detect obstacles around it.
Users can also send signals to start or pause the robot, and control its movement.
Although there are similar products in the market, our product is designed to be smaller in scale, more cost-effective and customized.
Criteria for success:
The baseline is to have a controllable robot that can move on flat ground and transfer camera data to users in real time. Additional features could include a user-controlled arm that can change the position of the camera, auto path-planning based on ultrasonic sensors, and human position detection based on infrared sensors.
Team member: Pu Jin(pujin2), Xuqing Sun(xsun63), Shenyi Wang(swang250)
|56||Water Quality Monitoring
||Luis Navarro Velasco
Marina Manrique Lopez Rey
|David Null||Jing Jiang||design_document1.pdf
|Students: Marina Manrique (marina3) and Luis Navarro (luisn2)
This project is the pitch that CERSE presented in class. 70% of the Earth's surface is water, and only 2.5% of that is fresh water. This project consists of creating an autonomous boat to analyze the water in rivers and lakes, monitor its pollution levels, and send alarms depending on the collected data.
To achieve this goal, the project is going to be divided into 3 parts, regarding location, battery management and data transfer:
1. Remote control of the boat: The objective of the design is to give the boat the autonomy necessary to navigate without human intervention, as if it were using an autopilot. For this, we would use a GPS sensor (NEO-6M) and a compass sensor together with an ATmega328 or PIC32 microcontroller (PIC32MX230F064D). This way, an initial trajectory would be divided into several waypoints, and the boat would re-orient itself once it reached each one (a bearing-to-waypoint sketch follows this list). If this is not achieved, the boat would be controlled via radio control.
2. Power supply: In order to power the boat while it is collecting data, we need to choose the batteries that will be used. A solar panel will also be integrated into the boat to charge these batteries (which need to give the boat an autonomy of 15 days and supply power to the multiple sensors), so a BMS (Battery Management System) needs to be used. Initially the batteries would be charged off board, with the solar panel's energy powering the microcontroller that controls the sensors and the communication system.
3. Data transfer: In this section we would use a microcontroller (ATmega328) that works independently of the navigation platform. It would take the data read from the sensors and stored on the boat at regular intervals and upload it to an online platform (this could be an ordinary cloud or a distributed analysis cloud like AWS). Data would also be stored in on-board memory to make data recovery easier. Then, a desktop application and Android app with an alarm system would read the data from this cloud and show the analysis and data required by the team at CERSE. The technology we would use is 2G/3G, depending on the network availability at the boat's destination. The alarms would also be sent using a GPS/GSM system in order to keep track of the boat and the water quality in situations where the network becomes unavailable (in this case, data would be uploaded to the cloud once network or power is recovered).
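A small sketch of the waypoint re-orientation mentioned in point 1 above, using the standard great-circle bearing formula and a signed heading error that the steering logic could act on; the function names are placeholders.

```python
import math


def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the current GPS fix to the next waypoint (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360


def steering_correction(compass_heading_deg, target_bearing_deg):
    """Signed heading error in degrees; positive means turn right (starboard)."""
    return (target_bearing_deg - compass_heading_deg + 540) % 360 - 180


# Example: boat heading 20 deg, waypoint to the northeast.
target = bearing_deg(40.1000, -88.2300, 40.1050, -88.2200)
print(target, steering_correction(20.0, target))
```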
The proof of concept of the first point (GPS remote control) will most likely be a simulation, given the mechanical complications it implies. In order to prove the other two points, we would try to use a small toy boat, or another moving vehicle if the first option is not possible. Points 2 and 3 are CERSE's priority and ours.
|57||Wearable Smoke/CO Detector for Deaf and Hard of Hearing
|Mengze Sha||Arne Fliflet||design_document1.pdf
|Group Members: mloftis2 (Mike Loftis), msa5 (Mohammad Adiprayogo)
Current smoke and carbon monoxide detectors designed for deaf and hard of hearing people alert them of the presence of fire/CO via strobe light. A hearing impaired person could easily find themselves in an environment that is not equipped with a high intensity strobe alarm. To ensure awareness of dangers in the environment (especially while sleeping), they could carry their strobe light detector with them, but this would be less than ideal.
We are proposing a more portable option: a wearable wristband that will house both a photoelectric sensor for smoke and a metal oxide sensor for carbon monoxide, and notify the wearer with a small vibration motor. The device will be equipped with a manual push button to stop the motor vibration in case of a false alarm, and an external LED to monitor battery life.
-Photoelectric Reflective Sensor Chamber: This chamber will house both the sensor and reflector, smoke blocking the sensor's view of the reflector will cause a small vibration motor to alert the wearer.
-Metal-Oxide Semiconductor Sensor - In the presence of clean air, oxygen is adsorbed onto the semiconductor surface, blocking current flow; when carbon monoxide is introduced, it reacts with the adsorbed oxygen, decreasing the amount of oxygen on the surface and allowing current to flow. Similar to the photoelectric sensor, the allowed current will trigger the vibration motor.
3V zinc-air or lithium coin cell battery bank (will be decided based on importance of cost vs weight of device). LED illumination will reflect battery life.
Criterion for Success
-Device will alert the wearer if smoke or CO is present via vibration
-LED will illuminate once battery life is under 20% capacity
-Push button will halt vibration upon user request
Link to idea post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=32018
|58||Weld Gun Spatial Tracking System
|David Hanley||Jing Jiang||design_document1.pdf
|Previous idea post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31889
*We are collaborating with our sponsor Illinois Tool Works Inc. (http://www.itw.com/) and they have agreed to give technical support and funding for components orders.
Team Members: Xingjian Zhao (xzhao67), Haoyong Lan (hlan3), Zheyuan Hu (hu66)
There is projected to be a shortage of 200,000 welding operators over the next 20 years, and proper weld posture is one of the biggest obstacles to full welding productivity. Potential workers need a lot of training time in order to become skillful, which is a significant cost to industry.
# Solution Overview
The Weld Gun Spatial Tracking System can help get new hires up to speed in a production environment. The system is composed of one beacon mounted on the weld gun and four listeners mounted at different locations in space. The system measures the distance between the beacon and each of the four listeners using ultrasonic / radio frequency transmitters and receivers. A host PC receives the distance data from the beacon to each listener over WiFi and calculates the 3D spatial coordinates of a single point on the weld gun. The position and orientation of the weld gun can be determined from multiple points on the weld gun if additional beacons are added. This solution will inform welders about proper posture and speed up the training process.
# Solution Components
## Beacon Component
The beacon includes an RF transmitter and an ultrasonic transmitter. A microcontroller unit (MCU) controls the beacon to send RF signal and ultrasonic signal simultaneously at a fixed time interval (1 second). The beacon is installed on the weld gun so a single point on the weld gun can be constantly tracked.
## Listener Component
There will be four listeners, each receiving both the ultrasonic signal and the RF signal, mounted on pillars of various heights located at the four corners of a square with a side length of 10 feet. Since the RF signal travels roughly 10^6 times faster than the ultrasonic signal, a microcontroller unit connected to each listener calculates the distance from the beacon to the listener by measuring the time delay between the two signals.
## Communication Subsystem
The microcontroller unit (MCU) on each listener controls a WIFI module which sends the distance value to a host PC.
## Processing Subsystem
A host PC calculates the 3D coordinates of the beacon based on the distances from the beacon to the four listeners. A point can be located from its distances to three distinct, non-collinear points in space. Since we have four listeners, the coordinates of the beacon are calculated several times by choosing different combinations of three listeners out of the four. This lets our solution be more accurate by averaging the results. The coordinate of the beacon is updated synchronously with the pulse signal from the beacon. The velocity of the single point on the welding gun is calculated by comparing two consecutive coordinates in time.
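One way the host PC could perform this solve is a linearized least-squares trilateration over all four listeners at once (an alternative to averaging the three-listener combinations). This is an illustrative sketch with made-up listener positions and distances, not the team's confirmed algorithm.

```python
import numpy as np


def trilaterate(listeners, distances):
    """Solve for the beacon position from >= 4 listener positions and measured ranges.

    Each range satisfies |x - p_i|^2 = d_i^2. Subtracting the first equation from the
    others cancels |x|^2 and linearizes the problem:
        2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - (d_i^2 - d_0^2)
    which is solved by least squares; extra listeners add rows and average out noise.
    """
    p = np.asarray(listeners, dtype=float)     # shape (N, 3), known mounting positions
    d = np.asarray(distances, dtype=float)     # shape (N,), from the RF/ultrasonic time delay
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
         - (d[1:] ** 2 - d[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x                                   # beacon (x, y, z) in the same units as inputs


# Example: four listeners on the corners of a 10 ft square at varying heights.
listeners = [(0, 0, 3), (10, 0, 5), (10, 10, 4), (0, 10, 6)]
print(trilaterate(listeners, distances=[7.35, 8.12, 7.68, 8.66]))
```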
## Power Subsystem
24 V DC 2.5A power supply system.
# Criterion for Success
In order to demonstrate our project, our solution needs to accurately track a single point on the weld gun within a 10'x10'x10' volume, and our system should be able to measure and report the location and travel speed of the weld gun.
|59||Parking Reservation System
|Kyle Michal||Arne Fliflet||design_document1.pdf
Many locations can be busy to the point that nearby parking is not guaranteed (busy malls, downtown areas, etc.). Additionally, in busy city areas, a driver may see an available spot but be too far away to safely reach it. There are existing solutions, such as LED lighting systems in garages that show open spots, but these only indicate whether a spot is open once the driver is already inside the garage. Furthermore, they do not allow for ahead-of-time payment and reservation. This can result in distracted driving and unnecessary stress.
We propose building a parking reservation system that can do multiple things:
- Communicate the status of parking spots (i.e. open, reserved/taken), so users can get an idea of how busy a parking area is.
- Allow users to reserve and pay for a parking spot ahead of time, eliminating stress and distracted or dangerous driving.
- Validate that a reserved spot is only taken by the user that reserved it. This will be done through license plate recognition and backed up by driver license verification.
- Mobile Application: Allows for identification and reservation of open parking spots, Stores user verification information such as driver’s license information and license plate numbers
- Meter Unit (one per parking spot): Raspberry Pi-like component - Connected via LAN to Hub Unit, Camera - For detection of vehicle in spot and license plate recognition, LED status lights - Indicates status of spot (open, reserved, verified)
- Hub Unit (one per parking lot/area): Raspberry Pi-like component - Connected to central servers through internet, Magnetic Card Reader - For driver’s license authentication
**Proof of Concept Vision:**
We would build 2-5 meter units and one hub unit for our proof of concept. Each meter unit will be tasked with two main goals. First, the meter unit will identify whether a spot is open or taken, and send this information to the hub unit. Second, the meter unit will accurately enforce reservations using license plate recognition, and provide the user with a visual signal that they have been validated (LED light indicator). The hub unit will be given an internet connection (wifi/ethernet or cellular data) to communicate spot availability to central servers and consequently, the mobile app. In addition, in situations where the license plate can’t be seen, (i.e. damage, curbside parking, missing plates) the hub unit will have a magnetic card reader where a user can swipe their drivers license as a means for validation. Through these two methods of verification our system would properly ensure that the correct car is in the correct spot.
In the situation where an incorrect car has parked in a reserved space, the system will first check license plate detection. If license plate verification fails, the system will then check whether a driver's license has been scanned within the appropriate time frame. If neither verification test passes, the parking administrators are notified. Additionally, the meter would start blinking and buzzing, bringing attention to the violator and prompting them to move out of the spot they have wrongfully occupied.
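A compact sketch of that two-step verification flow; the reservation fields, data shapes, and return values are placeholders for illustration only.

```python
def verify_spot(reservation, detected_plate, license_swipes):
    """Return (status, action) for one occupied, reserved spot.

    reservation: dict with 'plate', 'driver_license', 'start', 'end' (timestamps)
    detected_plate: string from the meter camera's plate recognition, or None
    license_swipes: list of (license_number, timestamp) from the hub card reader
    """
    # Step 1: license plate recognition.
    if detected_plate and detected_plate == reservation["plate"]:
        return "verified", "led_green"

    # Step 2: fall back to a driver's license swipe within the reservation window.
    for license_number, t in license_swipes:
        if (license_number == reservation["driver_license"]
                and reservation["start"] <= t <= reservation["end"]):
            return "verified", "led_green"

    # Neither check passed: flag the violation.
    return "violation", "notify_admin_and_buzz"
```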
Overall, our system will allow the user to focus on driving and eliminate the stresses of parking in busy areas. They will have peace of mind, as their spot is reserved and paid for before they step into their car.
**How we are different from other parking projects:**
1. We are implementing a reservation system and have a unique way of enforcing it. No project from previous semesters has had this feature built into its system. We feel this is a feature that could greatly improve current public parking systems, lead to more meters being paid, and therefore increase revenue for the city.
2. Other projects use different sensors to verify the presence of a car but we have eliminated the sensor component and instead have chosen to use a camera that would have the dual purpose of scanning for the presence of a car and license plate recognition.
|60||Mousr Autonomous Docking
|Soumithri Bala||Jing Jiang||design_document1.pdf
|Project Proposal: Mousr Autonomous Docking – Yuhao Liu | Robert Malito | Justin Li
Mousr is an interactive robotic cat toy developed by a startup called Petronics here on campus. Mousr plays automatically or by smartphone control and was voted Best Cat Toy 2018 by the American Pet Products Association.
After meeting with the Petronics team and discussing the weaknesses of the Mousr, battery life seems to be the most asked for improvement by consumers.
Mousr Product: https://www.amazon.com/Petronics-Interactive-Robotic-Automatically-Smartphone/dp/B07HBC5M6Q/ref=sr_1_2?ie=UTF8&qid=1548725116&sr=8-2&keywords=mousr
Quote from Amazon review from verified Mousr purchaser Mimi:
“The battery runs out incredibly quickly, especially if you just let the Mousr run in auto mode - I was hoping to have a toy I could leave on when I leave for work to entertain the cats remotely, but the toy will only last for an hour or two on a battery charge.”
Problem and Proposed Solution:
Buyers are cat owners who want to be able to leave Mousr to entertain their cats for extended periods of time (work, vacation, grocery trips, etc.). Due to size constraints stemming from cats' disposition to play with small objects, the device's battery is currently only capable of around 2 hours of continuous operation.
The solution proposed to solve this issue is to provide autonomous docking to the Mousr. This will allow it to locate its charging station and refuel when low on battery or when idle. Having this feature will increase customer satisfaction, device sophistication, and usability of the product.
Petronics is providing a modified Mousr that will allow us to interface with the navigation control system of the device. By placing passive beacons on the charging dock and sensors on the Mousr we aim to develop an algorithmic system to achieve a high degree of reliability in Mousr finding its charging dock. From here, signals will be sent to the navigation control system to guide Mousr to its base.
The main sensor/emitter combination talked about thus far has been IR. Other sensors that could help Mousr in understanding its surroundings and navigating back to its dock include:
- Ultrasonic distance sensors and a classification algorithm to determine floor type (tile, wood, carpet)
- Luminosity sensors to determine common lighting conditions near charging dock
- NFC for close range accuracy
- Ultrasonic sound emitter and sensor to determine proximity of charging dock
- Could be more!
Overall, Petronics is aiming to minimize the cost increase that autonomous docking adds. Choosing economical but reliable sensors and emitters will be important to success as well.
|61||Internet Connected Chess Board
|Thomas Furlong||Michael Oelze||design_document1.pdf
|Team Members: Joel Matthews (jpmathe2), Ritish Raje (rraje2), Jeffrey Ito (jito2)
Title: Internet Connected Chess Board
|Problem: Chess is an age-old board game. In recent years, it has been taken to the internet, where players can play each other from all over the world. However, players lose the tactile feeling of moving physical pieces on a chess board.
Solution: Build a chess board that connects to a PC, allowing players to keep playing each other from all over the world while regaining the physical interface of a chess board. The board would reflect moves in real time. One player could be playing on an app/website while the other player is on the physical board.
Similar to Project 37 from Fall 2017, Hall effect sensors can be used to detect and determine which pieces are where on the board. This information will be transferred to the computer through Bluetooth. When the opponent completes their move, the information will be transferred back to the board. To minimize the mechanical complexity of the project, LEDs underneath the board will light up underneath the opponent's piece that needs to be moved and where it needs to be moved. The player will not be allowed to make his/her next move until the opponent's piece has been placed in the appropriate location.
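A small sketch of how a move could be inferred by diffing consecutive Hall-sensor occupancy snapshots; the PC side, which tracks piece identities, would resolve captures and special moves. The data layout here is an assumption.

```python
def detect_move(prev, curr):
    """Infer a move from two 8x8 occupancy snapshots (True = piece present).

    Returns (from_square, to_square) as (row, col) tuples for a quiet move,
    (from_square, None) for a capture (the destination square stays occupied),
    or None if the change is ambiguous. Castling/en passant would need extra cases.
    """
    vacated = [(r, c) for r in range(8) for c in range(8)
               if prev[r][c] and not curr[r][c]]
    occupied = [(r, c) for r in range(8) for c in range(8)
                if curr[r][c] and not prev[r][c]]
    if len(vacated) == 1 and len(occupied) == 1:
        return vacated[0], occupied[0]          # quiet move
    if len(vacated) == 1 and not occupied:
        # Capture: the moved piece landed on an already-occupied square; the PC,
        # which knows which piece sits where, resolves the destination.
        return vacated[0], None
    return None
```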
original post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30568
|62||Electronic Chip/Betting System
|Nicholas Ratajczyk||Arne Fliflet||design_document1.pdf
|Original Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31922
Poker is one of many card games that requires multiple components, such as a deck of cards and some type of chip system to keep track of money, in order to be played successfully. While most poker games are planned beforehand, there may be unforeseen changes in circumstances, such as extra players joining or a lack of chips, among many others. This could create problems and cause players to make unwanted adjustments to their game.
Our solution is to create an electronic money and betting system that will serve as the main interface for keeping track of your current money, as well as adding to, or subtracting from, your current total. Within the game of poker, this solution allows the user to use this as a substitute to physical chips, enabling a larger space (no chips to occupy space on the table) for additional players to play, as well as not having the need to redistribute the already limited amount of chips if more players want to join.
Controller Unit: Maintaining data, and doing arithmetic computations on it. We envision this to be able to do tasks such as raising a placed bet, adding and subtracting from a players total, allowing buy-ins, auto-splitting the values of the coins etc.
Processor Unit: A microcontroller for signal processing between the RFID reader and the control unit (a Raspberry Pi could be used)
RFID reader (RC522) to read the RFID cards (using MIFARE 1K chips)
The intention behind this is to identify whose turn it is so the system can properly display and add/subtract chips from the correct player's pile; a minimal read sketch follows the unit list below.
Power Unit: DIY portable solution to power the Raspberry Pi
https://www.makeuseof.com/tag/pi-go-x-ways-powering-raspberry-pi-portable-projects/ (Part 4)
Display Unit: We are planning to use an LED display, but we are still considering alternatives.
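As a rough illustration only, the sketch below shows how the processor unit might poll the RC522 and adjust a player's total. It assumes the open-source Python "mfrc522" library on a Raspberry Pi; the balance table, buy-in amount, and helper names are made up for the example and are not part of the actual design.

```python
# Illustrative only: read a MIFARE card UID with an RC522 on a Raspberry Pi
# (assumes the open-source "mfrc522" Python library and SPI wiring) and adjust
# that player's chip balance. The balance table and 1000-chip buy-in are placeholders.
from mfrc522 import SimpleMFRC522

reader = SimpleMFRC522()
balances = {}  # card UID -> chip total (this bookkeeping lives in the controller unit)

def wait_for_player():
    uid, _ = reader.read()          # blocks until a card is presented
    balances.setdefault(uid, 1000)  # hypothetical buy-in for a new card
    return uid

def apply_bet(uid, amount):
    if balances[uid] < amount:
        raise ValueError("insufficient chips")
    balances[uid] -= amount
    return balances[uid]

if __name__ == "__main__":
    player = wait_for_player()
    print("Player", player, "remaining:", apply_bet(player, 50))
```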
|64||Sydekick - The interactive, fun, and educational robot for those on the autism spectrum
|David Hanley||Arne Fliflet||design_document1.pdf
Members - Sam Feizi (feizi2), Balasabapathi Chandrasekaran (bmchand2), Rohan Mohapatra (rmohapa2)
Motivation - According to the Centers for Disease Control and Prevention (CDC), rates of autism have increased from 1 in 150 newborns being diagnosed with autism in 2000 to 1 in 59 in the year 2018. We are motivated to build a therapeutic “Sydekick” for those who have been medically identified to be on the autism spectrum.
Solution Overview - Sydekick will have a set of wheels to govern its translation and rotational motion. We intend to create a robot that is partially humanoid with a torso, head, and two arm-like appendages that operate on one rotational axis. The robot will contain a set of games built-in for the child to interact with.
Solution Details - From the ground up, we will mention design specifics of the processing unit, wheels, torso, arms, and head as follows:
Processing Unit - We will use a Raspberry Pi 3 Model A+ because of its ability to interface with various sensors and its diverse wireless capabilities. Our intent is to integrate the Android SDK and use Google Play Services to communicate through an LCD with touchscreen capabilities.
Wheels - There will be four motors on the base that will power the wheels using pulse width modulation communicated from the aforementioned Raspberry Pi.
Torso - This module will contain an LCD screen with capacitive touch for the child to interact with (tickle, hug, etc). The chassis will be mainly 3D printed using PLA with aluminum reinforcement.
Arms - The arms will operate about one rotational axis using a motor at the shoulder, but will not replicate a ball-and-socket joint. The arms are simply meant to offer another interactive tool for the child; because of the weight an additional motor would add, they will not include elbow movement.
Head - This module will contain another LCD screen to encourage the reward system for the child playing with the robot and will simply have facial emotions programmed into it as well as a speaker to communicate with the child.
Sensors - Sensors will be primarily composed of pressure sensors for child interaction, particularly for games that are built-in such as a modified version of bop-it and educational games to enrich human-human interaction skills of the child.
Criteria for Success - This group has defined three areas of success: mechanical, hardware, and software.
Mechanical - The robot should mimic a human in regard to movement with the exception of the wheels on the base.
Hardware - A successful hardware result for this group shall be obtained when all sensors are fully functional, LCDs work seamlessly, and all communicate properly with the Raspberry Pi.
Software - The software modules will be considered a success if a child can play the built-in games with the robot and benefit from its audio-visual features.
|65||Autonomous delivery robot (pitched by yummy-future)
|Zhen Qin||Arne Fliflet||design_document1.pdf
Robots are often called the next-generation technology, and they have indeed entered our lives in many ways. YummyFuture introduced a robotic revolution in the food industry with a prototype robot coffee stand, but its delivery system still needs improvement. In the coffee shop of the future, the robot should not only make coffee but also deliver it safely by itself. An autonomous robot that can deliver food will save significant labor cost, but the path to its destination is not always smooth and safe. Dealing with different kinds of special situations and potential obstacles is therefore the goal of our project. Specifically, we will focus on the combination of sensors needed to achieve this goal.
First, we need to build a chassis with four motors as the main body of our prototype, and a localization algorithm is necessary for it to find a path to a pre-assigned location; this will be the major problem in the software part of our project. To cope with weather, pedestrians, and traffic along the path, specific sensor combinations need to be installed so that the autonomous vehicle can react appropriately and complete its mission. For avoiding potential obstacles such as pedestrians and traffic, a 2D lidar with cameras will be our basic choice. Since obstacles like pedestrians are not always stationary, the robot should be capable of detecting its surroundings to prevent potential collisions, and we have to consider all possible directions from which other objects may approach. One possible solution is to use four cameras that cover the entire field of view to make sure there are no blind spots. Additionally, acoustic elements such as sonar sensors can be added to enhance the robot's sensing, because a 2D lidar alone may not be sufficiently sensitive in some circumstances. We also propose an extra feature that can improve the robot's functionality: working at night or in darkness. Because the lidar-with-cameras setup cannot work effectively in a dark environment, we would rely on other sensors that are not influenced by light intensity. Therefore, we will use another sensor combination, including an infrared sensor and an acoustic sensor, to gather information about the surroundings and do the same job of avoiding collisions.
We think a 2D lidar will be enough for our design since the sonar sensors will assist the lidar-camera system. We will choose a Raspberry Pi as our overall processing system. We do care about safety: as noted above, we will use the acoustic elements to measure the distance from the robot to people nearby. We will use a 5 V, 2.5 A DC battery supply for the entire drive system, which is composed of four motors. The data collected from the sensors will be transmitted to the Raspberry Pi; after the corresponding data analysis, the processor will compute the parameter adjustments and send electronic output signals to the motors to adjust the robot's current behavior. The PCB integrates all of the electronic components and the power supply.
Criterion for success
Our project has two related parts. We will build a chassis with wheels and motors as a robot prototype. Our focus is then the sensor suite, including the lidar-camera system, sonar, and infrared detectors. The project will be successful if we build a cart that can drive to its destination, avoiding any obstacles in its way without collision, whether it is daytime or night.
Original pitch web link: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31673
|66||Smart Exercising Assistant
|Kyle Michal||Arne Fliflet||design_document1.pdf
As a team, (Platon Slynko, William Gammon, Akhil Kandimalla) we are proposing a smart wearable exercising tech.
Today, almost everyone is exercising in some way or another but many are uninformed when it comes to the execution of some workout techniques. Every day, there are more than 10,000 people treated in emergency rooms across the country for injuries stemming from sports, recreation, and exercise. Without proper guidance from personal trainers on proper techniques, this puts people at a higher risk of injury while exercising.
We are proposing a smart wearable tech to improve exercising techniques. The person who is wearing this technology would exercise as usual with data being gathered in the background using BLE sensors attached to the wrists, arm, and shoulder. The data collected would be processed using a central hub located on the torso. This data would be sent via bluetooth to an app where the user would select what workout they are doing, and our system can provide feedback on potential improvements of motion and accuracy.
An example of how this would work: suppose you are doing bicep curls. The proper way to do them is to keep the elbows in the same position, perpendicular to the ground. The system would track the gravitational acceleration and compare data from all three sensors on each arm to deduce elbow position. From acceleration we can also estimate position by integrating twice, given known initial conditions. So, if the proper technique is in place, the acceleration directions at the shoulder and at the elbow should align, since the elbow would be pointing downward. If the elbow is not aligned with the shoulder, there would be an angle between them, and the system would tell the user to mind their elbows when doing the exercise.
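As a sketch of that check (not the final signal processing), assuming each IMU reports a 3-axis acceleration dominated by gravity when the arm is steady, the angle between two of the sensors' gravity readings can flag a drifting elbow. The axes, sensor names, and 15-degree tolerance below are placeholders.

```python
# Rough sketch of the elbow-alignment check: compare the gravity vectors reported
# by the shoulder and upper-arm IMUs and flag the rep if the angle between them
# exceeds a tolerance. Axes and the 15-degree threshold are placeholders.
import numpy as np

def angle_between(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def check_curl_form(shoulder_accel, upper_arm_accel, tol_deg=15.0):
    """Return (ok, angle): ok is True when the upper arm stays vertical."""
    angle = angle_between(shoulder_accel, upper_arm_accel)
    return angle <= tol_deg, angle

# Example: upper arm tilted ~20 degrees away from the shoulder's gravity reading
ok, ang = check_curl_form([0, 0, -9.8], [0, 3.3, -9.2])
print(ok, round(ang, 1))  # -> False, so advise the user to mind their elbows
```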
Each of the sensors involved would have an internal gyroscope and accelerometer and would use BLE technology to communicate with the main module. These BLE beacons and sensors would have integrated batteries. Since direct distance tracking is not an easily available technology, we would use accelerometers and gyroscopes to keep track of body movement during exercise.
The main module, worn on the chest, would consist of a power module, a Bluetooth receiver, and a microprocessor to analyze the data in real time and send it via Bluetooth to our app, where the app can give real-time advice on how to improve the motion and accuracy of a given exercise.
|67||VR Hand Simulator
|Dongwei Shi||Casey Smith||design_document4.pdf
Alex Brannick (brannic2)
Daryl Drake (dadrake3)
VR Hand Simulator
Our project idea consists of making a headset of cameras that can accurately track and segment your hands to be used in VR. Using some sort of reflective tape on our hands, we would train two CNNs to trace the location of our hands as well as segment the joints in our fingers so we can recreate our hands in a virtual environment. From here, we would create a plug-in for our device within the Unity game engine so that games for an Oculus Go could be created that use our device. We believe this project is appropriate for a senior design project because it solves a current problem in society and contains a definitive circuit portion. It provides a solution to the problem that games for the Oculus Go are not immersive enough when it comes to complex hand interactions. The circuit portion of our project consists of the headset, where we will need to create a PCB that links our cameras to a Raspberry Pi and TPU to do the image processing. Furthermore, our team has a lot of experience with almost all of the parts of this project. We have both worked with the Unity game engine to create games, and have even made VR games in the past for an Oculus Go. Our team has also done multiple projects both in and out of class with CNNs, one example being face-ID software that locates and identifies our faces to unlock our MacBooks.
One project that has some correlation to ours was the drone hand controller. That project used a hand to control a drone instead of a typical controller, similar to how ours would replace the Oculus joysticks for certain games. However, our project differs from theirs because while their project controls a drone, we are focused on recreating your exact hand movements and interactions to be used in gaming.
|68||Stereo Phase Corrector: Stereo-way to Heaven
|Kyle Michal||Michael Oelze||design_document1.pdf
|Names: Dave Simley and Rosemary Montgomery
NetID: simley2, rjmontg2
Web Board Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30822
Listening to music through a nice stereo can really brighten the atmosphere. Whether it's a quiet Sunday or an evening with friends, music can complement many occasions. What one may not realize, however, is that interference patterns greatly affect the quality of the sound.
Say someone has a stereo in their living room, but likes to listen to music while preparing food in the kitchen. Destructive phase relationships can cause the listener to hear music that's tinnier and lacking warmth.
What we are proposing is a four-speaker array which we can use to “steer” the sound in a single direction. Our design will use one or more IR-sensing cameras to detect where the listener is in the room and apply a linear time delay to the speakers to direct the sound towards the listener.
The array of speakers creates overlapping signals, with the peak intensity of sound across all frequencies extending out along a line directly from the center of the array. By applying a linear time delay across the array, the line of peak intensity steers away from the center at an angle, which changes the direction of the beam. With the help of the IR sensors, we can track where a person is in the room and use this information to adjust how much we need to angle our line of peak intensity so that it runs through our listener.
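A minimal sketch of that delay calculation, assuming a uniform line array and free-field propagation; the spacing, steering angle, and speaker count below are only example numbers, not our final geometry.

```python
# Sketch of the linear time delay: for a uniform 4-speaker line array with spacing
# d, delaying element n by n*d*sin(theta)/c steers the main lobe to angle theta
# off the array's center axis.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def steering_delays(num_speakers, spacing_m, angle_deg):
    """Per-speaker delays (seconds), normalized so the earliest speaker is 0."""
    delays = [n * spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
              for n in range(num_speakers)]
    offset = min(delays)            # keep all delays non-negative
    return [d - offset for d in delays]

for n, d in enumerate(steering_delays(4, 0.15, 20.0)):
    print(f"speaker {n}: {d * 1e3:.3f} ms")
```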
Our main reach goal is to incorporate Bluetooth into our project. For example, we would use Bluetooth as the method of inputting music into our speaker array. We could also use Bluetooth to adjust the volume and sound, as well as turn the IR tracking capabilities on and off.
|69||BALL RETURN PUTTING MAT WITH SCOREKEEPER
|Soumithri Bala||Michael Oelze||design_document1.pdf
|Group members: Christopher Bell (cabell4), Benjamin Cannistraro (cnnstrr2)
Problem: It is hard for golfers in many parts of the country to practice putting outside during the winter months or whenever it rains outside. There are putting mats available but none of them will allow you to track your total putts made during a particular time frame or allow you to compete against an opponent.
Solution: We are proposing a putting mat design that will have two holes that are connected to a display. The display will show the total amount of putts made per hole and will return the ball to the user once a putt has been holed.
- A pressure sensor just below the edge of the hole to detect when a ball has entered the hole.
- When a ball has entered the hole, it will trigger a pneumatic force to propel the ball back to the user. This force will have a controller circuit that delays the ball being sent back until it has stopped moving.
- The ball will also trigger a logical counting circuit. If the ball is holed, the circuit will increment a counter that is displayed on a hex display. This signal will be sent via bluetooth to a separate hex board.
- The hex board will also be able to display an adjustable timer that can be controlled by the user.
- The mat will be powered by a wall plug that is converted to 5V DC for the logic counter, hex and pneumatic controlling circuit. The pneumatic power will be sourced from the outlet and will be converted to what is needed.
Criteria for Success: Our working project should be able to detect a ball entering the hole. It should then send a signal to the hex board and the board should be able to increment until reset by the user. The pneumatic force will then be triggered and should return the ball to the user safely.
||Isabel Ugedo Perez
Marc Abraldes Velasco
|Hershel Rege||Jing Jiang||design_document1.pdf
Currently, the Petronics Mousr cat toy requires two microcontrollers: one in the head unit that handles only BLE and processes sensor data, and another in the body unit that works as a hub controlling wheel and tail movements. The toy is currently controlled by an Android or iOS app through the BLE functionality of the head chip. The use of Bluetooth causes unwanted battery drain, and the current microcontroller is inefficient in terms of cost.
As a first step, we propose to replace the BLE microcontroller with an ESP32, which allows WiFi connections to control the toy and cuts cost. This will be done within three subsystems: an interface between the sensors, the ESP, and the body; ESP drivers for the sensors as well as a channel to communicate with the rear processor; and an updated app that allows a WiFi connection to the Mousr toy.
This interface will collect data from the Mousr head sensors (IMU and TOF) and act as a midpoint for data out to an RGB LED and a button used for device pairing. It will also work as a bidirectional bus between the ESP and the body processor so that they can communicate with each other (e.g. what the sensors say about the surroundings).
These drivers are needed for sensor use and processor communication. The sensor data will be collected through the driver, processed on the ESP, and then output through the driver to the back processor for motor control.
Instead of connecting to the toy directly over WiFi, the app will first connect to the Mousr through Bluetooth (viable because the ESP32 supports both WiFi and BLE), send the router information, and then connect through the router.
CRITERION FOR SUCCESS
In order for this updated framework to be successful, the Mousr should be able to collect data from the sensors in the head and process them on the ESP, and use that information to update motor control in the back processor. The Mousr should also be able to use the app to connect and be controlled through WiFi.
|71||Gloves That Allow You To 'Feel' Electromagnetic Fields
|Anthony Caton||Michael Oelze||design_document1.pdf
The job titles ‘Electrical Worker’ and ‘First Responder’ consistently rank in the top ten most dangerous jobs in the United States. There are around two deaths by electrocution per day for electrical workers in the US and even more in less developed countries with less rigorous safety standards. These deaths frequently come as a result of accidentally touching a live wire the individual was not aware of, voltage leaks, arc flashes, etc. We need a measure to reduce the number of these preventable deaths.
The proposed device is a pair of wearable, insulated gloves that can detect the induced electromagnetic fields (EMF’s) generated by AC power lines and wires from a distance. The gloves would then vibrate with increased intensity the closer/stronger the field became. This tactile response would inform the electrician or first responder of a nearby live wire/electrical source that could harm them, that they may or may not have been aware of previously.
There are two potential designs we would like to pursue: a magnetometer-based glove and an electric-field-meter-based glove. The magnetometer-based glove would most likely utilize the MAG3110 IC. This magnetometer would be able to detect the alternating magnetic field generated by the wires. Its fast data sampling rate would allow us to determine whether the magnetic field we are detecting is indeed generated by an AC wire. There is also little ambient magnetic noise in the environment, reducing the need for much signal processing. The microcontroller would then take those field strengths and translate them into vibration via the vibration disks embedded in the glove.
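As a rough sketch of that mapping, under assumed numbers (the sampling rate, 5 microtesla full-scale value, and synthetic test signal are placeholders, since neither the MAG3110 driver nor the vibration output has been designed yet): strip the steady Earth-field component and map the remaining AC field strength to a vibration duty cycle.

```python
# Illustrative mapping only: take a short window of field-magnitude samples,
# remove the DC (Earth-field) offset, and map the RMS of the remaining 50/60 Hz
# component to a 0-100% vibration-motor duty cycle. All constants are assumptions.
import math

def ac_rms(samples):
    """RMS of the samples after removing the DC (Earth-field) offset."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

def field_to_duty(samples, full_scale_uT=5.0):
    """Map AC field strength (microtesla) to a 0-100% vibration duty cycle."""
    return min(100.0, 100.0 * ac_rms(samples) / full_scale_uT)

# Example: a 60 Hz field of ~2 uT amplitude riding on the Earth's ~50 uT field,
# sampled at 960 Hz for 0.1 s
window = [50.0 + 2.0 * math.sin(2 * math.pi * 60 * n / 960) for n in range(96)]
print(round(field_to_duty(window), 1), "% duty")  # stronger field -> stronger buzz
```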
The second design we could use is an electric field meter. This design uses the electric fields generated by all charges as our signal. I don't have a concrete recommendation for the exact detector we will use, but it will either be an electrometer type (capacitively coupled DC amplifier with a shunt capacitor for calibration) or an AC carrier type, as they are low cost, simple, and small, all properties we desire. The ambient electric fields in our environment are often large due to electronics, so we will need to be able to zero the meter with respect to its environment. Thankfully, electric field strength increases greatly near a live conductor, so any large spike in field strength can serve as the filter we need to determine whether we are detecting the correct signal.
For more details you can read my longer proposal with background research, reasons for specific parts, block diagrams, etc, here:
|72||Safe walk hat
Woo Young Choi
Yong Jun Lee
|Dongwei Shi||Michael Oelze||design_document1.pdf
|Safe Walk hat
Visually impaired people use canes. Canes are effective, but they can only detect objects immediately near the person, and they have to be carried. We want to build a cane-free device that can also detect objects approaching the person, which canes cannot detect. We would like to implement a comfortable device that is easy for blind people to carry when they walk around.
To protect people with visual impairments from accidents, we thought of a device that notifies them when an object approaching faster than walking pace is coming toward them. Our solution includes two components. The first component is a hat integrated with 6 Doppler radar sensors facing six different directions, so that the sensors can cover 360 degrees. The second component is a speaker located in the device. If an object moving faster than a person's walking pace is coming toward the user, the device calculates the speed and direction of the approaching object, determines the level of danger, and sends a signal to the speaker to notify the user.
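For reference, the speed estimate from a continuous-wave Doppler module reduces to a single formula: the beat frequency is f_d = 2·v·f0/c, so v = f_d·c/(2·f0). The sketch below assumes a 10.525 GHz module and a 1.5 m/s "faster than walking" threshold; we have not selected an actual part or threshold yet.

```python
# Quick sketch of the speed estimate from a CW Doppler sensor. The 10.525 GHz
# carrier and the walking-speed threshold are assumptions for illustration.
C = 3.0e8          # speed of light, m/s
F0 = 10.525e9      # assumed radar carrier frequency, Hz

def approach_speed(doppler_hz):
    return doppler_hz * C / (2 * F0)

def danger_level(doppler_hz, walking_speed=1.5):
    v = approach_speed(doppler_hz)
    return ("alert" if v > walking_speed else "ignore"), v

# Example: a 350 Hz beat tone corresponds to roughly 5 m/s, fast enough to alert
print(danger_level(350.0))
```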
In order to detect an object coming toward a person with visual impairment, we integrate six sensors facing six different directions (front, behind, front left, front right, behind left, behind right) so that the device covers 360 degrees. We will use Doppler radar sensors to detect approaching objects.
1. The sensors located at the front, behind, left, and right of the cap detect objects or vehicles that are within the sensors' range. The mini speaker, also attached to the cap, alerts the blind person by saying “Approaching from Front”, or just “Front”.
2. When the user turns on the cap, the speaker automatically announces the remaining battery, saying, for example, “40%” or “60%”.
3. When 20% of the battery remains, it also alerts the user by saying “Low battery”, so the user can recharge the battery ahead of time.
We will be using a 1.5 V rechargeable battery. The sensors, LEDs, and vibration motor do not require much power, so we think 1.5 V is enough.
Criterion for Success
We believe this cap would help blind people safely cross crosswalks and walk around by giving alerts about approaching objects (people and vehicles).
|73||Indoor Navigation for the Visually Impaired
|David Hanley||Michael Oelze||design_document1.pdf
|Problem: Blind people need to navigate indoor and confined places such as apartments and apartment buildings to get from location A to location B. Typically these spaces are incredibly tight, and GPS is not accurate enough to provide room-to-room navigation. Blind people are usually able to navigate locally through the use of canes and guide dogs. Stairs, walls, and any obstacles that may occur in their path, however, make moving from one location to another in an unknown indoor environment a harder task because it requires some sort of direction. We aim to provide directional navigation to the visually impaired.
Solution: The blind person will input where he/she wants to go in the apartment (Living Room/Bathroom etc.) and the device will provide voice feedback with rough directions to the destination.
We create a “walkability map” of an apartment (we will first test this in an apartment), which includes all the areas a person can walk through (rooms, corridors, entrances, etc.). Through a system of Bluetooth beacons placed at particular locations inside each of the rooms and in appropriate corridors, we will be able to localize the blind person through a receiver worn on their neck or in their pocket. With the positions of the Bluetooth beacons and the receiver, we can pinpoint their location on the map and then provide them with directions with reasonable certainty.
*see image for a clearer idea*
Bluetooth beacons and receivers:
The placement of these beacons around the apartment will be key to the accuracy of our model. Each beacon will emit a signal, and we can compare the received signal values against the beacons and their respective coordinates to get a rough position estimate through the RSSI (the strength of the received radio signal). We can then solve the trilateration problem to get the coordinates of the receiver. We need to receive signals from at least 3 such beacons at any given point in time to get a good estimate of the person's position within 1-2 meters.
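A minimal sketch of those two steps, under an assumed log-distance path-loss model (the -59 dBm reference at 1 m, the path-loss exponent, and the beacon layout below are illustrative values, not measurements): convert each beacon's RSSI to a rough distance, then solve the trilateration problem by least squares.

```python
# Sketch only: (1) RSSI -> distance via the log-distance path-loss model,
# (2) 2D trilateration by linearizing against the first beacon and solving
# with least squares. Beacon coordinates and RSSI values are made up.
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """beacons: (N,2) coordinates in meters, distances: length-N list, N >= 3."""
    p, d = np.asarray(beacons, float), np.asarray(distances, float)
    A = 2 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
rssi = [-65.0, -71.0, -69.0]
dists = [rssi_to_distance(r) for r in rssi]
print("estimated position:", trilaterate(beacons, dists))
```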
This 1-2 meter error, as the TA pointed out, might be a lot considering the tight spaces, and it will compound over time. To mitigate the inaccuracies, we will need more beacons, and we will provide rough estimates of where, for example, a possible exit to another room is, rather than exact distances. Some of the fine detail within the 1-2 meter range will be left to the blind person (as we still expect them to use vision supplements like canes); however, we think this will be a huge upgrade over “the door is to the left” since there will be greater context.
This module will be responsible for receiving the signals from each of the beacons and then transferring them to the software module, where the computation to solve the trilateration problem will happen.
This will be a software module that solves the trilateration problem in a 2D space. Since the space is small, this will not be too computationally expensive. Once we get the proposed coordinates, these will be translated onto the map. We can make use of any of the path-finding algorithms (DFS/BFS) or more complex ones like A*/RRT* (we want to keep it simple in our prototype), depending on performance. The user will then be provided feedback in the form of speech, through speakers or earphones, to turn right or left after a set number of meters or steps.
This module will be responsible for sending voice feedback to the user. Will be getting input from the software module.
This module will contain the power routing from a power source that is either a battery, outlet connection, or usb connection.
Criterion for Success:
A criterion for success would be for the system to work seamlessly with one person being able to use it to navigate from Room A to another using the rough voice directions provided by the device.
A reach goal of this would be to try this out in larger public indoor spaces where the compounding of the inaccuracies will become more of an issue. We could use more powerful beacons for localization or look at different sensors to help localize.
Link to original post:
|74||Wind Turbine Generator System Design and Characterization
||John Kan||Arne Fliflet||design_document1.pdf
|Problem: While large-scale wind turbines are cost effective, smaller turbines cost more over the expected lifetime of a system. This is due to the decreased efficiency of smaller turbines and their increased relative cost; 1 kW turbines can cost $9,000.
Solution: Working with Professor James Allison, we will find ways to lessen the initial and operating costs while maintaining acceptable efficiencies with small-scale turbines.
Decreasing the initial cost can be done by using repurposed automotive alternators. Car alternators are relatively cheap and can be found at salvage yards for very low prices ($20-$30) when compared with permanent magnet generators. These alternators provide a good basis to create modifications that will allow us to use the parts for turbine generators as they are cheap and reliable. After confirming with simulations, we will most probably use a three-phase full-wave bridge rectifier for the required AC-DC rectification. This DC power will need to be adjusted based on the specific model of battery we use for energy storage.
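For a sense of scale, an ideal three-phase full-wave bridge averages about 3·sqrt(2)/pi, roughly 1.35, times the line-to-line RMS voltage at its DC output, minus two diode drops. The snippet below is only a back-of-the-envelope check with an assumed 30 V alternator output, not a measured figure.

```python
# Back-of-the-envelope rectifier check: ideal three-phase full-wave bridge output
# is about (3*sqrt(2)/pi) * V_LL(rms) minus two diode drops. 30 V is an example.
import math

def bridge_dc_output(v_ll_rms, diode_drop=0.7):
    return (3 * math.sqrt(2) / math.pi) * v_ll_rms - 2 * diode_drop

print(round(bridge_dc_output(30.0), 1), "V DC (ideal average)")
```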
To maintain optimal efficiencies at small scales, we plan to design and build a field controller and analyze its effectiveness, both in cost and efficiency. We will also build a system that uses a dynamo instead of a car alternator to serve as a reference. Depending on the properties of the dynamo or alternator used, we can use a boost or buck converter, along with some filters to suppress voltage ripple, to make sure the power provided to the battery is safe. For a 1 kW generator, the output voltage of the turbine should be on the order of 100 V, depending on the specific dynamo/alternator we use.
In a general sense, we are given several degrees of freedom: blade pitch control, yaw control, and field control. Blade pitch control is too costly for small-scale wind turbines, and yaw control will be handled passively (and is not the focus of our project). This leaves us with field control.
In order to compensate for variations in wind speed, we will change the field strength in our generator. To do this, we will change the voltage supplied to the electromagnet. Torque and speed values will be fed into an ACU (Alternator Control Unit), which in turn will allow us to determine appropriate field currents. By analyzing the speed vs. torque of the different architectures, we will be able to optimize the field controller's effectiveness at varying input speeds. For testing, we will use a DC motor to control the input speeds to simulate the wind powering the turbine.
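As a very rough sketch of the field-control idea (the gains, limits, and 14.4 V setpoint are placeholders, and the real ACU would also use the torque and speed inputs), a small PI loop could trim the field current to hold a target bus voltage as shaft speed varies:

```python
# Placeholder PI loop for the field controller concept: adjust the field
# (electromagnet) current command so the rectified bus voltage tracks a setpoint.
class FieldController:
    def __init__(self, v_target=14.4, kp=0.05, ki=0.01, i_max=4.0):
        self.v_target, self.kp, self.ki, self.i_max = v_target, kp, ki, i_max
        self.integral = 0.0

    def update(self, v_bus, dt):
        """Return the field current command (amps) for this control step."""
        error = self.v_target - v_bus
        self.integral += error * dt
        cmd = self.kp * error + self.ki * self.integral
        return max(0.0, min(self.i_max, cmd))  # clamp to the coil's safe range

ctrl = FieldController()
# Example: bus sagging to 13.0 V -> the controller raises the field current command
print(round(ctrl.update(13.0, dt=0.01), 3), "A field current")
```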
Our initial objectives will be to find the necessary components for our system that we’ll be able to modify to work with each other. We’ll need to find an appropriately sized dynamo with the right torque-rpm curve; small scale turbines run at about 500 rpm, so we can’t use one that runs optimally at extremely high or low speeds. We’ll also need to find a DC motor (either purchase or use one provided by Professor Allison), and batteries that can run said motor and receive power from the generator output. We will then begin design of the field controller by creating simulation models based on our chosen components.
At this project’s core lies the field controller (and ACU, in our alternator version of the system). If all else fails, our project will have at least successfully designed a field controller and ACU that guide power draw from an alternator or dynamo. However, our true goal is to simulate, optimize, and then implement the entire system of batteries and generators that very closely resemble a working small scale turbine.
|75||Electric Stove Power and Fire Control
|Thomas Furlong||Arne Fliflet||design_document1.pdf
People, especially older people, tend to forget that they have left their stove on. If there are children in the house, the risk gets worse. If the occupants are away and the stove is left on and somehow starts a fire, the material loss could be catastrophic.
To solve the problem, we want to monitor how the stove is operating and control it accordingly using sensors. The electric stove is powered through a switched 240 V AC supply; there are two legs that the voltage travels through and that meet to heat the stove. We want to use a CO sensor and an IR camera to confirm that a fire is actually present. The mobile app we will make will have the features of turning the stove on/off and notifying the user of a fire.
IR camera: this will monitor the size of the heat signature to determine whether a fire has occurred.
CO sensor: this will serve as a secondary check after the IR camera to make sure an actual fire is present on the stove.
ESP chip: we need a way to communicate the data from the main hardware and the timer to our mobile application, so this chip will be the connection between our hardware and our software.
Refillable sprinkler system with user control: this will let the user put out the fire if it is small.
Mobile application: this is how our microcontroller will communicate with the user. The communication will be two-way: the user can turn the stove on/off and trigger the sprinkler system, while also being notified of a fire.
Criteria for Success:
The IR camera will detect the occurrence of a fire, and the CO sensor will double-check that there is an actual fire.
The signal will be sent via the ESP chip to the phone.
The user will be notified of the fire condition and can activate the sprinkler system from the app.
User is able to turn on/off the stove.
|76||STATE ESTIMATION IN MULTI-AGENT PARTIALLY-OBSERVABLE ENVIRONMENT
|Amr Martini||Jing Jiang||design_document1.pdf
|Group Members: Kourosh Arasteh (arasteh2), Junwon Choi (jchoi143)
In applications of collaborative robotics, keeping track of the state of the environment is often split between individual agents that perform both localization and mapping. To do so, these agents require powerful computation capabilities and constant communication with both GPS satellites and D-GPS towers. However, there are applications of robots like these in locations that are GPS-denied, or contested to the point that extraneous long-range communication would rather be avoided. Many military base locations outside of the west fit this description. For construction projects, such as with the Army Corps of Engineers, collaborative robots pose an attractive solution to the problem of distributed construction of large-scale projects. However, without a robust mapping process for the environment, it would be impossible to develop a plan for distributed autonomous construction. Therefore, our problem statement is as follows:
How do we keep track of a large, sparse map between several ground agents, without using GPS or long-range localization technologies?
We will develop a small-scale model of a system that answers the above question. Our system will include 2 ground agents, which will be simple 2-wheeled robots with 2D LIDAR, or cameras if LIDARs are not available. Each ground agent will utilize an ATmega328 or similar microcontroller to handle LIDAR/camera data and communicate over a wired connection with the controller agent. The controller agent will be a PC running the mapping and stitching of the map, taking in data from the ground agents via a rosserial interface. The controller agent will also direct the ground agents, which will happen over a different rosserial interface. The final component will be a supervisory 'sky agent', a camera that oversees the stage of the ground agents and provides an estimate of the pose of each ground agent to the controller agent over a ROS service interface. ROS will provide the networking capabilities necessary to interface between the ground, sky, and controller agents. The stage that the agents will map is to be a small, sandbox-like enclosure with small barrels and other obstacles on the scale of the ground agents.
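As a small sketch of the ROS plumbing on the controller-agent side (the topic names and the PoseStamped message type are assumptions; the actual interfaces are still to be defined), a rospy node could simply cache the latest sky-agent pose estimate per ground agent for the map-stitching step:

```python
# Minimal controller-agent sketch: subscribe to assumed pose topics published by
# the sky agent and keep the most recent pose per ground agent for map stitching.
import rospy
from geometry_msgs.msg import PoseStamped

latest_pose = {}  # ground-agent name -> most recent PoseStamped

def make_callback(agent):
    def callback(msg):
        latest_pose[agent] = msg  # consumed later when stitching local maps
    return callback

if __name__ == "__main__":
    rospy.init_node("controller_agent")
    for agent in ("ground_0", "ground_1"):
        rospy.Subscriber(f"/sky_agent/{agent}/pose", PoseStamped, make_callback(agent))
    rospy.spin()  # mapping/stitching would run from callbacks or timers
```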
We would define success as implementing the full mapping flow in the following steps:
- The 'sky agent' is able to estimate the location and orientation of the two ground agents within 5 degrees
- The 'sky agent' is able to transfer the pose information to the controller agent over a ROS interface
- The ground agents are able to map their immediate surroundings and pass this information to the controller agent over ROS serial interface
- The controller agent is able to stitch the ground maps into the global map using the pose information
Potential growth areas include integrating WiFi communication into the ROS structure to preclude the need for wired connections between agents. We would also like to increase complexity of the stage to include mounds of dirt or sand, and other less-structured obstacles. Finally, including some sort of time-based component to the global map to show which areas have last been mapped would be another stretch goal.
For more information, refer to our original ideation thread: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30490
|Amr Martini||Jing Jiang||design_document1.pdf
Mousr is a robot mouse toy developed by the startup Petronics, located in Champaign, Illinois. Petronics creates these toys for pet loving households around the world, and Mousr was named as the 2018 Best Cat Toy Award by Pet Guide.
Mousr is a rather advanced and high cost toy at a commercial price point of $149.99 USD. With several features such as a speaker system, LEDs, and encoders to help drive the control algorithms, Mousr is a product designed to be the best in the business.
After meeting with Petronics, we learned that the Mousr team is looking to design a more compact, less technically advanced product called the MicroMOUSR. This product will be based on their original Mousr toy, but will require an entirely new design, both internally and externally.
Solution / Implementation:
There are several changes that need to be made in order to hit the company's ideal price point for a final MicroMOUSR product. In the head of the robot, we need to replace the flex PCB with a stiff board, as stiff boards come at a much lower cost. We will be removing the speaker and LED systems entirely during this process. Due to the simplistic nature of the desired product, we will also be replacing a 6 axis IMU with a 3 axis IMU in the head, and connecting it directly to the microprocessor in the body of the toy. Combining these will allow us to completely remove one of the two microprocessors in the final product.
The microprocessor in the head currently drives control algorithms based on data input from encoders on the wheels of the toy. Because we will be removing this microprocessor, we will also be removing the encoders on the wheels and designing a PWM motor control circuit instead. Because this system needs to be as low cost as possible, they have also asked us to look at redesigning the charging system as well in the body. The combination of a new motor control circuit and power delivery system will require a holistic approach to redesigning the PCB in the body. We will also have to rewrite the control algorithms of the product, as we will no longer have the data from the encoders or the 6 axis IMU. This robot will feature bluetooth connectivity to the existing Petronics app.
We will be able to give a detailed list of components after further meetings with Petronics which will allow us to determine the ideal price estimate for the final device.
Our end goal for this project is to create a functioning MicroMOUSR product at a low price point.
|78||HCESC Sponsored Comprehensive Medical Tool Attachment for VR
|Mengze Sha||Jing Jiang||design_document2.pdf
|The medical simulation training world is growing fast, and using VR to train doctors and other medical professionals is a worthwhile endeavor. However, doctors train heavily on finger dexterity and hand motions, and off-the-shelf VR controllers are ill-equipped to train medical professionals. Doctors have complained that these controllers feel alien, and the fact that a VR controller feels nothing like a syringe, an ambu bag, or a laryngoscope takes away from the quality and immersiveness of the VR simulations. Our project would be to create an attachment that could go on numerous medical tools, like a syringe or ambu bag, and would not only track basic position and rotation but also track linear motion, barometric pressure, force, and acceleration, and then relay that information into a VR training application. For example, a user in VR would be able to pick up and track a syringe. The syringe would be mapped into VR through a series of sensors so that the user's sight and feel match up. The syringe would track how much the user pushed on it and how much fluid would be injected, in order to give a more accurate and immersive feel of the procedure. The actual attachment we create would have a suite of sensors so that it could be placed on a number of common medical tools and map all of their specific, important information into VR. While at a recent medical conference, some of the people at HCESC got numerous complaints about the current controller interface for their applications, and none of the other groups presenting seemed to have a good solution, so I believe this idea is both novel and unique.
|79||Continuously Recording Microphone
|Dongwei Shi||Jing Jiang||design_document1.pdf
|Team Members: Brian Song (bzsong2) James Chen (jchen251)
Title: Continuously Recording Microphone
Original Post Link: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30655
Problem: Throughout the course of a day, there are many moments that we wish we could've recorded, such as something funny happening or just something somebody said. However, since that moment has passed and there's no way we could've known to record it, the moment is lost forever.
Solution: We are proposing to create a device that would record audio constantly throughout the day in order to capture these moments. In order to accomplish this with a limited amount of memory used, a certain amount of memory would be allocated for the device to record to, around 15 seconds of audio. When the device is powered on, audio will start recording and begin to use up the memory. Once all of the memory is used, new incoming audio would begin to overwrite the old audio recorded in the beginning. When the user wants to capture a clip of audio, they would press a button and the last 15 seconds of audio will be saved.
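The "last 15 seconds" behavior is essentially a ring buffer. The sketch below illustrates it with an assumed 16 kHz mono int16 format and NumPy arrays standing in for real audio frames; the actual device would write frames from the microphone/codec instead.

```python
# Sketch of the rolling-record behavior: a fixed-size ring buffer that new audio
# frames continuously overwrite, plus save() returning the buffer in time order
# when the button is pressed. The 16 kHz mono format is an example only.
import numpy as np

class RollingRecorder:
    def __init__(self, seconds=15, sample_rate=16000):
        self.buf = np.zeros(seconds * sample_rate, dtype=np.int16)
        self.pos = 0        # next write index
        self.total = 0      # total samples ever written

    def write(self, frames):
        frames = np.asarray(frames, dtype=np.int16)[-len(self.buf):]  # keep newest
        idx = (self.pos + np.arange(len(frames))) % len(self.buf)
        self.buf[idx] = frames
        self.pos = (self.pos + len(frames)) % len(self.buf)
        self.total += len(frames)

    def save(self):
        """Return the buffered audio ordered oldest-to-newest (the last ~15 s)."""
        if self.total < len(self.buf):
            return self.buf[:self.pos].copy()
        return np.concatenate((self.buf[self.pos:], self.buf[:self.pos]))

rec = RollingRecorder()
rec.write(np.random.randint(-100, 100, 16000))   # 1 s of fake audio
clip = rec.save()                                 # what the "save" button grabs
print(len(clip) / 16000, "seconds captured so far")
```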
The device would be handheld with three different modes of operation. The first mode would be to record audio coming from an external microphone. This would be useful simply when walking around outside while wearing headphones. The second mode would be to record audio coming from an external device such as a phone or computer. This would require the use of two headphone jacks. One would connect to the external device and one would connect to a pair of headphones or speakers. The audio from the external device will be recorded while being routed to the speakers in order for the user to listen to what is being said on the external device. This would be useful for recording clips of conversation over the phone or voice chat. The final mode of operation would be to record audio from an internal microphone on the device and then implement a noise cancelling algorithm in order to record audio in a noisy environment. We would likely use an adaptive filter to implement the noise cancellation, which may prove to yield the best results since audio is constantly being recorded. However, the amount of power and time required to do these calculations may prove to be too high, in which case we may move to using a simpler filter. Both options will be explored extensively and we will choose the best based on results and performance.
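A toy version of the adaptive option, assuming the internal microphone provides the primary signal and a second microphone provides a noise reference; the filter length and step size below are placeholders, and the real choice depends on the processing budget discussed above.

```python
# Toy normalized-LMS noise canceller: use the noise-reference microphone to
# predict the correlated noise in the primary microphone and subtract it.
import numpy as np

def nlms_cancel(primary, reference, taps=64, mu=0.1, eps=1e-8):
    """primary/reference: equal-length 1D arrays. Returns the cleaned signal."""
    primary = np.asarray(primary, float)
    reference = np.asarray(reference, float)
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]        # most recent reference samples
        y = np.dot(w, x)                       # predicted noise component
        e = primary[n] - y                     # cleaned sample
        w += (mu / (eps + np.dot(x, x))) * e * x
        out[n] = e
    return out
```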
When the user decides to save a clip, it will be stored in a different part of the memory such that multiple clips can be saved and indexed in order for the user to easily flip through and listen to each clip. A separate function will be added such that the user can select a clip and shave off audio from the beginning or the end in order to have a more precise clip of what was intended to be recorded.
Subsystems: We would have a total of four subsystems: a control subsystem, a power subsystem, a user interface subsystem, and an audio processing/noise cancellation subsystem. The control subsystem would control where all of the audio data goes and would consist of a microcontroller, either from an Arduino or added directly onto the PCB. The power subsystem will be powered by a set of replaceable batteries and be able to detect when the batteries are low and need replacing. The user interface would consist of a seven-segment display, a couple of knobs, and a few buttons. The seven-segment display would show the index of an audio clip as well as which mode of operation the device is in. The knobs would be used to trim audio from a clip, where one knob controls where the clip begins and another controls where the clip ends. The buttons would be used to turn on the device, cycle through modes of operation, tell the device when to stop recording, and cycle through previously recorded clips. Finally, the audio processing subsystem would consist of a couple of microphones and a PCB. The PCB will filter the audio coming from both microphones in order to produce audio with a significantly reduced amount of noise.
Success Criteria: We would know that our project has succeeded if our device is able to function for at least 16 hours at a time while being able to perform all of the three modes listed above. We would need to be able to record at least 100 clips of 15 second audio and be able to cycle through and crop each of these clips seamlessly. Finally, the noise cancellation operation would be able to produce a difference in audio that is significant, while being time and power efficient.
|80|| Motorized Assistive Track Lighting
|Zhen Qin||Arne Fliflet||design_document1.pdf
|This is a revision to a previous posted RFA that was rejected. We met with Professor Smith and talked to a few TA’s to further improve this proposal.
We propose to implement a track lighting system that will allow a user to adjust the lighting requirements of the room depending on the current needs of the user. The ideal use cases would be in art exhibits and museums where artifacts and art pieces are changing every few weeks. Currently to reconfigure lighting in some of these spaces, users must manually configure lighting to fit the current needs of the room. We believe this is unnecessary and can lead to safety hazards with trying to configure lighting that is affixed to high ceilings.
We intend to design this system such that multiple lighting modules can be controlled on the same track. To minimize some of the mechanical complexity, we plan to utilize current track lighting solutions to handle powering the receptacles and maintaining power on a bus bar. To allow for lighting mobility, we plan to retrofit the current track lighting bar into a custom track consisting of C-channel to hold the guide wheels for the movable light cart, plus a slotted channel (like 80/20) to fix the bus bar and ensure the cart is constantly powered while moving along the track. By inlaying the bus bar in between the guide wheels, we can minimize shorting hazards. The light carts will move by first latching onto a moving belt slightly above the track, controlled by an ATmega328 processor and a servo motor. The other side of the belt will contain a free-spinning wheel to allow for movement in both directions.
The products we intend to use to retrofit our design can be seen below. They will be used to remove some of the mechanical design complexity.
In regard to the electronics, our design proposal will require 2 unique PCB designs, and 3 boards in total. We plan to use the same PCB module for each light cart and have a separate module to control the linear-motion belt. The light cart PCB module will contain electronics to prevent collisions between light sources (the simplest solution would be limit switches, but alternatives like gauging distance with photo-sensors or IR could prevent damage before contact). It will also contain a servo motor that powers the lead-screw latching mechanism, as well as an additional servo motor that controls the rotational position of our track lights (2 motors per light cart).
There will likely be noise that may occur as our cart moves across the bar, but we plan to compensate for the noise by implementing some smoothing through circuit design.
Each cart will need a bluetooth receiver to receive instructions from the app on the ground. Circuitry will need to be designed to prevent sequencing errors with the procedure to latching onto the belt and moving to the desired position. We also plan to add dimming features through PWM to allow the user to have more control over the brightness of their source.
The 2nd unique PCB module that will need to be designed will mainly be focused on running the linear motion belt. It will also require a bluetooth receiver to get instructions from the user app. The motor driving this belt will need to be the most powerful in terms of torque.
In regards to power considerations, aside from the initial AC-DC converter which we plan to buy off the shelf, we plan to mainly operate within 3.3 - 24 VDC to power all our sensors and devices.
From the user end, we plan to utilize an App that will allow a user to communicate with a certain light cart and manually adjust the position (left and right) from the app as well as the orientation of the light off the app.
In terms of competing products, there are devices that deal with track lighting in various applications such as horticulture and stage production; however, we feel they don't target our consumer market at all. Those products are offered at a much higher cost, with excessive features and lighting configurations that aren't necessary for our intended users.
Original Idea Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=31695
Rejected RFA Post: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=32297
|81||Heart and Lung sound sensing shirt
|Hershel Rege||Arne Fliflet||design_document1.pdf
| HEART AND LUNG SOUND SENSING SHIRT
Abhiyash Nibber, Hesham Al Ramahi
Millions of people across the world are affected by Heart and Lung illnesses but do not have access to quality healthcare. They often have to travel large distances to talk to a doctor and might have to meet several doctors in person before they can get proper treatment.
SOLUTION AND IMPLEMENTATION
We plan to build a shirt containing 10 MEMS microphones placed at different locations on the body. Each of these microphones will capture heart and lung sounds, which will be processed by a microcontroller and sent to the user's phone via Bluetooth. To make sure the user has sufficient mobility while wearing the shirt, we will use stretchable cables to connect the microphones to the microcontroller. To power the shirt, we will use LiPo USB batteries, which provide around 4.2 V at full charge. For the amplification circuitry needed for the tiny biological signals, we will design a PCB, and the op-amps will be powered from the LiPo through a boost converter that provides an effective 12 V. The shirt will also be quite tight, so if we secure the sensors in the shirt, it is unlikely they will fall out. The recorded data could then be sent to a doctor for analysis, saving the patient time and money.
WHAT MAKES THIS UNIQUE? There are shirts on the market that can detect a user's heart rate and breathing rate, but there is no shirt that detects heart and lung sounds. The feature of recording the sound files and transferring them to a doctor is also very unique. This is different from Project 24 in Spring 2018 because we will implement a Bluetooth feature for communicating with a smartphone, which that group wanted to do but could not. Also, since this is a shirt, we will have to take sweating and stretchability into account in our mechanical design and wire up the suit using insulated stretchable wires such as the ones shown in the link here: https://mnwire.com/istretch/