# Personal Carrier Robot (Project 36)

TA: Raman Singh

Team Members:
- Okan Kocabalkanli (okan2)
- Deniz Caglar (dcaglar2)
- Jirawatchara Tanthiptham (jt20)

# Problem

Some individuals cannot easily carry objects by themselves. For example, elderly individuals may be unable to carry heavy groceries.

# Solution

We propose a path-finding robot that follows the individual while avoiding obstacles. We plan to detect obstacles with ultrasonic depth imaging: a series of rotating ultrasonic sensors will image the robot's surroundings. The person of interest will send GPS data to the robot over Bluetooth, and a second GPS chip will be mounted on the robot. From the two GPS fixes, the robot will compute the distance between itself and the person and move in the correct direction using the heading provided by an onboard compass chip. Combining the obstacle data with the goal direction, we will employ a path-finding/SLAM algorithm to direct the robot through the terrain.
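
As a rough illustration of the distance-and-heading step, here is a minimal Python sketch using the standard haversine and initial-bearing formulas (the function and constant names are ours, not part of the design); the robot would steer by comparing the returned bearing against its compass heading:

```python
# Minimal sketch: distance and bearing between two GPS fixes.
# All names here are illustrative, not from the proposal.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (meters, degrees clockwise from true north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)

    # Haversine great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing toward point 2
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

# Example: a user roughly 100 m due north of the robot
d, b = distance_and_bearing(40.1106, -88.2073, 40.1115, -88.2073)  # b ~ 0 deg
```
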
# Solution Components

## Mechanical
This subsystem will encompass the frame for mounting the other components as well as the propulsion system of the unit. The system will be rear-wheel driven, with each driven wheel powered by its own motor to allow for differential steering; a sketch of the motor-mixing step follows the component list.

### Components:
- Wooden chassis
- A tank drive system with 4 wheels
- 2 DC motors
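
Differential steering ultimately reduces to driving the two motors at different duty cycles. A minimal sketch of that mixing step, with hypothetical names and command ranges:

```python
# Minimal sketch: differential-drive mixing. Names and ranges are
# hypothetical; real duty cycles depend on the chosen motor driver.
def differential_mix(forward, turn):
    """Map forward and turn commands (each in [-1, 1]) to left/right motor duties."""
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return left, right

# Spin right in place: differential_mix(0.0, 0.5) -> (0.5, -0.5)
```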

## Power Management
This subsystem powers the rest of the circuit, including the PCB and the motors.

### Components:
- A LiPo battery
- LiPo battery charging circuit

## PCB
This subsystem is the sensor suite and brain of the system, performing simultaneous localization and mapping (SLAM) and path finding. It generates a PWM signal for the stepper motor, which rotates the radar-imaging (ultrasonic) sensor array to sweep a full field of view. From the measured ultrasonic data, obstacles in the system's environment are mapped. The subsystem combines this map with data received from the RPI subsystem via SPI: when the user is in line of sight, the MCU uses the distance data from the RPI subsystem's camera; when the user is out of line of sight, the MCU uses the user's gyroscope and accelerometer data forwarded by the RPI subsystem. In either case, the user's location is set as the target point, and a Kalman filter predicts the trajectories of the mapped obstacle points (sketched below). From these predicted trajectories, the subsystem builds a probability grid of fixed-size blocks, each holding a collision probability. Using a path-finding algorithm such as A* (sketched after the component list), we draw a path between blocks to the target point to find the safest and shortest route. The subsystem computes the drive commands from this path and controls the DC motors accordingly.
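
As a sketch of the prediction step, a constant-velocity Kalman filter over one tracked obstacle point might look like the following; the matrices and noise values are illustrative assumptions, not the team's tuning:

```python
# Minimal sketch: constant-velocity Kalman filter for one obstacle point.
import numpy as np

DT = 0.1  # update period in seconds (assumed)

F = np.array([[1, 0, DT, 0],   # state transition; state = [x, y, vx, vy]
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01           # process noise (assumed)
R = np.eye(2) * 0.25           # ultrasonic measurement noise (assumed)

def kf_step(x, P, z):
    """One predict+update cycle; z is a measured (x, y) obstacle position."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

def predict_ahead(x, steps):
    """Extrapolate the filtered state to future time steps for the grid."""
    return [np.linalg.matrix_power(F, k) @ x for k in range(1, steps + 1)]
```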

### Components:
- A microcontroller
- DC Motor controller
- Stepper motor driver (TB67S128FTG)
- Radar Imaging System Connector
- Programmer Circuit
- SPI Connection circuit to RPI
- Simultaneous localization and mapping (SLAM) Algorithm
- Kalman filter for Obstacle tracking and prediction
- Roadmap/Grid path planning with A*
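
A minimal sketch of the grid search described above, assuming a grid of collision probabilities, a blocking threshold, and a risk-weighted step cost (all parameters are our assumptions):

```python
# Minimal sketch: A* over a collision-probability grid. Cells above a
# threshold are blocked; riskier cells cost more, approximating
# "safest and shortest".
import heapq

def a_star(grid, start, goal, block_at=0.7, risk_weight=5.0):
    """grid[r][c] is a collision probability in [0, 1]; start/goal are (r, c)."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]
    came_from, g = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] >= block_at:          # too risky to enter
                continue
            new_g = g[cur] + 1.0 + risk_weight * grid[nr][nc]
            if new_g < g.get((nr, nc), float("inf")):
                g[(nr, nc)] = new_g
                came_from[(nr, nc)] = cur
                h = abs(goal[0] - nr) + abs(goal[1] - nc)  # Manhattan heuristic
                heapq.heappush(frontier, (new_g + h, (nr, nc)))
    if goal not in came_from:
        return None                                # no safe path found
    path, node = [], goal                          # walk back to start
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```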

## RPI
This subsystem obtains and processes the data needed for simultaneous localization and mapping (SLAM) using a Raspberry Pi. Using a camera, the robot will detect a fixed-size tag; because the tag's real size is known, its apparent size in the camera image gives the distance. This distance will be passed to the MCU over SPI. If a person blocks the camera's view, the RPI will switch to a "search mode" and forward the phone's heading information (accelerometer, gyroscope) to the MCU; the robot will then follow the same heading as the user while avoiding obstacles until the camera reacquires the user.
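
The distance-from-tag-size step follows from the pinhole camera model, Z = f * W / w. A minimal sketch with placeholder calibration values (not the team's measurements):

```python
# Minimal sketch: distance from the apparent size of a fixed-size tag.
TAG_WIDTH_M = 0.15        # real tag width (assumed)
FOCAL_LENGTH_PX = 1000.0  # from camera calibration (assumed)

def tag_distance_m(tag_width_px: float) -> float:
    """Distance to the tag given its measured width in pixels."""
    return FOCAL_LENGTH_PX * TAG_WIDTH_M / tag_width_px

# A 100-pixel-wide tag would be ~1.5 m away with these numbers.
```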

### Components:
- Raspberry Pi
- Bluetooth connection
- RPI Camera

## User
This is the subsystem that will directly interact with our users. It uses a mobile app to send the user’s GPS data over Bluetooth. For prototyping, we plan to use an app called “Blynk”, which lets users stream sensor data from a smartphone via Bluetooth; a sketch of the receiving side follows the component list.
### Components:
- Smartphone
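
A minimal sketch of the receiving side on the Raspberry Pi, using the third-party `bleak` BLE library; the device address, characteristic UUID, and packet layout here are hypothetical, and Blynk's own transport would replace this in practice:

```python
# Minimal sketch: receive phone telemetry over BLE on the Raspberry Pi.
import asyncio
import struct
from bleak import BleakClient

PHONE_ADDR = "AA:BB:CC:DD:EE:FF"                     # placeholder address
TELEM_UUID = "0000feed-0000-1000-8000-00805f9b34fb"  # placeholder UUID

def on_packet(_sender, data: bytearray):
    # Assumed layout: lat, lon, heading as little-endian floats
    lat, lon, heading = struct.unpack("<fff", data)
    print(f"user at ({lat:.5f}, {lon:.5f}), heading {heading:.1f} deg")

async def main():
    async with BleakClient(PHONE_ADDR) as client:
        await client.start_notify(TELEM_UUID, on_packet)
        await asyncio.sleep(60)  # stream for a minute

asyncio.run(main())
```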

# Criterion For Success
- The robot should be able to consistently follow the phone holder across flat terrain containing solid, clearly defined obstacles.
- The person of interest may be 3-10 meters away from the robot.
- Obstacles should stand at least 30 cm above ground level.
- The robot should be able to carry a 3 kg load over level ground.


[Discussion thread](https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=72004)

