Project
# | Title | Team Members | TA | Documents | Sponsor |
---|---|---|---|---|---|
32 | AUTONOMOUS VEHICLE WITH VR CONTROL | Chenliang Li, Honglu He, Siping Meng | Amr Martini | appendix0.pdf, design_document0.pdf, final_paper0.pdf, photo0.jpg, presentation0.pptx, proposal0.pdf | |
**Introduction & Problem**

Robots have been improving our lives for a long time, and we will certainly build more of them to make daily life better. A smart home can already turn the air conditioning on or off remotely through the owner's phone. But what if we want to feed the dog while we are at the office, or light a candle on the table to set a romantic mood before we get home? Many specific situations like these are hard to handle with the pre-programmed commands and smart appliances on the market today. What if we designed a brand-new interaction method, for instance a robot that carries out our requests remotely and breaks through the limitations of ordinary smart appliances? Then we need a more interactive way to issue commands to that robot.

**Solution Overview**

We propose letting the user demonstrate a command manually in VR and having an autonomous robot reproduce the action to finish the task remotely. Our group, together with another group, will build a wheeled robot with VR control that performs simple tasks assigned by a user wearing a VR headset. Because this is a very large project, after sending Prof. Arne a short proposal and receiving his permission, we split it into two parts, an upper part and a bottom part (upper-part group: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=27877). Our group will focus mostly on the bottom part of the robot: wheel speed control, localization, the robot arms (to move objects around), the microcontroller, path planning, and power.

**Solution Components**

The robot's wheel speed will be regulated by a PD controller. The robot will determine its current location from its wheel encoders and gyroscope; to improve accuracy, this estimate will be fused by a Kalman filter with the location computed from vision by the other group. We will mount two 2-link planar robot arms, driven by servos, at the front; end-effector targets are converted to joint angles by inverse kinematics. The microcontroller will also sit at the bottom, receiving object, obstacle, and map data from the other group and forwarding them to the VR headset. As the other group's LiDAR continuously updates the map, we will run the A* algorithm to find the shortest obstacle-avoiding path to the location the user selects. (Implementation sketches for these components follow the subsystem descriptions below.)

**Sensor Subsystem**

We will use the wheel encoders and a gyroscope to perform localization by dead reckoning. The other group will use a LiDAR and a camera to detect obstacles and objects.

**Processing Subsystem**

The robot always starts at a home location that serves as the world origin and continuously reads its sensors to track its position. The processor will also receive a location estimate derived from the other group's camera and update the robot's estimated location accordingly. The map is updated continuously, and each time it changes the microcontroller re-runs A* to compute a new path to the final location. The robot arm receives end-effector position data and computes the corresponding joint angles.

**Power Subsystem**

We will use a 12 V rechargeable battery with a USB output at the bottom to power the microcontroller, the motors, and the other sensors.
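The sketches below illustrate how the main bottom-part components could be implemented; they are minimal examples under stated assumptions, not our final design. First, the PD speed controller; the class name, gains, and 100 Hz loop rate are illustrative assumptions:

```python
# Minimal PD speed-control sketch. Gains and loop rate are assumed values.
class PDController:
    def __init__(self, kp, kd):
        self.kp = kp              # proportional gain
        self.kd = kd              # derivative gain
        self.prev_error = 0.0

    def update(self, target_speed, measured_speed, dt):
        """Return a motor command from the speed error and its rate of change."""
        error = target_speed - measured_speed
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Example: push one wheel toward 1.0 m/s inside a 100 Hz control loop.
controller = PDController(kp=2.0, kd=0.1)
command = controller.update(target_speed=1.0, measured_speed=0.8, dt=0.01)
```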
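The localization pipeline (dead reckoning from the encoders and gyro, corrected by the vision-based pose through a Kalman filter) could look roughly like the following sketch; the state layout, the noise values, and the identity measurement model are simplifying assumptions:

```python
import numpy as np

class DeadReckoningFilter:
    """Dead reckoning from wheel encoders and a gyroscope, corrected by a
    vision-based pose via a Kalman-style update (measurement model H = I)."""

    def __init__(self):
        self.x = np.zeros(3)                   # pose estimate: [x, y, heading]
        self.P = np.eye(3) * 0.01              # pose covariance (assumed initial value)

    def predict(self, d_left, d_right, gyro_dtheta):
        """Propagate the pose from encoder distances and the gyro heading change."""
        d = (d_left + d_right) / 2.0           # distance traveled by the robot center
        theta = self.x[2] + gyro_dtheta / 2.0  # midpoint heading over the step
        self.x += np.array([d * np.cos(theta), d * np.sin(theta), gyro_dtheta])
        self.P += np.diag([1e-3, 1e-3, 5e-4])  # assumed process noise per step

    def correct(self, vision_pose):
        """Fuse an [x, y, heading] estimate from the other group's vision."""
        R = np.diag([0.05, 0.05, 0.02])        # assumed measurement noise
        K = self.P @ np.linalg.inv(self.P + R) # Kalman gain with H = I
        self.x = self.x + K @ (np.asarray(vision_pose) - self.x)
        self.P = (np.eye(3) - K) @ self.P

# Example: one odometry step followed by one vision correction.
f = DeadReckoningFilter()
f.predict(d_left=0.10, d_right=0.12, gyro_dtheta=0.05)
f.correct([0.11, 0.01, 0.05])
```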
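For the 2-link planar arms, the joint angles follow from the end-effector target in closed form. The sketch below shows one of the two mirror (elbow-up/elbow-down) solutions; the function name and the 15 cm link lengths are illustrative assumptions:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians that place the end effector
    of a 2-link planar arm at (x, y); raises ValueError if out of reach."""
    r2 = x * x + y * y
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: reach a point 20 cm ahead and 5 cm up with two 15 cm links.
shoulder, elbow = two_link_ik(0.20, 0.05, l1=0.15, l2=0.15)
```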
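Finally, path planning with A* on the continuously updated map might be sketched as follows; the occupancy-grid encoding (0 = free, 1 = obstacle) and 4-connected unit-cost moves are assumptions. The planner is simply re-run from the current cell whenever the map changes:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* over a 2D occupancy grid; returns a list of (row, col) cells from
    start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # Manhattan distance

    tie = itertools.count()                  # tie-breaker so the heap never compares parents
    frontier = [(heuristic(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue                         # already expanded with a lower cost
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(step, float("inf")):
                    best_g[step] = ng
                    heapq.heappush(frontier, (ng + heuristic(step), next(tie), ng, step, cell))
    return None

# Example: route around a wall spanning most of the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # detours through the open right column
```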
**Criterion for Success**

The bottom part of the robot will be able to calculate the position of the whole robot and control the two robot arms to grab, hold, and place objects. It will also process the information coming from the upper part and compute an optimal path to carry out the command from the VR device.