Project
# | Title | Team Members | TA | Documents | Sponsor |
---|---|---|---|---|---|
19 | Autonomous Vehicle with VR Control (upper part) | Kefan Tu, Kewei Sui | Amr Martini | design_document0.pdf final_paper0.pdf other0.pdf photo0.png photo0.jpg presentation0.pptx proposal0.docx video0.mov video | |
# Introduction & Problem

Robots have been changing our lives for a long time, and we will certainly build more of them to make daily life better. A smart house can already turn the air conditioning on or off remotely through the owner's mobile phone. But what if we want to feed the dog while we are at work, or light a candle on the table to create a romantic atmosphere before we get home? Many such detailed tasks are hard to accomplish with the programmed commands and smart appliances currently on the market. What if we designed a brand-new interaction method, such as a robot that carries out our requests remotely, breaking the limitations of ordinary smart appliances? Then we need a more interactive way to create commands for that robot.

# Solution Overview

Our idea is that a person produces a command manually in VR, and an autonomous robot then reproduces the action and finishes the task remotely. Our group, together with another group, will build a wheeled robot with VR control that performs simple tasks assigned by a user wearing a VR headset. Since this is a very large project, after sending Prof. Arne a short proposal and getting his permission, we split it into two parts: the upper part and the bottom part. Our group will focus mostly on the upper part of the robot, including object and obstacle recognition, local-area mapping and localization calibration, basic obstacle avoidance, and VR control for the entire robot (both upper and bottom parts).

# Solution Components

The speed control of the robot will use a PD controller. The robot will determine its current location from its wheel encoders and gyroscope. For better accuracy, this location estimate will be fused, via a Kalman filter, with the vision-based location estimate provided by the other group.
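The sensor-fusion idea above can be sketched with a minimal one-dimensional Kalman filter: predict with the encoder/gyro dead-reckoning delta, then correct with the camera-based position fix from the other group. The class name and all noise values below are illustrative assumptions, not final design parameters.

```python
class Kalman1D:
    """Minimal 1-D Kalman filter: odometry predict, camera correct."""

    def __init__(self, x0, p0, q, r):
        self.x = x0   # position estimate
        self.p = p0   # estimate variance
        self.q = q    # process (odometry) noise variance
        self.r = r    # measurement (camera) noise variance

    def predict(self, dx):
        """Dead-reckoning step: move by the encoder-measured delta."""
        self.x += dx
        self.p += self.q

    def update(self, z):
        """Correction step: blend in a camera position fix z."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Three 10 cm odometry steps, each followed by a slightly noisy camera fix.
kf = Kalman1D(x0=0.0, p0=1.0, q=0.01, r=0.25)
for dx, z in [(0.10, 0.12), (0.10, 0.21), (0.10, 0.33)]:
    kf.predict(dx)
    kf.update(z)
```

Because the camera variance `r` is fixed while the odometry variance grows by `q` each step, the gain automatically weights the camera more when dead reckoning has drifted for a long time.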
We will mount two 2-link planar robot arms at the front, driven by servos; end-effector targets are converted into servo rotations through inverse kinematics. The microcontroller will be located at the bottom, retrieving object, obstacle, and map data from the other group and sending them to the VR headset. With the map continuously updated by the other group's LiDAR, we will run the A* algorithm to find the shortest obstacle-avoiding path to the user's target location.

## Sensor Subsystem

We will first use the LiDAR to detect obstacles by measuring the distance between the robot and any unknown obstacle. Once the robot reaches its destination, it uses a camera to find the target object. We currently plan to put a QR code on each target object so this function can be implemented simply; otherwise we would have to spend most of our time on computer vision. The other group will use the wheel encoders and gyroscope to perform localization by dead reckoning, which will be calibrated using the same camera.

## Processing Subsystem

The communication between the upper and lower parts of the robot will be wireless, so each part needs its own microcontroller and PCB. The robot will always start at the home location, which serves as the world origin, and will continuously read sensor values to track its current location. The processor will receive location information from the other group's camera and update the robot's location accordingly. The map will also be updated continuously; whenever it changes, the microcontroller will re-run A* to compute a new path to the final location. The robot arm will receive end-effector location data and compute the corresponding joint angles.

## Control Subsystem

We will create a 1:1 copy of the experimental environment in VR, where the user can easily move and rotate the target object. The new position and orientation of that object will then be transmitted to our robot, which will finish the job remotely.
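The joint-angle computation for the 2-link arm described above has a standard closed form via the law of cosines. The sketch below assumes an elbow-down configuration; the link lengths in the usage example are placeholders, not our final servo geometry.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a 2-link planar arm.

    Given a target end-effector position (x, y) and link lengths l1, l2,
    return the (shoulder, elbow) joint angles in radians, elbow-down.
    """
    d2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    # Shoulder angle: direction to target minus the wedge made by link 2.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Sanity check: solve for a reachable target, then reconstruct it
# with forward kinematics (placeholder 12 cm and 10 cm links).
t1, t2 = two_link_ik(0.15, 0.10, l1=0.12, l2=0.10)
xr = 0.12 * math.cos(t1) + 0.10 * math.cos(t1 + t2)
yr = 0.12 * math.sin(t1) + 0.10 * math.sin(t1 + t2)
```

On the real robot these angles would still need to be mapped onto each servo's zero offset and rotation limits.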
If our robot recognizes any new obstacle or is unable to finish the task, a message will be shown in VR, or the virtual world will update itself to display the unknown obstacle visually.

## Power Subsystem

We will use a 12 V rechargeable battery with USB output at the bottom to power the microcontroller, motors, and other sensors.

# Criterion for Success

The upper part of the robot can successfully recognize objects and update their positions both in the local map maintained by the microcontroller and in the 3D map in VR. It can transmit data to the lower part of the robot and fetch commands from the VR front end. Because localization by dead reckoning with wheel encoders and a gyroscope accumulates error over time, the robot should also be able to calibrate its localization using camera data.
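The dead-reckoning update whose cumulative drift motivates the camera calibration above can be sketched for a differential-drive base as follows; the function name, track width, and step sizes are illustrative assumptions, and on the real robot the gyroscope would refine the heading term rather than wheel odometry alone.

```python
import math

def dead_reckon(pose, d_left, d_right, track_width):
    """Differential-drive dead-reckoning update.

    pose is (x, y, heading); d_left and d_right are the distances each
    wheel travelled since the last encoder reading, in meters.
    """
    x, y, th = pose
    d = (d_left + d_right) / 2.0            # distance moved by the center
    dth = (d_right - d_left) / track_width  # change in heading
    # Integrate along the average heading over the step.
    x += d * math.cos(th + dth / 2.0)
    y += d * math.sin(th + dth / 2.0)
    return (x, y, th + dth)

# Four equal 5 cm straight-line steps on a 20 cm-wide base.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = dead_reckon(pose, 0.05, 0.05, track_width=0.20)
```

Any bias in the per-step wheel distances compounds across iterations, which is exactly why the criterion above requires periodic recalibration from the camera.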