Project
| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 76 | State Estimation in Multi-Agent Partially-Observable Environment | Junwon Choi, Kourosh Arasteh | Amr Martini | design_document1.pdf, final_paper1.pdf, presentation1.pdf, presentation2.pdf, proposal1.pdf | |
Group Members: Kourosh Arasteh (arasteh2), Junwon Choi (jchoi143)

Problem Overview: In collaborative robotics applications, tracking the state of the environment is often split between individual agents that each perform both localization and mapping. To do so, these agents require powerful computation and constant communication with both GPS satellites and D-GPS towers. However, some deployments are GPS-denied, or contested to the point that long-range communication is best avoided; many military base locations outside the West fit this description. For construction projects, such as those of the Army Corps of Engineers, collaborative robots are an attractive approach to distributed construction of large-scale projects. Without a robust process for mapping the environment, though, no plan for distributed autonomous construction can be developed. Our problem statement is therefore: how do we maintain a large, sparse map across several ground agents without using GPS or other long-range localization technologies?

Solution: We will develop a small-scale model of a system that answers this question. The system will include two ground agents: simple two-wheeled robots carrying 2D LIDAR, or cameras if LIDARs are not available. Each ground agent will use an ATmega328 or similar microcontroller to handle the LIDAR/camera data and communicate over a wired connection with the controller agent. The controller agent will be a PC that runs the mapping and map stitching, taking in data from the ground agents via a rosserial interface. The controller agent will also direct the ground agents over a separate rosserial interface.
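The core of the map stitching the controller agent performs is a rigid-body transform: each scan point in a ground agent's local frame is rotated and translated into the global frame using that agent's pose from the sky agent. A minimal sketch in plain Python (the function name and data layout are our own, not fixed by the design):

```python
import math

def scan_to_global(points, pose):
    """Transform 2D LIDAR points from a ground agent's local frame into
    the global frame, given the agent's pose reported by the sky agent.

    points: list of (x, y) tuples in the agent's frame (metres)
    pose:   (x, y, theta) in the global frame, theta in radians
    """
    px, py, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    # Standard 2D rotation followed by translation into the global frame
    return [(px + c * lx - s * ly, py + s * lx + c * ly)
            for lx, ly in points]

# Example: an agent at (1, 2) facing 90 degrees sees an obstacle 1 m ahead;
# in the global frame that obstacle sits at (1, 3).
global_pts = scan_to_global([(1.0, 0.0)], (1.0, 2.0, math.pi / 2))
```

In the full flow, the transformed points would then be rasterized into the controller agent's global occupancy grid; this sketch only covers the frame change.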
The final component will be a supervisory 'sky agent': a camera that oversees the stage of the ground agents and provides pose estimates for each ground agent to the controller agent over a ROS service interface. ROS provides the networking capabilities needed to interface between the ground, sky, and controller agents. The stage the agents will map is a small, sandbox-like enclosure containing small barrels and other obstacles on the scale of the ground agents.

Goals: We would define success as implementing the full mapping flow in the following steps:
- The sky agent can estimate the location and orientation of the two ground agents, with orientation accurate to within 5 degrees.
- The sky agent can transfer this pose information to the controller agent over a ROS interface.
- The ground agents can map their immediate surroundings and pass this information to the controller agent over a rosserial interface.
- The controller agent can stitch the ground maps into the global map using the pose information.

Stretch Goals: Potential growth areas include integrating WiFi communication into the ROS structure to remove the need for wired connections between agents. We would also like to increase the complexity of the stage to include mounds of dirt or sand and other less-structured obstacles. Finally, adding a time-based component to the global map, showing when each area was last mapped, is another stretch goal.

For more information, refer to our original ideation thread: https://courses.engr.illinois.edu/ece445/pace/view-topic.asp?id=30490
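One way the sky agent's 5-degree orientation goal might be met and verified: if each ground agent carries two distinguishable overhead markers (a hypothetical setup, not specified in the proposal), heading follows from `atan2` on the detected marker positions, and the success criterion reduces to a wrap-around-aware angle comparison. A sketch under those assumptions:

```python
import math

def heading_from_markers(front, rear):
    """Estimate a ground agent's heading from two overhead marker
    detections (image coordinates): one at the front of the chassis,
    one at the rear. Returns heading in degrees from the +x image axis.
    (Assumes the camera image axes are aligned with the global frame.)
    """
    dx = front[0] - rear[0]
    dy = front[1] - rear[1]
    return math.degrees(math.atan2(dy, dx))

def within_tolerance(estimate_deg, truth_deg, tol_deg=5.0):
    """Check the success criterion: estimate within tol_deg of the true
    heading, handling wrap-around (e.g. 359 vs 1 degree is a 2-degree error)."""
    err = (estimate_deg - truth_deg + 180.0) % 360.0 - 180.0
    return abs(err) <= tol_deg

# Example: front marker up-and-right of the rear marker -> 45-degree heading
h = heading_from_markers((110.0, 110.0), (100.0, 100.0))
```

A real overhead camera inverts the y axis (pixel rows grow downward), so a deployed version would flip `dy`; the tolerance check is independent of that convention.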