# Hardware Accelerated Panorama Image Stitching Camera on Embedded Linux
Team Members:
- Gautam Pakala (gpakala2)
- Cole Herrmann (colewh2)
- Jake Xiong (yuangx2)

# Problem

Time and energy are scarce resources on UAVs. Traditionally, a UAV used for aerial mapping takes a picture each time it flies a predetermined distance interval. Because UAVs must be kept lightweight, few carry enough onboard processing hardware and energy reserves to stitch hundreds of frames into a map. That is why most mapping UAVs perform map generation offsite on hardware more powerful than the onboard camera and flight controller. In time-sensitive emergencies (open combat, search and rescue, etc.), it may not be possible to land the UAV to render an aerial map, and it would be much more convenient if the drone could render the map itself, which could then be viewed on a ground station through a UDP radio link.
# Solution

We would like to design a camera with onboard hardware acceleration for stitching images together. Stitching images into a panorama or map requires several repetitive operations to "prep" the images. Grayscaling, blurring, and convolving images can be done on a traditional CPU, but both processing time and power consumption improve when such repetitive operations are pipelined through an FPGA. With Cole's ECE 397 funding from last semester, he acquired a Digilent Embedded Vision Bundle (https://digilent.com/shop/embedded-vision-bundle/); we plan on using its Zybo Z7-20 and Pcam 5C as the basis for the camera. The Zybo board has two Arm Cortex-A9 processor cores that can run Xilinx's embedded Linux distribution, PetaLinux. Running PetaLinux on the camera gives us easier access to the I/O and filesystem on the Zybo board than a bare-metal design would. After completing this project, Cole plans to integrate the camera into one of his drones, including adding serial communication between the flight controller and the Zybo board (another advantage of building on PetaLinux), which would give access to a plethora of sensors such as GPS and airspeed that could bring a live-rendering aerial mapping drone into reality!
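As a software reference for the prep stage, the grayscale and blur operations reduce to fixed-coefficient multiply-accumulates, which is exactly the structure an FPGA pipeline can unroll. A minimal sketch in Python/NumPy (the function names and the BT.601 luminance weights are our own choices, not fixed by the design):

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance weighting (ITU-R BT.601): one fixed-coefficient
    multiply-accumulate per pixel, a natural fit for FPGA DSP slices."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def convolve2d(img, kernel):
    """Naive 2D convolution with edge padding; each output pixel is a
    small MAC pipeline over the kernel window."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

# 3x3 Gaussian blur kernel (coefficients sum to 1)
GAUSS3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
```

A hardware version would stream rows through line buffers instead of indexing a full frame, but the arithmetic per pixel is the same.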


# Solution Components

## Subsystem 1: Keypoint Detection/Description, and Matching
As mentioned before, development of this project will be done on the Zybo board running the embedded Linux environment. The majority of the code base and the algorithms below will be written in SystemVerilog for the hardware portion. Some image pre-processing may be done in the Linux environment if that is easier to implement.
All image stitching for panoramas has three main processes: keypoint detection/description, keypoint matching, and homography transformation.
Keypoint detection is the process of identifying points in an image that remain recognizable across different angles, lighting, and scales. Many computer vision algorithms accomplish this, such as SIFT, SURF, and FAST, to name a few. We are choosing to implement the FAST algorithm for keypoint detection, not just because it is faster than most other algorithms, but also because it is the least resource intensive for the FPGA to execute. These algorithms are designed with scale and rotational invariance of the images in mind.
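As a software reference model before the SystemVerilog port, the FAST segment test checks the 16 pixels on a radius-3 Bresenham ring around each candidate: the pixel is a corner if enough contiguous ring pixels are all brighter or all darker than the center by a threshold. A sketch of FAST-9 in Python (the threshold `t=20` and run length `n=9` are example parameters, not values we have fixed):

```python
import numpy as np

# 16 offsets on a Bresenham circle of radius 3 (the FAST sampling ring)
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=9):
    """FAST-n test: corner if n contiguous ring pixels are all brighter
    than I(p)+t or all darker than I(p)-t."""
    p = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (1, -1):                      # +1: brighter run, -1: darker run
        flags = [(v - p) * sign > t for v in ring]
        run = 0
        for f in flags + flags:               # doubled list handles wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

def detect_fast(img, t=20, n=9):
    """Scan all pixels far enough from the border for the full ring."""
    h, w = img.shape
    return [(r, c) for r in range(3, h - 3) for c in range(3, w - 3)
            if is_fast_corner(img, r, c, t, n)]
```

In hardware the ring comparison becomes 16 parallel comparators feeding a contiguous-run check, which is what makes FAST cheap on an FPGA.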
Keypoint Description gives each identified keypoint a unique descriptor that can be used to identify each keypoint on the image. Again, there are many methods of doing this, but the simplest is to compile a matrix of the gradient vectors around each keypoint that can be obtained through convolving the image with specific filters.
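A minimal sketch of that descriptor idea in Python/NumPy: stack the gradient vectors from a patch around the keypoint into one normalized vector, so comparing keypoints reduces to a distance between fixed-length vectors. The patch size and the use of `np.gradient` (central differences) are our own simplifications; the hardware version would get the same gradients from the convolution filters mentioned above.

```python
import numpy as np

def describe(img, r, c, size=8):
    """Gradient-patch descriptor sketch: flatten the x/y gradients in a
    (size+1) x (size+1) window around the keypoint and L2-normalize."""
    h = size // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    gy, gx = np.gradient(patch)          # central-difference gradients
    vec = np.concatenate([gx.ravel(), gy.ravel()])
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

Normalizing makes the descriptor insensitive to overall brightness scaling, which helps when frames are exposed differently.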
Keypoint matching occurs once keypoints have been detected and described in each image. If the difference between two descriptors is below a certain error threshold, the keypoints are said to be a match. Typically, a minimum of four keypoint matches is needed for homography transformation.
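The matching step itself can be sketched as a brute-force nearest-neighbour search with a distance threshold (the `max_dist` value is an arbitrary example; a real design would tune it, or use a ratio test instead):

```python
import numpy as np

def match_keypoints(desc_a, desc_b, max_dist=0.5):
    """Pair each descriptor in image A with its nearest neighbour in
    image B, keeping the pair only if the distance is under max_dist."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = [np.linalg.norm(da - db) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

The all-pairs distance computation is embarrassingly parallel, so it also maps well onto the FPGA fabric.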

## Subsystem 2: Homography Transformation
When image stitching, the angle of the images needs to be rectified to create a clean output panorama. Homography transformation is a common technique that maps the coordinate system of one image into the plane of the reference image through a 3x3 homography matrix. The homography matrix can be calculated from the keypoint-match matrix by solving a constrained least squares problem to find the eigenvector with the lowest eigenvalue; this transformation is then applied. One issue with the homography transformation is that the result can be skewed by outliers in the keypoint matching process, where keypoint matches are detected that are not really matches. A common solution to an outlier problem like this is the RANSAC algorithm, which is readily transferable to hardware and can be used to make the computation of the homography matrix more robust. After the images are warped (transformed) and overlapped, some image blending may be required for a cleaner result, which can be done in the Linux environment.

## Subsystem 3: HDMI Output
HDMI output operates over the TMDS protocol, and plenty of open designs exist for rendering images over HDMI. Our goal is to transfer the image being processed in the accelerator through an HDMI renderer we design and output it to the HDMI port for instantaneous viewing of results. If creating the accelerator takes too long, a simpler fallback is to host a web server and display the image from the Linux environment.
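For reference, TMDS encodes each 8-bit pixel channel into a 10-bit symbol in two stages: an XOR/XNOR chain that minimizes transitions, then a conditional inversion that keeps the running DC disparity near zero. A Python model of the encoder/decoder pair, following our reading of the DVI 1.0 pseudocode (the disparity bookkeeping in particular should be re-verified against the spec before committing it to SystemVerilog):

```python
def tmds_encode(d, cnt):
    """TMDS 8b/10b sketch: stage 1 picks XOR or XNOR chaining to reduce
    transitions; stage 2 conditionally inverts the low byte to balance the
    running disparity cnt. Returns (10-bit word, updated cnt)."""
    n1 = bin(d).count("1")
    use_xnor = n1 > 4 or (n1 == 4 and (d & 1) == 0)
    q = d & 1
    qm = q
    for i in range(1, 8):
        bit = (d >> i) & 1
        q = (~(q ^ bit) & 1) if use_xnor else (q ^ bit)
        qm |= q << i
    if not use_xnor:
        qm |= 1 << 8                      # bit 8 flags XOR (1) vs XNOR (0)
    qm8 = (qm >> 8) & 1
    n1q = bin(qm & 0xFF).count("1")
    n0q = 8 - n1q
    if cnt == 0 or n1q == n0q:
        out = ((1 - qm8) << 9) | (qm8 << 8)
        out |= (qm & 0xFF) if qm8 else ((~qm) & 0xFF)
        cnt += (n1q - n0q) if qm8 else (n0q - n1q)
    elif (cnt > 0 and n1q > n0q) or (cnt < 0 and n0q > n1q):
        out = (1 << 9) | (qm8 << 8) | ((~qm) & 0xFF)   # invert to rebalance
        cnt += 2 * qm8 + (n0q - n1q)
    else:
        out = (qm8 << 8) | (qm & 0xFF)
        cnt += -2 * (1 - qm8) + (n1q - n0q)
    return out, cnt

def tmds_decode(w):
    """Inverse: undo the conditional inversion (bit 9), then undo the
    XOR (bit 8 set) or XNOR (bit 8 clear) chain."""
    if (w >> 9) & 1:
        w ^= 0xFF
    use_xor = (w >> 8) & 1
    d = w & 1
    for i in range(1, 8):
        t = ((w >> i) & 1) ^ ((w >> (i - 1)) & 1)
        d |= (t if use_xor else (~t & 1)) << i
    return d
```

A hardware encoder performs exactly this per pixel clock per channel, which is why HDMI output is a natural fit for the FPGA fabric alongside the accelerator.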

# Criterion For Success

Due to the time-limited nature of ECE 445, we will not have enough time to write a full mapping application on PetaLinux and fully integrate it into the UAV's PX4 Cube flight controller. With this in mind, we plan on building a simpler application that creates panoramas and renders them once they are fully processed. At minimum, we would like our camera to be able to generate a panorama from three frames side by side. We also need a way to view the final panorama, which will either be a low-level solution using the TMDS video port on the Zybo board, or a high-level solution where it is hosted on a local webpage (PetaLinux already has support for hosting web pages).
