Fast and Robust UAV to UAV Detection and Tracking Algorithm
Thesis posted on 10.06.2019 by Jing Li
Unmanned Aerial Vehicle (UAV) technology is being increasingly used in a wide variety of applications ranging from remote sensing, to delivery, to security. As the number of UAVs increases, there is a growing need for UAV-to-UAV detection and tracking systems for both collision avoidance and coordination. Among possible solutions, autonomous "see-and-avoid" systems based on low-cost, high-resolution video cameras offer the important advantages of lightweight, low-power sensing. However, in order to be effective, camera-based "see-and-avoid" systems will require sensitive, robust, and computationally efficient algorithms for UAV-to-UAV detection and tracking (U2U-D&T) from a moving camera.
In this thesis, we propose a general architecture for highly accurate and computationally efficient U2U-D&T algorithms that detect UAVs from a camera mounted on a moving UAV platform. The thesis contains three studies of U2U-D&T algorithms.
In the first study, we present a new approach to detect and track other UAVs from a single camera mounted on our own UAV. Given the sequence of video frames, we estimate the background motion via a perspective transformation model and then identify distinctive points in the background-subtracted image to detect moving objects. We find the spatio-temporal characteristics of each moving object through optical flow matching and then classify as targets those objects whose motion differs markedly from the background. We also perform tracking based on a Kalman filter to enforce temporal consistency on our detections. The algorithm is tested on real videos from UAVs, and the results show that it effectively detects and tracks small UAVs with limited computing resources.
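The thesis does not include code at this level, but the core of the first step — fitting a perspective (homography) model to background point matches and flagging points that deviate from it as moving-object candidates — can be sketched in pure NumPy. The point coordinates, homography values, and the 3-pixel residual threshold below are all illustrative, not values from the thesis:

```python
import numpy as np

def estimate_homography(src, dst):
    """Fit a 3x3 perspective transform mapping src -> dst using the
    direct linear transform (DLT). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, src, dst):
    """Per-point pixel error of src projected through H against dst."""
    pts = np.hstack([src, np.ones((len(src), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

# Synthetic example: five background points follow a perspective
# camera-motion model; a sixth "UAV" point moves independently.
H_true = np.array([[1.01, 0.02, 3.0],
                   [-0.01, 0.99, -2.0],
                   [1e-5, 2e-5, 1.0]])
src = np.array([[10, 10], [200, 15], [20, 180],
                [210, 190], [100, 100], [150, 60]], dtype=float)
pts = np.hstack([src, np.ones((len(src), 1))])
dst = pts @ H_true.T
dst = dst[:, :2] / dst[:, 2:3]
dst[5] += np.array([12.0, -9.0])  # independently moving target

H = estimate_homography(src[:5], dst[:5])  # fit on background matches
err = transfer_error(H, src, dst)
moving = err > 3.0                          # residual threshold (pixels)
print(moving)  # only the last point exceeds the background model
```

In a real pipeline the correspondences would come from feature matching between consecutive frames, with a robust estimator such as RANSAC replacing the clean least-squares fit shown here.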
In the second study, we present a new approach to detect and track UAVs from a single camera mounted on a different UAV. Initially, we estimate background motion via a perspective transformation model and then identify moving-object candidates in the background-subtracted image through a deep-learning classifier trained on manually labeled datasets. For each moving-object candidate, we find spatio-temporal traits through optical flow matching and then prune candidates based on how their motion patterns compare with the background. A Kalman filter is applied to the pruned moving objects to improve temporal consistency among the candidate detections. The algorithm was validated on video datasets taken from a UAV. Results demonstrate that our algorithm can effectively detect and track small UAVs with limited computing resources.
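The Kalman-filtering stage that enforces temporal consistency can be illustrated with a minimal constant-velocity filter over a 2-D image track. This is a generic sketch, not the thesis's implementation; the state model, noise levels `q` and `r`, and the synthetic detections are all assumptions for illustration:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman model for a 2-D image track.
    State: [x, y, vx, vy]; measurement: [x, y].
    q, r are illustrative process/measurement noise levels."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle. z may be None on a missed detection,
    in which case the track coasts on the prediction alone."""
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
x = np.zeros(4)
P = 10.0 * np.eye(4)
# Noisy detections of a target drifting right at ~2 px/frame,
# with one dropped detection (None) bridged by the filter.
zs = [np.array([2.1, 0.1]), np.array([3.9, -0.2]), None,
      np.array([8.2, 0.0]), np.array([10.1, 0.1])]
for z in zs:
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(np.round(x[:2], 1))  # smoothed position near the last detection
```

The ability to coast through the `None` measurement is what buys temporal consistency: a candidate that briefly fails detection or classification is not immediately lost, and spurious one-frame candidates never accumulate a consistent track.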
The system in the third study is based on a computationally efficient pipeline consisting of moving-object detection from a motion-stabilized image, classification with a hybrid neural network, followed by Kalman tracking. The algorithm is validated using manually ground-truthed, publicly available video data collected from multiple fixed-wing UAVs. Results indicate that the proposed algorithm can be implemented on practical hardware and robustly achieves highly accurate detection and tracking of even distant and faint UAVs.
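The first stage of this pipeline — extracting moving-object candidates once frames have been motion-stabilized — reduces to simple temporal differencing, since after stabilization the background should cancel. A minimal NumPy sketch on synthetic frames (the threshold and frame contents are illustrative assumptions, not thesis parameters):

```python
import numpy as np

def moving_candidates(stabilized_prev, frame, thresh=25):
    """Background subtraction on motion-stabilized frames: pixels whose
    absolute temporal difference exceeds `thresh` become moving-object
    candidates. The threshold value is illustrative."""
    diff = np.abs(frame.astype(int) - stabilized_prev.astype(int))
    mask = diff > thresh
    ys, xs = np.nonzero(mask)
    return mask, np.column_stack([xs, ys])

# Two synthetic 8-bit frames: identical static background plus a small
# bright "UAV" blob that shifts 4 px right between frames.
rng = np.random.default_rng(0)
bg = rng.integers(90, 110, size=(64, 64)).astype(np.uint8)
prev, cur = bg.copy(), bg.copy()
prev[10:13, 10:13] = 250   # 3x3 target at time t-1
cur[10:13, 14:17] = 250    # 3x3 target at time t

mask, cands = moving_candidates(prev, cur)
print(len(cands))  # -> 18: 9 pixels at the old position, 9 at the new
```

In the actual pipeline these candidate regions would then be passed to the hybrid neural-network classifier to reject clutter before Kalman tracking; imperfect stabilization in real footage also leaves residual background differences, which is precisely why that classification stage is needed.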