HUANG Rui 黄睿
Short Bio
I am currently working as an engineer at Alibaba AI Labs, Hangzhou, China. I obtained my Ph.D. degree, supervised by Prof. TAN Ping, at the School of Computing Science, Simon Fraser University, Canada. Before transferring to the GrUVi lab at SFU to continue my Ph.D. research, I was with the Department of Electrical and Computer Engineering, National University of Singapore. I obtained my Bachelor's degree in Electronic and Information Engineering with First-Class Honors from the Hong Kong Polytechnic University. My research interests include computer vision and robotics.
Education
Project I: Micro Aerial Vehicle - KayLion |
Abstract: We propose an approach to autonomously control a quadrotor micro aerial vehicle (MAV). With a take-off weight of 50 g and 8-min flight endurance, the developed MAV platform, code-named 'KayLion', is able to perform autonomous flight with pre-planned path tracking. The vision-based autonomous control is realized with a lightweight camera system and an ultrasonic range finder integrated into the MAV. An optical flow algorithm is adopted and processed on a ground control station to provide position and velocity estimation of the MAV. A model-based position controller is implemented to realize the autonomous flight.
Video Demo (on Vimeo): |
Video Demo (on Youku in China): |
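For illustration, below is a minimal sketch of the kind of downward-looking optical-flow velocity estimate described in the abstract above. The OpenCV-based implementation, the assumed focal length, and all parameter values are assumptions for the example, not the project's actual code; axes and signs would also depend on how the camera is mounted.

```python
# Minimal sketch: estimate metric (vx, vy) from two consecutive camera frames
# using sparse optical flow, scaled by the ultrasonic altitude.
# FOCAL_PX and all thresholds are illustrative assumptions.
import cv2
import numpy as np

FOCAL_PX = 300.0  # assumed focal length of the onboard camera, in pixels

def estimate_velocity(prev_gray, curr_gray, altitude_m, dt):
    """Estimate ground-plane velocity from two grayscale frames.

    Pixel motion is converted to metres with the pinhole model:
    ground displacement ~= pixel displacement * altitude / focal length.
    """
    # Detect corners in the previous frame and track them with pyramidal LK flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return 0.0, 0.0
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    if not np.any(good):
        return 0.0, 0.0
    flow_px = np.median(p1[good] - p0[good], axis=0).ravel()  # robust average flow
    ground_disp = flow_px * altitude_m / FOCAL_PX              # pixels -> metres
    return ground_disp[0] / dt, ground_disp[1] / dt
```

The resulting velocity (and its integration into position) is what a model-based position controller of the kind mentioned above would consume as feedback.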
Project II: Vision-based Autonomous Navigation of AR.Drone Project Page with Code
Abstract: We present a monocular vision-based autonomous navigation system for a commercial quadcopter. The quadcopter communicates with a ground-based laptop via a wireless connection. The video stream of the front camera on the drone and the navigation data measured on-board are sent to the ground station and then processed by a novel vision-based SLAM system. In order to handle motion blur and frame loss in the received video, our SLAM system includes a relocalisation module that achieves fast recovery from tracking failure. An Extended Kalman filter (EKF) is designed for sensor fusion. Thanks to the proposed EKF, accurate 3D positions and velocities can be estimated, as well as the scale factor of the monocular SLAM. Using a motion capture system with millimeter-level precision, we also identify the system models of the quadcopter and design the PID controller accordingly. We demonstrate that, after some simple manual initialization procedures, the quadcopter can navigate along pre-defined paths in an unknown indoor environment using only its front camera and onboard sensors.
Video Demo: Square Path Following |
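As a rough illustration of the scale-estimation idea mentioned above, the sketch below fuses vertical displacements reported by the monocular SLAM (up to scale) with metric displacements from an onboard altimeter using a scalar Kalman filter. This is a simplified stand-in for the project's EKF: the measurement model, noise values, and interface are assumptions for the example.

```python
# Minimal sketch: estimate the monocular-SLAM scale factor with a scalar
# Kalman filter. Measurement model (assumed): ultra_dz = scale * slam_dz.
class ScaleFilter:
    def __init__(self, init_scale=1.0, init_var=1.0,
                 process_var=1e-6, meas_var=1e-2):
        self.scale = init_scale          # metres per SLAM unit
        self.var = init_var              # estimate variance
        self.process_var = process_var   # scale drifts very slowly, if at all
        self.meas_var = meas_var         # altimeter displacement noise

    def update(self, slam_dz, ultra_dz):
        """Fuse one pair of vertical displacements measured over the same
        interval: slam_dz (SLAM units) and ultra_dz (metres)."""
        if abs(slam_dz) < 1e-4:          # no vertical motion -> no scale information
            return self.scale
        self.var += self.process_var               # predict (scale ~ constant)
        h = slam_dz                                # measurement Jacobian
        s = h * self.var * h + self.meas_var       # innovation variance
        k = self.var * h / s                       # Kalman gain
        self.scale += k * (ultra_dz - h * self.scale)
        self.var *= (1.0 - k * h)
        return self.scale
```

Once the scale is known, metric positions are obtained by multiplying the SLAM trajectory by the estimated factor before feeding it to the position controller.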
Project III: Collaborative Visual SLAM for Dynamic Target Following Project Page with Code |
Abstract: Towards autonomous 3D modelling of moving targets, we present a system in which multiple ground-based robots cooperate to localize, follow, and scan a moving target from all sides. Each robot has a single camera as its only sensor, and the robots perform collaborative visual SLAM (CoSLAM). We present a simple robot controller that maintains the visual constraints of CoSLAM while orbiting a moving target so as to observe it from all sides. Real-world experiments demonstrate that multiple ground robots can successfully track and scan a moving target.
Video Demo: |
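To make the orbiting behaviour concrete, here is a minimal sketch of a unicycle-style orbit controller in the spirit of the one described above. The radius, gains, and command interface are assumptions for the example, and the actual CoSLAM visual constraints (maintaining camera overlap between robots) are not modelled here; only the orbit geometry is shown.

```python
# Minimal sketch: counter-clockwise orbit around a (possibly moving) target
# estimate, with a radial correction to hold the standoff distance.
import math

DESIRED_RADIUS = 1.5   # m, assumed standoff distance from the target
K_RADIAL = 0.8         # gain pulling the robot back onto the orbit circle
V_TANGENT = 0.3        # m/s, assumed orbiting speed
K_HEADING = 1.5        # gain steering towards the desired direction

def orbit_command(robot_xy, robot_yaw, target_xy):
    """Return (forward_speed, yaw_rate) that orbits target_xy counter-clockwise."""
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)                 # direction from robot to target
    radial_err = dist - DESIRED_RADIUS
    # Tangent of a CCW orbit is 90 deg clockwise of the bearing; blend in a
    # radial component towards (or away from) the target to correct the radius.
    desired_dir = bearing - math.pi / 2 + math.atan2(K_RADIAL * radial_err, V_TANGENT)
    heading_err = math.atan2(math.sin(desired_dir - robot_yaw),
                             math.cos(desired_dir - robot_yaw))
    forward = V_TANGENT * max(math.cos(heading_err), 0.0)  # slow down when misaligned
    yaw_rate = K_HEADING * heading_err
    return forward, yaw_rate
```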
Project IV: Active Image-based Modeling with a Toy Drone Project Page |
Abstract: We seek to automate data capturing for image-based modeling. The core of our system is an iterative linear method to solve the multi-view stereo (MVS) problem quickly and plan the Next-Best-View (NBV) effectively. Our fast MVS algorithm enables online model reconstruction and quality assessment to determine the NBVs on the fly.
Video Demo: |
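The sketch below illustrates the plan-as-you-scan loop suggested by the abstract above: run a fast per-point quality assessment, then greedily choose the candidate view that covers the most poorly reconstructed points. The scoring rule and all names (candidate_views, visible_points, the quality threshold) are hypothetical placeholders for the example, not the project's actual NBV criterion.

```python
# Minimal sketch: greedy Next-Best-View selection from a per-point quality map.
import numpy as np

def pick_next_best_view(points_xyz, point_quality, candidate_views,
                        visible_points, quality_threshold=0.5):
    """Choose the candidate view that sees the largest number of poorly
    reconstructed points.

    points_xyz      : (N, 3) current reconstruction
    point_quality   : (N,)  per-point quality in [0, 1] from the fast MVS pass
    candidate_views : list of candidate camera poses
    visible_points  : f(view, points_xyz) -> boolean visibility mask
    """
    poor = point_quality < quality_threshold           # points still needing coverage
    best_view, best_score = None, -1
    for view in candidate_views:
        mask = visible_points(view, points_xyz)
        score = int(np.count_nonzero(mask & poor))     # poor points this view would see
        if score > best_score:
            best_view, best_score = view, score
    return best_view
```

In an online system this selection would be re-run after each new image is integrated, so the flight plan adapts to the current reconstruction quality.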
Publications
R. Huang, Vision-based Autonomous Navigation and Active Sensing with Micro Aerial Vehicles, Ph.D. Thesis, Simon Fraser University, August 2017. PDF
R. Huang, D. Zou, R. Vaughan, P. Tan, Active Image-based Modeling with a Toy Drone, accepted to the 2018 International Conference on Robotics and Automation, Brisbane, Australia, May 2018. PDF Project Page
J. Perron*, R. Huang*, J. Thomas, L. Zhang, P. Tan, R. Vaughan, Orbiting a Moving Target with Multi-Robot Collaborative Visual SLAM, Workshop on Multi-View Geometry in Robotics (MVIGRO) at the 2015 Robotics: Science and Systems Conference (RSS'15 workshop), Rome, Italy, July 2015. PDF Project Page Code (* denotes equal contribution)
R. Huang, P. Tan and B. M. Chen, Monocular vision-based autonomous navigation system on a toy quadcopter in unknown environments, in Proceedings of the 2015 International Conference on Unmanned Aircraft Systems, Denver, USA, pp. 1260-1269, June 2015. PDF Project Page Code
K. Li, R. Huang, S. K. Phang, S. Lai, F. Wang, P. Tan, B. M. Chen and T. H. Lee, Vision-based autonomous control of an ultralight quadrotor MAV, presented at the 2014 International Micro Air Vehicle Conference and Competition, Delft, the Netherlands, August 2014. PDF