In this video we demonstrate an intuitive gesture-based interface for manually guiding a drone to land on a precise spot. Using unobtrusive wearable sensors, an operator can quickly and accurately maneuver and land the drone after very little training; a preliminary user study with 5 subjects shows that the system compares favorably with a traditional joystick interface.

The video has been accepted for publication at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2018) [1], March 5-8, 2018, Chicago, IL, USA.

To detect pointing events we use a 1-D convolutional neural network (CNN) that receives a stream of acceleration and orientation data from two inertial measurement units (IMUs) placed on the user's arm [2].
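As a rough illustration of how such a detector processes the IMU stream, the sketch below segments a multichannel signal into overlapping windows and applies a minimal 1-D convolution over the time axis. This is not the architecture from [2]; the window size, stride, channel counts, and random weights are all made-up placeholders.

```python
import numpy as np

def sliding_windows(stream, window, step):
    """Segment a (T, C) IMU stream into overlapping (window, C) frames."""
    frames = []
    for start in range(0, len(stream) - window + 1, step):
        frames.append(stream[start:start + window])
    return np.stack(frames)

def conv1d_relu(x, kernels, bias):
    """Minimal valid-mode 1-D convolution over the time axis, with ReLU.
    x: (window, C_in); kernels: (C_out, k, C_in); bias: (C_out,)."""
    window, _ = x.shape
    c_out, k, _ = kernels.shape
    out = np.empty((window - k + 1, c_out))
    for t in range(window - k + 1):
        patch = x[t:t + k]  # (k, C_in) slice of the window
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)

# Toy example: 10 channels (two IMUs, acceleration + orientation),
# 2 s of synthetic data standing in for a real sensor stream.
rng = np.random.default_rng(0)
stream = rng.normal(size=(400, 10))
frames = sliding_windows(stream, window=100, step=50)   # (7, 100, 10)
kernels = rng.normal(size=(8, 5, 10)) * 0.1             # 8 filters, width 5
features = np.stack([conv1d_relu(f, kernels, np.zeros(8)) for f in frames])
print(features.shape)                                   # (7, 96, 8)
```

In a trained detector, such convolutional features would feed further layers that classify each window as "pointing" or "not pointing"; here the weights are random, so only the shapes are meaningful.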

Design of the user study

A video demonstration of a part of the experimental sequence performed in the user study.

The next video shows collated trajectory animations for all the subjects and both interfaces: joystick (blue) and pointing (green). The popping dots over the right target mark the landings of the drone:

The Python code and the corresponding dataset (ROS bag files) can be used to reproduce the results presented in this work.

Example of Guiding and Landing

An example of pointing gestures being used for steering and landing a drone:

Acknowledgment

This work was partially supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics.

Publications

  1. B. Gromov, L. Gambardella, and A. Giusti, “Video: Landing a Drone with Pointing Gestures,” in HRI ’18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion, March 5–8, 2018, Chicago, IL, USA, 2018.

    @inproceedings{gromov2018video,
      author = {Gromov, Boris and Gambardella, Luca and Giusti, Alessandro},
      title = {Video: Landing a Drone with Pointing Gestures},
      booktitle = {HRI~'18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion, March 5--8, 2018, Chicago, IL, USA},
      conference = {2018 ACM/IEEE International Conference on Human-Robot Interaction Companion},
      doi = {10.1145/3173386.3177530},
      isbn = {978-1-4503-5615-2/18/03},
      location = {Chicago, IL, USA},
      year = {2018},
      month = mar,
      acmid = {3177530},
      publisher = {ACM},
      video = {https://youtu.be/jpG8Jsmth2Y},
    }
    
  2. D. Broggini, B. Gromov, L. M. Gambardella, and A. Giusti, “Learning to detect pointing gestures from wearable IMUs,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, February 2-7, 2018, New Orleans, Louisiana, USA, 2018.

    @inproceedings{broggini2018learning,
      author = {Broggini, Denis and Gromov, Boris and Gambardella, Luca M. and Giusti, Alessandro},
      title = {Learning to detect pointing gestures from wearable {IMUs}},
      booktitle = {Proceedings of Thirty-Second {AAAI} Conference on Artificial Intelligence, February 2-7, 2018, New Orleans, Louisiana, {USA}},
      year = {2018},
      month = feb,
      publisher = {{AAAI} Press},
      url = {https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16259/16463},
    }