Research

Field Robotics: Exploration, Data Collection, Mapping, and Persistent Monitoring

Present-day robotic algorithms and systems lack the robustness to operate reliably in environments that are unknown a priori. We develop autonomous robotic exploration for unknown and extreme environments such as forests, deserts, and underwater. Our research enables a robot to navigate and traverse a disaster site and to collaborate in teams with other robots or with humans for search and rescue or for scientific exploration and discovery. The outcome of our research is an enabling scientific technology for directly sampling and analyzing surface and subsurface compositions.

Robot Tour Guide in Collaboration with UMMA

This project aims to learn about people through art and to help them learn about themselves. We explore the hypothesis that museum visitors will be more willing to admit a "lack of knowledge about art" to a robot than to a human docent.

Providers-Clients-Robots (PCR): a framework for spatial-semantic planning that builds shared understanding in human-robot interactions.

RobotX is an international robotic vehicle competition focused on advancing autonomous vehicle technology. Organized by the Association for Unmanned Vehicle Systems International (AUVSI) and sponsored by the Office of Naval Research (ONR), the RobotX Maritime Challenge requires teams to equip an autonomous surface vehicle platform with the sensors and software needed for on-board decision-making about navigation, mapping, and objective completion without human intervention or remote control. In addition to fully autonomous movement, the boat must be capable of tasks such as obstacle avoidance, object detection and recovery, and signal recognition.

Are you interested in joining our RobotX team? Contact us!


Nonparametric Continuous Sensor Registration

We develop a new mathematical framework that enables nonparametric joint semantic/appearance and geometric representation of continuous functions using data. The joint semantic and geometric embedding is modeled by representing the processes in a reproducing kernel Hilbert space (RKHS). The framework allows the functions to be defined on arbitrary smooth manifolds, where the action of a Lie group is used to align them. Because the functions are continuous, the registration is independent of any specific signal resolution, and the framework is fully analytical, with closed-form derivations of the Riemannian gradient and Hessian. We study a more specialized but widely used case in which the Lie group acts on the functions isometrically. We solve the problem by maximizing the inner product between two functions defined over the data, while the continuous action of the rigid-body-motion Lie group is captured through the integration of the flow in the corresponding Lie algebra. Low-dimensional cases are derived with numerical examples to show the generality of the proposed framework. The high-dimensional derivation for the special Euclidean group acting on Euclidean space showcases point cloud registration and bird's-eye-view map registration. A specific derivation and implementation of this framework for RGB-D cameras outperforms state-of-the-art robust visual odometry methods and performs well in texture- and structure-scarce environments.
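
As a sketch of the underlying objective, in illustrative notation rather than the exact formulation from our papers: two labeled point clouds are embedded as functions in the RKHS, and registration maximizes their inner product over the group action.

```latex
% Illustrative notation. X and Z are point clouds with per-point
% semantic/appearance labels \ell; k is the kernel of the RKHS \mathcal{H}.
f_X = \sum_{x_i \in X} \ell_{x_i}\, k(\cdot, x_i), \qquad
f_Z = \sum_{z_j \in Z} \ell_{z_j}\, k(\cdot, z_j).
% Registration maximizes the RKHS inner product over the group G;
% for an isometric action and a stationary kernel this reduces to
h^{*} = \operatorname*{arg\,max}_{h \in G} \langle f_X,\, h.f_Z \rangle_{\mathcal{H}}
      = \operatorname*{arg\,max}_{h \in G} \sum_{i,j}
        \langle \ell_{x_i}, \ell_{z_j} \rangle\, k\!\left(x_i,\, h \cdot z_j\right).
```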


Incrementally-exploring Information Gathering for Environmental Monitoring

We developed a sampling-based motion planning algorithm equipped with an information-theoretic convergence criterion for incremental informative motion planning. The framework enables concurrent autonomous robotic exploration and environmental monitoring while planning and acting incrementally using a convergence criterion based on relative information contribution. We derived an automatic stopping criterion for the entire mission based on the least upper bound of the average map entropy. In this framework, the robot pose and a dense map can be partially observable, and the robot behavior has a strong correlation with its perception uncertainty. Regardless of the quantity of interest, we can provide a stopping criterion for the exploration mission via its entropy.
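
A minimal sketch of how such a convergence test can be implemented; the entropy model, window, and threshold below are illustrative choices, not the exact criterion from our work.

```python
import numpy as np

def average_map_entropy(occupancy_probs: np.ndarray) -> float:
    """Average Shannon entropy (bits per cell) of a probabilistic map."""
    p = np.clip(occupancy_probs, 1e-6, 1.0 - 1e-6)
    return float(np.mean(-(p * np.log2(p) + (1 - p) * np.log2(1 - p))))

def should_stop(entropy_history: list, rel_tol: float = 0.01, window: int = 5) -> bool:
    """Stop exploring when the relative information contribution of the
    last `window` planning steps falls below `rel_tol`."""
    if len(entropy_history) <= window:
        return False
    gained = entropy_history[-window - 1] - entropy_history[-1]
    return gained / max(entropy_history[0], 1e-9) < rel_tol
```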


Robust Semantic-Aware Perception System

Terrain classes and object categories enable safe and efficient navigation as well as scene understanding. We develop algorithms for joint geometric and semantic inference for robotic perception systems robust to failures caused by challenging environments or motion modes. The continuous semantic map via Bayesian kernel inference exploits local correlations present in the environment, and queries can be made at arbitrary resolutions.
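
A minimal sketch of a Bayesian kernel inference update, assuming a Dirichlet-categorical model per map cell; the Gaussian kernel here stands in for the compactly supported sparse kernels used in practice.

```python
import numpy as np

def bki_update(alpha, cell_centers, points, labels, length_scale=0.5):
    """Kernel-weighted update of per-cell Dirichlet concentrations.
    alpha: [n_cells, n_classes]; points: [n_pts, 3]; labels: [n_pts]."""
    for x, y in zip(points, labels):
        d = np.linalg.norm(cell_centers - x, axis=1)
        alpha[:, y] += np.exp(-0.5 * (d / length_scale) ** 2)
    return alpha

def class_probabilities(alpha):
    """Posterior mean of the Dirichlet; evaluating the kernel at an
    arbitrary query point allows queries at arbitrary resolutions."""
    return alpha / alpha.sum(axis=1, keepdims=True)
```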

We developed the Semantic Iterative Closest Point (ICP), a novel point cloud registration algorithm that directly incorporates pixelated semantic measurements into the estimation of the relative transformation between two point clouds. The algorithm uses an ICP-like scheme and performs joint semantic and geometric inference using the Expectation-Maximization technique in which semantic labels and point associations between two point clouds are treated as latent random variables. The minimization of the expected cost on the three-dimensional special Euclidean group, i.e., SE(3), yields the rigid body transformation between two point clouds.
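
A toy sketch of the EM-flavored loop, assuming hard nearest-neighbor associations softened by a semantic-agreement weight and a closed-form weighted Kabsch solution in the M-step; the actual algorithm treats labels and associations as latent random variables.

```python
import numpy as np
from scipy.spatial import cKDTree

def semantic_icp(src, src_sem, tgt, tgt_sem, iters=20):
    """Toy semantic ICP: the E-step associates points and weights them by
    semantic agreement; the M-step solves the rigid transform in SE(3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(tgt)
    for _ in range(iters):
        cur = src @ R.T + t
        _, idx = tree.query(cur)
        w = (src_sem == tgt_sem[idx]).astype(float) + 1e-3  # semantic gate
        mu_s = np.average(cur, axis=0, weights=w)
        mu_t = np.average(tgt[idx], axis=0, weights=w)
        H = ((cur - mu_s) * w[:, None]).T @ (tgt[idx] - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                      # weighted Kabsch rotation
        R, t = dR @ R, dR @ (t - mu_s) + mu_t    # compose the increment
    return R, t
```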


Invariant Robot State Estimation

The theory of invariant observer design enables high-frequency state estimation using nonlinear observers with strong convergence properties. State-of-the-art exteroceptive localization systems such as visual odometry or LiDAR odometry achieve high accuracy but lack robustness: localization can fail in the presence of dust, reflective surfaces (e.g., ice), or a lack of visual features. Proprioceptive sensors (e.g., IMUs, wheel encoders) are highly robust but less accurate. Hence, the objective of this research is to produce reasonably accurate odometry estimates from proprioceptive data alone. Furthermore, proprioceptive data can be used to detect contact with obstacles and estimate the location of the contact point, allowing the robot to navigate out of perceptually degraded regions and recover exteroceptive localization after failures.
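
The key idea, written in our own illustrative notation, is to define the estimation error through the group operation; for group-affine dynamics this error evolves autonomously of the state trajectory:

```latex
% X \in G is the true state on a matrix Lie group (e.g., SE_2(3) for an
% IMU state) and \bar{X} is its estimate; the right-invariant error is
\eta^{r} = \bar{X}\,X^{-1} = \exp_{G}(\xi), \qquad \xi \in \mathfrak{g}.
% For group-affine dynamics, the log-linear property yields exact,
% trajectory-independent linear error dynamics:
\dot{\xi} = A_{t}\,\xi.
```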


Monocular Depth Prediction Through Continuous 3D Loss

We develop a new continuous 3D loss function for learning depth from monocular images. The dense depth prediction from a monocular image is supervised using sparse LiDAR points, exploiting the data already available from camera-LiDAR sensor suites during training. No accurate and affordable dense range sensor currently exists: stereo cameras measure depth inaccurately, while LiDARs measure it sparsely or at high cost. In contrast to the usual point-to-point loss evaluation, the proposed 3D loss treats point clouds as continuous objects and therefore overcomes the lack of dense ground-truth depth caused by the sparsity of LiDAR measurements. Experimental evaluations show that the proposed method achieves accurate depth prediction with consistent 3D geometric structure from a monocular camera.
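
A minimal sketch of such a loss, assuming both clouds are embedded as sums of Gaussian kernels; the kernel choice, bandwidth, and use of plain point positions are illustrative simplifications of the published loss.

```python
import torch

def continuous_3d_loss(pred_pts, lidar_pts, length_scale=0.5):
    """Negative kernel correlation between back-projected predicted depth
    (pred_pts: [N, 3]) and sparse LiDAR returns (lidar_pts: [M, 3]), each
    treated as a continuous function rather than a discrete point set."""
    d2 = torch.cdist(pred_pts, lidar_pts) ** 2       # [N, M] squared distances
    k = torch.exp(-0.5 * d2 / length_scale ** 2)     # Gaussian kernel overlap
    return -k.mean()  # higher overlap -> lower loss; gradients flow to depth
```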


Radio Signal-based Localization and Tracking

Wireless Local Area Network (WLAN) and Bluetooth Low Energy (BLE) technologies are widespread and ubiquitous, so radio signals can be a complementary source of information alongside widely used sensors such as cameras. Cameras, however, can raise privacy concerns depending on the deployment location, limiting the applicability of such systems. Indoors, on the other hand, radio signals are severely affected by shadowing and multipath effects, which limits available wireless positioning systems to accuracies of roughly 1-10 m. We propose a radio-inertial localization and tracking system that exploits BLE, Inertial Measurement Unit (IMU), and magnetometer sensors of the quality available in standard smartphones.

Achieving better performance in these tasks generally comes down to reliably differentiating Line-Of-Sight (LOS) from Non-Line-Of-Sight (NLOS) signal propagation, which typically requires expensive, specialized hardware due to the complex nature of indoor environments. We also present a novel deep LOS/NLOS classifier that uses the Received Signal Strength Indicator (RSSI) and classifies the input signal with an accuracy of 85%. The proposed algorithm can globally localize and track a smartphone (or robot) with an a priori unknown location, given a semi-accurate prior map (error within 0.8 m) of the WiFi Access Points (APs).
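
For context, RSSI-based ranging typically rests on a log-distance path-loss model; the sketch below uses illustrative parameter values and shows why NLOS propagation matters: it attenuates the RSSI and hence biases the range estimate.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 n log10(d).
    Indoors, the exponent n and the 1 m reference vary per environment,
    and NLOS attenuation inflates the distance estimate -- the error a
    LOS/NLOS classifier helps detect and discount."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))
```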


Sound Source Localization and Tracking

Robot audition is an emerging and growing branch of the robotics community and is necessary for natural Human-Robot Interaction (HRI). We propose a framework that integrates advances from Simultaneous Localization And Mapping (SLAM), bearing-only target tracking, and robot audition into a unified system for sound source identification, localization, and tracking. Indoors, acoustic observations are often highly noisy and corrupted by reverberation, the robot's ego-motion, and background noise. Therefore, in everyday interaction scenarios, the system must accommodate outliers, perform robust data association, and appropriately manage the landmarks, i.e., sound sources. We solve the robot self-localization and environment representation problems using an RGB-D SLAM algorithm, and we localize and track sound sources using recursive Bayesian estimation in the form of an extended Kalman filter with unknown data associations and an unknown number of landmarks. Experimental results show that the proposed system performs well in medium-sized, cluttered indoor environments.
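
As a sketch of the tracking core, here is a single-source, planar bearing-only EKF update; the noise level and two-dimensional simplification are illustrative.

```python
import numpy as np

def bearing_only_ekf_update(mu, P, z, robot_xy, sigma_bearing=0.1):
    """EKF update of a 2D sound-source estimate `mu` (covariance `P`)
    from one bearing measurement z (radians) taken at robot_xy."""
    dx, dy = mu - robot_xy
    q = dx ** 2 + dy ** 2
    z_hat = np.arctan2(dy, dx)                       # predicted bearing
    H = np.array([[-dy / q, dx / q]])                # measurement Jacobian
    S = H @ P @ H.T + sigma_bearing ** 2             # innovation covariance
    K = P @ H.T / S                                  # Kalman gain, [2, 1]
    innov = np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))  # wrap angle
    mu = mu + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    return mu, P
```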