
Our goal is to enable robots to enhance productivity by fluently working with and around humans in dynamic, multiagent environments like manufacturing sites, warehouses, hospitals, and the home. These environments are complex: humans perform a variety of time-critical tasks, ranging from machine operation to inspection. To be truly helpful, robots need to account for human safety, comfort, and efficiency as they complete their own tasks. Our research contributes a variety of tools to approach this goal, including planning and prediction algorithms informed by mathematical insights and models of human behavior, intuitive human-AI interfaces, and highly dexterous robot hardware, all of which are evaluated in extensive experiments with human subjects. To this end, we work across the following thrusts.


Crowd navigation experiments. Left: instance from our experiments at UW, featuring Honda’s experimental ballbot, on understanding the transfer of human motion prediction models to social robot navigation [IROS23]. Middle: instance from our experiments evaluating our topology-aware MPC [RAL23]. Right: video from our crowd navigation user study at Cornell [HRI19][THRI22].

Social robot navigation

The ability to navigate within human crowds is essential for robots completing important tasks like delivery in dynamic environments. Using insights from the social sciences, such as the "pedestrian bargain" [WOL95] and Gestalt theories, tools from low-dimensional topology [IJRR19][IJRR21], and techniques from machine learning [IROS17][CoRL20][CoRL21a] and control [HRI18][RAL23], our work seeks to capture fundamental properties of multiagent interaction, such as cooperation [IJRR19] and grouping [CoRL21a], to guide prediction and planning for navigation in complex multiagent domains [ICRA22][WAFR22]. Our algorithms have been deployed on multiple real robots [THRI22][RAL23], generating safe and efficient behavior that is positively perceived by humans. Much work remains to ensure safety, efficiency, and comfort within complex human environments, as we detail in our recent survey [THRI23].


We employ topological braids to model complex multiagent behavior. On the left: a planning framework that employs topological braids to account for cooperative collision avoidance in discrete worlds [IJRR19]. On the right: braids can succinctly summarize complex real-world traffic [ICRA22].


Algorithmic frameworks leveraging topological representations for modeling multiagent dynamics. On the left: a planning framework that treats decentralized navigation as implicit communication of topological information, encoded as braids in traffic scenes [IJRR23]. On the right: a prediction architecture that conditions trajectory reconstruction on the likelihood of topological modes, identified using winding numbers [CoRL20].
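To give a concrete sense of the invariant involved, the sketch below (an illustrative toy, not the [CoRL20] architecture; the `winding_number` helper and the synthetic trajectories are our own constructions) computes the pairwise winding number of two planar trajectories by accumulating the rotation of the relative displacement vector between the agents. Its sign and magnitude summarize which side the agents pass each other on and how many times they circle one another.

```python
import numpy as np

def winding_number(traj_a, traj_b):
    """Accumulated rotation, in full turns, of the vector pointing
    from agent b to agent a. traj_a, traj_b: (T, 2) xy positions."""
    rel = np.asarray(traj_a, dtype=float) - np.asarray(traj_b, dtype=float)
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    # Wrap consecutive angle differences into (-pi, pi] before summing,
    # so the sum tracks continuous rotation rather than raw angle jumps.
    diffs = np.diff(angles)
    wrapped = np.arctan2(np.sin(diffs), np.cos(diffs))
    return wrapped.sum() / (2.0 * np.pi)

# Two agents swap positions, each veering to its left to avoid the other:
t = np.linspace(0.0, 1.0, 200)[:, None]
a = np.hstack([t, 0.2 * np.sin(np.pi * t)])         # moves left to right
b = np.hstack([1.0 - t, -0.2 * np.sin(np.pi * t)])  # moves right to left
print(winding_number(a, b))  # about -0.5: half a clockwise turn
```

Mirroring the avoidance side flips the sign, which is why such invariants can label qualitatively distinct "modes" of an interaction independently of its exact geometry.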

Modeling humans to build interactive robots

To be truly accepted, robots must abide by human expectations. Our work has leveraged insights from psychology, such as humans’ “obsession with goals” [CSI06] or the “presentation of self in everyday life” [GOF59], to formalize implicit communication [HRI17] and produce interpretable motion that conveys the robot’s collision-avoidance strategy [HRI18] or how the robot makes decisions [CoRL21b]. When deployed in real-world environments, robots will often encounter failures they cannot recover from by themselves. On those occasions, robots can leverage help from bystanders to keep going [HRI22]. However, if robots want to keep receiving help in the long run, they need to moderate their help requests; our planning framework reasons about contextual and individual factors when issuing requests for localization help [RSS21].


Our active-learning framework for learning a map from robot trajectories to the behavioral attributions a human observer is likely to make [CoRL21b][project website].


Asking for help. On the left: planning under uncertainty to determine when to ask a user for localization help [RSS21]. On the right: a system that wanders real-world environments by leveraging human help [HRI22].
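As a schematic of the kind of tradeoff such a planner weighs, here is a toy myopic rule (not the actual [RSS21] framework; every function name and parameter below is invented for illustration): the robot asks for help only when the expected cost of acting under localization uncertainty outweighs the cost of interrupting a bystander, and each recent request raises the bar for the next one.

```python
import math

def should_ask_for_help(pos_variance, ask_cost, recent_asks,
                        collision_cost=10.0, risk_gain=0.5, annoyance=1.0):
    """Toy myopic decision rule (illustrative only): ask for localization
    help when the expected cost of a failure under the current pose
    uncertainty exceeds the effective cost of interrupting a bystander."""
    # Risk of a costly failure grows with localization variance.
    p_fail = 1.0 - math.exp(-risk_gain * pos_variance)
    expected_failure_cost = p_fail * collision_cost
    # Repeated requests erode goodwill, so each recent ask raises the bar.
    effective_ask_cost = ask_cost + annoyance * recent_asks
    return expected_failure_cost > effective_ask_cost

should_ask_for_help(pos_variance=0.1, ask_cost=1.0, recent_asks=0)  # False: keep going
should_ask_for_help(pos_variance=5.0, ask_cost=1.0, recent_asks=0)  # True: ask
```

A full planner reasons about these quantities over time and across individuals rather than myopically, but the same tension (failure risk versus the social cost of asking) drives the decision.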

Dexterous robots that interact with their environment

Many important tasks, like assembly and delivery, require dexterous robots capable of robustly interacting with their environment, autonomously or through human feedback. Our research has explored the design of underactuated mechanisms [IROS14], whose capabilities can be enhanced through braking technology [IROS15][preprint22], the development of grasp planning algorithms [ICRA13][ICRA14], and sensing technologies [IROS22]. This research has been informed by work on understanding human dexterity [IROS20][IROS15]. On many occasions, human situational awareness can augment robot capabilities, as we showed in complex tasks like chopstick manipulation [IROS20]. Humans have also developed sophisticated nonprehensile manipulation strategies, like pushing, to complete complex tasks in the real world. Inspired by this, we built a multirobot pushing system capable of rearranging cluttered workspaces [IROS23].


Human-inspired manipulation. Left: human situational awareness enables the completion of challenging tasks like chopstick manipulation through a teleoperation system [IROS20]. Right: a multirobot system that generates push-based manipulation plans to reconfigure cluttered workspaces [IROS23].


Enhancing dexterity via braking technology. Left: electrostatic braking empowers a 10-link robot to perform complex manipulation maneuvers [preprint22]. Right: A brake-equipped underactuated hand is capable of performing complex rolling tasks, leveraging electrostatic braking and proximity sensing [IROS22].

Common to all of our research is a unifying philosophy: we combine mathematical insights with data-driven techniques to introduce structure and interpretability into models of complex behavior. Using such models, we have transferred behaviors from simulation to the real world in several domains, including robot navigation in crowds, multirobot coordination, and manipulation. We strive to open-source our code and datasets on GitHub.

© 2023-∞ by the Fluent Robotics Lab; source code adapted from here.