We have one conference and one workshop paper accepted to IROS 2023 this year!
The first work is a collaboration with my graduate lab at ASU. Xiao Liu, a PhD student there, has created a differentiable version of the ensemble Kalman filter. This is really cool work that holds a lot of promise for future applications: a big limitation of ensemble Kalman filters (especially back when I was using them in ensemble Bayesian Interaction Primitives) is that they can’t handle high-dimensional, rich inputs. However, Xiao has found a way to incorporate a differentiable observation encoder, allowing probabilistic filtering over complex inputs such as images!
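To give a flavor of the idea, here is a minimal PyTorch sketch of an EnKF correction step with a learned observation encoder in the loop. The encoder architecture, dimensions, and noise handling below are my own illustrative assumptions, not the paper's actual design; see the code release linked in the abstract for the real implementation.

```python
import torch
import torch.nn as nn

class ObsEncoder(nn.Module):
    """Maps a raw RGB image to a low-dimensional observation vector (assumed CNN)."""
    def __init__(self, obs_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, obs_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)  # (B, obs_dim)

def enkf_update(ensemble, obs_model, encoded_obs, obs_noise):
    """One EnKF correction step; every operation is differentiable.

    ensemble:    (N, d) state ensemble
    obs_model:   module mapping states (N, d) to predicted observations (N, m)
    encoded_obs: (m,) encoder output for the current image
    obs_noise:   (m, m) observation noise covariance R
    """
    N = ensemble.shape[0]
    pred_obs = obs_model(ensemble)                      # (N, m)
    A = ensemble - ensemble.mean(0, keepdim=True)       # state anomalies
    HA = pred_obs - pred_obs.mean(0, keepdim=True)      # observation anomalies
    P_xy = A.T @ HA / (N - 1)                           # cross-covariance (d, m)
    P_yy = HA.T @ HA / (N - 1) + obs_noise              # innovation covariance (m, m)
    K = P_xy @ torch.linalg.inv(P_yy)                   # Kalman gain (d, m)
    innovation = encoded_obs.unsqueeze(0) - pred_obs    # (N, m)
    return ensemble + innovation @ K.T                  # corrected ensemble (N, d)

# Toy usage: 32 ensemble members, 3-D state, 3-D encoded observation.
encoder = ObsEncoder(obs_dim=3)
obs_model = nn.Linear(3, 3)            # stand-in for a learned observation model
ensemble = torch.randn(32, 3)
image = torch.randn(1, 3, 64, 64)
y = encoder(image).squeeze(0)          # encoded observation
updated = enkf_update(ensemble, obs_model, y, 0.1 * torch.eye(3))
```

Because every operation above is differentiable, a loss on the corrected ensemble can backpropagate into both the observation model and the encoder, which is what makes end-to-end training over image inputs possible.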
Enhancing State Estimation in Robots: A Data-Driven Approach with Differentiable Ensemble Kalman Filters
Xiao Liu, Geoffrey Clark, Joseph Campbell, Yifan Zhou, Heni Ben Amor
Abstract: This paper introduces a novel state estimation framework for robots using differentiable ensemble Kalman filters (DEnKF). DEnKF is a reformulation of the traditional ensemble Kalman filter that employs stochastic neural networks to model the process noise implicitly. Our work is an extension of previous research on differentiable filters, which has provided a strong foundation for our modular and end-to-end differentiable framework. This framework enables each component of the system to function independently, leading to improved flexibility and versatility in implementation. Through a series of experiments, we demonstrate the flexibility of this model across a diverse set of real-world tracking tasks, including visual odometry and robot manipulation. Moreover, we show that our model effectively handles noisy observations, is robust in the absence of observations, and outperforms state-of-the-art differentiable filters in terms of error metrics. Specifically, we observe a significant improvement of at least 59% in translational error when using DEnKF with noisy observations. Our results underscore the potential of DEnKF in advancing state estimation for robotics. Code for DEnKF is available at https://github.com/ir-lab/DEnKF
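One detail I like from the abstract is the implicit process noise: rather than hand-tuning a noise covariance Q, the state-transition model is itself a stochastic network, so propagating the ensemble through it naturally produces spread. Here is a hedged sketch of that idea; realizing the stochasticity with dropout kept active at inference is my assumption, not necessarily the paper's mechanism.

```python
import torch
import torch.nn as nn

class StochasticProcessModel(nn.Module):
    """State-transition model whose internal randomness supplies the process noise."""
    def __init__(self, state_dim: int = 3, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, ensemble: torch.Tensor) -> torch.Tensor:
        # Keep dropout active even at evaluation time so every ensemble
        # member receives an independent stochastic transition; the
        # resulting spread stands in for an explicit noise covariance Q.
        self.net.train()
        return ensemble + self.net(ensemble)  # residual state update

model = StochasticProcessModel()
ensemble = torch.randn(32, 3)   # 32 ensemble members, state dim 3
predicted = model(ensemble)     # prediction step: propagate all members
```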
The second paper is at the Human Multi-Robot Interaction Workshop at IROS. A collaboration with Xijia Zhang, an undergraduate intern at my lab at CMU over the summer, this paper explores how to generate natural language explanations for agent policies using large language models (LLMs). This is one of my favorite papers this year because I believe it holds a lot of potential: what if we could take observations of any policy and produce plausible natural language explanations suitable for humans? These are initial steps in that direction, and we show how to structure observations and input representations so that the LLM can effectively reason about agent behavior, greatly reducing hallucinations. Stay tuned for follow-up work on this!
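To make the pipeline concrete, here is a toy sketch of the general shape: compress raw state-action observations into a structured summary, then ground the LLM's prompt in only that summary. The schema, field names, and the query_llm stub are all hypothetical stand-ins; the representation in the paper is learned rather than hand-built like this.

```python
from collections import Counter

def summarize_trajectory(transitions):
    """Compress raw (state, action) pairs into a few structured facts.

    transitions: list of (state_description, action_name) tuples.
    """
    action_counts = Counter(action for _, action in transitions)
    lines = [f"- took action '{a}' {n} times" for a, n in action_counts.most_common()]
    lines.append(f"- first state observed: {transitions[0][0]}")
    lines.append(f"- last state observed: {transitions[-1][0]}")
    return "\n".join(lines)

def build_prompt(summary: str) -> str:
    # Grounding the LLM in an explicit, compact summary (rather than raw
    # trajectories) is the step that curbs hallucination in this sketch.
    return (
        "You are explaining a robot's behavior to a non-expert.\n"
        "Only use the facts listed below; if something is not stated, "
        "say you don't know.\n\nObserved behavior:\n"
        f"{summary}\n\nExplain in plain language why the agent might act this way."
    )

def query_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any chat-completion API")

transitions = [("at door, door closed", "open_door"),
               ("at door, door open", "move_forward"),
               ("in hallway", "move_forward")]
print(build_prompt(summarize_trajectory(transitions)))
```

The design point the sketch tries to capture: constraining the model to an explicit, compact set of facts, and instructing it to admit ignorance otherwise, is what keeps the generated explanations grounded.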
Explaining Agent Behavior with Large Language Models
Xijia Zhang, Yue Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
Abstract: Intelligent agents such as robots are increasingly deployed in real-world, safety-critical settings. It is vital that these agents are able to explain the reasoning behind their decisions to human counterparts; however, their behavior is often produced by uninterpretable models such as deep neural networks. We propose an approach to generate natural language explanations for an agent’s behavior based only on observations of states and actions, agnostic to the underlying model representation. We show how a compact representation of the agent’s behavior can be learned and used to produce plausible explanations with minimal hallucination while affording user interaction with a pre-trained large language model. Through user studies and empirical experiments, we show that our approach generates explanations as helpful as those generated by a human domain expert while enabling beneficial interactions such as clarification and counterfactual queries.
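For the "compact representation of the agent's behavior" piece, one simple instantiation (my assumption for illustration, not necessarily what the paper does) is to distill the observed state-to-action mapping into an interpretable surrogate such as a shallow decision tree, whose rules can then be handed to the LLM alongside the prompt structure sketched above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy observations: 2-D states and the discrete action taken in each.
states = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
actions = np.array(["turn_left", "turn_left", "turn_right", "turn_right"])

# Distill the observed behavior into a compact, human-readable model.
tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
rules = export_text(tree, feature_names=["obstacle_left", "obstacle_right"])
print(rules)  # readable rules that can be inserted into an LLM prompt
```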