Publications
Explore our research publications on safety, reliability, and robustness of AI systems.
FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?
M. Bornstein, A. S. Bedi, A. Mohamed, F. Huang • NeurIPS 2024
Transfer Q*: Principled Decoding for LLM Alignment
S. Chakraborty, S. Ghoshal, M. Yin, D. Manocha, M. Wang, A. S. Bedi, F. Huang • NeurIPS 2024
When, What, and with Whom to Communicate: Enhancing RL-based Multi-Robot Navigation through Selective Communication
S. H. Arul, A. S. Bedi, D. Manocha • IROS 2024
LANCAR: Leveraging Language for Context-Aware Robot Locomotion in Unstructured Environments
C. L. Shek, X. Wu, W. A. Suttle, C. Busart, E. Zaroukian, D. Manocha, P. Tokekar, A. S. Bedi • IROS 2024
TrustNavGPT: Trust-Driven Audio-Guided Robot Navigation under Uncertainty with Large Language Models
X. Sun, Y. Zhang, X. Tang, A. S. Bedi, A. Bera • IROS 2024
PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling
U. Singh, W. A. Suttle, B. M. Sadler, V. P. Namboodiri, A. S. Bedi • ICML 2024
MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences
S. Chakraborty, J. Qiu, H. Yuan, A. Koppel, F. Huang, D. Manocha, A. S. Bedi, M. Wang • ICML 2024
On the Possibilities of AI-Generated Text Detection
S. Chakraborty*, A. S. Bedi*, S. Zhu, B. An, D. Manocha, F. Huang • ICML 2024
PARL: A Unified Framework for Policy Alignment in Reinforcement Learning
S. Chakraborty, A. S. Bedi, A. Koppel, D. Manocha, H. Wang, M. Wang, F. Huang • ICLR 2024
STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning
S. Chakraborty, A. S. Bedi, A. Koppel, M. Wang, F. Huang, D. Manocha • ICML 2023
Beyond Exponentially Fast Mixing in Average-Reward Reinforcement Learning via Multi-Level Monte Carlo Actor-Critic
A. S. Bedi*, W. A. Suttle*, B. Patel, B. M. Sadler, A. Koppel, D. Manocha • ICML 2023
SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication
M. Bornstein, T. Rabbani, E. Wang, A. S. Bedi, F. Huang • ICLR 2023
Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policy Optimization
S. Chakraborty, A. S. Bedi, K. Weerakoon, P. Poddar, A. Koppel, P. Tokekar, D. Manocha • ICRA 2023
RTAW: An Attention Inspired Reinforcement Learning Method for Multi-Robot Task Allocation in Warehouse Environments
A. Aggarwal, A. S. Bedi, D. Manocha • ICRA 2023
Decentralized Multi-agent Exploration with Limited Inter-agent Communications
H. He, A. Koppel, A. S. Bedi, D. Stilwell, M. Farhood • ICRA 2023
Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Conservative Natural Policy Gradient Primal-Dual Algorithm
Q. Bai, A. S. Bedi, V. Aggarwal • AAAI 2023
Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning
S. Chakraborty, A. S. Bedi, A. Koppel, B. M. Sadler, F. Huang, P. Tokekar, D. Manocha • AAAI 2023