Publications

Explore our research publications on the safety, reliability, and robustness of AI systems.

FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?

M. Bornstein, A. S. Bedi, A. Mohamed, F. Huang • NeurIPS 2024

Federated Learning • Mechanism Design • Truthfulness • Free Riding • Distributed Systems

Transfer Q*: Principled Decoding for LLM Alignment

S. Chakraborty, S. Ghoshal, M. Yin, D. Manocha, M. Wang, A. S. Bedi, F. Huang • NeurIPS 2024

Large Language Models • Alignment • Decoding • LLM • Transfer Learning

When, What, and with Whom to Communicate: Enhancing RL-based Multi-Robot Navigation through Selective Communication

S. H. Arul, A. S. Bedi, D. Manocha • IROS 2024

Multi-robot • Reinforcement Learning • Selective Communication • Robot Navigation

LANCAR: Leveraging Language for Context-Aware Robot Locomotion in Unstructured Environments

C. L. Shek, X. Wu, W. A. Suttle, C. Busart, E. Zaroukian, D. Manocha, P. Tokekar, A. S. Bedi • IROS 2024

Robot Locomotion • Natural Language • Context-aware • Unstructured Environments

TrustNavGPT: Trust-Driven Audio-Guided Robot Navigation under Uncertainty with Large Language Models

X. Sun, Y. Zhang, X. Tang, A. S. Bedi, A. Bera • IROS 2024

Robot Navigation • Audio Guidance • Uncertainty • Large Language Models • Trust

PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling

U. Singh, W. A. Suttle, B. M. Sadler, V. P. Namboodiri, A. S. Bedi • ICML 2024

Hierarchical Reinforcement Learning • Preference Learning • Primitives • Hindsight Relabeling

MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences

S. Chakraborty, J. Qiu, H. Yuan, A. Koppel, F. Huang, D. Manocha, A. S. Bedi, M. Wang • ICML 2024

RLHF • LLM Alignment • Human Preferences • Large Language Models • Equity

On the Possibilities of AI-Generated Text Detection

S. Chakraborty*, A. S. Bedi*, S. Zhu, B. An, D. Manocha, F. Huang • ICML 2024

AI-generated Text • Text Detection • Deep Learning • LLM

PARL: A Unified Framework for Policy Alignment in Reinforcement Learning

S. Chakraborty, A. S. Bedi, A. Koppel, D. Manocha, H. Wang, M. Wang, F. Huang • ICLR 2024

Policy Alignment • Reinforcement Learning • RL Framework

STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning

S. Chakraborty, A. S. Bedi, A. Koppel, M. Wang, F. Huang, D. Manocha • ICML 2023

Model-based RL • Stein Information • Exploration • Reinforcement Learning

Beyond Exponentially Fast Mixing in Average-Reward Reinforcement Learning via Multi-Level Monte Carlo Actor-Critic

A. S. Bedi*, W. Suttle*, B. Patel, B. Sadler, A. Koppel, D. Manocha • ICML 2023

Reinforcement Learning • Average Reward • Monte Carlo • Actor-critic

SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication

M. Bornstein, T. Rabbani, E. Wang, A. S. Bedi, F. Huang • ICLR 2023

Federated Learning • Decentralized • Wait-free Communication • Distributed Systems

Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policy Optimization

S. Chakraborty, A. S. Bedi, K. Weerakoon, P. Poddar, A. Koppel, P. Tokekar, D. Manocha • ICRA 2023

Sparse Rewards • Policy Optimization • Robotics • Reinforcement Learning • Heavy-tailed

RTAW: An Attention Inspired Reinforcement Learning Method for Multi-Robot Task Allocation in Warehouse Environments

A. Aggarwal, A. S. Bedi, D. Manocha • ICRA 2023

Multi-robot • Task Allocation • Warehouse • Attention • Reinforcement Learning

Decentralized Multi-agent Exploration with Limited Inter-agent Communications

H. He, A. Koppel, A. S. Bedi, D. Stilwell, M. Farhood • ICRA 2023

Decentralized • Multi-agent Exploration • Communication • Robotics

Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Conservative Natural Policy Gradient Primal-Dual Algorithm

Q. Bai, A. S. Bedi, V. Aggarwal • AAAI 2023

Constrained RL • Policy Gradient • Primal-dual • Reinforcement Learning

Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning

S. Chakraborty, A. S. Bedi, A. Koppel, B. Sadler, F. Huang, P. Tokekar, D. Manocha • AAAI 2023

Coreset • Stein Discrepancy • Model-based RL • Reinforcement Learning