News

New paper accepted at ICML 2025

May 7, 2025

We're excited to announce that our paper `Inference-Time Alignment of LLMs via User-Specified Multi-Criteria Transfer Decoding` has been accepted at ICML 2025. This work presents an inference-time approach for aligning LLMs with user-specified, multi-criteria objectives.

Read more →

New paper accepted at AAAI 2024

December 12, 2024

Our paper titled `Align-Pro: A principled approach to alignment of LLMs` has been accepted at AAAI 2024. This work presents a principled approach to aligning LLMs by employing a trainable prompter.

Read more →

Welcoming a PhD student to the lab

August 19, 2024

We're delighted to welcome a new PhD student, Avinash Reddy, who is joining our lab this fall semester. He will be working on the broad topic of `Alignment of Language Models`.

Launch of SAFERR AI Lab

August 19, 2024

We're excited to announce the launch of the SAFERR AI Lab. The lab is dedicated to research on the safety, reliability, and robustness of AI systems.