Launch of SAFERR AI Lab: Shaping the Future of Safe and Reliable AI

We are thrilled to announce the official launch of SAFERR AI Lab — a new research initiative at the University of Central Florida (UCF) focused on advancing the safety, reliability, and robustness of artificial intelligence systems.
At a time when AI is rapidly reshaping industries, economies, and daily life, our mission is to ensure these systems are built with the highest standards of trustworthiness and accountability. SAFERR (Safety and Fairness for Reliable and Responsible AI) aims to become a global leader in AI research that upholds human values, social good, and scientific rigor.
🧠 Our Vision
We believe that safe and reliable AI is not just a technical challenge — it is a societal imperative.
Our vision is to develop AI systems that:
- Behave robustly even in adversarial or unforeseen conditions
- Operate reliably across diverse environments and user populations
- Comply with ethical and policy frameworks
- Foster public trust through transparency and interpretability
🔬 Core Research Areas
The SAFERR AI Lab works at the intersection of theory, engineering, and ethics. Our key research areas include:
- Adversarial Robustness: Building models that withstand malicious inputs and distribution shifts
- Formal Verification of AI: Applying formal methods to validate neural network behavior
- Safe Reinforcement Learning: Ensuring exploration strategies respect safety constraints
- AI Alignment: Aligning models with human goals and avoiding unintended behavior
- Causal and Fair Learning: Designing algorithms that promote fairness, explainability, and accountability
- Policy-Aware AI Systems: Bridging the gap between technical development and public policy
🌐 Our Institutional Home
SAFERR AI Lab is based in the Department of Computer Science at the University of Central Florida, and operates in collaboration with:
- Ethics in Technology Center: Addressing the societal and ethical implications of emerging technologies
- Policy Research Institute: Grounding technical advances in real-world legal and policy considerations
These partnerships reflect our commitment to interdisciplinary research that considers AI’s broader impact on society.
👥 Our Team
Led by Dr. Amrit Singh Bedi, the lab brings together a growing team of researchers, PhD students, engineers, and policy experts with backgrounds in:
- Machine Learning & Deep Learning
- Reinforcement Learning
- Human-Centered AI
- Science, Technology, and Society (STS)
We value diversity, intellectual curiosity, and an open collaborative culture.
📢 Get Involved
We’re just getting started — and we’re looking for collaborators, students, partners, and supporters who share our mission.
Here’s how you can engage:
- 🧪 Prospective Students: Apply to join our lab through UCF’s graduate programs
- 🧭 Researchers & Labs: Let’s co-author papers or launch joint initiatives
- 🏛️ Policy Makers & NGOs: Partner with us on policy shaping and public-interest AI
- 🛠️ Engineers & Builders: Help translate research into real-world systems
Stay tuned for upcoming talks, preprints, and open-source toolkits.
🔗 Follow Our Work
Learn more about SAFERR AI Lab on our official website: https://saferr.ai
Follow us on Twitter and LinkedIn for updates.
If you’re passionate about building responsible and trustworthy AI, we’d love to hear from you.
Together, let’s shape a safer future for AI.
📍 SAFERR AI Lab
University of Central Florida
Department of Computer Science
Contact: contact@saferr.ai