Dr. Murari Mandal

Assistant Professor, KIIT Bhubaneswar

Post-Doc, National University of Singapore (NUS)


101-H, Campus 14

KIIT Bhubaneswar

Odisha, India 751024

I am an Assistant Professor at the School of Computer Engineering, KIIT Bhubaneswar. I lead the RespAI Lab, where we focus on advancing large language models (LLMs) by addressing challenges in long-content processing, inference efficiency, interpretability, and alignment. Our research also explores synthetic persona creation, regulatory issues, and new methods for model merging, knowledge verification, and unlearning. My research has been published in ICML, KDD, AAAI, ACM MM, and CVPR. Check out my research group’s website here: RespAI Lab. I regularly serve as a reviewer for NeurIPS, ICML, ICLR, AAAI, CVPR, ICCV, and ECCV. Indexed in CSRankings.

Research Impact: My pioneering work on Machine Unlearning has been cited by Anthropic, Yoshua Bengio, Hugo Larochelle, Google DeepMind, and others. With 600+ citations, our papers Fast Unlearning [TNNLS], Zero-Shot Unlearning [TIFS], and Bad Teacher [AAAI] are among the top 10 most highly cited papers in the field of Machine Unlearning.

Earlier, I was a Postdoctoral Research Fellow at the National University of Singapore (NUS), where I worked with Prof. Mohan Kankanhalli in the School of Computing. I graduated in 2011 with a Bachelor’s in Computer Science from BITS Pilani. Find me on X @murari_ai.

"When you go to hunt, hunt for rhino. If you fail, people will say anyway it was very difficult. If you succeed, you get all the glory"


Current Research

  • Addressing Challenges in Long-Content Processing for LLMs: Investigating solutions to performance bottlenecks, memory limitations, latency issues, and information loss when dealing with extended content lengths in large language models (LLMs).

  • Optimizing LLM Inference Efficiency: Developing strategies to reduce the computational cost of LLM inference, focusing on improving speed, memory usage, and leveraging smaller models for complex tasks.

  • Interpretability and Alignment of Generative AI Models: Exploring the interpretability of generative AI models, aligning their outputs with human values, and addressing the issue of hallucinations in model responses.

  • Synthetic Persona and Society Creation: Creating and studying synthetic personalities, communities, and societies within LLMs, and analyzing the behaviors and dynamics of these synthetic constructs.

  • Regulatory Challenges in LLMs: Investigating regulatory concerns surrounding LLMs, including the implementation of unlearning techniques to comply with data privacy regulations and enhance model fairness.

  • Model Merging and Knowledge Verification: Developing methods for merging multiple models, editing model behavior, and verifying the accuracy and consistency of the knowledge they generate.


News


Mar 20, 2025 Preprint and Source Code of “Guardians of Generation: Dynamic Inference-Time Copyright Shielding with Adaptive Guidance for AI Image Generation” is available!
Mar 17, 2025 RespAI Lab is offering “Introduction to Large Language Models” at KIIT Bhubaneswar this Spring 2025. Course website: https://respailab.github.io/llm-101
Feb 07, 2025 Preprint of “ReviewEval: An Evaluation Framework for AI-Generated Reviews” is available on arXiv.
Jan 20, 2025 Preprint of “ALU: Agentic LLM Unlearning” is available on arXiv.
Dec 22, 2024 Invited talk on “Machine Unlearning for Responsible AI” at IndoML 2024.
Dec 17, 2024 One paper accepted to the main track of AAAI-2025, Philadelphia, Pennsylvania, USA [acceptance rate: 23.4%]. The paper was also selected for an Oral presentation [acceptance rate: 4.6%].
Nov 23, 2024 Delivered an Invited Research Talk on “Machine Unlearning” at BITS Pilani, Pilani Campus. PPT
Nov 18, 2024 I will be presenting our recent works on Unlearning in Generative AI (Unlearning in Diffusion Models and Unlearning in LLMs) at IndoML 2024. Would love to connect and discuss all things Gen AI!
Oct 25, 2024 Preprint of “UnStar: Unlearning with Self-Taught Anti-Sample Reasoning for LLMs” is available on arXiv.
Oct 10, 2024 Preprint of “ConDa: Fast Federated Unlearning with Contribution Dampening” is available on arXiv.
Sep 11, 2024 Preprint of “Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models” is available on arXiv.
Sep 10, 2024 Preprint of “A Unified Framework for Continual Learning and Machine Unlearning” is available on arXiv.
Jul 13, 2024 Invited Guest in a Panel Discussion on “AI: The Dual Edge of Innovation”, World Salon. You can find more details about the event on LinkedIn and World Salon.
May 20, 2024 Preprint of “Multimodal Recommendation Unlearning” is available on arXiv.
May 10, 2024 EcoVal data valuation paper accepted to KDD-2024, Barcelona.


Selected Publications


  1. Deep Regression Unlearning
    Ayush Kumar Tarun, Vikram Singh Chundawat, Murari Mandal, and 1 more author
    In Proceedings of the 40th International Conference on Machine Learning, 23–29 Jul 2023
  2. EcoVal: An Efficient Data Valuation Framework for Machine Learning
    Ayush K Tarun, Vikram S Chundawat, Murari Mandal, and 3 more authors
    23–29 Jul 2024
  3. Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks Using an Incompetent Teacher
    Vikram S Chundawat, Ayush K Tarun, Murari Mandal, and 1 more author
    In Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023