Nirav Diwan
PhD @ UIUC
I am a second-year Ph.D. student in Computer Science at the University of Illinois Urbana-Champaign, advised by Prof. Gang Wang in the Siebel School of Computing and Data Science. My research is at the intersection of Security & Privacy and Machine Learning, focusing on practical adversarial attacks on Foundation Models. I co-led the creation of PurpCode, the first reasoning model for cybersafety, which won the Amazon Nova AI Challenge (2025).
Internship. I am looking for industry and academic internships for Summer 2026 in Machine Learning and Security & Privacy. Please reach out!
Collaboration. This is an open call for collaboration! Feel free to send me an email to talk about ideas, projects, collaborations, or research questions.
News
Jul 22, 2025 | 🥇 Our work PurpCode, the first reasoning model for secure code generation developed using Deliberative Alignment, won the Amazon Nova AI Challenge ($250k prize)!
Jul 22, 2025 | PurpCode has been accepted at NeurIPS 2025! See you in San Diego!
Dec 1, 2024 | My summer internship work at LG AI Research has been accepted at the AAAI Good Data Workshop 2025.
Sep 16, 2024 | Our proposal for the Amazon Trusted AI Challenge was accepted ($250,000 grant).
Aug 23, 2024 | Officially started my PhD at UIUC! |
May 29, 2024 | Joined LG AI Research as a research intern in the Bi-lingual LLM Team! |
May 11, 2024 | Graduated with my MS CS at UIUC! |
Mar 10, 2024 | Selected to represent UIUC at the Catalyzing Advocacy in Science and Engineering Workshop in Washington! |
Feb 6, 2024 | I will be at the USENIX Security Symposium 2024 in Philadelphia!
Feb 6, 2024 | Our work highlighting the risk of Diffusion Models for evading online phishing detectors has been accepted at the USENIX Security Symposium!
Oct 17, 2023 | Submitted our work to the USENIX Security Symposium!
Jun 5, 2023 | Joined the Generative AI team at Ema as an Applied Science Intern!
Aug 22, 2022 | Started working as a Teaching Assistant for CS124 Introduction to Computer Science (Fall 2022, Spring 2023, Fall 2023). |
Aug 15, 2022 | Joined the research-track MS CS program at University of Illinois, Urbana-Champaign! |
Jun 1, 2022 | Our work on Identifying Anomalous Users in the YouTube BlackMarket has been accepted at AAAI - ICWSM. |
Sep 1, 2021 | Joined the ProVoice team at Prodigal Technologies as a Natural Language Processing (NLP) Engineer. |
Jun 21, 2021 | Honoured to receive the Dean’s Thesis Appreciation Award at IIIT Delhi!
Jun 21, 2021 | Graduated with a B.Tech. degree in Computer Science from IIIT Delhi.
Jun 1, 2021 | Our work on Watermarking Fine-tuned Language Model Generated Text has been accepted at ACL (Findings).
Nov 1, 2020 | RecipeDB has been accepted at Database: The Journal of Biological Databases and Curation (Oxford University Press) (Impact Factor = 5.8).
Aug 1, 2020 | Started working as a Teaching Assistant for the CS563 Machine Learning (Graduate) course at IIITD (Fall 2020).
Mar 1, 2020 | The Information Retrieval model designed for RecipeDB has been accepted at the DECOR Workshop at ICDE.
Selected Publications
-
NeurIPS PurpCode: Reasoning for Safer Code Generation
Jiawei Liu*, Nirav Diwan*, Zhe Wang*, Haoyu Zhai, Xiaona Zhou, Kiet A. Nguyen, Tianjiao Yu, Muntasir Wahed, Yinlin Deng, Hadjer Benkraouda, Yuxiang Wei, Lingming Zhang, Ismini Lourentzou, Gang Wang
The 39th Annual Conference on Neural Information Processing Systems, 2025
We develop a reasoning model for secure code generation using Deliberative Alignment.
-
IEEE S&P You Can’t Judge a Binary by Its Header: Data-Code Separation for Non-Standard ARM Binaries using Pseudo Labels
Hadjer Benkraouda, Nirav Diwan, Gang Wang
46th IEEE Symposium on Security and Privacy, 2025
We propose a novel way to separate data and code for non-standard ARM binaries using pseudo labels.
-
USENIX It Doesn't Look Like Anything to Me: Using Diffusion Model to Subvert Visual Phishing Detectors
Qingying Hao, Nirav Diwan, Ying Yuan, Giovanni Apruzzese, Mauro Conti, Gang Wang
33rd USENIX Security Symposium, 2024
We use Diffusion Models combined with simple retrieval to attack online phishing detectors, and empirically validate the attack on 100+ brands in both white-box and black-box settings.