About me
This is Dr Ali Shahin Shamsabadi (ashahinshamsabadi@brave.com)! I am a Senior Research Scientist (now expanding into product strategy and cross-functional leadership too) at Brave Software. I collaborate across disciplines and organizations to turn scientific insights into innovative, impactful products. Before joining Brave Software, I was a Research Scientist at The Alan Turing Institute (Safe and Ethical AI) under the supervision of Adrian Weller, and a Postdoctoral Fellow at the Vector Institute under the supervision of Nicolas Papernot. During my PhD, I was very fortunate to work under Aurélien Bellet, Andrea Cavallaro, Adria Gascon, Hamed Haddadi, Matt Kusner and Emmanuel Vincent.
Research
My research starts from a fundamental question: how can we reliably verify the trustworthiness of AI-based services, given that i) AI-based services are provided as "black boxes" to protect intellectual property, and ii) institutions are materially disincentivized from trustworthy behavior?
Verifiable Trustworthiness of AI in Practice
- Verifiable Alignment: SURE: SecUrely REpairs failures flagged by users [NeurIPS'2025]
- Verifiable Uncertainty: Confidential Guardian [ICML'2025]
- Verifiable Privacy: Confidential-DPproof [ICLR'2024]
- Verifiable Fairness: Confidential-PROFITT [ICLR'2023], OATH [NeurIPS'2025]
- Architecture-Independent Model Distances [ICLR'2022]
Identifying failure modes for AI systems
- Privacy Attacks: Context-Aware MIAs against Pre-trained LLMs [EMNLP'2025], Membership and Memorization in LLM Knowledge Distillation [EMNLP'2025], Locating Model Parameters that Memorize Training Examples [UAI'2023], Trap weights [Euro S&P'2023]
- Fairness Attacks: Fairwashing [NeurIPS'2022]
- Robustness Attacks: ColorFool [CVPR'2020], Mystique [USENIX'2022], EdgeFool [ICASSP'2020], FilterFool [TIP'2022], and FoolHD [ICASSP'2021]
Secure and privacy-preserving (by design) AI
- Privacy-preserving: A Loss for Differentially Private Deep Learning [PETS'2023], Differentially Private Speaker Anonymization [PETS'2023], Differentially Private Graph Neural Networks [USENIX'2022], Deep Private-Feature Extraction
- Secure: Two-Party Secure Neural Network Training and Prediction [CCS'2019], Model Privacy at the Edge using Trusted Execution Environments [MobiSys'2020], From Local to Distributed Private Training and Prediction [TIFS'2020]
Product
Verifiable Privacy and Transparency in AI assistants
A new frontier for Brave AI privacy: confidential LLM computing on NEAR AI's NVIDIA-backed Trusted Execution Environments, offering cryptographically verifiable privacy and transparency. Users must be able to verify that the chatbot's privacy guarantees match its public privacy promises, and that the chatbot's responses are, in fact, coming from the machine learning model the user expects (or pays for). These user-first features are available in Leo, the Brave browser's integrated AI assistant.
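To make the verification story concrete, below is a minimal, hypothetical sketch of the client-side checks such a design implies: obtain a hardware-signed attestation report from the TEE and compare its measurement against the model the user expects. All names in the sketch (AttestationReport, verify_vendor_signature, expected_model_digest) are illustrative assumptions, not Brave's, Leo's, or NEAR AI's actual protocol or API.

```python
# Hypothetical client-side sketch of verifying a TEE attestation before
# trusting an AI assistant's responses. Illustrative assumptions only;
# not Brave's, Leo's, or NEAR AI's real protocol or API.
import hashlib
from dataclasses import dataclass


@dataclass
class AttestationReport:
    model_measurement: str  # digest of the model/code loaded inside the TEE
    signature: bytes        # signature produced by the hardware vendor's key


def verify_vendor_signature(report: AttestationReport) -> bool:
    # Placeholder: a real client would validate the vendor certificate chain
    # (e.g. via the hardware vendor's attestation service). Here we only
    # sketch where that check sits in the flow.
    return len(report.signature) > 0


def expected_model_digest(model_identifier: bytes) -> str:
    # Digest of the model (or model configuration) the user expects.
    return hashlib.sha256(model_identifier).hexdigest()


def client_accepts(report: AttestationReport, model_identifier: bytes) -> bool:
    # 1) The report must be signed by trusted hardware, so the service
    #    operator cannot forge it.
    if not verify_vendor_signature(report):
        return False
    # 2) The measurement in the report must match the model the user expects
    #    (or pays for); otherwise responses may come from a different model.
    return report.model_measurement == expected_model_digest(model_identifier)


# Example: accept only if the TEE attests to the expected model.
report = AttestationReport(
    model_measurement=hashlib.sha256(b"expected-model-v1").hexdigest(),
    signature=b"vendor-signed-bytes",
)
assert client_accepts(report, b"expected-model-v1")
```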
Privacy-Preserving Product Analytics
Nebula: a novel, practical and best-in-class system for product usage analytics with differential privacy guarantees! Nebula puts users first in product analytics: i) Formal Differential Privacy Protection; ii) Auditability, Verifiability, and Transparency; and iii) Efficiency with Minimal Impact.
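As a concrete illustration of the kind of guarantee differential privacy provides for usage analytics, here is a minimal sketch of a Laplace-mechanism histogram. It only illustrates the general DP-histogram idea and is not Nebula's actual protocol (see the CCS 2025 paper for that); the function name and example data are hypothetical.

```python
# Minimal sketch of a differentially private histogram via the Laplace
# mechanism. Illustration only; NOT Nebula's actual protocol.
import numpy as np


def dp_histogram(values, bins, epsilon, rng=None):
    """Return epsilon-DP noisy counts per bin.

    Assumes each user contributes a single value, so adding or removing one
    user changes the count vector by at most 1 in L1 norm; Laplace noise with
    scale 1/epsilon then gives epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    index = {b: i for i, b in enumerate(bins)}
    counts = np.zeros(len(bins), dtype=float)
    for v in values:
        if v in index:
            counts[index[v]] += 1.0
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=len(bins))
    return counts + noise


# Example: noisy counts of which feature each (simulated) user reported using.
reports = ["search", "wallet", "search", "leo", "search"]
print(dp_histogram(reports, bins=["search", "wallet", "leo"], epsilon=1.0))
```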
Privacy-Preserving Conversation Analytics
Coming soon.
Secure, Privacy-Preserving and Efficient Agents
Coming soon.
Recent Students
- Jaechul Roh: Brave PhD Intern (Summer 2025), Project: Privacy of web agents
- Dzung V. Pham: Brave PhD Intern (Summer 2025), Project: Efficiency of web agents
- Alisha Ukani: Brave PhD Intern (Summer 2025), Project: Intersection of browser privacy and web agents
- Saiid El Hajj Chehade: Brave PhD Visitor (Summer 2025), Project: Optimizing web presentation for agents
- Hongyan Chang: Brave PhD Intern (Summer 2024), Project: Context-Aware Membership Inference Attacks against Pre-trained Large Language Models
- Olive Franzese: Brave PhD Intern (Summer 2024), Project: OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
- Victoria Smith: Alan Turing Institute PhD Enrichment Student (Fall 2022 - Fall 2023), Project: Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
News
- [September 2025] Two papers accepted at The Thirty-Ninth Annual Conference on Neural Information Processing Systems NeurIPS 2025, called Pin the Tail on the Model: Blindfolded Repair of User-Flagged Failures in Text-to-Image Services and Secure and Confidential Certificates of Online Fairness.
- [August 2025] Two papers accepted at the conference on Empirical Methods in Natural Language Processing EMNLP 2025, called Membership and Memorization in LLM Knowledge Distillation and Context-Aware Membership Inference Attacks against Pre-trained Large Language Models.
- [May 2025] Paper accepted at the 32nd ACM Conference on Computer and Communications Security CCS2025, called Nebula: Efficient, Private and Accurate Histogram Estimation.
- [April 2025] Paper accepted at the 42nd International Conference on Machine Learning ICML2025, called Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention.
- [November 2024] Gave a guest lecture at Imperial College London: Collecting Speech and Telemetry Data Privately.
- [October 2024] Gave a research talk at Google TechTalk: Beyond Trust: Proving Fairness and Privacy in Machine Learning.
- [September 2024] A new preprint on LLMs and privacy, called Context-Aware Membership Inference Attacks against Pre-trained Large Language Models.
- [September 2024] My first-ever privacy-preserving product: differentially private data collection.
- [September 2024] A new preprint on differential privacy and telemetry data, called Nebula: Efficient, Private and Accurate Histogram Estimation.
- [January 2024] Paper accepted at the 12th International Conference on Learning Representations ICLR2024, called Confidential-DPproof: Confidential Proof of Differentially Private Training (spotlight).
- [September 2023] A new preprint on LLMs and privacy, called Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey.
- [July 2023] Received the Best Reviewers Free Registration award from FL@ICML!
- [July 2023] Our project Confidential-PROFITT: Confidential PROof of FaIr Training of Trees was selected as one of Turing's top 10 projects of 2022-2023; see Pioneering New Approaches to Verifying the Fairness of AI Models!
- [July 2023] Presented 2 papers at PETS2023: Differentially Private Speaker Anonymization and Losing Less: A Loss for Differentially Private Deep Learning!
- [July 2023] Started as a Privacy Researcher at Brave Software!
- [May 2023] Paper accepted at the 39th Conference on Uncertainty in Artificial Intelligence UAI, called Mnemonist: Locating Model Parameters that Memorize Training Examples.
- [March 2023] Paper accepted at the 23rd Privacy Enhancing Technologies symposium PETs, called Losing Less: A Loss for Differentially Private Deep Learning.
- [March 2023] Organising AI UK 2023 workshop, Privacy and Fairness in AI for Health.
- [January 2023] Paper accepted at the 11th International Conference on Learning Representations ICLR, called Confidential-PROFITT: Confidential PROof of FaIr Training of Trees (notable, top 5% of accepted papers).
- [November 2022] 2 papers accepted at the 32nd USENIX Security Symposium, called GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation and Tubes Among US: Analog Attack on Automatic Speaker Identification.
- [November 2022] Co-organizing the Privacy Preserving Machine Learning PPML'2022 workshop co-located with FOCS'2022.
- [September 2022] Paper accepted at the 36th Conference on Neural Information Processing Systems NeurIPS, called Washing The Unwashable: On The (Im)possibility of Fairwashing Detection, Code.
- [September 2022] 2 papers accepted at the 23rd Privacy Enhancing Technologies symposium PETs, called Differentially Private Speaker Anonymization and Private Multi-Winner Voting for Machine Learning.
- [August 2022] Chaired and organised Olya Ohrimenko's in-person talk at the Turing AI Programme.
- [July 2022] Chaired and organised Nicolas Papernot's in-person talk at the Turing AI Programme.
- [January 2022] Paper accepted at the 10th International Conference on Learning Representations ICLR, called A Zest of LIME: Towards Architecture-Independent Model Distances, Code.
- [December 2021] A new preprint on federated learning privacy attack, called When the Curious Abandon Honesty: Federated Learning Is Not Private.
- [November 2021] Started as a Research Associate at The Alan Turing Institute under the supervision of Adrian Weller.
- [August 2021] Paper accepted at IEEE Transactions on Image Processing TIP, called Semantically Adversarial Learnable Filters, Code!
- [March 2021] Successfully passed PhD viva!
- [February 2021] Started as a Postdoctoral Fellow at the Vector Institute under the supervision of Nicolas Papernot.
- [January 2021] Paper accepted at 46th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2021, called FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances, Code! (acceptance rate 19%)
- [October 2020] Giving a tutorial at the ACM Multimedia 2020 conference, Part 2: adversarial images.
- [September 2020] Offered an internship at Inria, under the supervision of Aurélien Bellet.
- [April 2020] Selected as one of 200 young researchers from all over the world for the 8th Heidelberg Laureate Forum by international experts appointed by the award-granting institutions: the Association for Computing Machinery (ACM), the Norwegian Academy of Science and Letters (DNVA) and the International Mathematical Union (IMU)!
- [March 2020] Paper accepted at IEEE Transactions on Information Forensics and Security TIFS, called PrivEdge: From Local to Distributed Private Training and Prediction, Code! (impact factor 6.2)
- [March 2020] Paper accepted at IEEE Transactions on Multimedia TMM, called Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection, Code! (impact factor 5.5)
- [March 2020] Paper accepted at ACM International Conference on Mobile Systems, Applications, and Services MobiSys, called DarkneTZ: Towards Model Privacy on the Edge using Trusted Execution Environments, Code! (acceptance rate 19%)
- [Feb 2020] Paper accepted at Conference on Computer Vision and Pattern Recognition, CVPR2020, called ColorFool: Semantic Adversarial Colorization, Video, Code! (acceptance rate 22%)
- [Jan 2020] Paper accepted at 45th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2020, called EdgeFool: An Adversarial Image Enhancement Filter, Video, Code! (acceptance rate 19%)
- [Jan 2020] Paper accepted at IEEE Internet of Things Journal called A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics! (impact factor 9.5)
- [April 2019] Paper accepted at 26th ACM Conference on Computer and Communications Security, CCS2019, called QUOTIENT: Two-Party Secure Neural Network Training and Prediction! (acceptance rate 16%)
- [Jan 2019] Paper accepted at 44th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2019, called Scene privacy protection, Code! (acceptance rate 49%)
- [June 2018] Offered a place on the PhD Enrichment scheme, a 9-month placement at The Alan Turing Institute!
- [March 2018] Offered an internship for summer 2018 at The Alan Turing Institute, working on the project Privacy-aware neural network classification & training under the supervision of Adria Gascon, Matt Kusner and Varun Kanade!
PC services
- 2024: PETs'2024, ICLR'2024, ICLR Private ML workshop
- 2023: NeurIPS'2023, CCS'2023, AISTATS'2023, ICML 2023 federated learning workshop (FL@ICML)
- 2022: ICML'2022, TIFS'2022, TOPS'2022, Explainable AI in Finance
- 2021: ICLR'2021 Security and Safety in Machine Learning Systems, ICCV'2021 Adversarial Robustness In the Real World
- 2020: ECCV'2020 Adversarial Robustness in the Real World, 42nd IEEE Symposium on Security and Privacy
Selected Research Talks
- Differentially Private Speaker Anonymization (PETS 2023)
- Mnemonist: Locating Model Parameters (UAI 2023)
- Losing Less: A Loss for DP Deep Learning (PETS 2023)
- ColorFool: Semantic Adversarial Colorization (CVPR 2020)
- EdgeFool: Adversarial Image Enhancement (ICASSP 2020)
Talks
- 05/2024 - ICLR 2024 conference -- Confidential-DPproof: Confidential Proof of Differentially Private Training Video
- 07/2023 - UAI 2023 conference -- Mnemonist: Locating Model Parameters that Memorize Training Examples Video
- 06/2023 - PETS 2023 conference -- Losing Less: A Loss for Differentially Private Deep Learning Slides Video
- 06/2023 - PETS 2023 conference -- Differentially Private Speaker Anonymization Slides Video
- 05/2023 - ICLR 2023 conference -- Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
- 05/2023 - Workshop on Algorithmic Audits of Algorithms
- 05/2023 - Intel
- 04/2023 - Northwestern University -- How can we audit Fairness of AI-driven services provided by companies?
- 03/2023 - AIUK 2023 -- Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
- 03/2023 - University of Cambridge -- An Overview of Differential Privacy, Membership Inference Attacks, and Federated Learning
- 11/2022 - NeurIPS 2022 conference -- Washing The Unwashable: On The (Im)possibility of Fairwashing Detection Video
- 11/2022 - University of Cambridge and Samsung
- 10/2022 - Queen's University of Belfast
- 09/2022 - Information Commissioner's Office
- 09/2022 - Brave
- 06/2020 - CVPR 2020 conference -- ColorFool: Semantic Adversarial Colorization Video
- 05/2020 - ACM Multimedia 2020 -- A tutorial on Deep Learning for Privacy in Multimedia Slides
- 05/2020 - ICASSP 2020 conference -- EdgeFool: An Adversarial Image Enhancement Filter Video
- 06/2018 - The Alan Turing Institute -- Privacy-Aware Neural Network Classification & Training -- Video
- 06/2018 - QMUL summer school -- Distributed One-Class Learning Video