About me

I am hiring interns! Feel free to drop me an email highlighting your three most significant publications.

I am Dr Ali Shahin Shamsabadi (ashahinshamsabadi@brave.com), an AI Privacy Researcher at Brave Software. Before joining Brave, I was a Research Associate at The Alan Turing Institute under the supervision of Adrian Weller, and a Postdoctoral Fellow at the Vector Institute under the supervision of Nicolas Papernot. During my PhD, I was very fortunate to work under Aurélien Bellet, Adria Gascon, Hamed Haddadi, Matt Kusner and Emmanuel Vincent.

Selected blog posts I co-authored: Differentially private data collection, Zero Knowledge Proof, Verifiable differentially private ML, and Privacy risks in Federated Learning.

My research has been covered in the press, including by Thurrott and TuringTop10.

Research

My research interests lie in identifying and mitigating the risks stemming from the use of AI in high-stakes decision systems, so as to unleash the full potential of AI while safeguarding our fundamental values and keeping us safe and secure. In particular, I

  1. identify failure modes for AI systems by attacking them in terms of privacy (Mnemonist, Trap weights), fairness (Fairwashing) and security/safety (ColorFool, Mystique, EdgeFool, FilterFool and FoolHD);
  2. mitigate these emerging risks by designing secure and trustworthy (privacy-preserving, robust, fair and explainable) AI to be deployed by institutions (Losing Less, QUOTIENT, DPspeech, GAP, DarkneTZ, Private-Feature Extraction and PrivEdge);
  3. build confidential and reliable auditing frameworks that can be used by the public to audit the trustworthiness of AI-driven services provided by institutions (Confidential-DPproof, Confidential-PROFITT, and Zest).

My research has been published at top-tier conferences including NeurIPS, ICLR, CVPR, CCS, USENIX Security and PETS.

Product

I joined Brave to leverage my research in creating products that prioritize user privacy, and to contribute to enhancing the privacy and security of millions of users worldwide.

I am thrilled to announce the launch of my first privacy-preserving product, Nebula: a practical system for differentially private histogram estimation of data distributed among users!

Nebula puts users first in product analytics:

  1. Formal Differential Privacy Protection
  2. Auditability, Verifiability, and Transparency
  3. Efficiency with Minimal Impact
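
To give a flavour of what differentially private histogram estimation means in practice, here is a minimal Python sketch of a standard local-DP building block, k-ary randomized response: each user perturbs their own bucket before reporting it, and the aggregator debiases the noisy counts. This is an illustration only, not Nebula's actual protocol; the bucket count k, the privacy budget epsilon, and all function names below are assumptions made for the example.

    import math
    import random
    from collections import Counter

    def randomize(bucket: int, k: int, epsilon: float) -> int:
        # k-ary randomized response: keep the true bucket with probability
        # e^eps / (e^eps + k - 1); otherwise report a uniform other bucket.
        p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
        if random.random() < p_true:
            return bucket
        return random.choice([b for b in range(k) if b != bucket])

    def estimate_histogram(reports: list[int], k: int, epsilon: float) -> list[float]:
        # Debias the observed counts: E[count_b] = n_b * (p - q) + n * q,
        # so n_b is estimated as (count_b - n * q) / (p - q).
        n = len(reports)
        p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)  # P(true bucket kept)
        q = 1.0 / (math.exp(epsilon) + k - 1)                # P(any fixed other bucket)
        counts = Counter(reports)
        return [(counts.get(b, 0) - n * q) / (p - q) for b in range(k)]

    # Hypothetical example: 10,000 users, 4 histogram buckets, epsilon = 1.0.
    k, eps = 4, 1.0
    true_values = [random.choices(range(k), weights=[5, 3, 1, 1])[0] for _ in range(10_000)]
    reports = [randomize(v, k, eps) for v in true_values]
    print(estimate_histogram(reports, k, eps))  # roughly recovers [5000, 3000, 1000, 1000]

Each report on its own satisfies epsilon-local differential privacy, so the aggregator never needs a user's true value; a system like Nebula additionally focuses on making such estimates efficient, auditable and verifiable at scale.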

Talks

  • 05/2024 - ICLR 2024 conference – Confidential-DPproof: Confidential Proof of Differentially Private Training Video
  • 07/2023 - UAI 2023 conference – Mnemonist: Locating Model Parameters that Memorize Training Examples Video
  • 06/2023 - PETS 2023 conference – Losing Less: A Loss for Differentially Private Deep Learning Slides Video
  • 06/2023 - PETS 2023 conference – Differentially Private Speaker Anonymization Slides Video
  • 05/2023 - ICLR 2023 conference – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
  • 05/2023 - Workshop on Algorithmic Audits of Algorithms
  • 05/2023 - Intel
  • 04/2023 - Northwestern University – How can we audit Fairness of AI-driven services provided by companies?
  • 03/2023 - AIUK 2023 – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
  • 03/2023 - University of Cambridge – An Overview of Differential Privacy, Membership Inference Attacks, and Federated Learning
  • 11/2022 - NeurIPS 2022 conference – Washing The Unwashable: On The (Im)possibility of Fairwashing Detection Video
  • 11/2022 - University of Cambridge and Samsung
  • 10/2022 - Queen’s University of Belfast
  • 09/2022 - Information Commissioner’s Office
  • 09/2022 - Brave
  • 06/2020 - CVPR 2020 conference – ColorFool: Semantic Adversarial Colorization Video
  • 05/2020 - ACM Multimedia 2020 – A tutorial on Deep Learning for Privacy in Multimedia Slides
  • 05/2020 - ICASSP 2020 conference – EdgeFool: An Adversarial Image Enhancement Filter Video
  • 06/2018 - The Alan Turing Institute – Privacy-Aware Neural Network Classification & Training Video
  • 06/2018 - QMUL summer school – Distributed One-Class Learning Video