About me

I am Dr Ali Shahin Shamsabadi (ashahinshamsabadi@brave.com), a Senior Research Scientist at Brave Software. Before joining Brave, I was a Research Scientist at The Alan Turing Institute (Safe and Ethical AI) under the supervision of Adrian Weller, and a Postdoctoral Fellow at the Vector Institute under the supervision of Nicolas Papernot. During my PhD, I was very fortunate to work under the supervision of Aurélien Bellet, Andrea Cavallaro, Adrià Gascón, Hamed Haddadi, Matt Kusner and Emmanuel Vincent.

Blog Posts and Press

My research has been covered in the press, including Thurrott and TuringTop10.

Research

My research addresses a fundamental question: how can we reliably verify the trustworthiness of AI-based services, given that (i) AI-based services are provided as “black boxes” to protect intellectual property, and (ii) institutions are materially disincentivized from trustworthy behavior?

My research spans three themes:

  • Verifiable trustworthiness of AI in practice
  • Identifying failure modes for AI systems
  • Secure and privacy-preserving (by design) AI

Product

Privacy-Preserving Product Analytics

  • Nebula: a novel, practical system for product usage analytics with differential privacy guarantees. Nebula puts users first in product analytics through (i) formal differential privacy protection; (ii) auditability, verifiability, and transparency; and (iii) efficiency with minimal impact.
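
For readers unfamiliar with differential privacy, the sketch below illustrates the core idea behind such a guarantee: adding calibrated Laplace noise to a count query so that no single user's contribution can be inferred from the released statistic. This is a minimal, generic illustration (in Python, assuming NumPy), not Nebula's actual mechanism; the function name and parameters are hypothetical.

    import numpy as np

    def dp_count(true_count: int, epsilon: float) -> float:
        # A counting query has L1 sensitivity 1: adding or removing one
        # user's record changes the count by at most 1. Laplace noise with
        # scale 1/epsilon therefore gives an (epsilon, 0)-DP release.
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: release how many clients enabled a feature, with epsilon = 1.
    # A smaller epsilon means more noise and a stronger privacy guarantee.
    print(dp_count(true_count=1042, epsilon=1.0))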

Secure, Privacy-Preserving and Efficient Agents

  • Coming soon.

Selected Research Talks

  • Differentially Private Speaker Anonymization (PETS 2023)
  • Mnemonist: Locating Model Parameters (UAI 2023)
  • Losing Less: A Loss for DP Deep Learning (PETS 2023)
  • ColorFool: Semantic Adversarial Colorization (CVPR 2020)
  • EdgeFool: Adversarial Image Enhancement (ICASSP 2020)

Talks

  • 05/2024 - ICLR 2024 conference – Confidential-DPproof: Confidential Proof of Differentially Private Training [Video]
  • 07/2023 - UAI 2023 conference – Mnemonist: Locating Model Parameters that Memorize Training Examples [Video]
  • 06/2023 - PETS 2023 conference – Losing Less: A Loss for Differentially Private Deep Learning [Slides] [Video]
  • 06/2023 - PETS 2023 conference – Differentially Private Speaker Anonymization [Slides] [Video]
  • 05/2023 - ICLR 2023 conference – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees [Video]
  • 05/2023 - Workshop on Algorithmic Audits of Algorithms
  • 05/2023 - Intel
  • 04/2023 - Northwestern University – How can we audit the fairness of AI-driven services provided by companies?
  • 03/2023 - AIUK 2023 – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees [Video]
  • 03/2023 - University of Cambridge – An Overview of Differential Privacy, Membership Inference Attacks, and Federated Learning
  • 11/2022 - NeurIPS 2022 conference – Washing The Unwashable: On The (Im)possibility of Fairwashing Detection [Video]
  • 11/2022 - University of Cambridge and Samsung
  • 10/2022 - Queen's University Belfast
  • 09/2022 - Information Commissioner's Office
  • 09/2022 - Brave
  • 06/2020 - CVPR 2020 conference – ColorFool: Semantic Adversarial Colorization [Video]
  • 05/2020 - ACM Multimedia 2020 – A tutorial on Deep Learning for Privacy in Multimedia [Slides]
  • 05/2020 - ICASSP 2020 conference – EdgeFool: An Adversarial Image Enhancement Filter [Video]
  • 06/2018 - The Alan Turing Institute – Privacy-Aware Neural Network Classification & Training [Video]
  • 06/2018 - QMUL summer school – Distributed One-Class Learning [Video]