About me
I am hiring interns; feel free to drop me an email highlighting your three most significant publications!
This is Dr Ali Shahin Shamsabadi (ashahinshamsabadi@brave.com)! I am an AI Privacy Researcher at Brave Software. Before joining Brave Software, I was a Research Associate at The Alan Turing Institute under the supervision of Adrian Weller, and a Postdoctoral Fellow at Vector Institute under the supervision of Nicolas Papernot. During my PhD, I was very fortunate to work under Aurélien Bellet, Adria Gascon, Hamed Haddadi, Matt Kusner and Emmanuel Vincent.
Main blog posts I co-authored: Differentially private data collection, Zero Knowledge Proof, Verifiable differentially private ML and Privacy risks in Federated Learning.
My research has been cited in the press including Thurrott and TuringTop10.
Research
My research interests lie in identifying and mitigating the potential risks stemming from the use of AI in high-stakes decision systems, so that we can unleash the full potential of AI while safeguarding our fundamental values and keeping us safe and secure. In particular, I
- identify failure modes for AI systems by attacking them in terms of privacy (Mnemonist, Trap weights), fairness (Fairwashing) and security/safety (ColorFool, Mystique, EdgeFool, FilterFool and FoolHD);
- mitigate these emerging risks by designing secure and trustworthy (privacy-preserving, robust, fair and explainable) AI to be deployed by institutions (Losing Less, QUOTIENT, DPspeech, GAP, DarkneTZ, Private-Feature Extraction and PrivEdge);
- build confidential and reliable auditing frameworks that can be used by the public to audit the trustworthiness of AI-driven services provided by institutions (Confidential-DPproof, Confidential-PROFITT, and Zest).
My research has been published at top-tier conferences including NeurIPS, ICLR, CVPR, CCS, USENIX Security and PETS.
Product
I joined Brave to leverage my research in creating products that prioritize user privacy, and to contribute towards enhancing the privacy and security of millions of users worldwide.
I am thrilled to announce the launch of my first-ever privacy-preserving product, Nebula—a practical system for differentially private histogram estimation of data distributed among users!
Nebula puts users first in product analytics (a minimal sketch of the general idea follows this list):
- Formal Differential Privacy Protection
- Auditability, Verifiability, and Transparency
- Efficiency with Minimal Impact
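To give a flavour of what differentially private histogram estimation over user-held data means, here is a minimal sketch of one classical building block, k-ary randomized response under local differential privacy: each user perturbs their own value before reporting it, and the aggregator debiases the noisy counts. This is purely illustrative and is not Nebula's actual protocol; the function names and parameters (randomize, estimate_histogram, k, epsilon) are assumptions made for this example.

```python
# Minimal sketch of locally differentially private histogram estimation via
# k-ary randomized response. Illustrative only; NOT Nebula's actual protocol.
import math
import random
from collections import Counter

def randomize(value: int, k: int, epsilon: float) -> int:
    """Each user perturbs their bucket locally before reporting it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)  # keep the true bucket with probability p
    if random.random() < p:
        return value
    other = random.randrange(k - 1)                      # otherwise report a uniformly random other bucket
    return other if other < value else other + 1

def estimate_histogram(reports: list[int], k: int, epsilon: float) -> list[float]:
    """The aggregator debiases the noisy counts to estimate the true histogram."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)                # probability of reporting any specific other bucket
    counts = Counter(reports)
    return [(counts.get(v, 0) - n * q) / (p - q) for v in range(k)]

# Example: 10,000 users, 4 buckets, epsilon = 1.
true_values = [random.choice([0, 0, 1, 2, 3]) for _ in range(10_000)]
reports = [randomize(v, k=4, epsilon=1.0) for v in true_values]
print(estimate_histogram(reports, k=4, epsilon=1.0))
```

The resulting estimate is unbiased, and epsilon controls the privacy/utility trade-off: smaller epsilon gives each user stronger plausible deniability at the cost of noisier histograms.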
Recent Students
- Hongyan Chang: Brave PhD Intern (Summer 2024), Project: Context-Aware Membership Inference Attacks against Pre-trained Large Language Models
- Olive Franzese: Brave PhD Intern (Summer 2024), Project: OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness
- Victoria Smith: Alan Turing Institute PhD Enrichment Student (Fall 2022 - Fall 2023), Project: Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
News
- [November 2024] Gave a guest lecture at Imperial College London: Collecting Speech and Telemetry Data Privately.
- [October 2024] Gave a research talk at Google TechTalk: Beyond Trust: Fairness and Privacy in Machine Learning.
- [September 2024] A new preprint on LLMs and privacy, called Context-Aware Membership Inference Attacks against Pre-trained Large Language Models.
- [September 2024] Launched my first-ever privacy-preserving product: Differentially private data collection.
- [September 2024] A new preprint on differential privacy and telemetry data, called Nebula: Efficient, Private and Accurate Histogram Estimation.
- [January 2024] Paper accepted at the 12th International Conference on Learning Representations ICLR2024, called Confidential-DPproof: Confidential Proof of Differentially Private Training (spotlight).
- [September 2023] A new preprint on LLMs and privacy, called Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey.
- [July 2023] Received the Best Reviewers Free Registration award from FL@ICML!
- [July 2023] Our project Confidential-PROFITT: Confidential PROof of FaIr Training of Trees was selected as one of Turing’s top 10 projects of 2022-2023, see Pioneering New Approaches to Verifying the Fairness of AI Models!
- [July 2023] Presented 2 papers at PETS2023: Differentially Private Speaker Anonymization and Losing Less: A Loss for Differentially Private Deep Learning!
- [July 2023] Started as a Privacy Researcher at Brave Software!
- [May 2023] Paper accepted at the 39th Conference on Uncertainty in Artificial Intelligence UAI, called Mnemonist: Locating Model Parameters that Memorize Training Examples.
- [March 2023] Paper accepted at the 23rd Privacy Enhancing Technologies Symposium (PETS), called Losing Less: A Loss for Differentially Private Deep Learning.
- [March 2023] Organising AI UK 2023 workshop, Privacy and Fairness in AI for Health.
- [January 2023] Paper accepted at the 11th International Conference on Learning Representations ICLR, called Confidential-PROFITT: Confidential PROof of FaIr Training of Trees (notable, top 5% of accepted papers).
- [November 2022] 2 papers accepted at the 32nd USENIX Security Symposium, called GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation and Tubes Among Us: Analog Attack on Automatic Speaker Identification.
- [November 2022] Co-organizing the Privacy Preserving Machine Learning PPML’2022 workshop co-located with FOCS’2022.
- [September 2022] Paper accepted at the 36th Conference on Neural Information Processing Systems NeurIPS, called Washing The Unwashable: On The (Im)possibility of Fairwashing Detection, Code.
- [September 2022] 2 papers accepted at the 23rd Privacy Enhancing Technologies Symposium (PETS), called Differentially Private Speaker Anonymization and Private Multi-Winner Voting for Machine Learning.
- [August 2022] Chaired and organised Olya Ohrimenko's in-person talk at the Turing AI Programme.
- [July 2022] Chaired and organised Nicolas Papernot's in-person talk at the Turing AI Programme.
- [January 2022] Paper accepted at the 10th International Conference on Learning Representations ICLR, called A Zest of LIME: Towards Architecture-Independent Model Distances, Code.
- [December 2021] A new preprint on federated learning privacy attacks, called When the Curious Abandon Honesty: Federated Learning Is Not Private.
- [November 2021] Started as a Research Associate at The Alan Turing Institute under the supervision of Adrian Weller.
- [August 2021] Paper accepted at IEEE Transactions on Image Processing TIP, called Semantically Adversarial Learnable Filters, Code!
- [March 2021] Successfully passed PhD viva!
- [February 2021] Started as a Postdoctoral Fellow at Vector Institute under the supervision of Nicolas Papernot.
- [January 2021] Paper accepted at 46th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2021, called FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances, Code! (acceptance rate 19%)
- [October 2020] Gave a tutorial at the ACM Multimedia 2020 conference, Part 2: adversarial images.
- [September 2020] Offered an internship at Inria, under the supervision of Aurélien Bellet.
- [April 2020] Selected as one of 200 young researchers from all over the world for the 8th Heidelberg Laureate Forum by international experts appointed by the award-granting institutions: the Association for Computing Machinery (ACM), the Norwegian Academy of Science and Letters (DNVA) and the International Mathematical Union (IMU)!
- [March 2020] Paper accepted at IEEE Transactions on Information Forensics and Security TIFS, called PrivEdge: From Local to Distributed Private Training and Prediction, Code! (impact factor 6.2)
- [March 2020] Paper accepted at IEEE Transactions on Multimedia TMM, called Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection, Code! (impact factor 5.5)
- [March 2020] Paper accepted at ACM International Conference on Mobile Systems, Applications, and Services MobiSys, called DarkneTZ: Towards Model Privacy on the Edge using Trusted Execution Environments, Code! (acceptance rate 19%)
- [Feb 2020] Paper accepted at Conference on Computer Vision and Pattern Recognition, CVPR2020, called ColorFool: Semantic Adversarial Colorization, Video, Code! (acceptance rate 22%)
- [Jan 2020] Paper accepted at 45th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2020, called EdgeFool: An Adversarial Image Enhancement Filter, Video, Code! (acceptance rate 19%)
- [Jan 2020] Paper accepted at IEEE Internet of Things Journal called A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics! (impact factor 9.5)
- [April 2019] Paper accepted at 26th ACM Conference on Computer and Communications Security, CCS2019, called QUOTIENT: Two-Party Secure Neural Network Training and Prediction! (acceptance rate 16%)
- [Jan 2019] Paper accepted at 44th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2019, called Scene privacy protection, Code! (acceptance rate 49%)
- [June 2018] Offered a place on the PhD Enrichment scheme, a 9-month placement at The Alan Turing Institute!
- [March 2018] Offered an internship for summer 2018 at The Alan Turing Institute, working on the project Privacy-aware neural network classification & training under the supervision of Adria Gascon, Matt Kusner and Varun Kanade!
PC services
- 2024: PETs’2024, ICLR’2024, ICLR Private ML workshop
- 2023: NeurIPS’2023, CCS’2023, AISTATS’2023, ICML 2023 workshop federated learning
- 2022: ICML’2022, TIFS’2022, TOPS’2022, Explainable AI in Finance
- 2021: ICLR’2021 Security and Safety in Machine Learning Systems, ICCV’2021 Adversarial Robustness In the Real World
- 2020: ECCV’2020 Adversarial Robustness in the Real World, 42nd IEEE Symposium on Security and Privacy.
Selected Research Talks
- 05/2024 - ICLR 2024 conference – Confidential-DPproof: Confidential Proof of Differentially Private Training Video
- 07/2023 - UAI 2023 conference – Mnemonist: Locating Model Parameters that Memorize Training Examples Video
- 06/2023 - PETS 2023 conference – Losing Less: A Loss for Differentially Private Deep Learning Slides Video
- 06/2023 - PETS 2023 conference – Differentially Private Speaker Anonymization Slides Video
- 05/2023 - ICLR 2023 conference – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
- 05/2023 - Workshop on Algorithmic Audits of Algorithms
- 05/2023 - Intel
- 04/2023 - Northwestern University – How can we audit Fairness of AI-driven services provided by companies?
- 03/2023 - AIUK 2023 – Confidential-PROFITT: Confidential PROof of FaIr Training of Trees Video
- 03/2023 - University of Cambridge – An Overview of Differential Privacy, Membership Inference Attacks, and Federated Learning
- 11/2022 - NeurIPS 2022 conference – Washing The Unwashable: On The (Im)possibility of Fairwashing Detection Video
- 11/2022 - University of Cambridge and Samsung
- 10/2022 - Queen’s University of Belfast
- 09/2022 - Information Commissioner’s Office
- 09/2022 - Brave
- 06/2020 - CVPR 2020 conference – ColorFool: Semantic Adversarial Colorization Video
- 05/2020 - ACM Multimedia 2020 – A tutorial on Deep Learning for Privacy in Multimedia Slides
- 05/2020 - ICASSP 2020 conference – EdgeFool: An Adversarial Image Enhancement Filter Video
- 06/2018 - The Alan Turing Institute – Privacy-Aware Neural Network Classification & Training – Video
- 06/2018 - QMUL summer school – Distribute One-Class Learning Video