I am Dr Ali Shahin Shamsabadi. I am a Research Associate in Safe and Ethical Artificial Intelligence at The Alan Turing Institute, under the supervision of Adrian Weller. I am also a visitor at the CleverHans Lab, where I was a Postdoctoral Fellow under the supervision of Nicolas Papernot at the Vector Institute before joining the Turing. I received my PhD in Computer Science from Queen Mary University of London. During my PhD, I did internships at Inria (with Aurélien Bellet and Emmanuel Vincent, working on the intersection of differential privacy and audio) and The Alan Turing Institute (with Adria Gascon and Matt Kusner, working on the intersection of multi-party computation and machine learning). I was also a visiting PhD student at Imperial College London, working with Hamed Haddadi.
I love exploring the intersection of Machine Learning, Privacy, Fairness, Security, Explainability and Image/Audio Processing!
My research has been published at top-tier venues including NeurIPS, ICLR, CVPR, CCS, USENIX Security and PETs. My research interests and recent work include:
- Auditing machine learning models (see Confidential-PROFITT, FRAUD-Detect and Zest)
- Designing machine learning models tailored to privacy enhancement technologies (see QUOTIENT and PrivEdge)
- Differential privacy for images, audio and graphs (see DPspeech, GAP)
- Privacy attacks (see trap weights)
- Designing adversarial examples by considering image/audio semantics (see ColorFool, Mystique, EdgeFool, FilterFool and FoolHD)
- Data privacy and edge computing (see DarkneTZ and Private-Feature Extraction)
News:
- [May 2023] Paper accepted at the 39th Conference on Uncertainty in Artificial Intelligence UAI, called Mnemonist: Locating Model Parameters that Memorize Training Examples. [Joint work between DeepMind and The Alan Turing Institute]
- [March 2023] Paper accepted at the 23rd Privacy Enhancing Technologies Symposium PETs, called Losing Less: A Loss for Differentially Private Deep Learning.
- [March 2023] Organising AI UK 2023 workshop, Privacy and Fairness in AI for Health.
- [January 2023] Paper accepted at the 11th International Conference on Learning Representations ICLR, called Confidential-PROFITT: Confidential PROof of FaIr Training of Trees (notable, top 5% of accepted papers).
- [November 2022] Two papers accepted at the 32nd USENIX Security Symposium, called GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation and Tubes Among Us: Analog Attack on Automatic Speaker Identification.
- [November 2022] Co-organizing the Privacy Preserving Machine Learning PPML’2022 workshop co-located with FOCS’2022.
- [September 2022] Paper accepted at the 36th Conference on Neural Information Processing Systems NeurIPS, called Washing The Unwashable: On The (Im)possibility of Fairwashing Detection, Code.
- [September 2022] Two papers accepted at the 23rd Privacy Enhancing Technologies Symposium PETs, called Differentially Private Speaker Anonymization and Private Multi-Winner Voting for Machine Learning.
- [August 2022] Chaired and organised an in-person talk by Olya Ohrimenko at the Turing AI Programme.
- [July 2022] Chaired and organised an in-person talk by Nicolas Papernot at the Turing AI Programme.
- [January 2022] Paper accepted at the 10th International Conference on Learning Representations ICLR, called A Zest of LIME: Towards Architecture-Independent Model Distances, Code.
- [December 2021] A new preprint on federated learning privacy attack, called When the Curious Abandon Honesty: Federated Learning Is Not Private.
- [November 2021] Started as a Research Associate at The Alan Turing Institute under the supervision of Adrian Weller.
- [August 2021] Paper accepted at IEEE Transactions on Image Processing TIP, called Semantically Adversarial Learnable Filters, Code!
- [March 2021] Successfully passed my PhD viva!
- [February 2021] Started as a Postdoctoral Fellow at the Vector Institute under the supervision of Nicolas Papernot.
- [January 2021] Paper accepted at 46th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2021, called FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances, Code! (acceptance rate 19%)
- [October 2020] Gave a tutorial at the ACM Multimedia 2020 conference, Part 2: adversarial images.
- [September 2020] Offered an internship at Inria, under the supervision of Aurélien Bellet.
- [April 2020] Selected as one of 200 young researchers from all over the world for the 8th Heidelberg Laureate Forum by international experts appointed by the award-granting institutions: the Association for Computing Machinery (ACM), the Norwegian Academy of Science and Letters (DNVA) and the International Mathematical Union (IMU)!
- [March 2020] Paper accepted at IEEE Transactions on Information Forensics and Security TIFS, called PrivEdge: From Local to Distributed Private Training and Prediction, Code! (impact factor 6.2)
- [March 2020] Paper accepted at IEEE Transactions on Multimedia TMM, called Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection, Code! (impact factor 5.5)
- [March 2020] Paper accepted at ACM International Conference on Mobile Systems, Applications, and Services MobiSys, called DarkneTZ: Towards Model Privacy on the Edge using Trusted Execution Environments, Code! (acceptance rate 19%)
- [February 2020] Paper accepted at the Conference on Computer Vision and Pattern Recognition, CVPR2020, called ColorFool: Semantic Adversarial Colorization, Video, Code! (acceptance rate 22%)
- [January 2020] Paper accepted at the 45th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2020, called EdgeFool: An Adversarial Image Enhancement Filter, Video, Code! (acceptance rate 19%)
- [January 2020] Paper accepted at the IEEE Internet of Things Journal, called A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics! (impact factor 9.5)
- [April 2019] Paper accepted at the 26th ACM Conference on Computer and Communications Security, CCS2019, called QUOTIENT: Two-Party Secure Neural Network Training and Prediction! (acceptance rate 16%)
- [January 2019] Paper accepted at the 44th International Conference on Acoustics, Speech, and Signal Processing, ICASSP2019, called Scene privacy protection, Code! (acceptance rate 49%)
- [June 2018] Offered a place on the PhD Enrichment scheme, a 9-month placement at The Alan Turing Institute!
- [March 2018] Offered an internship for summer 2018 at The Alan Turing Institute, working on the project Privacy-aware neural network classification & training under the supervision of Adria Gascon, Matt Kusner and Varun Kanade!
Reviewing:
- 2024: PETs’2024
- 2023: NeurIPS’2023, CCS’2023, AISTATS’2023, ICML 2023 Workshop on Federated Learning
- 2022: ICML’2022, TIFS’2022, TOPS’2022, Explainable AI in Finance
- 2021: ICLR’2021 Security and Safety in Machine Learning Systems, ICCV’2021 Adversarial Robustness In the Real World
- 2020: ECCV’2020 Adversarial Robustness in the Real World, 42nd IEEE Symposium on Security and Privacy.
Invited talks:
- 05/2023 - Workshop on Algorithmic Audits of Algorithms
- 05/2023 - Intel
- 04/2023 - Northwestern University – How can we audit Fairness of AI-driven services provided by companies?
- 03/2023 - AIUK 2023
- 03/2023 - University of Cambridge – An Overview of Differential Privacy, Membership Inference Attacks, and Federated Learning
- 11/2022 - University of Cambridge and Samsung
- 10/2022 - Queen’s University of Belfast
- 09/2022 - Information Commissioner’s Office
- 09/2022 - Brave