AI Security Case Study – Deepfake results in the theft of $77 million
Case Study Number – AISec-0001/24
Summary
This case study examines an AI security vulnerability that was exploited at a cost of $77 million to a government organisation.
Threat Capability Level – Productionised and Deployed (TRL 9)
Primary Threat Vector – Deepfake
Date – 2020
Reporter – Ant Group AISec Team
Actor – Two individuals (unknown)
Target – Shanghai government tax office's facial recognition service
Incident Detail
This form of camera hijack attack can bypass the conventional liveness check in a live facial recognition authentication system, granting access to high-security systems and enabling identity fraud.
Two individuals in China used this method to break into their local government's tax system. They registered a shell company and issued invoices through the tax system to fictitious clients. Starting the fraud in 2018, the pair amassed $77 million through the scheme.
Tactics, Techniques, and Procedures
According to public reporting of the case, the attackers:
- Purchased high-definition facial photographs and personal identity data on the black market.
- Used commodity apps to animate the static photos into videos that blink, nod, and move lips, mimicking liveness cues.
- Ran the verification flow on modified smartphones with hijacked camera stacks, injecting the pre-made deepfake videos in place of a live camera feed.
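The key weakness exploited here is that a passive liveness check cannot tell an injected, pre-recorded video from a genuine camera feed. A randomised challenge-response check raises the bar considerably. Below is a minimal Python sketch of the idea; the challenge list, timeouts, and the `detect_action` hook are illustrative assumptions, not the compromised system's actual design.

```python
import secrets
import time
from typing import Callable

# Hypothetical challenge set a client app could prompt the user to perform.
CHALLENGES = ["blink", "turn_left", "turn_right", "nod", "open_mouth"]

def run_liveness_check(
    detect_action: Callable[[str], bool],
    rounds: int = 3,
    per_round_timeout: float = 5.0,
) -> bool:
    """Randomised challenge-response liveness check.

    detect_action(challenge) stands in for a real computer-vision routine
    that watches the camera stream and reports whether the requested action
    was performed. A pre-recorded deepfake cannot anticipate the randomly
    ordered challenges, so injected footage tends to fail a round.
    """
    for _ in range(rounds):
        challenge = secrets.choice(CHALLENGES)
        started = time.monotonic()
        performed = detect_action(challenge)  # blocks until detected or gives up
        elapsed = time.monotonic() - started
        if not performed or elapsed > per_round_timeout:
            return False
    return True

if __name__ == "__main__":
    # Toy stand-in: a looping video that only ever shows blinking.
    looping_video = lambda challenge: challenge == "blink"
    print("passed liveness:", run_liveness_check(looping_video))
```

Note that this defeats pre-recorded footage specifically; an attacker who can synthesise responsive video in real time would require further countermeasures, such as hardware-attested camera feeds.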
Mitigations
- Undertake an AI security assessment (link).
- Catalogue your AI infrastructure assets (hardware and software).
- Employ a secure-by-design methodology for the development of your AI products and services.
- Gain a comprehensive understanding of your supply chain and construct your AI Bill of Materials (AIBOM); a minimal sketch follows this list.
- Maintain an Open Source Intelligence (OSINT) feed to stay abreast of emerging AI threat vectors.
- Automate the tracking, prioritisation, and documentation of your vulnerabilities – there are too many for humans to handle manually.
- Utilise a quantitative risk management strategy that justifies the return on investment of your control measures (see the worked example after this list).
- Book a consultation call or visit my useful resources for AI Security.
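To make the AIBOM item above concrete, here is a minimal sketch of an inventory structure in Python. The schema and field names are illustrative assumptions, not a published standard (CycloneDX, for example, has extended SBOM concepts to cover ML components).

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AIBOMComponent:
    # Illustrative fields: enough to trace provenance and flag known risks.
    name: str
    component_type: str        # e.g. "model" | "dataset" | "library" | "service"
    version: str
    supplier: str
    source_url: str = ""
    known_risks: list[str] = field(default_factory=list)

@dataclass
class AIBOM:
    system: str
    components: list[AIBOMComponent] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    bom = AIBOM(
        system="facial-recognition-auth",
        components=[
            AIBOMComponent("face-embedding-model", "model", "2.1", "in-house"),
            AIBOMComponent("liveness-sdk", "library", "0.9.4", "third-party vendor",
                           known_risks=["bypassed by injected video streams"]),
        ],
    )
    print(bom.to_json())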
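As a worked example of the quantitative risk item, the sketch below applies the standard Annualised Loss Expectancy and Return on Security Investment formulas. All dollar figures and rates are illustrative assumptions, not numbers from this case.

```python
# Standard quantitative risk formulas:
#   ALE  (Annualised Loss Expectancy) = SLE (Single Loss Expectancy) * ARO (Annualised Rate of Occurrence)
#   ROSI (Return on Security Investment) = (ALE reduction - annual control cost) / annual control cost

def ale(sle: float, aro: float) -> float:
    return sle * aro

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    return (ale_before - ale_after - control_cost) / control_cost

if __name__ == "__main__":
    # Assumed scenario: a $77M-class fraud expected once every ten years,
    # and a liveness-hardening control costing $500k/year that cuts the
    # expected occurrence rate by 90%.
    before = ale(sle=77_000_000, aro=0.1)
    after = ale(sle=77_000_000, aro=0.01)
    cost = 500_000
    print(f"ALE before: ${before:,.0f}/yr, after: ${after:,.0f}/yr")
    print(f"ROSI: {rosi(before, after, cost):.1%}")
```

A positive ROSI, as in this toy scenario, is the quantitative justification the mitigation list asks for; a negative one signals the control costs more than the risk it removes.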
Final Note
Leadership is the fundamental solution to cybersecurity, so become a cyber leader, not a cyber manager!
Reference
- The Wall Street Journal – Camera Hijack Attack on Facial Recognition System
Other Related Articles
- The Register – Deepfake attacks can easily trick live facial recognition systems online
- MIT Technology Review - The hack that could make face recognition think someone else is you
- arXiv - eFace: Real-time Adversarial Attacks on Face Recognition System