Model Inversion Attack
An attack that attempts to reconstruct training data or sensitive features of individuals by exploiting access to a trained machine learning model's predictions.
In Plain Language
An attack where someone uses an AI's responses to reverse-engineer private training data. Like using a facial recognition AI's outputs to reconstruct actual faces from the training set.
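The core idea can be sketched in a few lines: given access to a model, an attacker optimizes a candidate input until the model assigns it high confidence for a target class, yielding a class-representative reconstruction. The toy logistic "model" below is hypothetical, and the sketch assumes white-box (gradient) access; black-box variants instead estimate gradients from repeated prediction queries.

```python
import numpy as np

# Hypothetical "trained model": a logistic classifier with fixed weights.
# In a white-box attack, the attacker has access to these parameters.
rng = np.random.default_rng(0)
W = rng.normal(size=(4,))
b = 0.1

def predict(x):
    """Model's confidence that input x belongs to the sensitive class."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Model inversion via gradient ascent: adjust a candidate input to
# maximize the model's confidence, recovering a representative input
# for the target class.
x = np.zeros(4)
lr = 0.5
for _ in range(200):
    p = predict(x)
    grad = p * (1 - p) * W  # d(confidence)/dx for the sigmoid model
    x += lr * grad

# The optimized x now scores very highly under the model.
print(predict(x))
```

In a real attack the reconstruction target might be a face image or a medical attribute rather than a 4-dimensional vector, and the attacker would typically add regularization so the recovered input stays plausible.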
Why This Matters
Model inversion attacks pose a direct risk to data privacy. Your AI risk assessment process should evaluate this threat, particularly for models trained on sensitive personal data, and should mandate appropriate technical safeguards such as limiting prediction detail, rate-limiting queries, or training with differential privacy.
