Adversarial Example
An input to a machine learning model that has been intentionally perturbed in a way that causes the model to produce an incorrect output while appearing normal to humans.
In Plain Language
An input deliberately crafted to fool an AI: for example, a photo that looks perfectly normal to you but has been subtly altered so the AI classifies a panda as a gibbon.
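One common way such perturbations are found is the Fast Gradient Sign Method (FGSM): nudge the input a small step epsilon in the direction that increases the model's loss. The sketch below is illustrative, not from the text; it uses a toy logistic-regression "model" with made-up weights so the gradient can be computed by hand.

```python
import numpy as np

# Minimal FGSM sketch on a toy logistic-regression "model".
# The weights, input, and epsilon here are hypothetical, chosen only
# to illustrate the mechanics of the attack.

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # hypothetical model weights
b = 0.0
x = rng.normal(size=16)      # a "clean" input

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the cross-entropy loss w.r.t. the INPUT (not the weights),
# assuming the true label is y = 1:
#   d/dx [-log sigmoid(w.x + b)] = -(1 - p) * w
p = predict(x)
grad_x = -(1 - p) * w

# FGSM step: move the input in the direction that increases the loss,
# bounded coordinate-wise by a small epsilon so the change stays subtle.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(predict(x), predict(x_adv))
```

The perturbation is capped at epsilon per coordinate, so the adversarial input stays close to the original, yet the model's confidence in the true class strictly decreases. Real attacks do the same thing against image classifiers, where the per-pixel changes are too small for a human to notice.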
