Algorithm 2: Adversarial-Aware Deep Learning System.
[forged, label] = Adv-aware(x)
Input: image x.
Output: forged, a flag indicating whether x is forged by an adversarial attack or is a clean image; label, the classification label if x is clean, otherwise None.
y ← DNN(x) # classification label assigned to image x by the DNN model
Top_k ← RF(x, k) # set of the top-k labels predicted by the RF classification model
if y ∈ Top_k then
forged = false; label = y
else
forged = true; label = None
end if
return [forged, label]
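The agreement test in Algorithm 2 can be prototyped directly. The following is a minimal sketch, assuming a PyTorch DNN classifier and a scikit-learn random forest; the names `dnn_model` and `rf_model`, and the use of flattened pixels as the RF feature representation, are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import torch

def adv_aware(x, dnn_model, rf_model, k=5):
    """Flag x as adversarially forged when the DNN label falls outside
    the random forest's top-k labels (illustrative sketch)."""
    # DNN prediction: the single class label assigned to image x
    dnn_model.eval()
    with torch.no_grad():
        logits = dnn_model(x.unsqueeze(0))   # shape: (1, num_classes)
        y = int(logits.argmax(dim=1).item())

    # RF prediction: top-k class labels ranked by predicted probability;
    # flattening the image is only an assumed feature representation
    features = x.detach().cpu().numpy().reshape(1, -1)
    probs = rf_model.predict_proba(features)[0]
    top_k = set(rf_model.classes_[np.argsort(probs)[::-1][:k]])

    # Agreement check: a clean image's DNN label should appear in the RF top-k
    if y in top_k:
        return False, y    # clean image; return the classification label
    return True, None      # mismatch; treat x as adversarially forged
```

This assumes the DNN and the RF were trained with the same label encoding, so that the DNN's argmax index is directly comparable with the entries of `rf_model.classes_`.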