Deep Neural Networks (DNNs) are employed in an increasing number of
applications, some of which are safety critical. Unfortunately, DNNs are known
to be vulnerable to so-called adversarial attacks that manipulate inputs to
cause incorrect results that can be beneficial to an attacker or damaging to
the victim. Multiple defenses have been proposed to increase the robustness of
DNNs. In general, these defenses have high overhead, and some require
attack-specific re-training of the model or careful tuning to adapt to
different attacks.

This paper presents HASI, a hardware-accelerated defense that uses a process
we call stochastic inference to detect adversarial inputs. We show that by
carefully injecting noise into the model at inference time, we can
differentiate adversarial inputs from benign ones. HASI compares the output
distribution of noisy inference against a noise-free reference to detect
adversarial inputs. We show an adversarial detection rate of 86% when applied
to VGG16 and 93% when applied to ResNet50, exceeding the detection rate of
state-of-the-art approaches with much lower overhead. We demonstrate two
software/hardware-accelerated co-designs that reduce the performance impact of
stochastic inference to 1.58X-2X relative to the unprotected baseline, compared
to a 15X-20X overhead for a software-only GPU implementation.
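
The abstract leaves the exact noise-injection point and decision rule to the paper itself. As a rough illustration of the general idea, the sketch below runs one noise-free reference inference and several noisy inferences, then flags inputs whose noisy predictions disagree too often with the reference. The Gaussian input noise, the `noise_std` and `threshold` parameters, and the agreement-based rule are illustrative assumptions, not HASI's actual procedure or its hardware acceleration.

```python
import torch


@torch.no_grad()
def stochastic_inference_detect(model, x, noise_std=0.05, n_samples=16, threshold=0.7):
    """Sketch of stochastic-inference detection (assumed parameters and rule).

    Flags inputs whose predicted class is unstable when noise is injected at
    inference time, on the premise that adversarial inputs are less stable
    under noise than benign ones.
    """
    model.eval()
    ref_class = model(x).argmax(dim=1)          # noise-free reference prediction

    agreement = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_samples):
        noisy_logits = model(x + noise_std * torch.randn_like(x))
        agreement += (noisy_logits.argmax(dim=1) == ref_class).float()
    agreement /= n_samples                      # fraction of noisy runs matching the reference

    # Low agreement with the reference is treated as an adversarial detection.
    return agreement < threshold, agreement
```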
