Practical parallel self-testing of Bell states via magic rectangles. (arXiv:2105.04044v4 [quant-ph] UPDATED)
Self-testing is a method to verify that one has a particular quantum state from purely classical statistics. For practical applications, such as device-independent delegated verifiable quantum computation, it is crucial…
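As a rough illustration of the general idea only (not the magic-rectangle game analysed in the paper), the sketch below estimates a CHSH value from classical input/output statistics gathered from two untrusted devices; a value near 2√2 certifies, up to local isometries, that the devices share a maximally entangled Bell pair. The function names and the hypothetical counts are illustrative assumptions.

```python
import numpy as np

# Illustrative CHSH-style self-test, not the paper's magic-rectangle game.
# counts[(x, y)] maps outcome pairs (a, b) in {0,1}^2 to observed counts
# for measurement settings (x, y).
def chsh_value(counts):
    def correlator(x, y):
        c = counts[(x, y)]
        total = sum(c.values())
        agree = c.get((0, 0), 0) + c.get((1, 1), 0)
        return (2 * agree - total) / total  # E(x, y) = P(a=b) - P(a!=b)

    # S = E(0,0) + E(0,1) + E(1,0) - E(1,1); S near 2*sqrt(2) (~2.828)
    # device-independently certifies a shared Bell state.
    return (correlator(0, 0) + correlator(0, 1)
            + correlator(1, 0) - correlator(1, 1))

# Hypothetical statistics approximating ideal quantum behaviour.
p = (2 + np.sqrt(2)) / 4  # ideal agreement probability for the "+" terms
ideal = {(x, y): {(0, 0): 500 * p, (1, 1): 500 * p,
                  (0, 1): 500 * (1 - p), (1, 0): 500 * (1 - p)}
         for (x, y) in [(0, 0), (0, 1), (1, 0)]}
ideal[(1, 1)] = {(0, 0): 500 * (1 - p), (1, 1): 500 * (1 - p),
                 (0, 1): 500 * p, (1, 0): 500 * p}
print(round(chsh_value(ideal), 3))  # ~2.828
```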
Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart. (arXiv:2105.14785v4 [cs.LG] UPDATED)
Correctly classifying adversarial examples is an essential but challenging requirement for safely deploying machine learning models. As reported in RobustBench, even the state-of-the-art adversarially trained models struggle to exceed 67%…
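For context on rejection-based defences in general, here is a minimal confidence-threshold rejection rule in Python; it is not the paper's coupled rejection metrics, and the threshold and names are illustrative assumptions.

```python
import numpy as np

# Generic selective prediction: reject inputs the model is unsure about
# (e.g. suspected adversarial examples) instead of classifying them.
def predict_with_rejection(probs, threshold=0.9):
    """probs: (n, k) array of softmax outputs. Returns predicted labels,
    with -1 marking rejected (low-confidence) inputs."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[confidence < threshold] = -1  # reject below the threshold
    return labels

probs = np.array([[0.97, 0.02, 0.01],   # confident -> accepted
                  [0.40, 0.35, 0.25]])  # ambiguous -> rejected
print(predict_with_rejection(probs))    # [ 0 -1]
```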
Data Augmentation for Opcode Sequence Based Malware Detection. (arXiv:2106.11821v2 [cs.CR] UPDATED)
In this paper we study data augmentation for opcode-sequence-based Android malware detection. Data augmentation has been used successfully in many areas of deep learning to significantly improve model performance.…
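To illustrate what sequence-level augmentation can look like for opcode streams (the concrete transformations evaluated in the paper may differ), a minimal Python sketch with hypothetical swap and deletion operations:

```python
import random

# Hypothetical augmentation operations on an opcode token sequence, shown
# only to illustrate the general idea of sequence-level data augmentation.
def random_swap(opcodes, n_swaps=1):
    """Swap n_swaps randomly chosen pairs of adjacent opcodes."""
    seq = list(opcodes)
    for _ in range(n_swaps):
        i = random.randrange(len(seq) - 1)
        seq[i], seq[i + 1] = seq[i + 1], seq[i]
    return seq

def random_delete(opcodes, p=0.1):
    """Drop each opcode independently with probability p."""
    kept = [op for op in opcodes if random.random() > p]
    return kept or list(opcodes)  # never return an empty sequence

sample = ["move", "const/4", "invoke-virtual", "move-result", "return"]
print(random_swap(sample))
print(random_delete(sample))
```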
Demystifying the Transferability of Adversarial Attacks in Computer Networks. (arXiv:2110.04488v3 [cs.CR] UPDATED)
Convolutional Neural Network (CNN) models are among the most frequently used deep learning networks and are deployed extensively in both academia and industry. Recent studies have demonstrated that adversarial attacks against…
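As background on adversarial examples in general (not the specific attacks or network-traffic models studied in the paper), a minimal FGSM-style sketch on a toy logistic model; every name and value is an illustrative assumption:

```python
import numpy as np

# FGSM-style perturbation of an input to a toy logistic-regression model
# p(y=1|x) = sigmoid(w.x + b), bounded in an L-infinity ball of radius eps.
def fgsm(x, y, w, b, eps=0.1):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad)

w = np.array([1.5, -2.0])
b = 0.0
x = np.array([0.5, -0.5])       # classified as class 1 (w.x + b > 0)
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.3)
print(x, "->", x_adv)           # small shift that lowers p(y=1|x)
```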