International Journal of Advance Computational Engineering and Networking (IJACEN)
Current issue
Volume-12, Issue-2 (Feb, 2024)
Past issues
  1. Volume-12, Issue-1 (Jan, 2024)
  2. Volume-11, Issue-12 (Dec, 2023)
  3. Volume-11, Issue-11 (Nov, 2023)
  4. Volume-11, Issue-10 (Oct, 2023)
  5. Volume-11, Issue-9 (Sep, 2023)
  6. Volume-11, Issue-8 (Aug, 2023)
  7. Volume-11, Issue-7 (Jul, 2023)
  8. Volume-11, Issue-6 (Jun, 2023)
  9. Volume-11, Issue-5 (May, 2023)
  10. Volume-11, Issue-4 (Apr, 2023)

Statistics report
May 2024
Submitted Papers : 80
Accepted Papers : 10
Rejected Papers : 70
Acceptance Rate : 12.5%
Issues Published : 134
Papers Published : 1557
No. of Authors : 4058
  Journal Paper


Paper Title :
Generating Attachable Adversarial Patches to Make the Object Identification Wrong Based on Neural Networks

Authors : Shi-Jinn Horng, Huang Huang

Article Citation : Shi-Jinn Horng, Huang Huang, (2023) "Generating Attachable Adversarial Patches to Make the Object Identification Wrong Based on Neural Networks", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 35-42, Volume-11, Issue-4

Abstract : An adversarial example is an input that causes a network to misclassify through a small perturbation; such perturbations are often harmless to human cognition but fatal to neural networks. At present there is no way to resist all kinds of perturbation attacks, which raises further doubts about network architectures. Three different sub-models are proposed in this research to attack neural networks. The attack scope model effectively reduces the attack range and guides the adversarial algorithm to conduct an accurate perturbation attack. The adversarial attack models generate different adversarial patches through adversarial algorithms. These patches are compact and can be manufactured physically, so a patch can be attached directly to the original image to disturb the target model efficiently and accurately. The success rate of disturbance by generating a small patch is 70.1%. In particular, the method proposed in this paper can be applied to different neural networks.

Keywords - Deep Learning, Neural Network, Adversarial Attack, Adversarial Patch
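The abstract describes patch-based attacks only at a high level, and the paper's actual sub-models are not reproduced here. As a minimal illustrative sketch of the general idea, the code below optimizes a small attachable patch against a toy random linear classifier; the model, image size, patch size, and hyperparameters are all assumptions for demonstration, not the authors' method:

```python
import numpy as np

# Toy stand-in classifier: a fixed random linear model over an 8x8 grayscale
# image. This is NOT the paper's network; it only illustrates the mechanics
# of optimizing a small "attachable" patch via gradient ascent on the loss.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))  # 10 classes, 64 flattened pixels

def logits(img):
    return W @ img.ravel()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def apply_patch(img, patch, top=0, left=0):
    """Paste the patch over the image, like sticking it on a physical object."""
    out = img.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

def make_patch(img, true_label, size=3, steps=200, lr=0.1):
    """Gradient ascent on the true-class loss, restricted to the patch pixels."""
    patch = np.full((size, size), 0.5)
    onehot = np.zeros(10)
    onehot[true_label] = 1.0
    for _ in range(steps):
        adv = apply_patch(img, patch)
        p = softmax(logits(adv))
        # For a linear model, d(cross-entropy)/d(pixels) = W^T (p - onehot).
        grad = (W.T @ (p - onehot)).reshape(8, 8)
        # Ascend the loss, then clip so the patch stays a printable image.
        patch = np.clip(patch + lr * grad[:size, :size], 0.0, 1.0)
    return patch

img = np.clip(rng.normal(0.5, 0.1, size=(8, 8)), 0.0, 1.0)
label = int(np.argmax(logits(img)))      # clean prediction
patch = make_patch(img, label)
adv = apply_patch(img, patch)
clean_conf = softmax(logits(img))[label]
adv_conf = softmax(logits(adv))[label]   # confidence in the true class drops
```

Because the toy model is linear, the loss is convex in the patch pixels, so this clipped gradient ascent reliably lowers the classifier's confidence in the original label; the paper's attack scope and adversarial attack sub-models refine where the patch is placed and how it is generated.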

Type : Research paper

Published : Volume-11,Issue-4


DOIONLINE NO - IJACEN-IRAJ-DOIONLINE-19644

Copyright: © Institute of Research and Journals

Published on : 2023-07-10
   
   