Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)

Evaluating the Vulnerabilities of Deep Learning Architectures: A Case Study of VGGNet, ResNet50 and InceptionV3

Authors
Lovi Dhamija1, *, Urvashi Bansal1
1Dr B.R Ambedkar National Institute of Technology, Jalandhar, India
*Corresponding author. Email: dualovi@gmail.com
Available Online 25 June 2025.
DOI
10.2991/978-94-6463-740-3_23
Keywords
Adversarial Attacks; Deep Learning; Perturbations; Noise
Abstract

Deep learning models are gaining widespread adoption across various real-world applications, including image recognition, speech recognition, self-driving cars, and critical infrastructure systems. However, these models are vulnerable to imperceptible adversarial perturbations and noise. Adversarial attacks add small perturbations to the original input in order to deceive the recognition model. This paper explores the effects of such perturbations on three deep learning architectures: VGGNet, ResNet50, and InceptionV3. We systematically investigate the vulnerabilities of these convolutional neural network (CNN) architectures and evaluate their performance using robustness metrics, including adversarial loss, generalization gap, attack transferability, and attack success rate. The architectures are evaluated on benchmark datasets (CIFAR-10 and CIFAR-100) under several adversarial attacks, including FGSM, C&W, PGD, and Gaussian noise. Our results indicate that ResNet50 is resilient against adversarial attacks and that InceptionV3 performs resiliently against common corruptions (Gaussian noise), whereas VGG16 struggled against both types of perturbation. This suggests that deeper architectures are more vulnerable in this respect, because adversarial perturbations are magnified as they propagate deeper into the network. The findings of this work highlight the importance of enhancing the robustness of deep learning architectures, especially those deployed in critical real-world applications.
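The paper itself does not include code; as an illustration of the attack family evaluated above, the following minimal PyTorch sketch generates an FGSM adversarial example, x_adv = x + eps * sign(grad_x L(theta, x, y)), against a pretrained ResNet50. The weights, input batch, label, and eps = 8/255 are placeholder assumptions for illustration, not the authors' experimental settings (the paper evaluates on CIFAR-10/100).

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, x, y, epsilon):
    """Generate an FGSM adversarial example: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step the input in the direction that increases the classification loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()

# Placeholder setup: ImageNet-pretrained ResNet50 and a random image/label,
# standing in for the paper's CIFAR-trained models and test data.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)   # placeholder input batch
y = torch.tensor([0])            # placeholder ground-truth label
x_adv = fgsm_attack(model, x, y, epsilon=8 / 255)

The other attacks studied build on the same gradient signal: PGD iterates this step with a projection back onto the epsilon-ball around the clean input, while C&W instead solves an optimization problem over the perturbation.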

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)
Series
Advances in Intelligent Systems Research
Publication Date
25 June 2025
ISBN
978-94-6463-740-3
ISSN
1951-6851
DOI
10.2991/978-94-6463-740-3_23

Cite this article

TY  - CONF
AU  - Lovi Dhamija
AU  - Urvashi Bansal
PY  - 2025
DA  - 2025/06/25
TI  - Evaluating the Vulnerabilities of Deep Learning Architectures: A Case Study of VGGNet, ResNet50 and InceptionV3
BT  - Proceedings of the 6th International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR 2024)
PB  - Atlantis Press
SP  - 261
EP  - 271
SN  - 1951-6851
UR  - https://doi.org/10.2991/978-94-6463-740-3_23
DO  - 10.2991/978-94-6463-740-3_23
ID  - Dhamija2025
ER  -