The Effects of Compounded Model Size Reductions on Adversarial Robustness

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recent advances in Edge AI and Tiny Machine Learning (TinyML) have enabled the deployment of machine learning models in resource-constrained environments. However, deploying these models on edge devices, such as microcontrollers, requires significant model footprint reduction through techniques such as quantization, pruning, and clustering. While these optimization methods offer considerable advantages, they potentially introduce AI-related security vulnerabilities, particularly concerning model robustness against adversarial attacks. Prior research has extensively examined the impact of quantization on adversarial robustness; however, the effects of alternative reduction techniques and their combinations remain understudied. This paper investigates the impact of model size reduction techniques on adversarial robustness, both when applied individually and in combination. We used the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks to generate adversarial perturbations for both training and testing data, and then evaluated the models' accuracy under adversarial training conditions. Our findings reveal that reduction techniques generally diminish robustness, although combining techniques was not found to degrade robustness beyond the individual reductions. Moreover, specific techniques can potentially enhance resistance to small perturbations. This research provides insight into the trade-offs between model size reduction and security, establishing a foundation for future work on improving adversarial training techniques and on methodologies that maintain robustness while preserving memory footprint benefits.
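For context, the FGSM attack referenced in the abstract perturbs an input one step in the direction of the sign of the loss gradient. A minimal NumPy sketch on a hypothetical binary logistic model (an illustration only, not the paper's models or setup) might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step against a binary logistic model p(y=+1|x) = sigmoid(w.x).

    The loss is the negative log-likelihood, whose input gradient has a
    closed form here, so no autodiff framework is needed for this sketch.
    y is the true label in {-1, +1}; eps bounds the L-infinity perturbation.
    """
    margin = y * np.dot(w, x)            # positive margin = correct, confident
    grad_x = -y * sigmoid(-margin) * w   # d(loss)/dx in closed form
    return x + eps * np.sign(grad_x)     # ascend the loss within the eps-ball

# Tiny demo: perturbing a correctly classified point shrinks its margin.
w = np.array([1.0, -2.0, 0.5])           # hypothetical trained weights
x = np.array([0.3, -0.4, 0.2])           # clean input, true label y = +1
x_adv = fgsm_perturb(x, 1.0, w, eps=0.1)
clean_margin = 1.0 * np.dot(w, x)
adv_margin = 1.0 * np.dot(w, x_adv)
```

PGD, the paper's other attack, can be viewed as iterating this step several times with a smaller step size and projecting back into the eps-ball after each step.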

Original language: English
Title of host publication: Proceedings of the 18th IEEE Dallas Circuits and Systems Conference, DCAS 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Edition: 2025
ISBN (Electronic): 9798331599348
DOIs
State: Published - 2025
Event: 18th IEEE Dallas Circuits and Systems Conference, DCAS 2025 - Arlington, United States
Duration: Apr 11, 2025 – Apr 13, 2025

Conference

Conference: 18th IEEE Dallas Circuits and Systems Conference, DCAS 2025
Country/Territory: United States
City: Arlington
Period: 04/11/25 – 04/13/25

Keywords

  • adversarial robustness
  • clustering
  • edge AI
  • pruning
  • quantization
  • TinyML
  • weight sharing
