TY - JOUR
T1 - An eye for AI
T2 - quantitative insights into viewer ability to identify AI-generated artworks
AU - Cunningham, Joshua
N1 - Publisher Copyright:
© Emerald Publishing Limited
PY - 2025
Y1 - 2025
N2 - Purpose – This study aims to investigate whether individuals can accurately distinguish between visual artworks created by humans and those generated by artificial intelligence (AI). As generative AI platforms increasingly produce complex, human-like art, questions arise regarding authorship, perception and aesthetic value. By examining public ability to assign correct authorship to unlabeled artworks, this research contributes to discourse on digital creativity and perceptual bias in the context of AI-mediated expression. Design/methodology/approach – A cross-sectional survey of 406 US-based adults recruited via CloudResearch presented participants with ten unlabeled artworks – five AI-generated using DALL·E 3 and five created by human artists – matched by style and composition. Participants identified each work’s origin without guidance. Statistical analysis measured attribution accuracy and examined patterns in relation to digital aesthetics, revealing biases in authorship judgments. Results highlight perceptual boundaries in AI-influenced art evaluation and explore how aesthetic cues shape public interpretation of creative provenance. Findings – Participants demonstrated a modest ability to distinguish AI-generated from human-created artworks, correctly attributing authorship 53.51% of the time – only slightly above chance. While AI-generated images were identified with 66.27% accuracy, human-created digital artworks were frequently misclassified as machine-made, revealing a strong bias associating digital aesthetics with artificiality. These results highlight challenges in authorship perception, copyright attribution and valuation in the age of generative AI. Research limitations/implications – This research is limited by its US-only participant pool and reliance on a curated set of ten artworks, which may not fully capture the diversity of global or stylistic perspectives. 
In addition, all AI-generated images were created using a single platform (DALL·E 3), limiting generalizability across other generative models. However, these constraints highlight opportunities for future research: expanding cross-cultural samples, testing across multiple AI platforms and conducting longitudinal studies to observe shifts in public discernment as generative technology evolves. Practical implications – This study underscores the urgency for platforms, policymakers and developers to implement mechanisms that clearly identify AI-generated content. Features such as embedded metadata, visible watermarks or algorithmic provenance tools could aid in preserving authorship integrity. Furthermore, as AI-generated works enter galleries, competitions and marketplaces, establishing guidelines for categorization will help protect the value of human-created art and inform ethical practices for artists integrating AI tools into their workflows. These findings also inform content moderation strategies, copyright enforcement and digital literacy initiatives. Social implications – The inability of individuals to reliably distinguish AI-generated from human-created artworks raises critical concerns about authenticity, trust and attribution in digital culture. As generative tools become more prevalent, the public may struggle to make informed judgments about creative authorship, potentially leading to confusion, devaluation of artistic labor and erosion of credibility in online content. These findings call for greater transparency in the labeling of AI-generated media and underscore the need for public education about the capabilities and limitations of generative technologies in cultural and creative contexts. Originality/value – This study offers one of the first empirical examinations of public perception in distinguishing AI-generated from human-created visual artworks. 
It uniquely combines survey-based attribution analysis with critical insights into digital bias and aesthetic interpretation. The findings challenge assumptions about AI’s transparency and artistic legibility, revealing how digital media distort perceptions of authorship. By foregrounding the perceptual gap between creators and observers, this research provides timely contributions to discussions on human–AI interaction, digital authorship and the socio-technical implications of generative creativity.
AB - Purpose – This study aims to investigate whether individuals can accurately distinguish between visual artworks created by humans and those generated by artificial intelligence (AI). As generative AI platforms increasingly produce complex, human-like art, questions arise regarding authorship, perception and aesthetic value. By examining public ability to assign correct authorship to unlabeled artworks, this research contributes to discourse on digital creativity and perceptual bias in the context of AI-mediated expression. Design/methodology/approach – A cross-sectional survey of 406 US-based adults recruited via CloudResearch presented participants with ten unlabeled artworks – five AI-generated using DALL·E 3 and five created by human artists – matched by style and composition. Participants identified each work’s origin without guidance. Statistical analysis measured attribution accuracy and examined patterns in relation to digital aesthetics, revealing biases in authorship judgments. Results highlight perceptual boundaries in AI-influenced art evaluation and explore how aesthetic cues shape public interpretation of creative provenance. Findings – Participants demonstrated a modest ability to distinguish AI-generated from human-created artworks, correctly attributing authorship 53.51% of the time – only slightly above chance. While AI-generated images were identified with 66.27% accuracy, human-created digital artworks were frequently misclassified as machine-made, revealing a strong bias associating digital aesthetics with artificiality. These results highlight challenges in authorship perception, copyright attribution and valuation in the age of generative AI. Research limitations/implications – This research is limited by its US-only participant pool and reliance on a curated set of ten artworks, which may not fully capture the diversity of global or stylistic perspectives. 
In addition, all AI-generated images were created using a single platform (DALL·E 3), limiting generalizability across other generative models. However, these constraints highlight opportunities for future research: expanding cross-cultural samples, testing across multiple AI platforms and conducting longitudinal studies to observe shifts in public discernment as generative technology evolves. Practical implications – This study underscores the urgency for platforms, policymakers and developers to implement mechanisms that clearly identify AI-generated content. Features such as embedded metadata, visible watermarks or algorithmic provenance tools could aid in preserving authorship integrity. Furthermore, as AI-generated works enter galleries, competitions and marketplaces, establishing guidelines for categorization will help protect the value of human-created art and inform ethical practices for artists integrating AI tools into their workflows. These findings also inform content moderation strategies, copyright enforcement and digital literacy initiatives. Social implications – The inability of individuals to reliably distinguish AI-generated from human-created artworks raises critical concerns about authenticity, trust and attribution in digital culture. As generative tools become more prevalent, the public may struggle to make informed judgments about creative authorship, potentially leading to confusion, devaluation of artistic labor and erosion of credibility in online content. These findings call for greater transparency in the labeling of AI-generated media and underscore the need for public education about the capabilities and limitations of generative technologies in cultural and creative contexts. Originality/value – This study offers one of the first empirical examinations of public perception in distinguishing AI-generated from human-created visual artworks. 
It uniquely combines survey-based attribution analysis with critical insights into digital bias and aesthetic interpretation. The findings challenge assumptions about AI’s transparency and artistic legibility, revealing how digital mediums distort perceptions of authorship. By foregrounding the perceptual gap between creators and observers, this research provides timely contributions to discussions on human−AI interaction, digital authorship and the socio-technical implications of generative creativity.
KW - Artificial intelligence
KW - Computer ethics
KW - Digital interaction
KW - Electronic media
KW - Information ethics
KW - Intellectual property law
UR - https://www.scopus.com/pages/publications/105016876743
U2 - 10.1108/JICES-05-2025-0105
DO - 10.1108/JICES-05-2025-0105
M3 - Article
AN - SCOPUS:105016876743
SN - 1477-996X
JO - Journal of Information, Communication and Ethics in Society
JF - Journal of Information, Communication and Ethics in Society
ER -