How AI Shapes Our Security Decisions: Early Insights from a New Pilot Study
In an age where both cybersecurity threats and digital assistance tools are growing, our research team is investigating a crucial question: whose advice do we trust more, AI or human experts?
Pilot study
As part of the Special Interest Group (SIG) on AI, Government and Behavior at Utrecht University, working in collaboration with the focus areas Governing the Digital Society and Human-centered AI, we recently conducted a pilot study to explore how people respond to AI-generated cybersecurity advice compared to guidance from human professionals.
In this experiment, participants were presented with a series of short, realistic digital dilemmas, such as receiving a suspicious email or being asked to approve a software update. Each scenario came with two pieces of advice: one "safe" (cybersecure) and one "unsafe" (vulnerable). The advice itself was identical for all participants, with one twist: it was framed as coming either from an AI system or from a human cybersecurity advisor.
Early findings
Interestingly, our early findings show that identical advice is trusted less when it's attributed to AI. This reveals a behavioral gap: people tend to prefer human guidance, even when the AI provides equally valid (or safer) suggestions. Why does this matter? As organizations face increasing cyber threats and a shortage of experts, scalable AI support tools are becoming more attractive. But if people don't trust these tools, they won't follow the advice, undermining both security and innovation efforts.
This study is part of a broader project examining the intersection of trust, digital policy, and AI adoption in cybersecurity. Our team (Leendert van Maanen, Katsiaryna Labunets, Stephan Grimmelikhuijsen, and Macy Bouwhuizen) aims to bridge the gap between behavioral science, AI development, and public governance. Stay tuned as we build toward practical guidance for responsible AI implementation in the public sector!