
A new joint report from the Internal Audit Foundation and AuditBoard reveals that, while internal audit leaders widely recognize artificial intelligence–enabled fraud as a growing organizational risk, only four in ten believe their functions are adequately prepared to detect or respond to it.
Based on insights from more than 370 senior internal audit leaders in North America, the survey examines how audit functions are currently assessing and responding to fraud, including their most pressing concerns, the key barriers limiting effective action, and the steps that practitioners and organizations should consider to keep pace with evolving threats. The findings underscore a growing need for audit functions and leadership to prioritize skills development, resource allocation, and cross-functional collaboration within their organizations.
“AI is reshaping how organizations operate, driving greater efficiency, automation, and insight,” said Anthony Pugliese, president and CEO of the IIA. “At the same time, those capabilities are increasingly being leveraged to enable more sophisticated and scalable fraud. As adoption accelerates, internal audit has a critical role to play in helping organizations understand these risks, identify emerging threats, and respond effectively. This survey offers a timely benchmark to help audit functions assess preparedness and strengthen organizational resilience in a rapidly evolving risk environment.”
High Awareness, Varying Perceptions of Readiness
While most practitioners view AI-enabled fraud as a moderate (58 percent) to high (27 percent) risk, confidence in preparedness remains limited. Fewer than 40 percent believe their internal audit function is adequately prepared to detect AI-enabled fraud, highlighting a clear opportunity for practitioners to strengthen capabilities and awareness across audit functions.
Despite varying perceptions of audit readiness, many functions are already actively engaged in addressing AI-related risks. More than half (57 percent) currently assess control weaknesses that enable fraud, and 51 percent advise management on AI-related governance or policy updates. Other common ways that audit functions are taking action include supporting awareness or training initiatives (40 percent); testing or strengthening fraud prevention and detection (38 percent); providing fraud risk assessments to leadership (31 percent); and investigating and documenting AI’s role in fraud incidents (26 percent).
Concerns and Barriers
Across the board, AI-powered phishing attacks are the most-cited concern for audit leaders, with 88 percent of respondents identifying them as a top risk. Other leading threats include fabricated invoices or financial documents (65 percent), automated social engineering (58 percent), and deepfake audio or video impersonation (45 percent).
Though cited less frequently, additional concerns that reflect the expanding scope of AI-enabled fraud risks include the use of AI to insert malicious code (41 percent), forged contracts or legal documents (29 percent), fabricated job applications or employee profiles (28 percent), and synthetic identity fraud (27 percent).
When asked about primary barriers to increasing response effectiveness and preparedness, more than half of respondents identified a lack of appropriate technology or tools (57 percent) and insufficient staff with relevant skills or expertise (55 percent). Limited financial budget (46 percent), competing organizational priorities (43 percent), and insufficient time (43 percent) to dedicate to AI-specific risk management efforts also pose significant challenges for audit functions.
“While the awareness of AI-enabled fraud is high, the ‘readiness gap’ remains a significant vulnerability for most organizations,” said Richard Chambers, Senior Advisor, Risk and Audit at AuditBoard. “Internal audit leaders must take disciplined action by equipping their teams with the right technology, continuous training, and access to cross-functional data. In a world of automated, AI-powered threats, manual fraud detection is no longer a viable defense.”
Looking Ahead
Importantly, the report highlights the key actions that internal audit leaders see as most critical to enhancing readiness. Practitioners emphasize skill building as a priority, including engaging in continuous, regularly updated training to support AI-related responsibilities, alongside the need for greater organizational alignment on AI use. Stronger collaboration among technology, cybersecurity, and risk management teams was also cited as a critical step toward better understanding AI deployment and delivering more effective risk management.
Additional Findings
The report also highlights the growing implementation of AI within internal audit functions. Currently, AI is used most frequently in:
- Audit planning (35 percent cited extensive use; 33 percent cited occasional use)
- Reporting (35 percent cited extensive use; 34 percent cited occasional use)
- Risk assessment (25 percent cited extensive use; 39 percent cited occasional use)
- Fieldwork (19 percent cited extensive use; 39 percent cited occasional use)
Looking ahead, an overwhelming 83 percent of respondents expect their internal audit function to increase AI usage over the next year, reinforcing the importance of understanding both the benefits and the potential risks of AI.
The survey was distributed online in Q4 2025 to senior internal audit leaders in North America to assess current awareness and practices related to AI-enabled fraud. Insights are based on the responses of 373 individuals across diverse industries and sectors.

