The AI Security & Governance Report

Nearly 700 data leaders share how AI impacts their approach to data security and governance.

Confidence in AI Adoption & Implementation Is High

Despite its security challenges, most organizations are steadily throwing their weight — and spending — behind AI technology.

More than three-fourths of respondents (78%) say their budgets for AI systems, applications, and development have increased over the past 12 months. Companies anticipate significant business value through additional AI adoption, and they feel positive about their ability to adapt to the security challenges and changes ahead.

Strong Confidence in AI Data Security Strategies

Even though so many data leaders say AI makes security more challenging, 85% are somewhat or very confident that their organization’s data security strategy will keep pace with the evolution of AI. That marks a shift from research just last year, in which 50% strongly or somewhat agreed that their organization’s data security strategy was failing to keep up with the pace of AI evolution.

In this survey, respondents expressed similar, but slightly lower, confidence in the intersection between using and protecting data in AI applications. Two-thirds (66%) of data leaders rated their ability to balance data utility with privacy concerns as effective or highly effective.

Much about AI security remains unknown, and businesses are still in a “honeymoon period” with the technology. New challenges and potential AI failures arise all the time, and they are sure to prompt new regulations and standards. Even now, the EU’s AI Act imposes controls not only on the data that can go into a model, but also on the purposes for which a model can be used. Purpose-based AI restrictions are a new kind of challenge, and organizations don’t yet know how to enforce them.
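To make that enforcement challenge concrete, here is a minimal sketch of what purpose-based gating of model access might look like. Everything in it is hypothetical: the `ALLOWED_PURPOSES` policy table, the `ModelRequest` type, and the `authorize` function are illustrations of the general idea, not a prescribed or legally vetted implementation.

```python
# Hypothetical sketch of purpose-based access control for AI model use.
# The policy table and names below are illustrative only; real enforcement
# under rules like the EU AI Act depends on legal and compliance review.

from dataclasses import dataclass

# Illustrative policy: which declared purposes each model may serve.
ALLOWED_PURPOSES = {
    "support-chatbot": {"customer_support"},
    "risk-scoring": {"fraud_detection"},  # e.g., not hiring decisions
}

@dataclass
class ModelRequest:
    model_id: str
    user: str
    declared_purpose: str

def authorize(request: ModelRequest) -> bool:
    """Deny by default; allow only declared, pre-approved purposes."""
    allowed = ALLOWED_PURPOSES.get(request.model_id, set())
    return request.declared_purpose in allowed

# Example: a hiring-related query against the risk model is rejected.
req = ModelRequest("risk-scoring", "analyst@example.com", "hiring")
if not authorize(req):
    print(f"Blocked: '{req.declared_purpose}' is not an approved "
          f"purpose for model '{req.model_id}'.")
```

Even a deny-by-default gate like this depends on users truthfully declaring their purpose, which hints at why purpose-based restrictions are so much harder to enforce than data-level controls.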

So while data leaders may feel deep confidence in their AI data security strategy, no one can predict where regulations are headed. Organizations should prepare and train their teams in line with the current state of AI regulation, and make their strategy agile enough to evolve as needed.

“In the age of cloud and AI, data security and governance complexities are mounting. It’s simply not possible to use legacy approaches to manage data security across hundreds of data products.”
Sanjeev Mohan, Principal at SanjMo

Standards & Frameworks

A primary contributor to the widespread confidence in AI security is reliance on external standards and internal frameworks for both data governance and ethics.

More than half of data leaders (52%) say their organization follows national or international standards, governance frameworks, or guidelines for AI development. Slightly more (54%) say they use internally developed frameworks.

These industry standards are useful reference points, yet they’re also as new as the recent AI advancements themselves. Organizations need to incorporate these standards into their governance frameworks while taking their own proactive steps to address AI security risks internally.

Data experts are similarly using a mix of approaches to AI ethics, with the largest proportion (56%) following industry standards. They’re also setting specific internal ethical guidelines or policies (53%) and conducting regular ethics training and awareness programs with employees (53%). This last approach is critical. Rather than setting and forgetting ethical guidelines for AI, leaders need to take an active role in defining and updating these policies — and training teams to abide by them.

Organizations need to create policies and structures that help them avoid major reputational backlash from ethical dilemmas, like using biased data to train recruiting tools or relying on non-transparent data collection and analytics.

Ethics poses a significant risk — one that requires an intentional and proactive approach to unknown challenges rather than a reactive one.

Up Next: Policies & Processes Are Changing
