The AI Security & Governance Report

Nearly 700 data leaders share how AI impacts their approach to data security and governance.

04 Data Experts Are Forward-Facing

These widespread changes to budgets, training, and policies point not just to the changes AI is already bringing, but to the innovation still to come.

While most data leaders (66%) agree that privacy concerns have slowed or hindered the integration of AI applications within their organization, their outlook on data protection within AI models is positive:

  • 80% agree their organization is capable of identifying and mitigating threats in AI systems
  • 74% agree that they have full visibility into the data being used to train AI models
  • 81% agree that their organization has updated its governance and compliance standards in response to AI

These assessments of risk, transparency, and compliance align closely with the confidence levels we noted earlier. And while a large majority of companies may have initially adapted, it’s less clear what they’ll have to do to stay up-to-date with data protection and security as both risks and regulations continue to evolve.

This is where the future of AI security — the tools and processes that protect AI systems from threats — becomes critically important. AI security is progressing as well, and data leaders weighed in on which advancements they find most promising.

AI as a Tool For Data Security

AI innovation unlocks a wide range of potential security uses, and data leaders are split on what they think the biggest benefit will be for their organization.

When asked about the main advantage AI will bring to data security operations, the two functions that rank highest are anomaly detection and security app development (both 14%). Respondents also think AI will help enable data security through:

  • Phishing attack identification (13%)
  • Security awareness training (13%)
  • Enhanced incident response (12%)
  • Threat simulation and red teaming (10%)
  • Data augmentation and masking (9%)
  • Audits and reporting (8%)
  • Streamlining SOC teamwork and operations (8%)

In the future, AI is likely to be a powerful force for security. For instance, recent Gartner research predicts that by 2027, generative AI will contribute to a 30% reduction in false positive rates for application security testing and threat detection.

Yet in the meantime — at least through 2025 — Gartner advises security-conscious organizations to lower their thresholds for suspicious activity detection. In the short term, this means more false alerts and more human response, not less.
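The tradeoff Gartner describes can be seen in a toy simulation. The sketch below uses synthetic "suspicion scores" (the score distributions, counts, and thresholds are all illustrative assumptions, not survey or Gartner data) to show why a lower detection threshold catches more attacks but also generates more false alerts for humans to triage:

```python
import random

random.seed(0)

# Synthetic suspicion scores for illustration only: benign events cluster
# near 0.3, malicious events near 0.7, with overlap between the two.
benign = [random.gauss(0.3, 0.15) for _ in range(1000)]
malicious = [random.gauss(0.7, 0.15) for _ in range(50)]

def alert_counts(threshold):
    """Count attacks caught and false alerts raised at a given threshold."""
    caught = sum(score >= threshold for score in malicious)
    false_alerts = sum(score >= threshold for score in benign)
    return caught, false_alerts

# Lowering the threshold raises both detections and false alerts.
for threshold in (0.6, 0.5, 0.4):
    caught, false_alerts = alert_counts(threshold)
    print(f"threshold={threshold}: caught {caught}/50 attacks, "
          f"{false_alerts} false alerts needing human review")
```

The monotone pattern in the output is the point: until detection models improve, a more sensitive setting buys coverage at the cost of more human response.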

We don’t know what we don’t know about AI. The top priority in this stage of rapid evolution should be to reduce uncertainty: Uncertainty increases risk exposure, which in turn, drives up costs. Introduce guardrails to minimize hallucinations in AI outputs, vet the results, and assess any risks associated with the data you feed your LLMs.
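A minimal sketch of what an output guardrail can look like in practice. The `generate()` function below is a hypothetical stand-in for a real LLM call, and the schema and allowed values are invented for illustration; the idea is simply to reject free-form or out-of-bounds model output before it reaches downstream systems:

```python
import json

# Allowed values for one field of the expected output (illustrative).
ALLOWED_RISK_LEVELS = {"low", "medium", "high"}

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns raw model text."""
    return '{"risk_level": "medium", "rationale": "Unusual login pattern."}'

def guarded_generate(prompt: str) -> dict:
    raw = generate(prompt)
    # Guardrail 1: require well-formed JSON rather than free text.
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("Model output was not valid JSON; route to human review")
    # Guardrail 2: constrain output to an expected schema and value set.
    if set(parsed) != {"risk_level", "rationale"}:
        raise ValueError("Unexpected fields in model output")
    if parsed["risk_level"] not in ALLOWED_RISK_LEVELS:
        raise ValueError("Risk level outside allowed values")
    return parsed

result = guarded_generate("Classify this login event")
print(result["risk_level"])
```

Checks like these don't eliminate hallucination, but they convert "unknown unknowns" in model output into explicit, reviewable failures — exactly the kind of uncertainty reduction described above.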
