As generative AI tools rapidly reshape digital workflows, security teams are struggling to keep up with emerging threats. A recent study by penetration testing firm Cobalt reveals that over a third of cybersecurity leaders and practitioners admit GenAI is advancing faster than their teams can secure it.
Nearly 48% of respondents called for a ‘strategic pause’ to better align defenses against evolving AI-driven attacks—though most acknowledge this pause is unlikely to materialize. Alarmingly, 72% identified GenAI-related threats as their top IT risk, yet one-third are not performing regular security assessments like penetration testing for their large language model (LLM) deployments.
“Threat actors aren’t waiting, and neither can we,” said Gunter Ollmann, CTO at Cobalt. “AI is redefining both productivity and risk. Security frameworks must evolve or risk becoming obsolete.”
The report also highlights a divergence in priorities between executive leaders and frontline security practitioners. While 76% of C-suite and VP-level respondents voiced concern over long-term threats like adversarial attacks, practitioners (45%) showed more immediate worry over operational risks such as inaccurate outputs.
Security leaders appear more inclined to adapt defense strategies for GenAI-specific threats, with 52% considering structural changes compared to 43% of practitioners.
Top concerns across all respondents include:
- Sensitive data exposure (46%)
- Model poisoning or theft (42%)
- Inaccurate outputs (40%)
- Training data leaks (37%)
Additionally, 50% of participants demand greater transparency from software vendors regarding vulnerability detection and mitigation, underscoring a widening trust gap within the AI supply chain.
Cobalt’s internal pentesting data also sheds light on vulnerabilities in LLM implementations. While 69% of all high-priority issues are addressed across categories, that figure drops to a mere 21% for LLM-specific high-severity issues—despite their significant risk level.
Interestingly, while serious GenAI issues are resolved faster—with a mean time to resolution (MTTR) of just 19 days, the lowest among all test types—this likely reflects a focus on simpler fixes, rather than comprehensive mitigation.
“Just like the early days of cloud, GenAI has exposed a critical gap between innovation and security readiness,” Ollmann warned. “We need to shift from reactive audits to proactive, programmatic AI testing—urgently.”
Stay ahead of emerging cybersecurity threats. For the latest insights and updates on cloud security, follow SOC News.
News Source: ITPro.com