AI ‘arms race’ raises healthcare cybersecurity risks, CIO warns

KUALA LUMPUR, MALAYSIA — As healthcare organizations race to deploy AI-powered cybersecurity systems, Kumar Krishnamurthy Venkateswaran, Chief Information Officer (CIO) of Narayana Health, warned that the technology presents both solutions and new risks.
Speaking at HIMSS25 APAC, Venkateswaran described how AI enables hackers to launch a million attacks per second, and urged hospitals to implement explainable AI defenses with human oversight.
AI-powered attacks threaten healthcare’s digital front lines
Modern hackers now weaponize AI to overwhelm healthcare systems, with phishing attacks available for just $5 and denial-of-service attempts scaling to millions per second.
Venkateswaran told HIMSS25 APAC attendees that traditional defenses crumble against this onslaught, saying, “It’s just nearly impossible for us to resolve all those events.”
The healthcare sector is particularly vulnerable because it manages sensitive patient data while still transitioning to digital systems.
He noted that organizations can fight AI with AI, deploying AI-driven defenses against AI-powered attacks. Venkateswaran advocated real-time, self-learning systems that detect threats proactively rather than reacting after a breach.
“AI in healthcare – from a cybersecurity perspective – has to be something real-time, active, and continuously learning. It should not be an afterthought. It should not launch a defence after an event has happened,” he said.
He stressed the importance of automated defenses that analyze data patterns and trigger protective measures before attacks penetrate networks. However, he cautioned that without proper configuration these systems risk becoming another point of vulnerability for sophisticated hackers to exploit.
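To make that idea concrete, here is a minimal sketch of the kind of pattern-based trigger he described: a rolling window of request counts per source, with an automatic block when traffic spikes far beyond a baseline. The thresholds and function names are illustrative assumptions, not details of Narayana Health's tooling, and as he cautioned, badly chosen values here become a weakness of their own.

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds only -- a real deployment would tune these per
# environment; poorly configured values become their own vulnerability.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 2000   # hypothetical ceiling for a single source

recent_requests = defaultdict(deque)   # source id -> timestamps in the window
blocked_sources = set()

def screen_request(source_id: str, now: float | None = None) -> bool:
    """Record one inbound event and decide, in real time, whether to allow it.

    Returns True if the request passes, False if the source has been blocked.
    """
    now = time.time() if now is None else now
    if source_id in blocked_sources:
        return False

    window = recent_requests[source_id]
    window.append(now)
    # Discard events that have aged out of the rolling window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    # Trigger the protective measure before the flood saturates the network,
    # rather than reacting after an incident report.
    if len(window) > MAX_REQUESTS_PER_WINDOW:
        blocked_sources.add(source_id)
        return False
    return True
```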
Human oversight critical in AI defense systems
While AI can process threats faster than any human team, Venkateswaran warned against full automation.
“For every AI system that you build, please [appoint] appropriate, knowledgeable subject matter experts to ensure that these decisions (like remediations) are reviewed, analysed, and then approved accordingly,” Venkateswaran explained.
He explained that every AI decision, especially emergency actions such as firewall adjustments, must remain explainable and subject to review by cybersecurity experts.
Venkateswaran suggested a balanced approach: “AI should augment and not replace human judgment. It should be like a secondary decision support for a security analyst, a security manager, a security head, or a CSO,” he said.
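A rough sketch of what that “secondary decision support” pattern could look like in code appears below. The class and field names are hypothetical; the point is simply that an AI-proposed remediation carries its own explanation and is applied only after a named expert signs off.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedRemediation:
    """An AI-suggested action that stays explainable and awaits human approval."""
    action: str        # e.g. "tighten the perimeter firewall rule for subnet X"
    rationale: str     # plain-language reason the model proposed the action
    proposed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: str | None = None   # filled in by the reviewing analyst

def apply_remediation(remediation: ProposedRemediation) -> bool:
    """Apply a remediation only if a subject matter expert has approved it."""
    if remediation.approved_by is None:
        # Leave it in the review queue: the AI augments judgment, it does not replace it.
        return False
    print(f"Applying '{remediation.action}' (approved by {remediation.approved_by})")
    return True

# Usage: the model proposes, the analyst reviews and approves, then it runs.
proposal = ProposedRemediation(
    action="rate-limit traffic from a suspicious address range",
    rationale="Request volume is 40x the learned baseline for this segment.",
)
proposal.approved_by = "security analyst on duty"
apply_remediation(proposal)
```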
His team has implemented “self-healing AI” that automatically counters threats such as denial-of-service attacks, but always with anonymized data inputs and algorithms reviewed by subject matter experts.
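The “anonymized data inputs” piece can also be illustrated with a small sketch: identifiers are replaced with keyed hashes before they ever reach the learning pipeline. The key name and helper below are assumptions for illustration, not details of Narayana Health's implementation.

```python
import hashlib
import hmac
import os

# Hypothetical per-deployment secret: hashes stay consistent inside one
# installation but reveal nothing if the events leak elsewhere.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-with-a-real-secret").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a patient- or device-linked identifier (IP, MRN, username)
    with a keyed hash before it enters the self-learning pipeline."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# The detection model then learns traffic patterns over pseudonyms,
# never over raw identifiers.
event = {"source": pseudonymize("10.20.30.40"), "requests_per_second": 12000}
```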
This hybrid model allows rapid response while maintaining accountability—a framework other hospitals may adopt as global healthcare cyberattacks surge.