Corporate AI use surges, but safety measures lag behind: Infosys study

BANGALORE, INDIA — A new Infosys Knowledge Institute report exposes a mounting crisis in corporate AI governance: 78% of companies view responsible AI as a key growth driver, yet only 2% have established meaningful safeguards.
The study warns that 95% of enterprises experienced AI-related incidents in the past two years, with 39% suffering severe consequences.
Companies worldwide adopt AI, but ignore key safeguards
The Infosys Knowledge Institute’s global survey of 1,500 executives reveals that most companies are dangerously unprepared for AI risks, despite nearly universal adoption.
The Responsible AI Standards Evaluation: Benchmark and Readiness (RAISE BAR) finds that even basic safety measures, such as bias reduction and incident-response plans, remain uncommon, with most companies underspending on governance by roughly 30%.
Responsible AI (RAI) leaders demonstrate the value of proper controls, reporting 39% lower financial impacts and incidents that are 18% less severe than those of their peers.
These top performers excel in explainable AI systems, proactive bias testing, and centralized governance—practices the report urges others to adopt as agentic AI introduces new risks that 86% of executives remain unprepared to handle.
Responsible AI seen as strategic edge, not just compliance
Notably, the same study indicates that 83% of leaders are confident that future AI regulation will accelerate innovation rather than seriously hinder it.
Infosys views RAI not as a compliance requirement but as a strategic differentiator: the report shows how companies that emphasize governance generate new revenue streams while reducing risk.
Its AI3S framework (Scan, Shield, Steer) illustrates this approach, combining the benefits of decentralized innovation with centralized control through dedicated RAI offices.
“Companies should not discount the important role a centralized RAI office plays as enterprise AI scales and new regulations come into force,” asserts Balakrishna D.R., Infosys Executive Vice President (EVP)—Global Services Head, AI, and Industry Verticals, noting that ethical foundations enable sustainable scaling.
The report recommends immediate action: adopting secure AI platforms with built-in guardrails, implementing rigorous validation processes, and establishing cross-functional governance teams before regulatory mandates force less strategic implementations.

Independent




