AI adoption soars, yet trust and cost concerns loom: Capgemini study

PARIS, FRANCE — A Capgemini Research Institute report reveals that generative AI has achieved mainstream adoption in the enterprise at an unprecedented rate.
However, this breakneck expansion is creating significant challenges, including a pervasive trust deficit, rising operational costs, and critical governance gaps that organizations are struggling to address.
“As organizations scale Agentic AI, they need to focus on priorities that deliver meaningful value with non-regrettable risk. This requires a unified enterprise framework that ensures consistent standards for ethics, access, orchestration, and observability,” said Craig Suckling, Chief AI Officer of Capgemini Europe.
Generative AI adoption reaches global scale
Generative AI has transitioned from experimental pilots to core business operations at a remarkable pace. Adoption has surged from a mere 6% of organizations in 2023 to 30% in 2025, representing a fivefold increase in just two years.
Currently, 93% of companies are actively investigating, experimenting with, or deploying GenAI capabilities in their operations. Telecommunications, consumer goods, and aerospace and defense lead adoption in customer operations, marketing, and IT.
This momentum is driven by substantial and growing investment, and most organizations report tangible benefits. Eighty-eight percent of organizations are spending more on GenAI, with investment up 9% on average over the last year and 12% of the total IT budget now allocated to the technology.
The returns are also evident: 79% of organizations express satisfaction with their GenAI results, citing gains in innovation and productivity.
Trust and governance gaps threaten AI scaling
Despite rapid adoption, a significant trust barrier prevents the full-scale deployment of autonomous AI systems.
Only 71% of organizations say they can trust autonomous AI agents for full-scale use in the enterprise. This skepticism stems from security concerns (65%), privacy risks (62%), and the threat of biased results (51%), and reflects a deeper fear of relinquishing control without verifiable assurances.
The lack of trust is compounded by a widespread inability to put effective governance structures in place. Although nearly half of organizations (46%) have developed AI governance policies, 47% are concerned that their employees rarely adhere to them.
Furthermore, governance is inconsistently applied across business functions in 47% of organizations, and dedicated oversight bodies, like a Chief AI Officer or an ethics committee, are found in only a minority of firms, leaving AI systems without adequate guardrails or accountability.
The Capgemini research surveyed 1,100 leaders at organizations with annual revenue above $1 billion across 15 countries.
