AI agents face trust crisis as adoption accelerates globally

MASSACHUSETTS, UNITED STATES — As agentic artificial intelligence becomes more embedded in business and daily life, the question of trust is central to the technology’s trajectory and ultimate adoption.
Agentic AI—software that autonomously pursues goals, makes decisions, and acts on behalf of users—holds the promise of transforming how we work and live, but its success hinges on whether individuals and organizations can rely on these agents to act in their best interests.
Companies like Salesforce are already deploying AI agents that handle customer queries across industries and recognize when human intervention is needed. But as personal AI agents begin to manage calendars, conduct research, and even negotiate purchases, questions about trust and reliability are coming to the forefront.
The promise and peril of personal AI agents
Personal AI agents, acting as digital assistants, could revolutionize how we interact with technology. “The idea of personal AI agents goes back decades, but the technology finally appears ready for [prime time],” note Blair Levin and Larry Downes in the Harvard Business Review.
Leading firms are rolling out prototype agents to customers and suppliers, but with this innovation come significant challenges: “Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?”
The answers to these questions will determine whether users embrace these agents and whether their widespread use strengthens or undermines business relationships and brand value.
Cybersecurity and bias emerge as top agentic AI threats
Among the most serious concerns is vulnerability to cybercriminals: personal AI agents could be hijacked or reprogrammed to act against their users, much like an embezzling employee or an identity thief.
Although widespread reports of hijacked agents are rare, security experts warn that even today’s most secure models can be tricked into malicious activities, such as exposing passwords or sending phishing emails.
Another risk is manipulation by marketers. AI agents could be biased to favor certain brands or retailers, with such bias often invisible to users.
“Consumer marketers have strong incentives to keep AI agents from shopping in a truly independent environment,” the authors observe.
Similarly, misinformation poses a broader but significant threat: agents may unintentionally rely on false information when making decisions for users, potentially leading to harmful outcomes.
Building trust through regulation and technology
To ensure AI agents can be trusted, experts recommend treating them as fiduciaries—entities with legal duties of loyalty, disclosure, and accountability.
“Legal systems must ensure AI agents and any other software with the capability to make consequential decisions are treated as fiduciaries,” write Levin and Downes.
Market enforcement, such as insurance and independent monitoring, can also help. Credit bureaus and insurers already protect users from financial risks; similar oversight could be extended to AI agents.
Technical solutions, like keeping sensitive data and decision-making localized to users’ devices, are another safeguard.
“Careful design and implementation of agentic AI technology can head off many trust-related issues before they arise,” the authors note. Companies like Apple and Microsoft are working on agentic AI tools that restrict data disclosure and use strong encryption.
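To make the on-device safeguard concrete, the following is a minimal illustrative sketch, not a description of any company's actual implementation: an agent keeps raw personal data local and shares only a minimized, pseudonymized view with a remote service. The field names, the SENSITIVE_FIELDS list, and the plan_purchase function are hypothetical.

```python
# Illustrative sketch only: one way an agent might keep sensitive data
# on-device, sending any remote service just a minimized, pseudonymous view.
# Field names and functions here are hypothetical, not a vendor's real API.
import hashlib
import json

SENSITIVE_FIELDS = {"card_number", "home_address", "password"}

def minimize(profile: dict) -> dict:
    """Strip or pseudonymize sensitive fields before anything leaves the device."""
    safe = {}
    for key, value in profile.items():
        if key in SENSITIVE_FIELDS:
            # Replace the raw value with a one-way hash so a remote service
            # could match records without ever seeing the underlying data.
            safe[key] = hashlib.sha256(str(value).encode()).hexdigest()
        else:
            safe[key] = value
    return safe

def plan_purchase(profile: dict) -> str:
    # Decision-making stays local; only the minimized view would be shared,
    # and in practice it would also be encrypted in transit.
    return json.dumps(minimize(profile))

if __name__ == "__main__":
    user = {"name": "Ada", "card_number": "4111 1111 1111 1111", "budget": 200}
    print(plan_purchase(user))
```

The design choice this sketch reflects is data minimization: the less raw personal data ever leaves the user's device, the less there is for a hijacked or biased agent, or its operator, to misuse.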
Balancing innovation with accountability in AI’s next era
Agentic AI offers tremendous potential to simplify and improve both business and personal life. However, adoption will depend on user confidence in the technology’s trustworthiness.
Clear legal frameworks, robust oversight, and transparent technical safeguards are essential.
As Levin and Downes conclude, “Getting it right, as with any fiduciary relationship, will require a clear assignment of legal rights and responsibilities, supported by a robust market for insurance and other forms of third-party protection and enforcement tools.”