Legal challenges threaten AI use in customer service

NEW YORK, UNITED STATES — The increasing reliance on artificial intelligence (AI) in customer service is facing significant legal challenges, with two recent lawsuits highlighting potential pitfalls for companies, including banks.
According to American Banker’s Penny Crosman, these cases underscore the complexities and risks associated with deploying AI in customer-facing roles.
Air Canada’s chatbot error sparks legal action
In a notable case, Air Canada encountered legal trouble after a customer was misled by its AI chatbot.
Jake Moffatt, seeking a bereavement discount following his grandmother’s death, was incorrectly informed by the airline’s chatbot that he had 90 days to apply for the discount. The airline later refused the discount, citing its policy that required requests to be submitted before the flight.
British Columbia’s Civil Resolution Tribunal ruled in favor of Moffatt, ordering Air Canada to honor the discount and pay additional fees. The tribunal criticized the airline for not ensuring the chatbot’s accuracy, emphasizing that information from both static pages and chatbots should be reliable.
Greg Ewing, a legal expert, noted that this case might prompt companies to rethink their use of AI chatbots.
“It’s going to both drive innovation and motivate companies like Air Canada to be careful about which chatbots they choose,” he said.
Patagonia faces privacy lawsuit over AI use
The second lawsuit involves Patagonia and its use of Talkdesk, a contact center technology vendor.
Customers accused Patagonia of violating California’s data privacy law by allowing Talkdesk to record and analyze customer-support calls without proper consent. The lawsuit argues that the notification given to customers did not adequately disclose that a third party would use the recordings for its own purposes.
California’s wiretapping law, which prohibits recording conversations without the consent of all parties, strengthens the plaintiffs’ case.
Ewing pointed out that the standard preamble stating calls may be recorded for training purposes does not put customers on notice that a third party is involved, suggesting that companies need to be more transparent about their AI usage.
Banking sector on alert amid AI legal risks
These legal challenges have significant implications for banks, many of which use AI to analyze customer sentiment and call center performance. While banks are cautious about deploying generative AI, the lawsuits highlight the need for clear communication and consent when using AI in customer interactions.
Krish Swamy, Citizens’ Chief Data and Analytics Officer, emphasized the importance of integrating safeguards when adopting AI.
“Smart financial institutions should integrate appropriate guardrails, including human safeguards, protection of customer data, and upholding privacy commitments,” he stated.
As AI continues to evolve, companies must navigate the legal landscape carefully, balancing innovation with customer privacy and transparency.
The outcomes of these lawsuits could set precedents that influence how AI is implemented across various sectors, particularly in banking, where customer trust and data security are paramount.