Air Canada and the Chatbot: Learning from Your CX Mistakes

I can’t get this recent Air Canada customer experience “incident” out of my head. We’ve written and talked extensively about how you can use AI and generative AI to dramatically improve your customer experience (CX). Apparently we also need to talk about how the policies you build around the use of generative AI, particularly customer service chatbots, can deliver the opposite of that “good” CX you think you’re building. Here’s a summary of the story:

After being misled by Air Canada's customer service chatbot on its website regarding the airline's bereavement travel policy, a passenger purchased full-price tickets to attend his grandmother's funeral with the expectation that the airline would refund the difference between the full fare and the bereavement fare, as long as he made the claim within 90 days. However, Air Canada's "official" policy stated that refunds were not available for bereavement travel after booking. Luckily, the passenger had a screenshot of the instructions from the chatbot. After months of trying to convince Air Canada, the passenger was offered only a $200 coupon. Unsatisfied, he filed a complaint with British Columbia's Civil Resolution Tribunal and was eventually awarded a refund. Air Canada argued that the chatbot was a separate legal entity responsible for its own actions, and therefore not the airline's responsibility, despite having been trained and deployed by Air Canada. The tribunal ruled in favor of the passenger, stating that Air Canada should have taken reasonable care to ensure the accuracy of its chatbot.

It seems that Air Canada deployed chatbots for customer service on its website in early 2023. The strategy, at least initially, was to prove them out as a way to deflect some of the service volume from its call center agents, with the ultimate goal of replacing some number of agents over time. The chatbot seems to have been a modern, intelligent, generative-AI-based bot, presumably trained on the airline's website content and policies. I don't have visibility into how it was trained, what LLM was used, or what might have been incorporated in the implementation to mitigate hallucinations.
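To make "mitigate hallucinations" concrete: one common approach, and this is a generic sketch rather than a claim about Air Canada's actual system, is to let the bot answer only from retrieved policy text and escalate to a human when the retrieved support is weak. The policy snippets, the word-overlap retriever, and the threshold below are all illustrative assumptions:

```python
import re

# Minimal "grounded answers only" sketch: the bot may answer only from
# retrieved policy text and hands off to a human when support is weak.
# The snippets, scoring, and threshold are illustrative assumptions,
# not Air Canada's implementation.

POLICY_SNIPPETS = [
    "Bereavement fares must be requested before travel. Refunds are not "
    "available for bereavement travel after booking.",
    "Refund requests for eligible fares must be submitted within 90 days "
    "of the ticket issue date.",
]

def tokens(text: str) -> set[str]:
    """Lowercase, punctuation-free word set for crude matching."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, snippets: list[str]) -> tuple[str, float]:
    """Return the best-matching snippet and a word-overlap score."""
    q_words = tokens(question)
    best, best_score = "", 0.0
    for snippet in snippets:
        score = len(q_words & tokens(snippet)) / max(len(q_words), 1)
        if score > best_score:
            best, best_score = snippet, score
    return best, best_score

def answer(question: str, min_support: float = 0.25) -> str:
    snippet, score = retrieve(question, POLICY_SNIPPETS)
    if score < min_support:
        # Don't guess: escalate instead of hallucinating a policy.
        return "I'm not certain about that. Let me connect you with an agent."
    # A production bot would pass the snippet to an LLM as grounding context;
    # here we simply quote the policy verbatim.
    return f"Per our policy: {snippet}"

print(answer("Can I get a bereavement refund after booking my ticket?"))
```

The design point isn't the toy retriever; it's the refusal path. A bot that says "let me connect you with an agent" when it lacks grounded support can't invent a refund policy that doesn't exist.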

So what can we learn from this incident?

  • An intelligent chatbot deployed in a customer service role has the same responsibility and accountability as any human CS agent. I realize that you can add a disclaimer to the chatbot to protect yourself legally (maybe), but should you? Simple answer: no.

  • A customer service chatbot must be at least as competent and accurate as your human agents. If not, don't deploy it. How do you know? Test before you go live (see the sketch after this list).

  • Build your CX policies so that everyone in the customer interaction chain understands that punishing a customer who is dealing with a personal issue or tragedy is a very poor response, particularly over a few hundred dollars. In this case, you've most likely lost a customer and, more importantly, you're now dealing with a PR problem that is likely costing you orders of magnitude more revenue than the potential refund of the fare difference.

  • Remember that mistreating a customer at any interaction point reflects on your entire CX strategy and will lead customers to defect, or at least create negative PR for you. If you won't treat customers fairly out of a commitment to providing the best service possible, at least do it to keep an incident from blowing up and tarnishing your brand.

  • Making inane claims about technology you deployed, as in trying to disavow responsibility for the accuracy of something on your own website, a property that your customers need to believe is trustworthy, is ludicrous and a shortcut to a PR black eye. Don't do it, and make sure your team understands that as well.

  • Building a successful CX strategy requires a customer-focused culture; that's the simple explanation for this "incident". If employees aren't empowered to solve customer problems with empathy and understanding, you will pay for it long term.
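On the "test before you go live" point above, here's a minimal, hypothetical sketch of a pre-launch accuracy gate. The golden set of policy questions, the ask_bot() stand-in, and the 95% human-agent baseline are all assumptions for illustration; the point is simply that deployment should be blocked until the bot matches your human benchmark on questions with known-correct answers:

```python
# Hypothetical pre-launch accuracy gate: run the bot against a "golden set"
# of policy questions with known-correct answers and refuse to ship if it
# scores below your measured human-agent baseline. The golden set, the
# ask_bot() placeholder, and the 95% baseline are all illustrative.

GOLDEN_SET = [
    ("Can I get a bereavement refund after booking?", "not available"),
    ("How long do I have to submit a refund request?", "90 days"),
]

HUMAN_AGENT_ACCURACY = 0.95  # assumed baseline from your own QA data

def ask_bot(question: str) -> str:
    # Placeholder: replace with a call to the chatbot under test.
    canned = {
        "Can I get a bereavement refund after booking?":
            "Refunds are not available for bereavement travel after booking.",
        "How long do I have to submit a refund request?":
            "Refund requests must be submitted within 90 days.",
    }
    return canned.get(question, "")

def accuracy(golden: list[tuple[str, str]]) -> float:
    """Fraction of golden questions whose expected answer appears in the reply."""
    correct = sum(
        1 for question, expected in golden
        if expected.lower() in ask_bot(question).lower()
    )
    return correct / len(golden)

if __name__ == "__main__":
    score = accuracy(GOLDEN_SET)
    if score < HUMAN_AGENT_ACCURACY:
        raise SystemExit(f"Bot accuracy {score:.0%} is below baseline; do not deploy.")
    print(f"Bot accuracy {score:.0%}: clear to go live.")
```

A real golden set would be much larger and sourced from actual call-center transcripts, but even a small gate like this would have caught a bot that invents a bereavement refund policy.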

Apparently Air Canada has disabled the chatbot for now. That's certainly a good idea, but from the outside looking in, I suspect they are blaming the technology or the vendor for their problems. You need to look at your culture, policies, and CX strategy, not the technology.

Michael Fauscette

Michael is an experienced high-tech leader, board chairman, software industry analyst, and podcast host. He is a thought leader and published author on emerging trends in business software, artificial intelligence (AI), generative AI, digital-first and customer experience strategies, and technology. As a senior market researcher and leader, Michael has deep experience in business software market research, starting new tech businesses, and go-to-market models in large and small software companies.

Currently, Michael is the Founder, CEO, and Chief Analyst at Arion Research, a global cloud advisory firm; an advisor to G2; Board Chairman at LocatorX; and a board member and fractional chief strategy officer for SpotLogic. Formerly the chief research officer at G2, he was responsible for helping software and services buyers use the crowdsourced insights, data, and community in the G2 marketplace. Prior to joining G2, Mr. Fauscette led IDC's worldwide enterprise software application research group for almost ten years. He also held executive roles with seven software vendors, including Autodesk, Inc. and PeopleSoft, Inc., and five technology startups.

Follow me @ www.twitter.com/mfauscette

www.linkedin.com/mfauscette

https://arionresearch.com