Air Canada just found out how much financial trouble AI chatbots can cause. Other companies should take note.

Many companies are champing at the bit to lay off their human employees and replace them with artificial intelligence.

AI will certainly have its uses and will make a lot of these companies a lot of money. It's just that they haven't quite figured out what those uses are yet, because most do not really understand what AI is. And if they don't figure it out soon, they stand to lose a lot of money along the way.

One use AI is definitely not good for is customer service.

It can cause real harm to customers seeking information, as when the National Eating Disorders Association replaced its hotline workers with a chatbot that told anorexic and bulimic callers they needed to lose weight.

If real people are getting hurt by chatbots, it is inevitable that courts will start holding companies and organizations liable for that harm.

Take, for example, the case of Jake Moffatt.

Moffatt's grandmother passed away, and he needed round-trip airfare from Vancouver to Toronto. Most airlines offer some sort of bereavement discount for people who have lost a loved one.

On Air Canada, the $1,600 round-trip ticket would only cost Moffatt $380.

But Moffatt was not sure how to get the discount and did not have a lot of time to figure it out, so he consulted the airline's online chatbot for some quick answers.

The chatbot informed Moffatt that he could buy the full-price tickets now and then get the difference reimbursed later. Moffatt thought that was a reasonable solution, and that's what he did.

When he returned from his grandmother's funeral, he submitted his documentation for the refund and was promptly told that Air Canada did not have a reimbursement policy for bereavement flights. The chatbot had invented the policy out of whole cloth because, as everyone should know by now, that's what chatbots do.

They mimic creativity, but they have no moral obligation to tell the truth — at all.

The way generative AI works is by looking at thousands, perhaps millions, of similar texts or images from across the internet and predicting what letter, word, or pixel comes next.

What's missing is any sense of the dire consequences of getting that letter, word, or pixel wrong. AI is still just a machine.
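To see how thin that prediction machinery really is, here's a minimal sketch of next-word prediction using a toy bigram model. The tiny corpus (including its Air Canada flavor) is invented purely for illustration, and real models use neural networks trained on billions of documents, but the core objective is the same: pick the statistically likely next word.

```python
# A toy next-word predictor. The corpus below is made up for
# illustration; real models are vastly larger, but the objective
# is the same: predict the most likely continuation, with no
# notion of whether the resulting sentence is true.
from collections import Counter, defaultdict

corpus = (
    "air canada offers a bereavement discount . "
    "air canada offers a full refund later . "
    "air canada offers a reimbursement policy ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("canada"))  # -> "offers"
print(predict_next("offers"))  # -> "a"
```

The model emits whichever continuation was most frequent in its training text, true or not, which is exactly how a chatbot can "invent" a refund policy that sounds plausible.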

But someone has to pay the consequences for the machine's mistake, which is why a Canadian tribunal found Air Canada liable for the information its chatbot relayed to Moffatt.

While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.

A similar U.S. class action lawsuit winding its way through the courts claims that UnitedHealth used an AI model to deny elderly patients' claims for medically necessary care. From the complaint:

"Relying on the nH Predict AI Model, Defendants purport to predict how much care an elderly patient 'should' require, but overrides real doctors' determinations as to the amount of care a patient in fact requires to recover. As such, Defendants make coverage determinations not based on individual patient's needs, but based on the outputs of the nH Predict AI Model, resulting in the inappropriate denial of necessary care prescribed by the patients' doctors. Defendants' implementation of the nH Predict AI Model resulted in a significant increase in the number of post-acute care coverage denials."

Of course, the trouble with using AI to determine the length and need of care is that there's nothing requiring it to base those decisions on anything other than a prediction of what the next letter or word in the sentence will be. It might sound official, like a real doctor wrote it, but it's like a game on Whose Line Is It Anyway?: everything is made up and the points don't matter.

Except the points do matter, and if more companies start losing money for the crazy things their chatbots make up and tell their customers/clients, those companies might decide AI isn't worth the cost.

And I think that's a mistake.

The creativity of AI is amazing, and it's doing things that humans haven't dreamed of doing.

Designing new buildings, for example.

Humans are creatures of habit. When we find a building structure that works, we tend to stick with it. Just drive down the street of a new development sometime and notice how every house looks just the same. AI has been doing all kinds of crazy things with architecture, designing buildings we would never dream of.

However, I wouldn't trust it to do any of the engineering on those buildings. Imagine if it started making up the building code as it went. We'd have a better chance with a DEI engineer in charge than a chatbot.

Yet there's a place for AI: not as a human replacement, but as a new tool to augment and amplify human creativity.

And until we can teach the machine to care about consequences, most other uses are going to have to be weighed against the likelihood of harm.

Disclaimer: The opinions expressed in this article are those of the author and do not necessarily reflect the opinions of Not the Bee or any of its affiliates.

