No, the chatbot didn't give misleading information. It acted on the company's behalf and gave truthful information that the company didn't agree with. Too flippin' bad, companies. You deploy robots to fill the jobs of humans, then you deal with the consequences when you lose money. I'm glad you're getting screwed by your own greed; sadly, it's not enough.
Imagine when they find out it's actually shit and they need to hire the people back, and those people ask for a good salary. They'll turn around again, asking their governments for subsidies or temporary foreign workers, saying no one wants to work anymore.
I'd love it if there were some sort of salary baseline that companies were required to abide by before asking for staffing handouts. "We've tried nothing and we're all out of ideas!"
As usual, corporations want all of the PROFIT that comes with automation and laying off the human beings who made them money for years, but they also fight to take none of the RESPONSIBILITY for the enshittification that occurs as a result.
No different from creating climate-change-contributing "externalities," aka polluting the commons and walking away, because lol, you fucking suckers, not their problem.
What I find most stupid about all of this is that Air Canada could just have admitted a mistake and payed the refund of ~450 USD, which is basically nothing to them. It would have waisted no one's time, made for good customer service, and earned positive feedback. Then quietly fix the AI in the background and move on. Instead they now spend waaayy more money on legal fees, expensive lawyers, and employee salaries, have a disabled AI, customer backlash, and bad press, all costing them many hundreds of thousands of dollars. So stupid.
Paid. Something something "payed" is only for nautical rope or something.
waisted
Wasted. Something something "waisted" is only for dressmaking or something.
I can't remember the details of what that bot says, but it is something along these lines. I am not a bot, and this action was performed manually. Cheers!
Thanks. I do know, though, but I'm slightly dyslexic and English is not my first language, so it's hard for me to catch my own mistakes, while I can easily see them when others make them. Also, autocorrect is a blessing and a curse for me sometimes.
You'd think they'd have tried a better case, then. They lost in the court of public opinion as soon as it was about bereavement, and their argument that the chatbot on their own site is a separate legal entity they aren't responsible for is pants-on-head stupid.
In a way, we should be grateful they bungled it and are being held liable; other companies may be held to the same standard in the future.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Customers are forced to. Companies would rather give shitty and inaccurate information with the veneer of helping someone rather than pay a human to actually help someone.
They will continue using chatbots as long as they think it won't cost them more in lost customers or this sort of billing dispute than it saves them in not paying people. What was this, $600? That's fuck-all compared to a salary. $600 could happen a few hundred times a year and they'd still be profiting after firing some people.
It's off for now, but it will return after the lawyers have had a go at making the company not liable for the chatbot's errors.
To employ someone at $10/hr, their actual cost is probably closer to $15/hr when you factor in them coming in to work at the office and all the costs associated with that. At $15/hr, it takes 40 hours to cost the company $600. That's one week of work for one employee. This means they could have a $600 fuck-up every week and still break even over hiring a person. And we're talking about just one person; chat support is normally contracted out as entire teams and departments.
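For what it's worth, here's that back-of-envelope math as a quick Python sketch. The wage, overhead multiplier, and refund size are just the assumptions from the comment above, not real payroll or legal figures:

```python
# Break-even sketch: how many chatbot screw-ups per week equal the cost
# of one human support employee? All numbers are assumed, per the comment.
wage = 10.0             # advertised hourly wage, $/hr (assumed)
overhead = 1.5          # multiplier for office space, benefits, etc. (assumed)
hours_per_week = 40

true_cost = wage * overhead                 # ~$15/hr fully loaded
weekly_cost = true_cost * hours_per_week    # ~$600 per employee-week

refund = 600.0          # rough cost of one chatbot mistake, like this case

print(f"Fully loaded cost: ${true_cost:.2f}/hr")
print(f"One employee-week: ${weekly_cost:.2f}")
print(f"Break-even: {weekly_cost / refund:.1f} screw-ups/week per replaced employee")
```

Under those assumptions, one $600 refund per week per replaced employee is the break-even point, which is the commenter's conclusion.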
I could see this simply resulting in every chatbot having a disclaimer that it might be spitting straight bullshit and you should not use it for legal advice.
At this point, I do consider this a positive outcome, too, because it's not always made obvious whether you're talking with something intelligent or just a text generator.
But yeah, I would still prefer it if companies simply had to provide intelligent support. This race to the bottom isn't helping humanity.
I can only speak for the UK, but decisions from the small claims court, the lowest civil court here, are not binding on any other court (including other small claims courts). They are, however, considered "persuasive," so a judge should be aware of them and take them into consideration.
I once had a charge for Air Canada on my credit card. I immediately called in to the fraud number and said my number had been stolen. They asked me how I knew. I said I would never in my life fly on Air Canada unless there was no choice. They laughed and canceled the charge.
Dual_Sport_Dork's Ironclad Law Of AI Productivity: The amount of effort you must expend on ensuring that the unsupervised chatbot is always producing accurate results is precisely the same amount of effort you would expend doing the same work yourself.
I wonder how much time and space there will be to "play" between the first case in the US that would uphold this standard legally and when companies lock down AI from edge cases. I've been breaking generative LLMs since they hit public accessibility. I'm a blackhat "prompt engineer" (I fucking hate that term).
On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto.
In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked.
Experts told the Vancouver Sun that Moffatt's case appeared to be the first time a Canadian company tried to argue that it wasn't liable for information provided by its chatbot.
Last March, Air Canada's chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI "experiment."
“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.
It was worth it, Crocker said, because "the airline believes investing in automation and machine learning technology will lower its expenses" and "fundamentally" create "a better customer experience."
The original article contains 906 words, the summary contains 176 words. Saved 81%. I'm a bot and I'm open source!
This is, uhhh, not good. Appropriate (or maybe ironic, if you're a Canadian singer-songwriter and You Can't Do That on Television alum) for an article about a bad chatbot.