
When Cloud AI Lands You In Court

It’s clear that AI, including generative AI, will be tested in the courts. Cloud and AI architects must practice defensive design and governance to stay out of trouble.

In a recent small claims court ruling against Air Canada, the airline lost because its AI-powered chatbot provided incorrect information about bereavement fares. The chatbot suggested that the passenger could retroactively apply for bereavement fares, even though the airline's actual bereavement policy contradicted that information. Whoops! The chatbot's response did include a link to the policy; however, the court found that the airline failed to explain why the passenger should not have trusted the information provided by the company's own chatbot.

The case has drawn attention to the intersection of AI and legal liability and is a compelling illustration of the potential legal and financial implications of AI misinformation and bias.

The tip of the iceberg

I’ve found that humans don’t much like AI—certainly when it comes up with an answer they disagree with. This can be as simple as the Air Canada case, which was decided in small claims court, or as serious as a systemic bias in an AI model that denies benefits to specific races.

In the Air Canada case, the tribunal called it a case of “negligent misrepresentation,” meaning that the airline had failed to take reasonable care to ensure the accuracy of its chatbot. The ruling has significant implications, raising questions about company liability for the performance of AI-powered systems, which, in case you live under a rock, are coming fast and furious.

Also, this incident highlights the vulnerability of AI tools to inaccuracies. This is most often caused by the ingestion of training data that has erroneous or biased information. This can lead to adverse outcomes for customers, who are pretty good at spotting these issues and letting the company know.

The case highlights the need for companies to reconsider the extent of AI’s capabilities and their potential legal and financial exposure to AI-generated misinformation, which leads to bad decisions and outcomes.

Review AI system design like you’re testifying in court

Why? Because the likelihood is that you will be.

I tell this to my students because I truly believe that many of the design and architecture calls that go into building and deploying a generative AI system will someday be called into question, either in a court of law or by others who are attempting to figure out if something is wrong with the way the AI system is working.

I regularly make sure that my butt is covered with tracking and log testing data, including detection of bias and any hallucinations that are likely to occur. Also, is there an AI ethics specialist on the team to ask the right questions at the right time and oversee the testing for bias and other issues that could get you dragged into court?
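One way to keep that paper trail is to log every chatbot turn with enough context to reconstruct it later. This is a minimal sketch, not a prescription; the function name, fields, and logger setup are illustrative assumptions:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: records every chatbot response with enough
# context (model version, cited policies) to reconstruct it in a dispute.
audit_log = logging.getLogger("chatbot_audit")

def log_response(session_id: str, prompt: str, response: str,
                 model_version: str, policy_links: list[str]) -> dict:
    """Build and emit one audit record for a single chatbot turn."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "prompt": prompt,
        "response": response,
        "model_version": model_version,   # which model produced this answer
        "policy_links": policy_links,     # source policies cited in the reply
    }
    audit_log.info(json.dumps(record))    # append-only, queryable later
    return record
```

The point is not the specific fields but that each answer is tied to the model version and source documents that produced it, so you can show what the system said and why.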

Are only genAI systems subject to legal scrutiny? No, not really. We’ve dealt with software liability for years; this is no different. What is different is the lack of transparency. AI systems don’t work via explicit code; they work via knowledge models created from a ton of data. By finding patterns in this data, they can come up with humanlike answers and continue learning over time.

This process allows the AI system to become more innovative, which is good. But it can also introduce bias and bad decisions based on ingesting lousy training data. It’s like a system that reprograms itself each day and comes up with different approaches and answers based on that reprogramming. Sometimes it works well and adds a tremendous amount of value. Sometimes it comes up with the wrong answer, as it did for Air Canada.

How to protect yourself and your organization

First off, you need to practice defensive design. Document each step in the design and architecture process, including why technologies and platforms were selected.

Also, it’s best to document the testing, including auditing for bias and mistakes. It’s not a matter of if you’ll find them; they are always there. What matters is your ability to remove them from the knowledge models or large language models and to document that process, including any retesting that needs to occur.
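A bias audit can start very simply: compare the rate of favorable outcomes across groups in your test data and flag disparities. Below is a minimal sketch under stated assumptions; the function name and the four-fifths-style threshold are illustrative, not a legal standard:

```python
# Minimal demographic-parity check: compare favorable-outcome rates across
# groups and flag any group falling below a threshold fraction of the best
# rate. The 0.8 default threshold is an illustrative assumption.

def audit_outcome_rates(records, threshold=0.8):
    """records: iterable of (group, approved: bool) pairs.
    Returns {group: rate} for groups whose approval rate is disparate."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values(), default=0)
    # Flag groups whose rate falls below threshold * best rate
    return {g: r for g, r in rates.items() if best and r < threshold * best}
```

Running a check like this on each retrained model, and keeping the results, is exactly the kind of documented retesting that holds up when the design is questioned later.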

Of course, and most importantly, you need to consider the purpose of the AI system. What is it supposed to do? What issues need to be considered? How will it evolve in the future?

It’s worth raising the issue of whether you should use AI in the first place. There are a lot of complexities to leveraging AI on the cloud or on-premises, including more expense and risk. Companies often get in trouble because they use AI for the wrong use cases and should have instead gone with more conventional technology.

All of this won’t keep you out of court. But if you do end up there, it will help your case.

By: David Linthicum
Originally published at: InfoWorld