The Curious Case of AI and Legal Liability

AI is no longer a futuristic ideal from sci-fi movies. It's here, and it's affecting the way we do business. Have you considered the legal implications of the fact that the future is now?

By Andrew Taylor

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur South Africa, an international franchise of Entrepreneur Media.


While we couldn't hope to provide an exhaustive analysis of the circumstances in which the use of artificial intelligence could result in legal liability, it is the intention of this article to provoke some thought about the way in which we integrate AI into our lives and, significantly, into our businesses.

While South Africa lags behind the Western world in technology adoption and diffusion, the issue unquestionably warrants a good measure of foresight. This was made particularly relevant when an Uber autonomous vehicle killed a pedestrian in the US earlier this year.

Nowhere are the concerns around the intersection of artificial intelligence and legal liability more applicable than in cases like these. It bears mentioning, however, that we have long been living with software systems that, to a greater or lesser degree, have artificially subsumed the role of a human in a given process.

Consider the mid-1980s case of Therac-25, a Canadian-designed radiation therapy machine that massively overdosed six patients, several of them fatally.

Moving forward

Conversely, the use of modern artificial intelligence and software processes to assist humans in their endeavours has yielded untold gains in efficiency and efficacy across innumerable areas of application.

In fact, uncertainty over liability in our overly litigious society is likely to have hindered the development and commercialisation of many potentially revolutionary AI solutions, for fear of the liability that could ensue from their use. Little doubt, then, that sci-fi has not done much to aid the cause of the AI evangelists. How, then, do we attribute liability to AI?

The problem with conventional criminal and civil liability is that it relies, in large measure, on the application of objective standards. Criminal liability in South Africa specifically requires a voluntary act (or omission) by a human being. On that standard, criminal liability cannot ensue for an AI system.

Naturally, there are other forms of liability, but this, at its core, calls for a re-examination of what constitutes conduct for the purposes of criminal liability. And that does not even begin to touch on the hurdles encountered in establishing "fault" on the part of the AI.

Governing AI

The answer lies in rationalising the decision-making process of the particular AI application. If we can tease out how the AI arrived at its decision, rather than taking a black-box approach that examines only the result, then we are making strides towards ascertaining whether liability should arise in a given circumstance.

What is clear is that we need a framework for the promulgation of appropriate laws that would govern the proverbial Skynet and determine when liability should arise.

The European Union has made some progress in this regard, having called for an EU-wide legislative framework that will govern the ethical development and deployment of AI, and the establishment of liability for actors, including robots.

It may sound far removed from your day-to-day business, but this may impact your business sooner than you think. From chatbots that enter into contracts, insurance AI that quantifies your risk profile and premium, and legal AI that diagnoses your legal cases using historical case law, to AI that helps judges avoid inherent biases and mete out appropriate sentences, the future is very much here.

From the leading edge

South Africa has an opportunity to lead the regulation of this new frontier and prevent the all too familiar lag of legislation in the dust of technology. It requires a regulatory approach where various formulations of product liability, design and programming liability can be negotiated by informed stakeholders to cater for these new forms of technology and the situations where they go awry, and to more accurately reflect the ethics and concerns of our society.

It is undoubtedly a tricky and murky road, where no system is error-free and the wrongfulness of AI is a hard sell, but it is nevertheless one that must be explored. In the interim, companies need to ensure that sound corporate governance is practised in all decisions involving AI, that identified risks are recorded, and that execution and implementation are carefully managed.

Andrew Taylor

Managing Partner: Henley Estates

Andrew Taylor is a managing partner at Henley Estates, part of the Henley & Partners Group, a global leader in citizenship by investment programmes, with offices in South Africa.
