Why 'Fail Fast' Is a Disaster When It Comes to Artificial Intelligence

For typical products, going to market quickly and seeing what happens is fine. The implications of AI merit a much more considered approach.

By Matthew Baker

Opinions expressed by Entrepreneur contributors are their own.


"Fail fast" is a well-known phrase in the startup scene. The spirit of failing fast is getting to market with a minimum viable product and then rapidly iterating toward success. Failing fast acknowledges that entrepreneurs are unlikely to design a successful end-state solution before testing it with real customers and real consequences. This is the "ready, fire, aim" approach. Or, if the blowback is big enough, it's the "ready, fire, pivot" approach.

Consider this quote from Reid Hoffman, co-founder of LinkedIn: "If you're not embarrassed by the first version of your product, you've launched too late."


The opposite of failing fast is a "waterfall" approach to software development, where a significant amount of time is invested upfront -- requirements analysis, design and scenario planning -- before the software is ever tested with real customers.

When it comes to the emerging potential of artificial intelligence, I believe failing fast is a recipe for disaster.

Artificial intelligence is here to stay.

Many different types of artificially intelligent software surround us. Most AI has minimal authority today. For instance, Amazon's software recommends things you might like to buy, but it doesn't actually purchase those things on your behalf -- yet. Spotify's software decides which songs to put in a playlist for you, but if a song doesn't suit your tastes, the consequences are benign. Google's software decides which websites are most relevant for your search terms but doesn't decide which website you will visit. In all of these cases, failing fast is okay. Usage leads to more data, which leads to improvements in the algorithms.

But intelligent software is beginning to make independent decisions that carry much higher risk. The risk of failure is too great to take lightly, because the consequences can be irreversible or ubiquitous.

We wouldn't want NASA to fail fast. A single Space Shuttle launch costs $450 million and places human lives in jeopardy.

The risks of artificial intelligence are increasing.

Imagine this: What if we exposed more than 100 million people to intelligent software that decided which news they read, and we later discovered that the news may have been misleading or even fake, and that it influenced the election of the President of the United States? Who would be held responsible?


It sounds far-fetched, but media reports indicate Russian influence reached 126 million people through Facebook alone. The stakes are getting higher, and we don't know whom to hold accountable. I am fearful the companies spearheading advancements in AI aren't cognizant of the responsibility. Failing fast shouldn't be an acceptable excuse for unintended outcomes.

If you're not convinced, imagine these scenarios as by-products of a fail-fast mindset:

  1. What if your entire retirement savings evaporated overnight due to artificial intelligence? Here's how it could happen. In the near future, millions of Americans will use intelligent software to invest billions of dollars in retirement savings. The software will decide where to invest the money. When the market experiences a massive correction, as it does occasionally, the software will need to react quickly to redistribute your money. That could lead to an investment that bottoms out in minutes, and your funds could disappear. Is anyone responsible?
  2. What if your friend were killed in an automobile accident due to artificial intelligence? Here's how it could happen. In the near future, millions of Americans will purchase driverless automobiles controlled by intelligent software. The software will decide the fate of many Americans. Will the artificial intelligence choose to hit a pedestrian who accidentally steps into the street, or steer the vehicle off the road? These are split-second decisions with real-world consequences. If the decision is fatal, is anyone responsible?
  3. What if your daughter or son suffered from depression due to artificial intelligence? Here's how it could happen. In the near future, millions of kids will have an artificial best friend. It will be sort of like an invisible friend. It will be a companion named Siri or Alexa or something else that talks and behaves like a confidant. We'll introduce this friend to our children because it will be friendly, smart and caring. It might even replace a babysitter. However, if your daughter or son spends all their discretionary time with this artificial friend and years later can't sustain meaningful relationships in the real world, is anyone responsible?

In some cases, the consequences can't be undone.

A responsible approach to AI.

The counter-argument is that humans already cause these tragedies. Humans spread fake news. Humans lose money in the stock market. Humans kill one another with automobiles. Humans get depressed.


The difference is that human failures are individual cases. The risk with AI that replaces or competes with human intelligence is that it can be applied at scale, simultaneously. The scope and reach of AI are both massive and instantaneous, which fundamentally introduces higher risk. While one human driver who makes an error is truly unfortunate, one piece of software that makes the same error for millions of drivers should be unacceptable.

A more responsible approach to AI is needed. Our mindset should shift toward risk prevention, security planning and simulation testing. While this defies the modern ethos of the tech industry, we have a responsibility to prevent unlikely but unwanted outcomes before they occur. The good news is that with the right mindset, we can prevent the scenarios above from coming true.

Matthew Baker

VP of Strategy at FreshBooks

Matt Baker is a contributing writer who covers finances and growth for small businesses. His industry experience includes VP Strategy at FreshBooks, engagement manager at McKinsey & Company, and senior strategist at Google, Inc. He also wrote a children's book.
