Elon Musk and Mark Zuckerberg Are Arguing About AI -- But They're Both Missing the Point

The dangerous aspect of AI will always come from people and their use of it, not from the technology itself.

By Artur Kiulian

Opinions expressed by Entrepreneur contributors are their own.


In Silicon Valley this week, a debate about the potential dangers (or lack thereof) of artificial intelligence has flared up between two tech billionaires.

Facebook CEO Mark Zuckerberg thinks that AI is going to "make our lives better in the future," while SpaceX CEO Elon Musk believes that AI is a "fundamental risk to the existence of human civilization."

Who's right?


They're both right, but they're also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. As with advances in nuclear fusion, almost any technological development can be weaponized and used to cause damage if it falls into the wrong hands. The regulation of machine intelligence advancements will play a central role in whether Musk's doomsday prediction becomes a reality.

It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value adds for its cars. Musk himself even believes that one day it will be safer to populate roads with AI drivers rather than human ones, though publicly he hopes that society will not ban human drivers in the future in an effort to save us from human error.

What Musk is really pushing for by being wary of AI technology is a more advanced hypothetical framework that we as a society can use to build awareness of the threats that AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far from how things work today. The AGI we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that we use and iterate on within the industry now. In Zuckerberg's view, the doomsday conversation that Musk has sparked is a greatly exaggerated projection of what the future of our technological advancement will look like.


While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing artificial intelligence's potentially harmful impacts on society. The White House recently released a pair of reports on the future of artificial intelligence and on its economic effects. These reports focus on the future of work, job markets and research into the increasing inequality that machine intelligence may bring.

There is also an attempt to tackle the very important issue of "explainability" -- understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (the Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That's why in this field, perhaps more than in any other, it's important to be mindful of the results AI presents.

Explainable AI (XAI), a program launched by DARPA, aims to create a suite of machine learning techniques that produce more explainable results for human operators while still maintaining a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.


The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes software developers have conscious or unconscious biases that eventually get built into an algorithm -- the way Nikon cameras became internet famous for detecting "someone blinking" when pointed at the face of an Asian person, or HP computers were proclaimed racist for not detecting black faces on the camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, "AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias."

Even in positive use cases, data bias can cause serious harm to society. Take China's recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but many bad scenarios become possible if there is an existing bias in the training data for those algorithms.

It is important to note that most of these risks already exist in our lives in some form or another, as when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise a lot of uncomfortable ethical questions, like: who is responsible for a wrong prescription by an automated diagnosing AI? The doctor? The developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.

Artur Kiulian

Partner at Colab

Artur Kiulian, M.S.A.I., is a partner at Colab, a Los Angeles-based venture studio that helps startups build technology products using the benefits of machine learning. An expert in artificial intelligence, Kiulian is the author of Robot Is the Boss: How to Do Business With Artificial Intelligence.
