Why Are Some Bots Racist? Look at the Humans Who Taught Them.

Here are a few things we can do about human biases in machine learning.

By Jordi Torras

Opinions expressed by Entrepreneur contributors are their own.


We trusted robots, with their scientific algorithms, to deliver neutral, fair, impartial answers. Because machines are supposed to be free of human biases and the filters of past experience, we assumed their output would be as black and white as 2 + 2 = 4. Instead, we have found that the data scientists who create these algorithms carry unconscious biases of their own, which subtly filter into the algorithms they build. More revealing still, even when no bias is present during the development phase, machines learn from the discriminatory undertones they perceive in our society.

Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative

We can see this bias at play when we use Google to search for images of CEOs: significantly more white males appear than women or minorities, and search results for higher-paying jobs appear more often to men than to women. An algorithm designed to predict the likelihood that a person will commit a future crime became unfairly racist. Researchers at Boston University and Microsoft New England found that machines associated the word "programmer" with "man" rather than "woman," and that the word most similar to "woman" was "homemaker."
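The Boston University and Microsoft finding comes from analogy tests on word embeddings, where words are represented as vectors and similarity is measured by the angle between them. A minimal sketch of the idea is below; the vectors here are invented for illustration, since real systems learn embeddings from large text corpora.

```python
# Toy demonstration of gender bias surfacing in word embeddings.
# These vectors are fabricated for illustration; real embeddings
# (e.g. word2vec) are learned from billions of words of text.
import math

embeddings = {
    "man":        [0.9, 0.1, 0.3],
    "woman":      [0.1, 0.9, 0.3],
    "programmer": [0.8, 0.2, 0.7],
    "homemaker":  [0.2, 0.8, 0.6],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The classic analogy test: man is to programmer as woman is to ...?
# We search for the word closest to (programmer - man + woman).
target = [p - m + w for p, m, w in zip(embeddings["programmer"],
                                       embeddings["man"],
                                       embeddings["woman"])]
candidates = {w: cosine(target, v) for w, v in embeddings.items()
              if w not in ("man", "woman", "programmer")}
print(max(candidates, key=candidates.get))  # the embedding's biased answer
```

Because the toy vectors encode the same skewed associations the researchers observed, the analogy resolves to "homemaker" -- the bias in the training data becomes the model's answer.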

AI's predictive power creates solutions.

Artificial intelligence (AI) is arriving at a time of great necessity. Its importance is reflected in research conducted by McKinsey, which found that total annual external investment in AI was between $8 billion and $12 billion in 2016. With its increasing ability to collect vast amounts of data, AI quickly turns this information into actionable insights, providing critical strategic advantages. Chatbots can find answers to customer queries much faster than humans can, diagnose diseases and make accurate predictions that drive product innovation. Predictive data can be used to detect fraud, match the unemployed with jobs that use their skills, and solve complex traffic problems.

Related: A Humanoid Robot Called Sophia Mocked Elon Musk After Being Asked About the Dangers of AI

Are chatbots reflecting a biased world?

The most straightforward solution, it seems, would be to train programmers so that they do not unintentionally write code, or use data, that is biased. But neutrality can be difficult to discern during the development phase, as Beauty.AI learned when its first machine-judged beauty pageant produced winners who all had fair skin. Developers realized after the fact that the machine had not been taught to recognize people with dark skin.

Enlightened developers are not enough.

The creation of fair and balanced algorithms is critical and necessary, yet it is only part of the solution. Machines can harbor hidden biases whether or not those biases originated with the designer, because machines continuously learn from outside data to do their tasks better. We are designing machines to think and learn. The extreme example of this continuous improvement is Elon Musk's scenario in which a robot trained to eliminate spam might eventually learn to wipe out the humans who create spam. While we are far from that extreme, it is important to understand that the algorithms behind AI depend on deep learning, which uses neural networks. However pure the original code, the machine remains vulnerable to replicating whatever bias it sees as it engages with the world.

Related: 5 Reasons Machine Learning Is the Future of Marketing

This type of learning was evident after Microsoft's Tay chatbot imitated what it read on Twitter, however vicious the behavior. Tay lived less than 24 hours before it was influenced by racist tweets. As it learned and engaged through conversation and dialogue, it very quickly began spinning racist tweets of its own.

AI can perpetuate and reinforce a bias that already exists. For example, if an organization has traditionally hired male CEOs, a bot trained to find future CEOs would look to the past for likely candidates, based on real data indicating that previous CEOs were male. The machine would then treat being male as a predictor of who is qualified for the job.
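A minimal sketch makes the mechanism concrete: a model scored on historical hiring data simply reproduces the skew in that data. The records and the scoring rule below are fabricated for illustration; a real pipeline would use far richer features, but the failure mode is the same.

```python
# Sketch of how a model trained on historical hiring data reproduces
# the bias in that data. All records here are invented for illustration.
from collections import Counter

# Historical CEO hires: overwhelmingly male, mirroring the example above.
past_hires = [
    {"gender": "male",   "hired": True},
    {"gender": "male",   "hired": True},
    {"gender": "male",   "hired": True},
    {"gender": "female", "hired": False},
]

# A naive "model": score candidates by how often their gender appears
# among past successful hires -- no notion of actual qualifications.
hire_rate = Counter(r["gender"] for r in past_hires if r["hired"])

def score(candidate_gender):
    return hire_rate[candidate_gender]

print(score("male"), score("female"))  # 3 0 -- past bias becomes the prediction
```

Nothing in the code mentions qualifications; the historical imbalance alone drives every future recommendation, which is exactly how a past bias hardens into policy.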

Algorithms that only return results "similar to" previous data create a bubble of their own, as we have experienced in news feeds that are free of conflicting viewpoints. Without opposing views, we lack the insights needed for significant decisions and the friction that fosters creativity and innovation.
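The bubble effect can be sketched in a few lines: a "more like this" recommender that only surfaces items matching topics already in the reading history will, by construction, never show an opposing viewpoint. The catalog and history below are invented for illustration.

```python
# Sketch of a "more like this" recommender collapsing into a bubble.
# Topics and reading history are fabricated for illustration.
history = ["tech", "tech", "tech"]                    # everything read so far
catalog = ["tech", "tech", "politics", "art", "tech"]  # what's available

def recommend(catalog, history):
    # Only recommend items whose topic already appears in the history;
    # unfamiliar or opposing viewpoints never make it through the filter.
    seen_topics = set(history)
    return [item for item in catalog if item in seen_topics]

print(recommend(catalog, history))  # ['tech', 'tech', 'tech']
```

Each recommendation feeds back into the history, so the filter only narrows over time -- the self-reinforcing loop the paragraph above describes.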

Related: 10 Artificial Intelligence Trends to Watch in 2018

Value alignment is a teachable skill.

Mark O. Riedl, an associate professor at the School of Interactive Computing in Georgia Tech's College of Computing, proposes an attractive solution, Quixote, that involves immersing bots in literature. In his paper, which he presented at the AAAI Conference on Artificial Intelligence, he explains that strong values can be learned, and "that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate." Stories, he explains, are encoded with implicit and explicit sociocultural knowledge that could build a moral foundation in a bot and teach it how to solve a problem without harming humans.

Literature can be ambiguous and abstruse, and some of the greatest books of all time were once deemed nefarious by school boards that called for their ban. The more significant question is how we, as a society, will reach consensus on what counts as socially responsible reading material for our bots.

AI is not a replacement for people.

The goal has never been for AI to replace humans, but rather to support, amplify and enlighten. At a minimum, AI developers can take a more active role in ensuring that biases are not inadvertently created. Human oversight is still needed at all levels of development, along with a healthy mix of opposing views to encourage diversity. This must also be supported by an equal opportunity for everyone to develop these systems.

Jordi Torras

CEO and Founder of Inbenta

Jordi Torras founded Inbenta in 2005 to help clients improve online relationships with their customers using revolutionary technologies like artificial intelligence and natural language processing.
