May 7, 2018

Resolving AI Bias: Creators Can’t Blame Machines

The following article, by Shelbi Gomez, senior communications manager for Workfront, is inspired by a session at the Dreamforce 2017 conference. Enjoy!

It is no secret that the way we work has changed drastically over the past two and a half centuries. We have successfully gone through three industrial revolutions (steam, 1784; electricity, 1870; and computing, 1969) and are now diving headfirst into the fourth: intelligence, artificial intelligence (AI), that is.

To find out how email, meetings, and automation are shaping the future of work, download our 2017-2018 State of Enterprise Work Report.

So what is AI?

For the sake of conversation, AI is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with AI are designed for include:

  • Speech recognition
  • Learning
  • Planning
  • Problem solving

What is bias and how does it play into AI bias?

The Oxford Dictionary defines bias as "prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair."

This week at Dreamforce 17, Kathy Baxter, a user research architect at Salesforce, explained that bias in AI is different: statistical bias is introduced when error enters your data or algorithms, and it can creep into your dataset through confirmation bias, association bias, or automation bias.

For example, if the dataset is not representative of the entire customer population, that is going to skew the recommendations that are made.
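To make that concrete, here is a minimal sketch of how a non-representative sample can flip a recommendation. The customer segments, product names, and the naive majority-vote "model" are all hypothetical, invented for illustration; the point is only that a sample drawn from one segment can disagree with the population as a whole.

```python
import random

random.seed(0)

# Hypothetical customer population: two segments with different preferences.
# The "enterprise" segment (300 customers) mostly prefers product B;
# the "smb" segment (700 customers) mostly prefers product A.
population = (
    [("enterprise", "B")] * 240 + [("enterprise", "A")] * 60
    + [("smb", "A")] * 560 + [("smb", "B")] * 140
)

def top_recommendation(sample):
    """Naive 'model': recommend whichever product the sample prefers most."""
    counts = {}
    for _, product in sample:
        counts[product] = counts.get(product, 0) + 1
    return max(counts, key=counts.get)

# Representative sample: drawn from the whole population.
representative = random.sample(population, 200)

# Biased sample: collected only from the enterprise segment.
enterprise_only = [c for c in population if c[0] == "enterprise"]
biased = random.sample(enterprise_only, 200)

print(top_recommendation(representative))  # reflects the full population: "A"
print(top_recommendation(biased))          # skewed by one segment: "B"
```

The overall population prefers product A, but a dataset collected only from enterprise customers would lead the "model" to recommend product B to everyone.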

4 tips to avoid AI bias:

  1. Ask questions: Where does the dataset come from? Who collected it? What math underlies the sample size?
  2. Don’t be afraid to ask for help: KG Charles-Harris, Quarrio Corp CEO, says to speak to both data scientists and non-technical professionals, such as sociologists or psychologists, who can help you identify the pointed questions you need to ask at the onset of your data collection.
  3. Create an ethical culture: Cultivate an ethical mindset and build diverse teams. Not only does a diverse team make you more creative and help the company achieve better financial results, it also helps you build a holistic picture, ask the right questions, and spot harmful outcomes in your data.
  4. Remember: AI is everyone’s responsibility: Liberty Madison, dataAF founder, and Ilit Raz, Joonko CEO, say that stakeholders can greatly contribute to the discussion simply by raising questions. Uncovering AI bias isn’t just the data analysts’ responsibility; it belongs to everyone, from the company’s C-suite all the way down to entry-level personnel. Asking questions can lead data scientists to the right problems and help them discover AI biases.

Remember, AI is beneficial for business. AI will never be able to replace the human touch, but it will allow humans to work more efficiently and, according to our 2017-2018 State of Enterprise Work Report, to think about work in new and innovative ways.

See "5 Work Automation Prophecies from 3 Futurists" to learn more about how AI will influence our work.
