Seeking Responsible Artificial Intelligence

Manoj Saxena discussed the future and ethics of artificial intelligence at the GWSB Robert P. Maxon Lecture Series

April 11, 2019

GWSB Dean Anuj Mehrotra (l) interviews Manoj Saxena, who was the first general manager of IBM Watson, at the George Talks Business/Maxon Lecture. (Photo: Abby Greenawalt)

By B.L. Wilson

Manoj Saxena, executive chair of CognitiveScale, a company that helps health care and financial services firms, among others, build artificial intelligence systems, said that until two years ago his view was that technology was good.

“Unfortunately, progress will come at expensive mistakes that will almost pull us apart as a society, and my goal here is to get the conversation going…before we have our own version of Hiroshima,” Mr. Saxena said Tuesday at George Washington University’s Funger Hall.

A successful technology entrepreneur, Mr. Saxena was the first general manager of IBM Watson, the question-answering system applied in health care, education, finance and an array of other fields, which famously beat human champions on the game show “Jeopardy!”

Mr. Saxena was the guest lecturer for the ninth George Talks Business conversation, “A.I.: Benefits, Ethics and Risk,” part of a series of interviews with notable alumni and leaders in business, government and nonprofit arenas. The event, presented in collaboration with the Institute for Corporate Responsibility, was held in conjunction with the 19th annual Robert P. Maxon Lecture, endowed by a gift from Dorothy Maxon in honor of her husband, a 1948 graduate of the GW School of Business.

GWSB Dean Anuj Mehrotra introduced Mr. Saxena, whom he has known for more than 30 years. Dr. Mehrotra called Mr. Saxena “professionally passionate,” and said he was “among the first in helping figure out the appropriate responsible way to understand, impact and ensure that technology and our values as humanity are in sync.”

Mr. Saxena said artificial intelligence is not really artificial but rather non-biological intelligence. Its underlying technologies, including natural language processing, machine vision, deep learning and neural networks, are essentially the science of engineering and building systems that learn from patterns.

The challenge is to “get the intelligence at the scale of the machine and then get the creativity and judgment of a human being,” he said.

Mr. Saxena described three types of artificial intelligence: automated, which does what the human brain does; augmented, which does it faster and better; and autonomous, which replaces the human brain (as in self-driving cars).

It is in the second category, augmented intelligence, that Mr. Saxena envisions artificial intelligence “applied deep into industry is going to take off.” He noted that a recent PricewaterhouseCoopers study estimates artificial intelligence will generate $15 trillion in economic value over the next 11 years. It is important, he said, for business schools like GWSB to understand that artificial intelligence is too important to be left to the technologists.

“A.I. is a strategic business capability just like email, spreadsheets and electricity was,” he said. “Not being able to run your business with A.I. is like running it without electricity.

“The problem… is when you start using machines to make automated decisions you start making deals with the devil,” he said. “The deal is I am going to give you more data. I’m going to give you more authority and autonomy to start making decisions on behalf of me.”

The more these intelligent systems are trusted to make complex decisions, he said, the more business schools, computer science schools and art schools need to ask, “What am I designing? Can I trust this? Can I trust it to be beneficial and fair?”

He said the implications of the design of artificial intelligence have not always been thought through carefully. 

There is the notorious example of the soap dispenser that would not dispense soap for dark-skinned hands because its camera sensor had been modeled on white or light-skinned hands; the Facebook chatbot that began developing its own language, one unintelligible to human beings; and an NGO’s concern that an intelligent Barbie doll might not reflect and speak to local ethics and values.

Mr. Saxena has begun working with a nonprofit called A.I. Global, along with international brands and countries with mid-level and advanced economies, to create tools for responsible A.I. built around five pillars: data ownership, explainability, bias and fairness, robustness and compliance.

“Right now the focus is a little too much on making money with A.I. and not about what the societal impact of A.I. is,” he said. “My worry is that A.I. is going to make the rich richer and the evil more evil.”