Artificial Intelligence Is Here to Stay, so We Should Think More About It

Symposium on the interlocking futures of AI and the humanities held at the George Washington University.

April 18, 2023

(Image: robot before a blackboard. Image by Marcio, stock.adobe.com)

On Friday morning, George Washington University Provost Christopher Alan Bracey distributed a guidance document to faculty members on how they might (or might not) allow their students to use generative artificial intelligence. At the same moment, a daylong symposium titled “I Am Not a Robot: The Entangled Futures of AI and the Humanities” kicked off with remarks by its principal organizer, Katrin Schultheiss, associate professor of history in the Columbian College of Arts and Sciences.

In late 2022, said Schultheiss, the launch of ChatGPT presented educators with a significant moment of technological change.

“Here was a tool—available, at least temporarily, for free,” Schultheiss said, “that would answer almost any question in grammatically correct, informative, plausible-sounding paragraphs of text.”

In response, people expressed the fear that jobs would be eliminated, the ability to write would atrophy and misinformation would flourish, with some invoking “dystopias where humans became so dependent on machines that they could no longer think or do anything for themselves.”

But that wasn’t even the worst of the fears expressed. “At the very far end,” Schultheiss said, “they conjured up a future when AI-equipped robots would break free of their human trainers and take over the world.”

On the other hand, she noted, proponents of the new technology argued that ChatGPT would lead to more creative teaching and increased productivity.

“The pace at which new AI tools are being developed is astonishing,” Schultheiss said. “It’s nearly impossible to keep up with the new capabilities and the new concerns that they raise.”

For that reason, she added, some observers (including members of Congress) are advocating for “a slowdown or even a pause in the deployment of these tools until various ethical and regulatory issues can be addressed.”

With this in mind, she said, a group of GW faculty from various humanities departments saw a “need to expand the discourse beyond the discussion of new tools and applications, beyond questions of regulation and potential abuses of AI,” adding that the symposium is one of the fruits of those discussions.

“Maybe we should spend some more time thinking about exactly what we are doing as we stride forward boldly into the AI-infused future,” Schultheiss said.

Four panel discussions followed, the first one featuring philosophers. Tadeusz Zawidzki, associate professor and chair of philosophy, located ChatGPT in the larger philosophical tradition, beginning with the Turing test.

That test was proposed by the English mathematician Alan Turing, who asked: Could a human judge tell the difference between another human and a computer just by reading the text of their conversation? If not, Turing said, the machine counts as intelligent.

Some philosophers, such as John Searle, objected, saying a digitally simulated mind does not really think or understand. But Zawidzki said ChatGPT passes the test.

“There’s no doubt in my mind that ChatGPT passes the Turing test,” he said. “So, by Turing’s criteria, it is a mind.” But it’s not like a human mind, which can interact with the world around it in ways currently unavailable to ChatGPT.

Marianna B. Ganapini, assistant professor at Union College and a visiting scholar at the Center for Bioethics at New York University, began by asking if we can learn from ChatGPT and if we can trust it.

“As a spoiler alert,” Ganapini said, “I’m going to answer ‘no’ to the second question—it’s the easy question—and ‘maybe’ to the first.”

Ganapini said the question of whether ChatGPT can be trusted is unfair, in a sense, because no one trusts people to know absolutely everything.

A panel on the moral status of AI featured Robert M. Geraci, professor of religious studies at Manhattan College, and Eyal Aviv, assistant professor of religion at GW.

In thinking about the future of AI and of humanity, Geraci said, we must evaluate whether the new technology has been brought into alignment with human values and the degree to which it reflects our biases.

“A fair number of scholars and advocates fear that our progress in value alignment is too slow,” Geraci said. “They worry that we will build powerful machines that lack our values and are a danger to humanity as a result. I worry that in fact our value alignment is near perfect.”

Unfortunately, he said, “our daily values are not in fact aligned with our aspirations for a better world.” One way to counteract this is through storytelling, he added, creating “models for reflection on ourselves and the future.”

A story told by the late Stephen Hawking set the stage for remarks by Aviv, an expert on Buddhism, who recalled an interview with Hawking from “Last Week Tonight with John Oliver” posted to YouTube in 2014.

“There’s a story that scientists built an intelligent computer,” Hawking said. “The first question they asked it was, ‘Is there a God?’ The computer replied, ‘There is now,’ and a bolt of lightning struck the plug so it couldn’t be turned off.”

Aviv presented the equally grim vision of Jaron Lanier, considered by many to be the father of virtual reality, who has said the danger isn’t that AI will destroy us, but that it will drive us insane.

“For most of us,” Aviv said, “it’s pretty clear that AI will produce unforeseen consequences.”

One of the most important concepts in Buddhist ethics, Aviv said, is ahimsa, or doing no harm. Yet from its inception, he added, AI has been funded primarily by the military, placing it on complex moral terrain.

Many experts call for regulation to keep AI safer, Aviv said, but will we heed such calls? He pointed to signs posted in casinos that urge guests to “play responsibly.” But such venues are designed precisely to keep guests from doing so.

The third panel featured Neda Atanasoski of the University of Maryland, College Park, and Despina Kakoudaki of American University.

Atanasoski spoke about basic technologies found in the home, assisting us with cleaning, shopping, eldercare and childcare. Such technologies become “creepy,” she said, when they reduce users to data points and invade their privacy.

“Tech companies have increasingly begun to market privacy as a commodity that can be bought,” she said.

Pop culture has had an impact on how we understand new technology, Kakoudaki said, noting that very young children can draw a robot, typically in an anthropomorphic form.

After tracing the idea of the mechanical body to its historical roots, in the creation of Pandora and, later, Frankenstein’s creature, for example, Kakoudaki showed how such narratives reverse the elements of natural birth, with mechanical beings born as adults and undergoing a trajectory from death to birth.

The fourth panel, delving further into the history of AI and meditating on its future, featured Jamie Cohen-Cole, associate professor of American Studies, and Ryan Watkins, professor and director of the Educational Technology Leadership Program in the Graduate School of Education and Human Development.

Will we come to rely on statements from ChatGPT? Maybe, Cohen-Cole said, though he noted that human biases will likely continue to be built into the technology.

Watkins said he thinks we will learn to live with the duality AI presents, enjoying its convenience while remaining aware of its fundamental untrustworthiness. Because it is difficult for most people to adjust in real time to rapid technological change, he said, he encouraged listeners to play with the technology and see how they might use it, adding that he has used it to help one of his children with biology homework. Chatbot technology is being integrated into Microsoft Word, email platforms and smartphones, to name a few of the places the average person will soon encounter it.

“How do you ban it if it’s everywhere?” he asked.

The symposium, part of the CCAS Engaged Liberal Arts Series, was sponsored by the CCAS Departments of American Studies, English, History, Philosophy and Religion, and the Department of Romance, German and Slavic Languages and Literatures. Each session concluded with audience questions for the panelists. The sessions were moderated, respectively, by Eric Saidel of the Department of Philosophy, Irene Oh of the Department of Religion, Alexa Alice Joubin of the Department of English and Eric Arnesen of the Department of History.