“Integrity in a Time of Change” was the theme of George Washington University’s 14th Ethics in Publishing Conference, which explored the impact of artificial intelligence (AI), among other issues, at a time when the publishing industry is expanding to accommodate greater diversity globally and in the United States.
At a hybrid virtual and in-person conference held in the City View Room of the Elliott School of International Affairs, more than 700 members of the publishing community, including library professionals, scholars and student speakers, addressed the influence of language, multilingualism, accessibility and inclusivity in the publishing process.
In afternoon remarks, College of Professional Studies (CPS) Dean Liesl Riddle noted that the focus on ethics has always been a cornerstone of the CPS graduate program in publishing. “What began as a small seminar, integrated into the ethics capstone course for our publishing students from 2007 to 2016, has grown into the dynamic and impactful event we are all a part of today,” Riddle said.
Opening the conference, John W. Warren, director and associate professor of GW’s Graduate Program in Publishing, said that “ethics in publishing presents a paradox of [being] simultaneously niche [in] focus and quite broad in subject matter; almost any aspect of publishing and indeed communication has ethical issues and concerns that are worth researching, writing and speaking about, and exploring.”
The conference, organized by Warren along with Puja Telikicherla, adjunct professor at GW and licensing and subsidiary rights manager at the American Psychiatric Association, was sponsored by the CPS Graduate Program in Publishing in collaboration with the GW Journal of Ethics in Publishing.
The limited representation of minority and ethnic groups in the publishing industry, the need for greater inclusivity and the growing significance of AI were the dominant themes of the two-day conference.
A key panel focused on responsible publishing and the growing pervasiveness of AI, which has long been used as a tool to enhance the editorial and production process, analyze data and develop new products. The panel was moderated by Wendy Queen, director of Project MUSE, a division of Johns Hopkins University Press.
Simone Taylor, director of publishing at the American Psychiatric Association, started the discussion by noting that AI as a generative tool is viewed more warily because it “threatens to transform the way we live, work and learn.”
“Large language models [that rely on content scraped from the web] and other applications of generative AI can only be as good as the data on which they’re trained,” Taylor said.
As an example, she told the story of a cotton tree in her native Freetown, Sierra Leone. Legend had it, she said, that freed slaves prayed around it in 1787. Taylor showed a slide of the tree photographed around 1914; it was felled by severe weather in 2023. And yet, she said, when asked for information about the life span of cotton trees, ChatGPT responded, “There is no such thing.”
“My sister and I respectfully disagreed,” Taylor said.
She said publishers now have an opportunity to leverage the high-quality, peer-reviewed content for which tech companies such as Google and Reddit are willing to pay hundreds of millions of dollars. But with that opportunity, Taylor said, comes the challenge of protecting the integrity of the data and serving the needs of broad populations, not just the privileged.
“How is the research conducted, reported and reviewed? What gets out there, and what then gets used in these systems?” she said.
Another concern with generative AI, she said, is that the ease of access to digital material can lead to misuse of authors’ protected work, which publishers have an obligation to safeguard while ensuring attribution and fair compensation for authors and publishers.
A team of panelists from Wiley Partner Solutions—Jennifer Workman, senior manager for business development, and Anna Jester, director for business development—discussed how the company proactively handles image manipulation, paper mills and AI technology by “recognizing the [publishing industry’s] gaps and vulnerabilities.”
Workman and Jester described Research Exchange, software developed by the company that allows editors to analyze content the moment articles are submitted and to screen immediately for quality and integrity, including verifying the author’s identity and checking whether a paper was actually written by a person, has been submitted to other journals or was plagiarized. The technology makes research-integrity checks more efficient before articles are even sent out for peer review, but it is not a replacement for human judgment, the panelists said.
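To make the idea concrete, here is a minimal sketch of the kind of at-submission triage the panelists described. It is not Wiley’s actual Research Exchange implementation; every name, check and threshold below is hypothetical, and the crude text-overlap test stands in for the specialized detection services a real system would use.

    # Hypothetical sketch of at-submission integrity screening; not
    # Wiley's actual Research Exchange implementation.
    from dataclasses import dataclass, field
    from difflib import SequenceMatcher

    @dataclass
    class Submission:
        author_id: str   # e.g., a verified researcher identifier
        text: str
        flags: list = field(default_factory=list)

    def similar(a: str, b: str, threshold: float = 0.8) -> bool:
        """Crude overlap test standing in for a real similarity service."""
        return SequenceMatcher(None, a, b).ratio() >= threshold

    def screen(sub: Submission, known_authors: set,
               prior_texts: list, plagiarism_corpus: list) -> list:
        """Flag integrity concerns the moment an article is submitted."""
        if sub.author_id not in known_authors:
            sub.flags.append("author identity unverified")
        if any(similar(sub.text, t) for t in prior_texts):
            sub.flags.append("possible duplicate submission")
        if any(similar(sub.text, doc) for doc in plagiarism_corpus):
            sub.flags.append("possible plagiarism")
        # A real system would also score for machine-generated text.
        # Flagged papers are routed to an editor: the tool triages,
        # it does not replace human judgment.
        return sub.flags

    # Usage: flags = screen(Submission("author-123", manuscript_text),
    #                       {"author-123"}, [], [])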
Christopher Kenneally, a content marketing consultant, author and former host of the podcast “Velocity of Content,” advised publishers to embrace the technology through what he called an “integrity algorithm.” ChatGPT has no doubt changed the academic world for the better, Kenneally said, but it should be approached with “diligent skepticism” toward a system generally estimated to be only about 85% accurate.
Trust and confidence in the technology, Kenneally said, can be achieved with the kind of step-by-step integrity algorithm publishers and scholars have always used to guide their decision-making: examining the evidence, adhering to a rigorous code of conduct, and tailoring and refining the technology to meet their needs.
“AI is going to prove to be an ally rather than a danger or nemesis,” he said, “and that means embracing it but also doing it in ways that are ethical and responsible to rights holders and to think about the biases involved.”
The conference was supported by the Association of University Presses, the Society for Scholarly Publishing (SSP), the Association of American Publishers (AAP), the Council of Science Editors (CSE), the International Society of Managing and Technical Editors (ISMTE), and the Book Industry Study Group (BISG).