Whether using machine learning to automate data entry or turning to ChatGPT to write an email, workers across a range of fields are integrating AI into their jobs. But how exactly are these systems being implemented? What guidelines are, or should be, in place? And how can we navigate these challenges and opportunities to ensure AI adoption that promotes innovation without causing harm? Experts from the worlds of policy, industry and research convened at the George Washington University last week to explore those questions and more during “AI at Work: Building and Evaluating Trust,” a conference hosted by the Institute for Trustworthy AI in Law and Society (TRAILS).
“This is a moment of great change, but also great potential,” GW President Ellen M. Granberg said in opening remarks Feb. 3. “Now more than ever, we need the experts that are gathered here to advance the important and collaborative conversations that will help us capitalize on the promise of AI and build systems that improve our processes and productivity and that are also safe and trustworthy.”
The event, also called TRAILSCon 2025, is TRAILS’ first major onsite conference and a natural outgrowth of the institute’s mission to investigate what trust in AI looks like, create technical AI solutions that build trust and determine which policy models effectively sustain trust. Launched in 2023 with a $20 million grant from the National Institute of Standards and Technology (NIST) and the National Science Foundation (NSF), TRAILS is a partnership between GW, the University of Maryland, Morgan State University and Cornell University.
Over the course of the two-day summit, attendees listened to conversations between thought leaders, held roundtable discussions and joined group workshops, with a focus on cross-disciplinary communication. Panels and workshops included “What’s at Stake,” “AI as Work” and “AI and Government,” among others.
In an opening keynote panel moderated by Professor of Engineering Management and Systems Engineering David Broniatowski, TRAILS’ co-principal investigator and GW site lead, experts from Cornell, NIST, Microsoft and the Carnegie Endowment for International Peace (CEIP) discussed the importance of designing trustworthy, usable evaluation measures for AI systems. Establishing consistent benchmarks for measurement would not only enable researchers to usefully evaluate AI’s impact on a given information system, but would also provide the general public with an understanding of what tasks these tools perform and how well they perform them.
In a Tuesday afternoon closing panel, “What’s Next for AI at Work?”, TRAILS Co-PI Susan Ariel Aaronson, a research professor in the Elliott School of International Affairs and director of the Digital Trade and Data Governance Hub, said she sees reason for hope in the frequent, high-level national and international conversations around AI-driven transformation of the workplace. The ongoing effort “to try to think about what kind of world will we have for people who must earn a living” is unresolved, she said, but the debate remains open in interesting ways, with new voices joining every day.
“I'm usually the person who thinks about this in not the [most] optimistic terms, but in terms of the world of work, it does give me hope because we're having this debate early on,” Aaronson said.
“One of the big concerns around trust in AI is that if people don't trust it…they're just not going to adopt it,” Broniatowski said at the same panel. “They're going to be so afraid of what it might do, or they might be afraid that others might impose it upon them…On the other hand, if we do have trust, we have the ability to work together with the developers of those tools and to develop tools that are really fit to purpose. And once you have that ability of tools to fit to purpose, that's when you really unlock all of the potential innovations and achievements that AI can possibly bring us.”