TRAILS AI Institute Announces First Round of Seed Funding

The eight funded projects, totaling just over $1.5 million, will advance cutting-edge research and scholarship spanning AI design, development and governance.

February 5, 2024


The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the AI development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants, announced on Jan. 24 and totaling just over $1.5 million, were awarded to interdisciplinary teams of faculty associated with the institute. The projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots to assist caregivers interacting with autistic children, and exploring how users interact with AI-generated language translation systems.

“Seed funding is a critical tool for accelerating new projects and scholarship while deepening connections between institutions,” said Pamela M. Norris, vice provost for research at the George Washington University (GW). “The first round of projects funded by TRAILS highlights the institute’s broad range of expertise and its promise of impact on society—from broadening access to health interventions to analyzing the legal and policy implications of AI.”

All eight projects fall under the broader mission of TRAILS, which is to transform AI development from a practice driven primarily by technological innovation to one that is driven by ethics, human rights and input and feedback from communities whose voices have previously been marginalized.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact. They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions: the University of Maryland (UMD), GW, Morgan State University and Cornell University.

“NIST and NSF's support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at GW who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

The new seed grant program funds research and innovation centered on TRAILS’ primary research thrusts: participatory design, methods and metrics, evaluating trust, and participatory governance.

"The diverse set of interdisciplinary seed projects reflects the vision and potential impact of TRAILS,” said John Lach, dean of the GW School of Engineering and Applied Science. “Achieving trustworthy AI truly requires a holistic approach, and TRAILS provides the framework and support to make that happen."

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for renewal, while other TRAILS faculty, or any faculty member at the four primary TRAILS institutions, can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K-12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

Six of the eight projects selected for the first round of TRAILS seed funding involve GW faculty. The projects are:

  • Chung Hyuk Park and Zoe Szajnfarber from the School of Engineering and Applied Science and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, but provider shortages and financial burdens create challenges, particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the underlying assistive technology, and assess how teaming promotes trust in human-robot collaborative settings.
     
  • Soheil Feizi from UMD and Robert Brauneis of GW Law will investigate issues surrounding text-to-image generative AI models like Stable Diffusion, DALL-E 2 and Midjourney, focusing on the myriad legal, aesthetic and computational questions that remain unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine whether specific artworks, especially rare and unique ones, have already been used to train AI models.
     
  • Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW Law will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which uses algorithms to determine whether an individual or group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate the fairness of datasets and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.
     
  • Lorien Abroms from the Milken Institute School of Public Health and David Broniatowski from GW Engineering will recruit smokers to study the reliability of generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine smokers’ acceptance of this rapidly evolving technology and their trust in using it for help quitting smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation or for modifying other health behaviors.
     
  • Adam Aviv from GW Engineering and Michelle Mazurek from UMD will examine bias, unfairness and untruths, such as sexism, racism and other forms of misrepresentation, that emerge from certain AI and machine learning systems. Though some systems carry public warnings of potential biases, the researchers want to explore how users understand these warnings, whether they recognize how biases may manifest in AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.
     
  • Susan Ariel Aaronson from the Elliott School of International Affairs and David Broniatowski from GW Engineering plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.
     
  • Marine Carpuat and Ge Gao from UMD will explore “mental models,” the intuitive understandings users form of how a system works, for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of a system’s “error boundary,” that is, how they decide whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.
     
  • Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.