IDDP's research fellowships support projects by holders of a Ph.D. or other terminal degree at any stage of their careers. The research should align with IDDP's mission: to help the public, journalists and policymakers understand digital media's influence on public dialogue and opinion, and to develop sound solutions to disinformation and other ills that arise in these spaces.
This is the second of three Q&As in which GW Today introduces the fellows to the university community.
Kai Shu is a Gladwin Development Chair Assistant Professor in the computer science department at Illinois Institute of Technology. His research and computational tool development address challenges spanning big data, social media and AI, with a focus on disinformation, responsible machine learning, trust in social computing and social media mining. He is a recipient of the Arizona State University (ASU) Fulton Schools of Engineering 2020 Dean's Dissertation Award and the 2020 and 2015 ASU School of Computing, Informatics and Decision Systems Engineering Doctoral Fellowship. He also is a winner of the 2018 SBP Disinformation Challenge. He has interned at Microsoft Research AI, Yahoo Research and HP Labs.
Q: What is the aim of your research project with IDDP?
A: The goal of my research project is to study the scientific underpinnings of disinformation and to develop a computational framework that detects, adapts to and explains disinformation to inform policymaking. I am motivated to advance interdisciplinary research that discovers knowledge, enhances understanding and informs actions for enabling trust and truth online.
Q: What is your favorite platform to study? Why?
A: I use Twitter the most in my current research because it provides publicly accessible application programming interfaces for obtaining rich social media data. Even though many of the algorithms I develop generalize to other social media platforms, I often build proof-of-concept frameworks using Twitter data.
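As a rough illustration of what collecting data through such an interface can look like, here is a minimal sketch using the tweepy Python library against Twitter's v2 search endpoint. The query, fields and bearer token are placeholders for illustration only, not part of any actual research pipeline, and access to the API depends on one's account tier.

```python
# Minimal sketch of pulling recent public tweets via the Twitter v2 API,
# assuming the tweepy library and a valid bearer token (placeholder below).
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential, not a real token

client = tweepy.Client(bearer_token=BEARER_TOKEN)

# Search recent English-language tweets about an example topic,
# excluding retweets to reduce near-duplicates.
response = client.search_recent_tweets(
    query='"health misinformation" lang:en -is:retweet',
    max_results=50,
    tweet_fields=["created_at", "public_metrics"],
)

for tweet in response.data or []:
    metrics = tweet.public_metrics  # retweet/reply/like/quote counts
    print(tweet.created_at, metrics["retweet_count"], tweet.text[:80])
```

A collection step like this typically feeds downstream models that analyze the text and engagement signals of posts.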
Q: What do you recommend social media platforms do to build trust in online environments?
A: Social media platforms play an important role in ensuring a safe and healthy online information space. I think it is important for these tech giants to discuss and collaborate with third-party fact-checkers and researchers, and to implement functionality that identifies and mitigates disinformation and other forms of information operations effectively.
Q: Are there any pertinent policy solutions you would like platforms to adopt to better identify and rectify mis- and disinformation?
A: I think regulatory policies and technologies for combating disinformation are still in their early stages. As a computational scientist, I believe we can benefit from useful policy rules when designing effective platform services. The techniques we develop can also facilitate policy design.
Q: Should governments be involved in regulating online spaces? How similar should policies look between social media platforms and state governments?
A: As disinformation grows in unprecedented volumes on social media, it is now viewed as one of the greatest threats to democracy, justice, public trust, freedom of expression, journalism and economic growth. Governments can provide useful insights and can benefit from effective technologies for combating disinformation, mitigating foreign influence and ensuring national security. In the future, a consensus on policy between social media platforms and governments would help better combat disinformation.
Q: Outside of your research with IDDP, what subject matters interest you?
A: My research interests include machine learning, data mining and social computing. I am also interested in developing machine learning algorithms for weak (noisy, limited and unreliable) data and in building fair, robust and interpretable models that tackle problems in real-world applications.
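As a toy illustration of what "weak" data can mean, the sketch below flips a fraction of training labels to simulate unreliable annotation, then measures a standard classifier against clean test labels. The synthetic dataset, 30% noise rate and logistic regression model are illustrative assumptions, not a description of any specific method.

```python
# Toy example of learning from noisy (weak) labels: flip a fraction of
# training labels, fit a standard classifier, and evaluate on clean labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for, e.g., post features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate unreliable annotation: flip 30% of the training labels.
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.30
noisy_y[flip] = 1 - noisy_y[flip]

clf = LogisticRegression(max_iter=1000).fit(X_train, noisy_y)
print("accuracy on clean test labels:", clf.score(X_test, y_test))
```

Research on weak data asks how to keep the accuracy printed at the end high even as the noise, scarcity or unreliability of the training labels increases.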