Anti-Smoking Chatbots Provide Sound Advice—Most of the Time

A new study led by Milken Institute SPH’s Lorien Abroms found that 22% of chatbot responses to inquiries about how to quit smoking still contain misinformation

February 13, 2025

Person using a smartphone with a pop-up illustration of a chatbot asking "Can I help you?" (Adobe)

A new study from the George Washington University examining AI-powered chatbot responses to smokers seeking information about ways to kick the habit found that most of the chatbot responses followed sound public health advice. But some answers contained errors or were classified as misinformation, suggesting the need for improvements to these AI-powered tools.

Researchers evaluated three chatbots: the World Health Organization’s S.A.R.A.H.; BeFreeGPT, a chatbot developed by the researchers; and BasicGPT. The study aimed to assess whether these AI-driven bots provide reliable, evidence-based advice to help people quit smoking.

The 12 most common quit-smoking questions on Google were given to each chatbot. Responses were analyzed for their adherence to an index developed from the U.S. Preventive Services Task Force public health guidelines for quitting smoking and counseling principles.

“We know that smoking is a leading cause of preventable death globally. So improving the reliability of these AI-powered chatbots could play a significant role in enhancing smoking cessation efforts,” said Lorien Abroms, researcher and professor of prevention and community health at GW’s Milken Institute School of Public Health. “Our findings highlight the importance of developing reliable and accurate AI systems, especially when dealing with complex health behaviors like smoking cessation.”

Findings include:

  • Across the 12 most common quit-smoking questions posed to the three chatbots, 57.1% of responses followed the public health guidelines.
  • S.A.R.A.H., the WHO’s chatbot, outperformed the other two with a 72.2% adherence rate; BeFreeGPT and BasicGPT followed with significantly lower rates of 50% and 47.8%, respectively.
  • 22% of responses contained misinformation, particularly on topics like quitting “cold turkey,” or using vapes, gummies, necklaces or hypnosis to stop smoking.
  • The study found that most chatbot responses were clear and easy to understand and often recommended seeking professional counseling. However, many responses lacked important advice, such as recommending nicotine replacement therapy, explaining how to handle cravings or emphasizing the importance of social support.

“The rapid advancement of AI in health behavior change is promising, but it’s crucial that these tools adhere to evidence-based guidelines and avoid spreading misinformation,” said David Broniatowski, researcher and professor at GW’s School of Engineering & Applied Science.

The research, “Assessing the Adherence of ChatGPT Chatbots to Public Health Guidelines for Smoking Cessation,” was published in the Journal of Medical Internet Research. It was funded by GW Engineering’s Institute for Trustworthy AI in Law & Society.