By Kristen Mitchell
An interdisciplinary team of George Washington University researchers received a $2.2 million grant from the National Institutes of Health for a project that combines 3D body surface scanning technology and machine learning to develop an inexpensive and noninvasive way to assess the health of highly obese individuals.
Professor James Hahn from the School of Engineering and Applied Science’s Department of Computer Science is the principal investigator on the four-year project that brings together experts from SEAS, the School of Medicine and Health Sciences and the Columbian College of Arts and Sciences. Hahn, director of the GW Institute for Computer Graphics, comes to this work after years of using computer graphics, virtual reality and machine learning for medical applications.
This new project seeks to use 3D depth cameras—special cameras that measure the distance to the subject as well as the color of each pixel—to capture an individual’s body shape and, using machine learning algorithms, correlate that shape with health indicators associated with obesity. The team is focused specifically on liver fibrosis and a buildup of fat in the liver known as fatty liver disease, or hepatic steatosis. Fatty liver disease can eventually lead to liver failure and is currently diagnosed with an invasive biopsy.
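To make the idea concrete, here is a minimal sketch, in Python using only NumPy, of how a single depth frame could be turned into a 3D point cloud and a crude shape measurement. The camera intrinsics, the 2.5-meter foreground cutoff and the slice-based width estimate are illustrative assumptions, not details of the GW team's pipeline.

import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    # Back-project a depth image (in meters) into camera-space 3D points.
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    near = (points[:, 2] > 0) & (points[:, 2] < 2.5)  # keep only the foreground subject
    return points[near]

def slice_width(points, height_m, tol=0.02):
    # Approximate body width (in meters) within a thin horizontal slice of the cloud.
    band = points[np.abs(points[:, 1] - height_m) < tol]
    return band[:, 0].max() - band[:, 0].min() if len(band) else float("nan")

# Synthetic 480x640 depth frame standing in for a real depth-camera stream.
depth = np.full((480, 640), 1.8)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape, slice_width(cloud, height_m=0.0))

In a real system, measurements like these would be computed from many frames of an actual patient scan and fed to the learning algorithm as shape features.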
The research team will recruit 250 patients undergoing bariatric surgery—a weight-loss surgery—through the GW Weight Loss and Surgery Center to participate in the study and help researchers understand how the body changes in response to rapid weight loss. The project focuses on bariatric patients because they offer the opportunity to perform a longitudinal study, collecting health indicators and body shapes from the same individuals both before and after surgery, Hahn said.
“As they lose weight, we can then see what the relationship is between body shape and X, where X could be things like liver health or the percentage of body fat, or changes in their blood tests that are performed periodically throughout their recovery process,” Hahn said.
Hahn and the team will capture the body shapes of patients enrolled in the study using 3D depth cameras installed at the GW Weight Loss and Surgery Center. The patients will also undergo medical imaging at the Milken Institute School of Public Health. The school’s dual-energy X-ray absorptiometry (DEXA) scanner measures bone density and body composition, such as body fat and muscle mass. These imaging results, combined with other clinical tests and a liver biopsy taken before bariatric surgery, will serve as the training dataset for the machine learning algorithm. Patients will return four times for optical body surface scans and DEXA imaging in the year following surgery, enabling researchers to track how their body shape changes alongside their health indicators over time.
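As a rough illustration of how such a longitudinal training set could be organized (not the study's actual data schema), each visit might be stored as one record pairing scan-derived shape features and DEXA measurements with the clinical labels to be predicted. The field names and values below are invented for the sketch.

import pandas as pd

# Each row is one visit: shape features from the 3D scan, DEXA body composition,
# and the clinical label of interest. All fields and values here are illustrative.
visits = pd.DataFrame([
    {"patient_id": 1, "months_post_op": 0,  "waist_hip_ratio": 1.05,
     "torso_volume_l": 62.0, "dexa_fat_pct": 48.1, "steatosis_grade": 2},
    {"patient_id": 1, "months_post_op": 3,  "waist_hip_ratio": 0.98,
     "torso_volume_l": 54.5, "dexa_fat_pct": 42.3, "steatosis_grade": 1},
    {"patient_id": 1, "months_post_op": 12, "waist_hip_ratio": 0.91,
     "torso_volume_l": 47.2, "dexa_fat_pct": 35.0, "steatosis_grade": 0},
])

# Following the same patient across visits is what lets a model relate
# changes in body shape to changes in the health indicators.
print(visits.groupby("patient_id")["dexa_fat_pct"].diff())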
Hahn’s team plans to compile this dataset and, using two machine learning sub-approaches, train systems that can predict health indicators, including hepatic steatosis, from body shape, he said. The ability to draw this correlation has the potential to be less expensive, less invasive and/or more sensitive than existing diagnostic methods, Hahn said.
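The article does not specify which machine learning approaches the team will use, so the following is only a generic baseline sketch: a gradient-boosted regressor, trained on synthetic data, that maps shape features to a stand-in health indicator such as liver fat percentage.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 5))                                    # 250 subjects, 5 shape features
y = 10 + 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=1.0, size=250)  # synthetic "liver fat %" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)        # one of many possible regressors
print("held-out R^2:", round(model.score(X_test, y_test), 2))

Any real predictor of this kind would be validated against the biopsy and clinical results collected in the study rather than against synthetic labels.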
“This whole approach has a lot of potential,” Hahn said. “This technology is not going to replace biopsies or other medical tests, but it will perhaps give physicians a new, cheaper and more convenient imaging modality they can employ for diagnosis.”
Collaborator and bariatric surgeon Khashayar Vaziri, a professor of surgery at SMHS and program director of general surgery residency at GW Hospital, said this approach could help physicians diagnose liver problems sooner.
“Bariatric surgery patients are just a small subset of the obese population. This would allow health care providers to apply this technology and get a better understanding and diagnose liver disease in this patient population much, much earlier,” Vaziri said. “Diagnosis of liver disease is elusive, and this would be a very useful tool.”
Hahn described the 3D body surface scanning the team uses as similar to the depth camera technology available on newer smartphones. Inexpensive compared to most medical imaging, this type of scan could be done in a regular clinic or doctor’s office or even at home.
Collaboration
In addition to Hahn and Vaziri, co-investigators on this project include SMHS Clinical Professor of Medicine Marijane Hynes; Assistant Professor Fang Jin from the CCAS Department of Statistics; Professor of Cognitive Neuroscience John Philbeck from the CCAS Department of Psychological and Brain Sciences; Professor of Pathology Mamoun Younes from SMHS; Assistant Professor of Medicine Steven Zeddun from SMHS; and Assistant Professor Xiaoke Zhang from the CCAS Department of Statistics.
The idea for this research project bubbled up during a conversation between Hahn and Qiyue Wang, a fourth-year Ph.D. student, about whether optical scan technology developed in their lab under a previous NIH-funded project could show a direct correlation between body shape and liver health. When Hahn connected with Vaziri, they immediately knew a collaboration could generate cutting-edge research.
“This is a great opportunity for us and our collaborators to explore new domains and utilize machine learning and artificial intelligence to make a greater contribution to medicine,” Wang said.
Engineers and medical researchers think about challenges differently, Hahn said. They come from different academic backgrounds with their own institutional cultures—but learning from each other and finding ways to connect is critical to developing advanced tools and applications that can be used to enhance health care.
“This is the kind of work I love doing; it really involves partnership between the engineering side, the science side and the medical side,” Hahn said. “All of these experts are coming from different groups and playing as important a role as anybody else.”
For Vaziri, collaboration is essential for identifying the best ways to leverage evolving technology to ultimately improve patient health and care.
“Putting together our understanding of patient care and disease, and their understanding of technology and innovation, we can come together with our individual expertise in order to find an application that's going to be most useful,” he said.
The project will also include a psychological study to determine whether this novel technology would be accepted by patients. The team also seeks to find out whether patients would be able to scan themselves accurately using a smartphone, which would give an indication of the technology’s usefulness in telemedicine.
Principal investigator James Hahn, a professor in the SEAS Department of Computer Science. (William Atkins/ GW Today)
Building on a foundation
This project builds on more than a decade of Hahn’s research in digital health. Past projects include a collaboration with USA Swimming prior to the 2008 Beijing Olympics to examine how elite swimmers could improve their strokes. Hahn created a 3D virtual representation of an Olympic swimmer and compared the athlete’s fluid dynamics to those of a dolphin—capturing body shape and using that data to analyze different types of motion. Over time, he sought ways to capture body shape and motion simultaneously.
Hahn’s Motion Capture and Analysis Laboratory (MOCA), established through a $700,000 grant from the National Science Foundation’s major research instrumentation program, collaborated in 2019 on the “Virtual Jane” project, a partnership between the Jane Goodall Institute and the GW Innovation Center, to do just that. Researchers from Hahn’s lab took detailed captures of renowned animal behavior expert and conservationist Jane Goodall’s likeness and movements for a virtual reality-based educational platform.
Hahn was also awarded a five-year, $1.5 million NIH grant in 2017 to develop a virtual reality training system to improve pediatric medical resident training for neonatal endotracheal intubation. When a newborn is having trouble breathing, physicians must quickly place a tube through the baby’s mouth and into the windpipe—a common procedure that, some experts say, residents are not adequately trained to perform successfully.
Now in the fifth year of this project, Hahn and his research team have developed a virtual reality system that includes automated evaluation and feedback using machine learning and have deployed the system to Children’s National Health System for evaluation. This will help them determine if virtual reality training is more effective in this case than traditional training with physical mannequins.
Hahn said while machine learning has the potential to change the way we approach medicine, bias and explainability pose significant challenges.
“If we can get accurate results, but we can't guarantee results hold for everybody, or we can't explain why the approach works because it is essentially a ‘black box,’ that’s a challenge for the acceptance of the approach in health care,” Hahn said.
The team is seeking a diverse pool of study participants for this new project in order to identify and mitigate bias in the training dataset stemming from the underrepresentation of certain groups.
“The nice thing about our health care environment here at GW is the patient population is quite diverse,” Hahn said.
Researchers capture the likeness of conservationist Jane Goodall at Hahn’s Motion Capture and Analysis Laboratory in 2019. (William Atkins/ GW Today)