Misinformation about prostate cancer is rampant online and significantly impacts patient care, study authors said at an American Urological Association press briefing. Researchers using machine learning have taken what they say is a first step to help vet the quality of online content.
YouTube and other forms of social media remain popular sources of information, with YouTube alone hosting more than 500 videos on the topic that have generated nearly 44 million views. Misinformation from these sources has several consequences, affecting patients’ decisions about treatment, shared decision-making, and treatment expectations, said Zeyad Schwen, MD, a urology resident at Johns Hopkins University who presented his group’s findings.
Schwen and colleagues focused their research on erectile dysfunction (ED) following radical prostatectomy and set out to characterize the quality of content on YouTube related to this topic. “We wanted to look at areas of misinformation, the inclusion of important counseling points, and also [sought to make] comparisons between the quality, the video content, and the dissemination,” Schwen said.
He and colleagues examined the first 100 YouTube videos obtained using the search criteria “radical prostatectomy and erectile dysfunction.” A total of 81 videos were available for analysis. Quality of content was rated using the 16-question DISCERN tool, a validated scoring tool that evaluates the quality of consumer health information.
A total of 34 false claims were found in 20% of videos, with some videos containing multiple false claims. Among his personal favorites, Schwen listed “There are no side effects of radical prostatectomy,” “Robotic surgery lets you see all the nerves,” “Kegel exercises improve ED,” and “Amniotic membrane prevents ED in 96% of patients.”
Two-thirds of videos featured a physician, usually a urologist (58% of videos). Nearly half of the videos (44%) promoted a practice or institution.
In terms of counseling points, only 12% of videos quoted the expected rate of ED, 23% cited risk factors for ED after prostatectomy, 28% explained nerve sparing, 17% discussed the delay in recovery of erections, and 35% discussed the possible need for treatments.
The median DISCERN score was 29 out of a maximum score of 80. “There was no association between DISCERN scores and false statements, source of the video, and the number of views,” Schwen said.
“The reason I think that this is a very important study is because it helps us better understand what type of mindset and what sort of information patients come to us with before [a visit],” Schwen said. “It’s very possible that they are starting from an inaccurate position.”
In a second study, urologists and computer scientists sought to develop an automated solution for identifying misinformation. A group led by Stacy Loeb, MD, MSc, used 354 publications in PubMed Central to build a prostate cancer language model. They then compared the model to transcripts from 250 YouTube videos using perplexity, a standard measure of language-model fit in which lower values indicate a closer match to the reference text. Machine learning experiments were performed to differentiate trustworthy from misinformative videos. The sample was intentionally enriched with misinformative videos to help train the model, said Loeb, professor of urology at New York University School of Medicine, New York, New York.
The videos containing no misinformation and those containing misinformation were compared with the PubMed-based language model. The trustworthy videos had lower language perplexity than did the misinformative videos (1733 vs 7033, P<.001), indicating a better fit to the PubMed-based language model.
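To illustrate the perplexity comparison described above, here is a minimal sketch, not the study's actual model: a toy add-one-smoothed unigram language model built from a stand-in "reference corpus" (the study used PubMed Central publications), scored against two hypothetical transcript snippets. All corpora and token sequences here are invented for illustration; the point is only that text resembling the reference corpus receives lower perplexity.

```python
import math
from collections import Counter

def train_unigram(corpus_tokens):
    """Build an add-one-smoothed unigram probability function from a corpus."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab_size = len(counts)
    def prob(tok):
        # Add-one smoothing; the extra +1 in the denominator reserves
        # mass for unseen tokens.
        return (counts[tok] + 1) / (total + vocab_size + 1)
    return prob

def perplexity(prob, tokens):
    """Perplexity of a token sequence under the model: lower = closer fit."""
    log_sum = sum(math.log(prob(t)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

# Toy reference corpus standing in for the PubMed-based corpus (illustrative only).
reference = "radical prostatectomy may cause erectile dysfunction in some patients".split()
model = train_unigram(reference)

# Hypothetical transcript snippets: one on-topic, one resembling a false claim.
on_topic = "radical prostatectomy may cause erectile dysfunction".split()
off_topic = "miracle cure guarantees no side effects ever".split()

print(perplexity(model, on_topic) < perplexity(model, off_topic))  # True
```

In the study, the same directional result held at scale: transcripts of trustworthy videos fit the PubMed-based model far better (1733 vs 7033 perplexity) than misinformative ones.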
An algorithm using a combination of YouTube metadata, linguistic, and acoustic features was able to separate trustworthy from misinformative YouTube videos about prostate cancer with an accuracy of 74%.
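The study does not publish its classifier, but the idea of combining per-video features into a trustworthy/misinformative decision can be sketched with a simple nearest-centroid classifier. The feature values below (log view count as a metadata feature, perplexity against a medical corpus as a linguistic feature, a speaking-rate estimate as an acoustic feature) are entirely hypothetical; a real system would use many more features and normalize them so no single scale dominates the distance.

```python
import math

# Hypothetical feature vectors per video (illustrative only):
# [log view count (metadata), perplexity vs. medical corpus (linguistic),
#  speaking-rate estimate in words/sec (acoustic)]
trustworthy = [[8.0, 1700.0, 2.5], [7.5, 1800.0, 2.4]]
misinformative = [[11.0, 7000.0, 3.5], [10.5, 6800.0, 3.6]]

def centroid(rows):
    """Column-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(x, c_trust, c_misinfo):
    """Assign the label of the nearer class centroid (Euclidean distance)."""
    if math.dist(x, c_trust) < math.dist(x, c_misinfo):
        return "trustworthy"
    return "misinformative"

c_t, c_m = centroid(trustworthy), centroid(misinformative)
print(classify([7.8, 1750.0, 2.45], c_t, c_m))  # trustworthy
```

This is a deliberately minimal stand-in: the reported 74% accuracy came from a trained model over real metadata, linguistic, and acoustic features, not from anything this simple.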
“The language in trustworthy videos is closer to the published prostate cancer literature,” Loeb said. “Hopefully in the future machine learning may provide a scalable solution to help health consumers identify trustworthy online health videos.”