Systematic reviews and abstracts for randomized controlled trials appearing in the urologic literature are generally of less than optimal quality.
"Results from studies we've conducted underscore a need to improve the quality standards for conducting and reporting urologic research," said senior author Philipp Dahm, MD, associate professor of urology and director of clinical research. "This will require efforts by researchers who perform systematic reviews, editors of journals that publish these reports, and meeting organizers who establish the guidelines for structured abstracts."
Based on an impression that systematic reviews were appearing more frequently in the urologic literature, and recognizing how these analyses can influence evidence-based clinical practice, Dr. Dahm and colleagues undertook an investigation of the reviews' methodologic quality. They focused on four leading peer-reviewed journals: the Journal of Urology, European Urology, Urology, and BJU International, searching for systematic reviews published between 1998 and 2008.
Lack of improvement in review quality
A total of 57 systematic reviews were analyzed. With the study period divided into three intervals (1998-2001, 2002-2005, and 2006-2008), the data confirmed that the number of published systematic reviews was increasing: 10 appeared in the four journals during the first 4-year interval, compared with 27 during the final 3-year interval.
However, many of the papers failed to meet established criteria for execution and reporting, and the quality appeared to be declining, the researchers reported at the 2009 AUA annual meeting in Chicago. The average rating on the 11-item AMSTAR (A Measurement Tool to Assess Systematic Reviews) instrument was only 4.8, indicating that the average study met fewer than half of the quality criteria, and the average score decreased from 5.4 for reviews published in 1998-2001 to 4.7 for those published in the most recent interval.
Some of the most common deficits were related to inclusion of studies irrespective of their methodologic quality and use of search strategies that were not appropriately comprehensive.
"We also compared the quality of systematic reviews conducted by authors having some affiliation with the Cochrane Collaboration against those by unaffiliated authors and, not surprisingly, found the former offered better methodological quality based on a two-point higher average score," Dr. Dahm told Urology Times.
Dr. Dahm and colleagues also reviewed abstracts of RCTs presented at the 2002 and 2003 AUA annual meetings and found that the majority failed to provide information necessary to assess methodologic quality. This analysis included 126 reports that were assessed by two independent reviewers using a pilot-tested instrument based on the Consolidated Standards of Reporting Trials (CONSORT) statement.
RCTs lacking in information
The appraisal revealed that all of the abstracts were missing information considered essential according to the CONSORT statement, including failure to describe how the randomization sequence was generated and whether treatment allocation was concealed. Although about three-fourths of the abstracts reported that the RCT was blinded, the information on blinding was usually not explicit enough for the reader to determine who in the study (ie, patient, investigator, outcomes assessor) was unaware of the treatment received by each participant, Dr. Dahm explained.
"RCTs are considered to provide the highest level of evidence for clinical decision making, but many such studies are published only in abstract form and never appear as full-text articles. It is critically important that these abstracts provide readers the information needed to assess the study's validity and results in order to determine their applicability to clinical practice," Dr. Dahm said.