24 October 2010

I Need Someone to Translate Statisticianese

I'm researching for an article I'm writing and seeking an answer to whether pharmaceutical advertising leads to more pill distribution. From the amount of money they spend on it, it's pretty obvious that pharmaceutical firms believe it does, but I was looking for empirical data (or at least someone explaining the empirical data in terms I can understand). I found an article on an Indian news site and another from Reuters, both talking about a new study that reviewed a bunch of other studies and indicated that doctors "prescribe more expensively, less appropriately and more often" because of pharmaceutical advertising. However, I was a little suspicious because no US news sites seemed to have picked this up, and I prefer original sources anyway, so I looked up the study.

As best I can tell, the article's conclusion is that it is a definite possibility that pharmaceutical advertising might perhaps have caused increased cost and decreased quality in prescriptions (maybe). There is "some evidence of increased costs and decreased quality of prescribing." So I dug into the article itself, trying to find the basis of the two news reports, and I think this portion may be what they focused on (primarily because Reuters cites the same numbers found in this paragraph, albeit probably terribly misconstruing them):
Of the 58 studies included in this review, 38 studies reported a single unit of analysis with 25 (66%) finding significant associations between exposure to information from pharmaceutical companies and the quality, frequency, and cost of prescribing and eight (21%) finding no associations. The remaining five (13%) had multiple measures and found significant associations on some measures but not on others. The 20 studies with more than one unit of analysis reported 49 units of analysis of which 21 (43%) found significant associations, 24 (49%) found no associations, and four (8%) found mixed results. The difference between the results of the single versus multiple unit of analysis studies is significant (p<0.05 Freeman-Halton extension of the Fisher exact test). This difference may have been caused by publication bias against publication of single unit of analysis studies when no association was found. We believe the pattern of results suggests that there was little or no reporting bias for the multiple unit of analysis studies. Because the multiple unit of analysis studies found no association more often than the single unit of analysis studies, multiple mentions of the former studies in our narrative synthesis will not exaggerate the frequency of findings of significant associations.
Okay, I need some translation for the following terms: "single unit of analysis," "multiple measures," and "multiple units of analysis." I have in my head what I think those mean, but I shan't give my thoughts here because I'd rather have someone explain them fresh than tip-toe around any misunderstanding I might have.

As well, am I right in understanding that the authors applied their own subjective, unsupported bias to deprecate the "single unit of analysis" results?

1 comment:

Windypundit said...

It looks like no one actually qualified has answered, so for whatever it's worth...

When Reuters says, "Thirty-eight studies showed that exposure to drug company information resulted in more frequent prescriptions, while 13 did not have such an association," I think they are using part of the "Methods and Findings" section in the Abstract: "38 included studies found associations between exposure and higher frequency of prescribing and 13 did not detect an association. Five included studies found evidence for association with higher costs, four found no association," and not the text you quote, which is about publication bias.

In a meta-analysis like this one, you have to make sure that the method for choosing studies is unbiased. If you base your survey on the published literature, however, you are implicitly excluding unpublished studies. Since journals are more likely to publish a paper that finds an effect than one that doesn't, that introduces bias. Larger and more complicated studies, on the other hand, are likely to be published whether or not they find anything.
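If it helps to see the mechanics, here's a toy simulation in Python (everything in it is made up for illustration; none of it comes from the paper). It generates studies of a drug effect whose true size is zero, preferentially "publishes" the significant ones, and shows how the published record overstates the evidence:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Toy model: 1,000 studies of an effect whose true size is zero,
    # each comparing two groups of 50 subjects drawn from the same
    # distribution, so every "significant" result is a false positive.
    n_studies, n_per_group = 1000, 50
    all_p, published_p = [], []
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue
        all_p.append(p)
        # Publication filter: significant results always get published;
        # null results make it into print only 20% of the time.
        if p < 0.05 or rng.random() < 0.2:
            published_p.append(p)

    all_p, published_p = np.array(all_p), np.array(published_p)
    print(f"significant among all studies run:   {np.mean(all_p < 0.05):.1%}")
    print(f"significant among published studies: {np.mean(published_p < 0.05):.1%}")

The first number stays near the 5% false positive rate, but the second climbs to roughly 20%: a meta-analysis that can only see the published studies would conclude something real is going on.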

The authors are saying they found that complex studies which examined more than one effect (multiple units of analysis) found less of an effect on prescribing than simpler studies which examined only one effect. Since the number of effects in the report is an attribute of how the study is written, not of the underlying data, both groups of studies should have similar results. That they did not implies that the simpler studies were subject to greater publication bias, biasing the meta-analysis in favor of finding an effect.
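To put numbers on that, the paper's p<0.05 comes from the Freeman-Halton extension of Fisher's exact test on the 2x3 table of outcomes. I don't have a ready-made function for that exact test, so here's a sketch (my own reconstruction, using the counts from the paragraph you quoted) that approximates it two ways, once with the usual chi-square approximation and once with a permutation test:

    import numpy as np
    from scipy import stats

    # Counts from the quoted paragraph.
    # Rows: single-unit studies, multiple-unit analyses.
    # Columns: significant association, no association, mixed results.
    table = np.array([[25,  8, 5],
                      [21, 24, 4]])

    # Large-sample approximation to the exact test on the 2x3 table.
    chi2_obs, p_approx, dof, _ = stats.chi2_contingency(table)
    print(f"chi-square approximation: chi2 = {chi2_obs:.2f}, p = {p_approx:.3f}")

    # Monte Carlo stand-in for the Freeman-Halton exact test: shuffle
    # the 87 outcome labels between the two groups many times and count
    # how often a split at least as extreme (by chi-square) turns up.
    rng = np.random.default_rng(0)
    col_totals = table.sum(axis=0)              # 46 sig, 32 none, 9 mixed
    labels = np.repeat([0, 1, 2], col_totals)
    n_single = table[0].sum()                   # 38 single-unit results
    n_perm, extreme = 10_000, 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        row1 = np.bincount(labels[:n_single], minlength=3)
        perm_table = np.array([row1, col_totals - row1])
        chi2, _, _, _ = stats.chi2_contingency(perm_table)
        extreme += chi2 >= chi2_obs
    print(f"permutation estimate of the exact p-value: {extreme / n_perm:.3f}")

If the paper's arithmetic is right, both estimates should come in below 0.05. The precise p-value isn't the point; the point is that the single-unit and multiple-unit groups really do split differently, which is the anomaly the authors attribute to publication bias.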

I believe the authors are deprecating the single-unit studies out of a desire to be conservative in their conclusions. Five of the authors are associated with Healthy Skepticism, an advocacy group aimed at "Improving health by reducing harm from inappropriate, misleading or unethical marketing of health products or services, especially misleading pharmaceutical promotion." By using only the most conservative data, I think they hope to avoid accusations of bias.

I think both news articles overstate the study's conclusions. It seems inconceivable to me that drug marketing has no effect on the frequency, cost, or quality of prescribing. Indeed, the authors of this study found that most studies reported either a bad effect on frequency, cost, or quality, or no effect at all (almost none detected an improvement), which seems to imply there really is an effect.

However, the authors' conclusion says otherwise: "The limitations of studies reported in the literature mentioned above mean that we are unable to reach any definitive conclusions about the degree to which information from pharmaceutical companies increases, decreases, or has no effect on the frequency, cost, or quality of prescribing." I guess they did the math and decided they couldn't quite get there.

Nevertheless, this is good enough for their purposes. They wanted to address drug manufacturers' claims that marketing improves the quality of prescribing by keeping doctors better informed. While they cannot conclusively rule out that possibility, they were not able to detect any improvement in prescribing. Therefore, they conclude that doctors should ignore promotional materials because they don't seem to help.

Obviously, I was kind of fascinated by this study, but I could certainly be misreading it, and I'm in no position to say whether their data, methods, or conclusions are scientifically valid or reasonable. I'm just telling you what I got from it.

I assume you are looking for a link between drug marketing and drug diversion. My guess is that if you contact the authors through the Healthy Skepticism site, they might be willing to give you more information and pointers to good studies since your interests seem to mesh with theirs.