Reporting results of studies
Dr Chris Cates' article discussing how to report study results, with emphasis on P-values and confidence intervals.
Key Concepts addressed:
- 2-17 Don’t confuse “statistical significance” with “importance”
- 2-16 Confidence intervals should be reported
Details
“Passive smoking may be good for you” or so the tobacco companies would like us to believe! This idea arose from a misrepresentation of the confidence interval for data on passive smoking, and provides a good example of why we need a working knowledge of some statistics to deal with the propaganda that comes our way. There has been a shift away from the use of P values towards Confidence Intervals (CIs) in many medical journals, and the British Medical Journal now expects authors of papers to present data in this way.
Confidence Intervals or P values
So what are Confidence Intervals all about, and how were they misused in this example? In general, when research is undertaken the results are analysed with two separate questions in mind. The first is: how big is the effect being studied (in this case, how big is the risk of lung cancer for passive smokers)? The second is: how likely is it that the result is due to chance alone? The two issues are connected, because a very large effect is much less likely to have arisen purely by chance, but the statistical approach used differs depending on which question you are trying to answer. The P value answers only the question “what is the chance that the study could show its result if the true effect were no different from placebo?” The Confidence Interval, by contrast, describes how sure we are about the accuracy of the trial in predicting the true size of the effect.
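To make the distinction concrete, here is a minimal Python sketch using made-up numbers purely for illustration (not real passive smoking data). It computes a relative risk with its 95% Confidence Interval and a P value from the same hypothetical 2x2 table, using the standard large-sample formulas on the log scale.

```python
import math

# Hypothetical 2x2 table for illustration only (not real passive-smoking data):
# lung cancer cases among exposed and unexposed groups.
cases_exposed, n_exposed = 30, 1000
cases_unexposed, n_unexposed = 20, 1000

risk_exposed = cases_exposed / n_exposed
risk_unexposed = cases_unexposed / n_unexposed
rr = risk_exposed / risk_unexposed  # relative risk: the size of the effect

# Standard error of log(RR) via the usual large-sample formula.
se_log_rr = math.sqrt(
    1 / cases_exposed - 1 / n_exposed + 1 / cases_unexposed - 1 / n_unexposed
)

# 95% Confidence Interval: exponentiate log(RR) +/- 1.96 standard errors.
log_rr = math.log(rr)
ci_low = math.exp(log_rr - 1.96 * se_log_rr)
ci_high = math.exp(log_rr + 1.96 * se_log_rr)

# Two-sided P value for the null hypothesis RR = 1 (z-test on the log scale),
# using the normal CDF built from math.erf.
z = log_rr / se_log_rr
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}, p = {p_value:.3f}")
```

With these illustrative numbers the interval runs from below 1 (possible benefit) to well above it (a more than doubled risk), so the data are compatible with both; quoting only the lower end of such an interval would be exactly the kind of misrepresentation described above.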
Both questions relate to the fact that we cannot know what the effect of a treatment or risk factor would be on everyone in the world; any study can only look at a sample of people who are treated or exposed to the risk. We then have to assume that if, say, one hundred identical studies were carried out in the same way on different groups of patients, the results found would be normally distributed around the average effect size of the treatment. The larger the number of patients included in the trial, the closer the results of that trial are likely to be to the true effect in the whole population. The result of any particular trial can therefore be presented as showing an effect of a certain size, and the Confidence Interval describes the range of values within which we can be 95% certain that the true value lies.
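As a rough illustration of that 95% claim, the sketch below (assumed parameters, simulated data) runs one hundred identical studies on different random samples and counts how many of the resulting 95% Confidence Intervals contain the true risk; typically about 95 of them do.

```python
import math
import random

random.seed(1)
true_risk = 0.03       # assumed true event risk in the whole population
n_patients = 1000      # patients per hypothetical trial
covered = 0

for _ in range(100):   # one hundred identical studies on different samples
    events = sum(random.random() < true_risk for _ in range(n_patients))
    p_hat = events / n_patients                      # risk observed in this trial
    se = math.sqrt(p_hat * (1 - p_hat) / n_patients) # standard error of the risk
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se # 95% Confidence Interval
    if low <= true_risk <= high:
        covered += 1

print(f"{covered} of 100 intervals contain the true risk")  # typically ~95
```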
From Dr Chris Cates, EBM Website.