How we change what others think, feel, believe and do
The effect, or effect size, is an indication of the practical importance of an experimental result.
In essence, the 'effect' is the gap between two measures, expressed as a statistical value. A big effect means the two measures are very different, not just 'different' (which is all that 'statistically significant' means).
Experimenters hence seek not only statistical significance but also a large effect. Sadly, they do not always find both in the same place.
The most common measure of effect is the Pearson correlation, r.
r = SQRT( SSM / SST)
Where SSM is the between-groups (model) sum of squares, and SST is the total sum of squares.
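As a sketch, the calculation of r from SSM and SST might look like this for two groups (the data values here are made up for illustration):

```python
import math

# Hypothetical measurements for two experimental conditions
group_a = [4.0, 5.0, 6.0, 5.5, 4.5]
group_b = [7.0, 8.0, 6.5, 7.5, 8.5]

all_values = group_a + group_b
grand_mean = sum(all_values) / len(all_values)

# SSM: between-groups (model) sum of squares -- how far each group
# mean lies from the grand mean, weighted by group size
ssm = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
          for g in (group_a, group_b))

# SST: total sum of squares -- how far every value lies from the grand mean
sst = sum((x - grand_mean) ** 2 for x in all_values)

r = math.sqrt(ssm / sst)
print(round(r, 3))  # -> 0.87, a large effect by Cohen's rules of thumb
```

Here the two groups barely overlap, so most of the total variation is explained by group membership and r comes out high.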
A slightly more complex measure, used to reduce the bias that comes from working with a sample (as opposed to the whole population), is omega, ω. This is calculated as:

ω = SQRT( (MSM - MSR) / (MSM + ((n-1) x MSR)) )

Where MSM is the model (between-groups) mean square, MSR is the residual mean square, and n is the sample size.
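A minimal sketch of the omega formula above, with hypothetical mean squares and sample size (the numbers are illustrative, not from a real study):

```python
import math

msm = 15.625  # hypothetical model (between-groups) mean square
msr = 0.625   # hypothetical residual (within-groups) mean square
n = 10        # hypothetical sample size, as used in the formula above

# omega: like r, but adjusted to reduce sampling bias
omega = math.sqrt((msm - msr) / (msm + (n - 1) * msr))
print(round(omega, 3))
```

Because MSR is subtracted in the numerator, omega is always a little smaller than the corresponding r, reflecting the uncertainty introduced by sampling.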
Cohen (1988) gives rules of thumb for interpreting the effect size r (or omega):

r = 0.1: small effect (the effect explains 1% of the total variance)
r = 0.3: medium effect (9% of the variance)
r = 0.5: large effect (25% of the variance)
Note that r is non-linear: doubling r does not double the effect. Since r squared represents the proportion of variance explained, doubling r quadruples the variance accounted for.
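This non-linearity can be seen by printing the variance explained for a few values of r:

```python
# r squared is the proportion of variance explained, so doubling r
# quadruples the variance accounted for
for r in (0.1, 0.2, 0.4):
    print(f"r = {r:.1f}  ->  variance explained = {r * r:.0%}")
# -> 1%, then 4%, then 16%
```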
A study of men and women shows a statistically significant difference in their body mass index (BMI). However, r is calculated as 0.05, which is a small effect, not considered worth reporting.

When the data is segmented by age, however, r is found to be 0.43. The statistical significance is not as high, but the effect is much greater than for the gender difference, and so it is carried forward as the main finding of the report.
An experiment can report a statistically significant result, but this only says that a difference between two conditions has been detected, not that the difference is 'significant' in the everyday sense of being 'big', 'important' or otherwise earth-shattering in any way.
It is important in reporting experiments to indicate the effect as well as the significance. The American Psychological Association (APA) now recommends that all experimental reports include an indication of effect.