Sometimes, people combine data that really don't belong together - conflict all over the place!
The I-squared (I²) statistic in a meta-analysis can pin it down. It measures how much of the variation between the results of different studies is more than you would expect just because of chance - that inconsistency is technically called heterogeneity. Cochrane includes a (very!) rough guide to interpreting I²: 75% or more is "considerable" (read, an awful lot!).
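For the curious, here's a minimal sketch of how I² falls out of the numbers, using the standard inverse-variance formula (Cochran's Q) and some made-up effect sizes and variances - not real trial data:

```python
def i_squared(effects, variances):
    """Cochran's Q and the I-squared statistic, as a percentage."""
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviation of each study from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1  # degrees of freedom
    # I-squared = (Q - df) / Q, floored at zero
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Five hypothetical trials with very similar results: no heterogeneity
print(round(i_squared([0.1, 0.12, 0.09, 0.11, 0.10], [0.01] * 5), 1))  # 0.0
# Five trials pointing every which way: "considerable" heterogeneity
print(round(i_squared([0.1, 0.5, -0.2, 0.8, 0.0], [0.01] * 5), 1))  # 93.9
```

The second batch of trials disagrees far more than chance alone could explain - that's the conflict the cartoon is about.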
Differences might be responsible for contradictory results - including differences in the people in the trials, the treatments they got, or the way the trials were done. Too much heterogeneity, and the trials really shouldn't be pooled together. But heterogeneity isn't always a deal breaker. Sometimes it can be explained.
Want some in-depth reading about heterogeneity in systematic reviews? Here's an article by Paul Glasziou and Sharon Sanders from Statistics in Medicine.
Or would you rather see another cartoon about heterogeneity? Then check out the secret life of trials.