Monday, 21 January 2013

How to lie with statistics - asthma special


Today the BBC - and assorted other nannying fussbuckets - have been slavering over a report that claims to show how the smoking ban resulted in a massive drop in children admitted to hospital with asthma:

There was a sharp fall in the number of children admitted to hospital with severe asthma after smoke-free legislation was introduced in England, say researchers. 

A study showed a 12% drop in the first year after the law to stop smoking in enclosed public places came into force.

They are using statistics (this is perhaps spelled L-Y-I-N-G):

Yes folks, that's their evidence - evidence that contradicts what Asthma UK say about the problem:

Asthma UK said the number of emergency admissions had remained unchanged for a decade 

(Yes folks, the BBC reported that)

One day a BBC journalist will actually do his or her job and ask some questions, poke a bit at the evidence - anything other than simply reprinting the lies contained in the nannying fussbuckets' press release.



Anonymous said...

It's quite chilling that MSM rarely questions, let alone investigates, what is really going on. Though I guess that applies to many issues these days. Perhaps it always has.

Anonymous said...

A 12% reduction is not enough to assume cause and effect.

A 12% reduction would be an RR of 0.88, which is much greater than 0.5!!!

For these reasons most scientists (which includes scientifically inclined epidemiologists) take a fairly rigorous view of RR values. In observational studies, they will not normally accept an RR of less than 3 as significant and never an RR of less than 2.

Likewise, for a putative beneficial effect, they never accept an RR of greater than 0.5

Gary K.

Anonymous said...

No comment from the stat-fiddlers on the absence of any peak in childhood asthma 50 years ago, when more than half the population smoked tobacco pretty much everywhere. Funny, that!

Anonymous said...

Dear Gary,

There was no 12% reduction anyway. There was never any reduction, since the figures were all modelled!

The authors built a model which calculated an increase in RR of 1.02 per annum over a period of 5 years (using weekly data, strongly biased by a large spike in the final year), extrapolated this to an expected number of admissions for the following year, and then found only an 8.6% reduction from that expected figure.

This is just absolute garbage! It isn't even a misuse of low RR numbers, but using a pathetic RR to estimate what the number should be, and claiming credit for the reduction! How did it get published?

Worse, their institution then bundled the next year's figure together with the estimated reduction for the subsequent year to come up with the 12% reduction - of completely imaginary people.
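To see how a "drop" can be conjured out of flat admission figures, here is a minimal sketch of the trick described above. All numbers are invented for illustration (the study's actual baseline counts aren't in the post); only the 1.02 annual RR comes from the comment:

```python
# Hypothetical illustration: comparing unchanged observed admissions
# against a model-projected "expected" figure manufactures a "reduction".
# All counts below are invented; only the 1.02 annual RR is from the study.

def expected(baseline, annual_rr, years):
    """Project admissions forward assuming a fixed annual risk ratio."""
    return baseline * annual_rr ** years

baseline = 1000      # hypothetical admissions per year, flat in reality
annual_rr = 1.02     # modelled upward trend of 2% per annum

# Year 1 after the ban: admissions unchanged, but the model expected growth,
# so a "drop" appears relative to the projection.
observed = 1000
drop_y1 = 1 - observed / expected(baseline, annual_rr, 1)
print(f"Reported 'drop' in year 1: {drop_y1:.1%}")   # ~2.0%, with zero real change

# Compounding the projection over a further year inflates the headline
# figure even though admissions never actually fell.
drop_y2 = 1 - observed / expected(baseline, annual_rr, 2)
print(f"Reported 'drop' in year 2: {drop_y2:.1%}")   # ~3.9%
```

The gap between observed and "expected" grows every year the projection is compounded, which is how bundling a second year's estimate pushes a modest modelled shortfall towards a headline double-digit reduction.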

I would say that you couldn't make this up, but somebody actually did.