Showing posts with label scientific method. Show all posts

Tuesday, 26 October 2010

I doubt, therefore I am - the limits of "skepticism"

Regular visitors to this blog will have heard me talk about how doubt is at the heart of conservatism – indeed, it is only my doubting that makes me a conservative. And I am grateful to find that the rationalist position fits well with Balfour’s philosophic doubt.

Chris Snowdon, in his Velvet Glove, Iron Fist blog, provides a link to an interview with John Ioannidis, the author of “Why Most Published Research Findings Are False”. In what seems to me a beautiful demolition of the “skeptic” obsession with scientific method as the only response to doubt, the article reports that:



He (Ioannidis) and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
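The arithmetic behind a claim like Ioannidis’s can be sketched quite simply (this is an illustration of the general argument, not his exact model, and the prior, significance threshold and power figures below are assumptions for the example): if only a small fraction of the hypotheses being tested are actually true, then even well-conducted studies produce a large share of false positive findings.

```python
# Back-of-the-envelope sketch of why many published findings can be wrong.
# The numbers (prior, alpha, power) are illustrative assumptions.

def positive_predictive_value(prior, alpha=0.05, power=0.8):
    """Fraction of 'statistically significant' findings that are actually true."""
    true_positives = prior * power           # true hypotheses correctly detected
    false_positives = (1 - prior) * alpha    # false hypotheses passing the test
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true, a well-powered study design
# still yields a substantial share of false positives.
print(round(positive_predictive_value(0.1), 2))  # prints 0.64
```

On these assumed numbers, roughly a third of “significant” results would be false – and with lower power or a thinner prior, the share of flawed findings rises quickly towards figures like the 90 per cent quoted above.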




Yet we are enjoined to believe the researchers, to accept the exclusive use of evidence rather than judgment in decision-making, and to take whatever academics place before us as truth rather than as something to be questioned and challenged – to be doubted. And challenged not merely through the self-serving and exclusionary process of peer review, but through the prism of our own understanding.



Nature, the grande dame of science journals, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.




Doubt must be universal. It should be the starting point for our decision-making, the guiding factor in how we monitor and the central principle in evaluation. As any mail order marketer will tell you, it’s all about test and learn. The science is never settled; the truth is never known.




There is only doubt.

....

Wednesday, 6 January 2010

Soft outcomes need hard measures to be acceptable in policy-making

***

Julian Dobson writes Living with Rats – you should all read it. Not only is it well-written, insightful and thoughtful but he talks about things that matter to real people outside the bureaucratic circle – why some places are crap, why regeneration is about people rather than buildings and why too often local initiative is stifled by the demands of national government, big business or great institutions. He’s also a Hammer.

However, sometimes Julian lets his writerliness get ahead of things, and this post, “Why is it so hard to be soft?”, is one example. What follows is a gentle fisk, responding to a series of questions posed by Julian. At the heart of it lies the common fallacy that economics is about money (it isn’t) and that opinion or feeling is impossible to quantify (let me introduce the Opinion Poll). Julian’s words are in italics:

It's hard to measure the quality of a person's life. But you'd have to be a bit soft to think you could do it entirely by calculating their income.

Well yes, of course – which is why we don’t measure quality of life using income alone; we use other objective measures: what we consume, our health, our living conditions, our ability to act freely. All of these are strong quantitative measures, to which we can add the tremendous contribution of market research to the quantitative measurement of opinion.
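The quantitative measurement of opinion is standard survey arithmetic, not hand-waving. As a minimal sketch (the sample sizes and proportions here are invented for illustration), the margin of error on a polled proportion follows directly from the sample size:

```python
# Standard 95% margin of error for a polled proportion - the arithmetic
# that makes opinion a quantitative measure. Figures are illustrative.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people finding 50% support is accurate to about
# plus-or-minus 3 percentage points.
print(round(100 * margin_of_error(0.5, 1000), 1))  # prints 3.1
```

The point is that “what people think” comes with a known, calculable precision – which is more than can be said for an anecdote.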

It's hard to assess the success of a policy against a range of desirables, such as economic output, sustainability and equality. But you'd have to be a bit soft to think the one that's easiest to measure numerically is by definition a good proxy for all the others.

No it’s not – so long as we set clear, quantifiable outcomes (and such exist for economic output, sustainability and equality), measurement is merely a matter of collecting and assessing the necessary data. If a policy does not have, say, equality objectives, then to measure it against such “objectives” is mistaken and bad research – a classic flaw in policy evaluation is that the scope of the evaluation is set in hindsight, without regard to the original intentions behind the policy.

It's hard to measure how worthwhile and rewarding a person's job is, and harder still to measure it across a workforce. But you'd have to be a bit soft to think that the net number of jobs was more important than the sense of self-worth, motivation and aspiration people feel in doing them.

It seems obvious that, for the quarter of the working age population without a job, the “net number” of jobs (though I’m not sure what it’s netted from) is of crucial importance. And the knowledge that a job, if they had one, would be worthwhile and rewarding ain’t much comfort!

It's hard to calculate the value of a cultural asset like a theatre or art gallery or public artwork or live music venue. But you'd have to be a bit soft to think you could do it just by bums on seats or spending on merchandise and tickets.

There are real measures of cultural value beyond ticket sales, market price or merchandise. Crucially, these approaches seek either to capture the opportunity cost (life without the cultural asset) or to put a price on culture. Simply saying “oo, isn’t it lovely” doesn’t wash!

It's hard to calculate how people feel about where they live and how involved they are in their communities. But you'd have to be a bit soft to imagine the price of houses or the proportion of people voting will tell you the whole story.

First, house prices are a nice proxy for what people generally think of a place – that thinking may not reflect everything, but it does reveal where people want to live! We can measure involvement (and be “shocked” at how low it is), we can use self-referenced definitions of community, and we can use good old opinion research to ascertain what people think about where they live.

...

The point of all this isn’t to dismiss qualitative “research” – it has its place if you want to frame a question, get an instant response to an idea, provide some context to quantitative work or guide tactical decision-making. But qualitative research must not lull us into rejecting scientific method: the testing of hypotheses through repeatable experiments whose results are quantitatively comparable.
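That definition – repeatable experiments with quantitatively comparable results – can be sketched concretely. Below is a minimal permutation test comparing two groups; the data values are invented for illustration, and the fixed random seed is what makes the “experiment” exactly repeatable:

```python
# Minimal sketch of testing a hypothesis with a repeatable experiment:
# a permutation test on two groups. Data values are invented; the fixed
# seed makes the result reproducible run after run.
import random

def permutation_test(a, b, trials=10_000, seed=42):
    """Estimate how often a random relabelling beats the observed mean gap."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials  # p-value: small means the gap is unlikely by chance

treated = [5.1, 4.9, 5.6, 5.8, 5.3]
control = [4.2, 4.5, 4.1, 4.8, 4.4]
print(permutation_test(treated, control) < 0.05)  # prints True
```

Anyone running this gets the same number – which is precisely what separates a tested hypothesis from an opinion, however eloquently the opinion is expressed.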

...