Saturday, 18 June 2016

Why we don't need an Evidence Information Service (but do need better access to evidence)


Whenever I think about evidence-based policy making and the use of data by government, my mind turns to an article by Dr Vince-Wayne Mitchell:

Demographic segmentation variables are cheap and easy to measure, while psychographic variables are more expensive and harder to measure, but can provide more insight into consumers’ psychology. Suggests that a prima facie case exists for the suitability of astrology as a segmentation variable with the potential to combine the measurement advantages of demographics with the psychological insights of psychographics and to create segments which are measurable, substantial, exhaustive, stable over time, and relatively accessible. Tests the premise empirically using results from a Government data set, the British General Household Survey. The analyses show that astrology does have a significant, and sometimes predictable, effect on behavior in the leisure, tobacco, and drinks markets. Discusses managerial implications of the results in terms of market segmentation and promotion.

Dr Mitchell is a very highly regarded researcher in consumer behaviour and marketing and I've no idea whether he believes in horoscopes or not. But what these results tell us is that we should treat the findings of research studies with a degree of caution. Just because it's badged as science doesn't mean that it's right, or that there isn't some other research telling us something entirely different, even opposite. Indeed we know that sometimes supposedly evidence-based policy is anything but:

In 1980, after long consultation with some of America’s most senior nutrition scientists, the US government issued its first Dietary Guidelines. The guidelines shaped the diets of hundreds of millions of people. Doctors base their advice on them, food companies develop products to comply with them. Their influence extends beyond the US. In 1983, the UK government issued advice that closely followed the American example.

The most prominent recommendation of both governments was to cut back on saturated fats and cholesterol (this was the first time that the public had been advised to eat less of something, rather than enough of everything). Consumers dutifully obeyed. We replaced steak and sausages with pasta and rice, butter with margarine and vegetable oils, eggs with muesli, and milk with low-fat milk or orange juice. But instead of becoming healthier, we grew fatter and sicker.

We see this problem repeated time and time again - governments enact legislation that draws on the scientific evidence, on the environment (diesel cars), in health (vaping), in criminal justice (tagging) and in farming (agricultural protection), only for the results to be sub-optimal or else for evidence to emerge showing the policy to be plain daft. In part this is because evidence isn't definitive - in the case of fat, there was a different set of scientists, whom government ignored, saying we should look instead at carbohydrates and sugar. And the same is true of vaping - the experts used by the World Health Organisation and European Union to support severe restrictions on vaping and the sale of vaping products are countered by another set of experts with a very different position, who argue that vaping should be encouraged by public health, not controlled.

The problem here is that these experts move from being producers of evidence to recommenders of policy. After all, when researchers write up their findings for publication they will be expected by the journal (and probably their funders) to comment on the implications of their findings as well as to describe limitations and the areas where further research should focus. The findings, however, don't necessarily tell us what the policy should be - we may discover, for example, links between high levels of sugar consumption and type-2 diabetes, but this doesn't mean we have to introduce a soda tax. Other policy solutions - or none - are available and just as valid.

This brings me to the proposal set out by Chambers et al in The Guardian, for a new Evidence Information Service:

Our idea is to create a hub for connecting a broad network (hive-mind) of UK scientists and researchers with the political community. At present, the knowledge and expertise of more than 150,000 UK scientists and academics is being underutilised. To ensure the smartest possible democracy we need to create the largest active network of engaged scientists and researchers in the world, and then we need to use it.

Superficially this seems a great idea - make it easy for politicians (and a slightly sinister grouping termed "policy-makers", who presumably aren't necessarily politicians) to connect with the academic and research body of knowledge. The authors go on to observe that there's an imbalance in the information available - ministers have more policy-making resource than a back bench MP or opposition spokesperson. And, as the leader of an opposition group on a large metropolitan council, I can confirm that this is true. The question is whether creating this Evidence Information Service would really improve the way in which policies are decided.

I think not. Indeed, I think there are genuine risks in the proposal were it to be implemented.

Firstly, the connection made for the policy-maker isn't a connection to the evidence but is a connection "with specialist experts in that field". Now it may be that these experts simply hand across their evidence to the policy-maker who goes off to craft his policy on painting bus lanes green or whatever. Or it could be that the policy-maker asks the researcher what his or her policy prescription is rather than just for evidence. It's also likely that confirmation bias kicks in - a left wing policy-maker might seek out or prefer evidence from a sympathetic source while ignoring evidence from a source that seems counter to that policy-maker's ideology (this, of course, applies to conservative policy-makers equally).

The second problem is that the proposed Evidence Information Service creates valorised and non-valorised evidence. So evidence provided by the new service is 'good' evidence whereas evidence from outside that system is not to be trusted. And because the Evidence Information Service is entirely about institutional researchers (academics, in effect), the value of independent research and evidence from outside that sphere is undermined. By way of example, the proposed new service excludes commercially procured research, good quality journalism (including blogs and websites), many think tanks and opinion polling. There is an assumption that only those "150,000 UK scientists and academics" are a valid source of evidence for policy-making.

Next we have the problem of ideology. By this I don't mean socialism vs neoliberalism but rather that policy decisions are informed by ideological issues. To use vaping as an illustration, we can see two distinct ideological positions within public health and tobacco control. To simplify a little, these are essentially Harm Reduction and Gradual Prohibition. This divide applies throughout, to policy-makers and to researchers. No amount of evidence showing how vaping reduces harm will persuade someone ideologically committed to Gradual Prohibition as the purpose of tobacco control. We can carry this issue across into any area of public policy - we encouraged diesel engines because they had lower carbon emissions but now pay a price as those engines have a negative impact on urban air quality and health. In the proposed Evidence Information Service there is no safeguard, no means of knowing whether research is ideologically framed (or, more likely, how that research is ideologically framed). And we can't assume that researchers - especially in social science fields - don't fall foul of confirmation bias or ideological preference.

Lastly we need to get some idea as to what we mean by evidence (knowing from the start that this is a very contested idea). On the one hand we have data, which is just that - a great pile of information that, if we're not careful, throws up evidence along the lines of the Mitchell research I opened with. Without some form of analysis data is pretty useless, but which is the better route - giving policy-makers the tools to interrogate the data, or having that interrogation mediated by (possibly biased, perhaps ideological) academic researchers? We then have - especially since it is social science research that will dominate policy-maker enquiry - the issue of 'soft' evidence. Is a qualitative study gathering the views of 50 teachers on in-class discipline more or less valid than a big analysis of Ofsted reports from 1,000 primary schools? And what about discussion, op-ed and speculation - where do these fit in? They're important to academia, but do they form part of the evidence base?

I applaud the attempt to get better data, information and evidence in front of those who design, decide and implement public policy, but I don't think that an Evidence Information Service as described is the way to proceed. If the issue is asymmetric access to evidence, surely the answer is to get more open data and better (easier to use, perhaps) tools for using that data. I also worry that the Evidence Information Service would create a different asymmetry in access to and use of evidence by valorising only academic evidence. Finally, governments can and do lean too heavily on evidence in decision-making, often - as anyone familiar with England's Local Plan process would attest - to the point of sclerosis or even stasis.

Policy-making will always be a balance between having enough evidence and the need to act. It has always been something of a cop-out to say "we need more evidence" when you actually need to do something. And the interest of voters will always trump evidence in the minds of people who are elected by those voters - this is why there's no comprehensive review of London's 'green belt' and why Australia scrapped its carbon tax. Finally, we need to remember - the diesel engines issue again - that different policy areas conflict, and the policy decision is not simply a matter of looking at one set of evidence but rather at a series of sets that can point to radically different decisions.

There's a good case for better connections between researchers and the real world - not just politicians but business people, writers, charities and schools - but this proposal doesn't achieve this outcome and would create a service with limited access. Far better would be to negotiate a public library licence with publishers, making all that evidence - and the search tools needed to use it - available in every community and for every person.


1 comment:

Chris Oakley said...

Very well argued, Simon. I believe that, in the current climate, a centralised evidence information service would probably increase the trends towards activist "experts" and policy-based evidence.

It is in principle a good idea because part of the problem is the lack of joined up thinking that produces "solutions" with unintended consequences, some of which you highlight above. However, I believe that it cannot work without far reaching and fundamental reforms to both government and the academic establishment. We are living in an era in which honesty, integrity, accuracy and objectivity appear to be considered unnecessary by many policy makers, elected or otherwise. The cult of the "expert" is anything but healthy in an increasingly counter-Enlightenment culture.

One thing that you didn't mention in your post was the overwhelming desire of most of the population for the "policy makers" to be told to **** off and get real jobs. I admit that my evidence is anecdotal but I firmly believe that the majority of UK citizens favour a lot less government and a lot less interference in their lives by self styled "policy makers".