Tuesday 20 January 2015

All-population approaches don't work - even for national security


We are familiar with the use of all-population strategies to respond to public health challenges - smoking, drinking, eating too much and so forth. The argument is essentially that, if enough people are engaged in a harmful activity, then it is worthwhile targeting the whole population - thus we impose high duty rates on alcohol, people call for sugar or soda taxes, and we place constraints on the supply of the product (up to and including a total ban).

The problem is illustrated by the apparent success of such a strategy with the UK's alcohol consumption - this has fallen by around a fifth over the past decade as a result of much higher duty rates, extensive education measures and high-profile campaigns against binge drinking from all the public agencies. So the strategy is a success? It seems not, because if the main intended gain is a health gain, then the strategy has failed:

The number of hospital admissions for alcohol related harm increased by 47%—representing an increase of more than 800 each day—over the five years between 2004 and 2009, show latest figures for England.

Put simply, we saw a continuing rise in alcohol-related ill-health during a period when overall alcohol consumption was falling. The all-population approach had failed (at least in terms of health outcomes - it seems to have been effective in terms of crime). The same problem appears when we look at other issues - overall calorie consumption and broad measures of obesity are static or falling, yet we are seeing rises in chronic conditions related to obesity.

So it should come as no surprise to discover people questioning all-population strategies in other areas of policy:

Surveillance of the entire population, the vast majority of whom are innocent, leads to the diversion of limited intelligence resources in pursuit of huge numbers of false leads. Terrorists are comparatively rare, so finding one is a needle in a haystack problem. You don't make it easier by throwing more needleless hay on the stack.

We cannot use lots of computer power to scan and analyse billions of pieces of data as a substitute for the dull job of gathering intelligence, targeting known or likely suspects and then conducting specific surveillance. Just as with health - where we should target resources at those who actually have a problem - the growing security infrastructure has little real impact on the prevention of terrorism or crime.

The problem is that, just as blanket restrictions on booze or boozing appeal to a certain sort, the introduction of all-population surveillance provides a comfort blanket for a nervous population. Confiscating nail clippers and half-empty water bottles from tourists boarding a flight to Malaga at a regional airport contributes nothing to the fight against terrorism or organised crime, but it does give the impression that the authorities are deeply concerned about these problems (and are doing something).

It cheers me (although it does not explain why government persists with all-population measures) that there's a mathematical explanation of why mass surveillance is a lousy way to fight terrorism.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

And if it is true for terrorists, then it is true for anything where the proportion of the population we need to target is small. Or so it seems, at least to someone with only a basic understanding of Bayes' Theorem. So the answer should be to deal with the actual problem rather than bothering the whole population in the vain hope that a blanket approach will work. It seems it won't.
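To see how brutally the arithmetic works against mass surveillance, here is a back-of-the-envelope sketch of Bayes' Theorem in Python. Every number in it is an illustrative assumption of mine, not a real figure - a population of 60 million, 3,000 actual terrorists, and a screening system that is right 99% of the time in both directions, which is far more generous than anything real:

    # Back-of-the-envelope Bayes' Theorem check on mass surveillance.
    # All the numbers below are illustrative assumptions, not real figures.

    def prob_guilty_given_flagged(base_rate, sensitivity, false_positive_rate):
        """P(terrorist | flagged), via Bayes' Theorem."""
        p_flagged_and_guilty = base_rate * sensitivity
        p_flagged_and_innocent = (1 - base_rate) * false_positive_rate
        return p_flagged_and_guilty / (p_flagged_and_guilty + p_flagged_and_innocent)

    population = 60_000_000   # roughly the UK
    terrorists = 3_000        # a deliberately generous guess
    base_rate = terrorists / population

    p = prob_guilty_given_flagged(base_rate, sensitivity=0.99, false_positive_rate=0.01)
    print("P(terrorist | flagged) = {:.2%}".format(p))   # roughly 0.5%

Even on those generous assumptions, around 600,000 innocent people get flagged alongside roughly 3,000 genuine suspects - some 200 false leads for every real one, which is the 'needleless hay' problem expressed in arithmetic.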


1 comment:

Junican said...

I liked the example given at the end of the Wikipedia explanation of Bayes' Theorem, which I found easy to understand. The key to understanding is that you MUST know the prior probability. In that example, 1000 people are randomly selected for a drugs test. The important prior knowledge is that 5 of them are drug users, but you don't know which ones. The test is 99% accurate, which we would all say is very accurate. What would be the probability that any individual who tested positive was in fact a drug user? Our natural reaction would be to say, "Well....Erm....99%, I suppose". In reality, the probability is around 33%. That is, there is a 67% chance that he would not be.
There is no need to do the full maths! The probability revolves around the 1% inaccuracy of the test. 1% of the 995 non-users is about 10 false positives, while the 5 real users give about 5 true positives. So of the roughly 15 people who test positive, only about 5 are actually users - around 33/100.

Clever stuff.
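For anyone who does want to check the maths, the same calculation takes only a few lines of Python, using the numbers from the comment above (5 users in a random sample of 1,000 and a test that is right 99% of the time either way):

    # Quick check of the drug-test example: 5 users among 1,000 people,
    # with a test that is correct 99% of the time for users and non-users alike.
    users, non_users = 5, 995
    true_positives = 0.99 * users         # ~4.95 users correctly flagged
    false_positives = 0.01 * non_users    # ~9.95 non-users wrongly flagged
    p_user_given_positive = true_positives / (true_positives + false_positives)
    print("P(user | positive test) = {:.0%}".format(p_user_given_positive))   # about 33%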