Back in the 1970s a man called Richard Webber was working at the UK government's Centre for Environmental Studies. This, more or less, was what he was doing:
"I created Acorn, the first neighbourhood classification system, while working at the government's Centre for Environmental Studies in the 1970s."

Without wanting to come over all geeky, Webber had used census data combined with the electoral register to create a classification of small neighbourhoods (based on census enumeration districts, the base geography for the national census). The principle of Webber's classification is essentially that 'birds of a feather flock together': people in Manchester who have the same census characteristics as people in Bristol will tend to have other similar behavioural characteristics including, of course, purchasing preferences.
People ask how it came to be a commercial application. I had organised a seminar for local authorities to show how neighbourhood data could identify areas of deprivation. Quite by chance, it attracted Ken Baker, a sampling specialist from BMRB. He was a lateral thinker and he came up with the idea that the Acorn tool would be useful for market research sampling.
I meanwhile had realised that Acorn could help predict the households most likely to respond to direct mail and door drops.
I know you're asking what all this has to do with Brexit (or indeed voting in general). The thing is that Webber's classification of residential neighbourhoods doesn't just capture similarities in how people might respond to different marketing offers; the same system applies to how we vote. If the 'birds of a feather' principle is correct then similar areas will have similar voting behaviour.
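To make the 'birds of a feather' idea concrete, here is a minimal sketch in Python of how a census-based neighbourhood classification of this general kind can be built with off-the-shelf clustering. The variables, district counts and number of clusters are invented for illustration; this is not a reconstruction of Webber's original method, which long predates these libraries.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical census variables per enumeration district, e.g. % owner-occupied,
# % with a car, % in professional occupations, average household size.
rng = np.random.default_rng(0)
n_districts, n_variables = 10_000, 4
census = rng.random((n_districts, n_variables))

# Standardise so no single variable dominates, then cluster districts into a
# small number of neighbourhood types - the 'birds of a feather' step.
X = StandardScaler().fit_transform(census)
neighbourhood_type = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(X)

# Every district now carries a type code; districts in Manchester and Bristol
# that fall into the same cluster are treated as alike for targeting purposes.
print(np.bincount(neighbourhood_type))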
By the end of the 1980s, we were combining geodemographics (as Webber's system became known) with proprietary data on customers to refine the targeting of direct marketing campaigns, to improve the selection of retail sites and to manage advertising better. Without wanting to overcomplicate, customer address data was given an ACORN code and then profiled using, initially, a simple index (where 100 = National Average). The indices were reported at the level of the postal sector (e.g. BD11 2 or ME2 7) as these were large enough to give the index validity but small enough to allow fine-grained targeting.
For a targeted door drop we might then take the top 300 postal sectors and, through the Royal Mail, purchase a delivery of unaddressed mail. Typically, results showed an uplift in response of 2x or 3x depending on the client. Results were weaker for targeted direct mail using the electoral register, mostly because a key behavioural characteristic, responsiveness, was not captured in census data. We played other games using expert systems and the early days of data mining too - it was just a lot slower back then!
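For concreteness, here is a rough sketch in Python of the kind of profiling and ranking described above: an index per neighbourhood type (100 = national average), a household-weighted score per postal sector, and a ranked list from which to pick sectors for the door drop. The type codes, sector names and figures are invented for illustration; the original work was done on mainframes against customer files and census counts, not with pandas.

import pandas as pd

# 1. Profile the customer file against the national base to get an index per
#    neighbourhood type (100 = national average propensity). Figures are made up.
customers_by_type = pd.Series({"A": 1200, "B": 3400, "C": 900, "D": 500})
households_by_type = pd.Series({"A": 2.1e6, "B": 3.9e6, "C": 4.5e6, "D": 3.0e6})
type_index = ((customers_by_type / customers_by_type.sum())
              / (households_by_type / households_by_type.sum()) * 100)

# 2. Score each postal sector by the mix of neighbourhood types it contains
#    (a household-weighted average of the type indices).
sector_households = pd.DataFrame(
    {"A": [800, 50, 300], "B": [2200, 400, 900],
     "C": [100, 1800, 700], "D": [60, 1200, 500]},
    index=["BD11 2", "ME2 7", "LS8 1"])
weights = sector_households.div(sector_households.sum(axis=1), axis=0)
sector_score = (weights * type_index).sum(axis=1)

# 3. Rank the sectors and keep the best for the unaddressed-mail drop
#    (the post talks about taking roughly the top 300 sectors nationally).
top_sectors = sector_score.sort_values(ascending=False).head(300)
print(top_sectors)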
The generation of geodemographic systems that followed the original census-based ACORN system (e.g. SuperProfiles, MOSAIC) began to add in other large data sources such as credit data and 'psychographic' survey data - some may recall the mailed questionnaires, incentivised with free prize draws, that collected this information. Targeting extended beyond shared residential characteristics to details about financial services, indebtedness and preferences around holidays, cars, FMCG products and lifestyle choices (smoking, drinking, gambling, etc.).
Unsurprisingly, political parties began to make use of these systems to improve the targeting of election campaigning, especially in areas where they lacked good quality 'voting intention' (VI) data. The same essential methodology was used as the one I was using for mail order and financial services companies: a database of VI details was profiled against MOSAIC to provide improved campaign targeting in places with no canvass. This might be used on a national basis to decide which local council by-elections to target, or at the constituency level to improve the effectiveness of limited resources, thereby allowing the broadening of target seat campaigns.
Which I guess brings us to this conclusion from an article about a marketing analytics business that may or may not have been active during the recent EU Referendum:
"Is it the case that our elections will increasingly be decided by the whims of billionaires, operating in the shadows, behind the scenes, using their fortunes to decide our fate?"

To appreciate why this is unlikely, we need to go back to how marketing analytics work, which is at the aggregate level. I appreciate, as a marketing professional, how we all want to believe advertising has become like the opening scenes in Minority Report, but the reality is that aggregate data really doesn't provide the means to manipulate what we think. Rather, geodemographics, psychographics and marketing analytics enable us marketers to better target those people who are already predisposed to buy our product (or vote for our cause).
The big change from the stuff we were doing with mag tapes and mainframe computers in 1989 is the availability of information from social media (most usually Facebook). Again this is aggregate data - Facebook doesn't sell your details to marketing analytics businesses - but the sophistication of the data makes the targeting of messages all the more precise, as does the ability to analyse public profiles without Facebook's permission. But even with this level of analysis, we still haven't 'manipulated' your opinion, merely targeted our message more precisely to people more likely to respond positively to that message.
We are right to express concerns about whether the use of analytics has crossed over into misuse of personal data, and it appears the UK's Information Commissioner is doing just that. But the likely truth (given it is not in Facebook's interest to share personal data, and doing so is illegal in the EU) is that analytics companies are simply doing just what we were doing as direct marketers 30 years ago - using information to target our messages a little better. Back then it was all seen as a bit sinister ('where did you get my name from...') and nothing has changed except that there's more information, faster computers and social media. You can always find a computing academic (note, not a marketer) to do the full-on evil empire stuff:
"A rapid convergence in the data mining, algorithmic and granular analytics capabilities of companies like Cambridge Analytica and Facebook is creating powerful, unregulated and opaque ‘intelligence platforms’. In turn, these can have enormous influence to affect what we learn, how we feel, and how we vote. The algorithms they may produce are frequently hidden from scrutiny and we see only the results of any insights they might choose to publish."

The truth is a deal more prosaic. Marketers ask customers about their lives, social media use and so forth, then profile this against aggregate data from various sources to produce targeting information - where, geographically or behaviourally, we can go looking for folk like those customers we've surveyed. I so want us marketers and ad men to be master manipulators, able to switch your mind at the push of an analytical button or the twitch of an algorithm. But we're not like that at all; we're not even that good at using big data.
.....
1 comment:
Your link to the last quote is broken; I think the article is this one:
https://www.theguardian.com/technology/2017/mar/04/cambridge-analytics-data-brexit-trump
Where the quote comes from:
"Dr Simon Moores, visiting lecturer in the applied sciences and computing department at Canterbury Christ Church University and a technology ambassador under the Blair government"
Turns out that describing this guy as "a computing academic (note, not a marketer)" might be a bit of a stretch.