Usually, getting more accurate data is a positive thing. But what do you do when better data means smaller numbers, and your management has come to see big numbers as the sign of success?
Accurate data should always be encouraged in communication measurement and evaluation. Learning and growing is an important part of our evolution as a profession and that means constantly aiming for a better data set.
A common problem raises its ugly head, however, when structural reward systems in organisations are geared towards ‘big’ rather than ‘accurate’. Then communication measurement becomes a numbers game and the emphasis shifts from quality to volume.
I’ll give you three quick examples from my own past:
1. Sorting media mentions by media relevance
For many organisations, not all media coverage is equally important. Some mentions appear in outlets that are critical to the organisation’s stakeholders and management, and therefore shape its image and brand. Others appear in outlets that have no impact on the organisation’s stakeholders at all – and are therefore of little or no consequence.
Rather than examining all media mentions in one big pile and treating them as ‘equally important’, more and more organisations are switching their media monitoring to a ‘tier system’ that divides media outlets into e.g. ‘tier 1’, ‘tier 2’ and ‘tier 3’ media.
They then analyse the coverage accordingly, spending more time and resources accurately examining mentions in tier 1 media and little or no time at all evaluating mentions in tier 3 media. The result is usually a much more accurate overview of the likely consequences of their media coverage – because they focus on the sources their stakeholders consume – but the volume is consequently also smaller.
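To make the tiering concrete, here is a minimal Python sketch of how tiered mention counting might look. The outlet names, tier assignments and the `mentions` structure are all hypothetical – a real monitoring vendor’s export will look different.

```python
from collections import Counter

# Hypothetical tier map: which outlets matter most to your stakeholders
TIER_MAP = {
    "National Business Daily": 1,
    "Industry Weekly": 2,
    "Local Free Sheet": 3,
}

# Toy mention data standing in for a vendor export
mentions = [
    {"outlet": "National Business Daily", "headline": "..."},
    {"outlet": "Local Free Sheet", "headline": "..."},
    {"outlet": "Industry Weekly", "headline": "..."},
    {"outlet": "Unknown Blog", "headline": "..."},
]

# Count mentions per tier; outlets not in the map default to tier 3
tier_counts = Counter(TIER_MAP.get(m["outlet"], 3) for m in mentions)

for tier in sorted(tier_counts):
    print(f"Tier {tier}: {tier_counts[tier]} mention(s)")
```

The point of the exercise is that the tier 1 count – not the grand total – is the number your analysis effort should follow.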
2. Cross-analysing ‘topic’ with other data
From 2008 to 2013, I was head of corporate PR, first at VisitDenmark and later at a Nordic construction company called NCC. At both companies I organised a media monitoring and analysis routine that included tagging each media mention as belonging to one of 10 topics on a pre-made list (10 was the max).
Because of the way the statistics and analytics system worked, no mention could go without a topic tag – which basically meant I had nine distinct topics to prioritise, while everything that did not match one of the first nine topics on the list was by default ‘dumped’ into the 10th topic category. This category quickly became known as the ‘trash can topic’ or ‘trash can category’.
Media coverage that ended up coded and tagged as ‘trash’ was anything that could not reasonably be said to fall under one of the other nine topics. It could be articles where our organisation was mentioned in passing but did not feature in a relevant context, or e.g. obituaries that mentioned a former colleague. Whatever the content, it represented zero brand or business value for us.
Analysing our coverage over time, I discovered (rather horrified) that in both organisations a full 15-20% of all our coverage every month fell into the ‘trash can’ category. That meant almost one in five pieces of media coverage was absolutely worthless.
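As a rough illustration of that check, here is a hedged Python sketch that tallies topic tags and reports the share of coverage landing in the catch-all category. The topic names and data structure are invented for the example.

```python
from collections import Counter

TRASH_TOPIC = "Topic 10: other (trash can)"  # the catch-all category

# Toy tagged mentions standing in for a month of coverage
tagged_mentions = [
    {"topic": "Sustainability"},
    {"topic": TRASH_TOPIC},
    {"topic": "Major projects"},
    {"topic": TRASH_TOPIC},
    {"topic": "Financial results"},
]

topic_counts = Counter(m["topic"] for m in tagged_mentions)
trash_share = topic_counts[TRASH_TOPIC] / len(tagged_mentions)

print(f"Trash-can share of coverage: {trash_share:.0%}")  # 40% in this toy data
```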
But the alarming insights did not stop there. Cross-analysing topics with other data points revealed a number of startling facts. For instance, at NCC my predecessor had used the report slide titled ‘Top 10 Media Outlets’ from our monitoring and analysis vendor to report regularly to his boss (who became my boss) and to management. The slide was basically just a list showing which 10 media outlets had mentioned our brand or company name the most in a given time period – like the last quarter or year.
My predecessor had pointed to the slide – which included eight of the largest national media outlets – and concluded that ‘all was well’ because we were enjoying tremendous ‘reach’ for our corporate message in all the best media outlets possible.
However, the first time I cross-analysed Top 10 Media Outlets with Topics, I discovered that in some cases as much as 79% of the so-called coverage we enjoyed in these national outlets fell into the ‘trash can’ category and was worthless. In truth, we were not at all ‘top of mind’ for any of the key business journalists we should have been talking to regularly.
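The cross-analysis itself is a simple cross-tabulation of outlet against topic. Below is an illustrative Python sketch that computes, for each outlet, the share of its mentions tagged as ‘trash’. The outlets, topics and numbers are made up – they are not the actual NCC data.

```python
from collections import defaultdict

TRASH_TOPIC = "Topic 10: other (trash can)"

# Toy data: each mention tagged with its outlet and topic
mentions = [
    {"outlet": "National Daily A", "topic": TRASH_TOPIC},
    {"outlet": "National Daily A", "topic": TRASH_TOPIC},
    {"outlet": "National Daily A", "topic": "Financial results"},
    {"outlet": "Trade Journal B", "topic": "Major projects"},
    {"outlet": "Trade Journal B", "topic": "Sustainability"},
]

# Group topic tags by outlet
topics_by_outlet = defaultdict(list)
for m in mentions:
    topics_by_outlet[m["outlet"]].append(m["topic"])

# For each outlet: total mentions and the share tagged as 'trash'
for outlet, topics in topics_by_outlet.items():
    trash_share = topics.count(TRASH_TOPIC) / len(topics)
    print(f"{outlet}: {len(topics)} mentions, {trash_share:.0%} trash")
```

A ‘Top 10’ list built on raw counts hides exactly this: an outlet can top the chart while contributing almost nothing of value.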
3. Examining the ‘fingerprint’ of the team on your media coverage
Another insightful exercise at NCC was when I decided to split our media monitoring and analysis report from our vendor into two subsets:
- Subset 1 included every piece of media coverage that we had somehow influenced during its creation; either we had initiated it (by pitching the story to the press) or we had responded to an enquiry from the media and helped shape the reporter’s perspective on the story before it went to print.
- Subset 2 included all the media coverage that just showed up in the papers and online without us knowing about it in advance. You might say these stories happened regardless of whether we in communication did our job or not.
Organising every piece of media coverage into Subset 1 and Subset 2 was a manual process back then – one that I had to personally manage every week because our monitoring and analysis vendor was not able to make the distinction.
Once the coverage was divided, we ran the two data sets through all the usual analysis systems, in effect generating three reports: total coverage, Subset 1 and Subset 2.
What was really interesting (but also expected) was that the quality of the coverage in Subset 1 was superior in every way. It had better sentiment (positive/negative tonality) and a higher average PR score, always appeared in relevant topic categories and usually featured one or more of our brand messages.
In comparison, Subset 2 was less relevant, often had neutral sentiment, did not feature our brand messages, and included all mentions in the ‘trash can’ category.
I used this side-by-side comparison to illustrate to my boss and to management the difference between having an active PR team and a passive one. But it also meant taking credit for a much smaller portion of our total coverage, as Subset 1 usually constituted only 35-40% of our news coverage in a given quarter.
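For anyone who wants to reproduce the idea, here is a small Python sketch of the subset split and comparison. The field names (`influenced`, `sentiment`, `pr_score`) and the numbers are assumptions for illustration only, not my vendor’s actual schema.

```python
from statistics import mean

# Each mention carries a flag for whether the team influenced the story,
# plus toy quality metrics (sentiment: -1/0/+1, pr_score: a vendor scale)
mentions = [
    {"influenced": True,  "sentiment": 1,  "pr_score": 4.2},
    {"influenced": True,  "sentiment": 1,  "pr_score": 3.8},
    {"influenced": False, "sentiment": 0,  "pr_score": 2.1},
    {"influenced": False, "sentiment": -1, "pr_score": 1.5},
    {"influenced": False, "sentiment": 0,  "pr_score": 1.9},
]

# Split into the two subsets and compare simple quality metrics
for label, flag in (("Subset 1 (influenced)", True),
                    ("Subset 2 (uninfluenced)", False)):
    subset = [m for m in mentions if m["influenced"] is flag]
    share = len(subset) / len(mentions)
    print(f"{label}: {share:.0%} of coverage, "
          f"avg sentiment {mean(m['sentiment'] for m in subset):+.1f}, "
          f"avg PR score {mean(m['pr_score'] for m in subset):.1f}")
```

The design choice worth copying is the side-by-side report: the same metrics, computed twice, make the team’s ‘fingerprint’ visible without any new data collection.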
The dangerous numbers game
The three examples above illustrate just a few of the insights you can gain from a more accurate data set. But if you have painted yourself into a corner where your boss and management expect and demand to see ‘big numbers’, it can be incredibly difficult to transition to smaller, more relevant numbers.
Here is my advice on how to handle that situation:
Understand the consequences of your actions higher up in the hierarchy
First of all, your decision to go for ‘better numbers’ might have consequences for people higher up in the organisation – especially if the change is a big one. An organisation that is used to seeing ‘big numbers = success’ is likely to have a hard time understanding why what was good yesterday is no longer good today.
Even more importantly, some organisations base their pay and bonus incentives for managers on volume – which means your decision to focus on a smaller data set could mean a financial loss for someone higher up in the hierarchy, and that is almost certain to come back to you in a bad way.
Make sure you alert the organisation to your intended changes well in advance – and give them a chance to ask questions, object or make suggestions.
Emphasise the gain, not the loss
You should never take something away from anybody without replacing it with something even better. It is simple psychology. If the organisation has trouble comprehending why the big numbers it used to celebrate were actually less than ideal, it is your job to explain why.
A great example of how that is done – in part – is Richard Bagnall’s AMEC blog post, “The Definitive Guide – Why AVEs are Invalid”. In it, he lists 22 arguments for why you should never, ever use Advertising Value Equivalence (AVE) to measure communication.
But those arguments become less potent if we are not ready to tell our organisation what can take AVEs’ place. That is why it is your job as the communication professional to illustrate for your organisation all that is gained by adopting the new, smaller numbers in your analysis.
I always recommend striving for a ‘culture of evaluation’ that welcomes experimentation, learning and growing – as opposed to using measurement primarily to document the past and as a basis for punishment and reward, because that approach leads to less learning, zero risk-taking and a stale organisation.
Make the transition gradual
For a while at least, it will be a good idea to let your stakeholders receive both the old format and the new format of your media report – for comparison.
Once the organisation adjusts to using the new, more accurate data set – and adjusts its incentives and bonuses etc. accordingly – you are safe to gradually phase out the old format.
Curious?
Did you find this blog post interesting? Please check out our communication measurement & evaluation services and let us know if Quantum can help you set up a more relevant analysis of your data. We are always happy to chat on the phone or meet for a non-committal coffee 🙂
Richard Bagnall says
Hi Jesper
Your point about the anti-AVE argument being less potent if we don’t have other numbers available to replace them is exactly the reason that AMEC also produces the Integrated Evaluation Framework and taxonomy. As you know, it guides practitioners in organisations of any size and budget through a process for coming up with meaningful metrics that tell a credible and relevant measurement story. Any PR practitioner who thinks the industry will substitute a single AVE number with some new spurious made-up number needs to think again – meaningful measurement is always going to need to be tailored and appropriate to the ambitions and objectives of the communications strategy. One number will never work in all cases.
Regards
Richard Bagnall, AMEC Chairman and co-managing partner CARMA International.
Jesper Andersen says
Exactly, Richard. The IEF and taxonomy are a fantastic tool and framework for measuring the outcome and impact, not just the activity and output, of your communication. The framework can be found here:
https://amecorg.com/amecframework/ – in 20 different languages. Use the flag navigation bar to pick your local version 🙂
/ Jesper