This is the second post in the series on systems for information evaluation (or grading) as used in the intelligence process by the military, intelligence services and law enforcement agencies. In the first post, I discussed the origin of the current NATO standard for information grading systems. In this post I will dive into the use of information grading systems in law enforcement, using examples from the Netherlands, the United Kingdom and Europol.
Grading system as used by the police in the Netherlands
The police in the Netherlands have a model for evaluating information which, in practice, is used only for information obtained through covert human intelligence sources, in other words for HUMINT only. A template for the report in which the evaluation is supposed to be documented is codified in an appendix to the Regulation on police data. The model was first published in 1995, alongside the (then existing) regulation on Criminal Intelligence Units, but it has been in use since approximately 1986.
The grading model is also, in its language, specifically aimed at HUMINT. Just like the Admiralty Code, source and information are evaluated separately, with the source (‘the informer’) being graded either as “A” (reliable), “B” (usually reliable), “C” (less/not reliable), or “D” (cannot be judged).
Then comes the interesting part: the information is not evaluated on ‘credibility’, as in the Admiralty Code or the current NATO standard, but on the distance between the source and the origin of the information. The larger the distance between the source and the origin of the information, the lower the grade that should be applied. A grade “1” means that the source observed (saw/heard/read) the information him/herself, a “2” is given if the source received the information directly from someone who observed it (‘second-hand’), and a “3” means that the source obtained the information third-hand or further removed (‘hearsay’).
This approach makes sense, as every transfer of information will most likely – intentionally or not – change its content. Just think of the ‘Chinese Whispers’ game you probably played as a child.
Prior to 2003 the grading model included four possible grades for the evaluation of the information: grade “1” meant ‘certainly true’, “2” ‘observed by the source’, “3” ‘observed by someone other than the source and confirmed’, and “4” ‘observed by someone other than the source, not confirmed’.
It is a bit hard to imagine how a grade of “1” (‘certainly true’) could ever be applied to HUMINT, although at the time a police officer could also be a source (which was, and still is, a debatable practice).
Meanwhile, the grades “3” and “4” mixed two different variables, i.e. ‘distance to the origin of the information’ and ‘corroboration’, in a single grade. Strangely, corroborated information received a lower grade than information witnessed first-hand. That defies the triangulation principle (especially since we know how unreliable witness accounts can be), and from that perspective the 2003 change was certainly an improvement.
For the sake of completeness, I should mention that the model also contains a section in which one of (now) five different handling codes is added to control the dissemination of the report. Prior to 2003 there were four handling codes, and the form is still known as the ‘4x4x4’.
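To make the shape of the 4x4x4 a bit more concrete, here is a minimal, purely illustrative sketch of how such an evaluation could be modelled as a data structure. The class and field names are my own, the combined ‘letter plus number’ notation is an assumption borrowed from Admiralty-style shorthand, and the meanings of the individual handling codes are deliberately left out because they are not covered in this post:

```python
from dataclasses import dataclass
from enum import Enum


class SourceReliability(Enum):
    """Evaluation of the source ('the informer') in the Dutch model."""
    A = "reliable"
    B = "usually reliable"
    C = "less/not reliable"
    D = "cannot be judged"


class InformationGrade(Enum):
    """Distance between the source and the origin of the information."""
    FIRST_HAND = 1   # observed (seen/heard/read) by the source him/herself
    SECOND_HAND = 2  # received directly from someone who observed it
    HEARSAY = 3      # obtained third-hand or further removed


@dataclass
class EvaluationReport:
    source: SourceReliability
    information: InformationGrade
    handling_code: int  # one of (now) five codes controlling dissemination

    def combined_grade(self) -> str:
        # e.g. 'B2': a usually reliable source reporting second-hand information
        # (the combined notation itself is my assumption, not part of the form)
        return f"{self.source.name}{self.information.value}"


report = EvaluationReport(SourceReliability.B, InformationGrade.SECOND_HAND, handling_code=1)
print(report.combined_grade())  # -> B2
```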
I’m not aware of any specific internal police instructions for the grading, nor of any specific research on the grading practices, so I can’t elaborate further on how well the system works (or not). A report published by researchers from the Netherlands police academy does mention in a footnote that the model was copied from the UK police, so let’s focus our attention in the next section on the system used by the UK police.
Grading system as used by the UK police
In the UK, the police methodology for the evaluation and dissemination of information is part of the National Intelligence Model and is summarised in the ‘Intelligence Report’. The methodology evaluates the reliability of the source, the confidence in the information and the handling sensitivity of a piece of information. The College of Policing maintains, as part of the Authorised Professional Practice, a webpage on the Intelligence report and how to apply the grading (archived copy). First, the reliability of the source is graded with a number from 1 to 3 in the following way:
And the confidence in the information is graded with a capital letter from A to E:
In this system too, the distance between the source and the origin of the information is the key element in grading the information. However, as in the old Dutch system, corroboration is not valued as it is in the Admiralty System, thus ignoring the triangulation principle.
The instructions on the College of Policing webpage go into more detail on when a certain grading should be applied. They also include instructions on dissemination, for which two possibilities exist (i.e., with or without conditions). The current ‘3x5x2’ model superseded the 5x5x5 model, which was in use until 2016 and in which the source and information grading codes looked like this:
As can be seen, the methodology changed significantly. The source evaluation was simplified and the criteria for the evaluation of information changed. Also, strangely, the numbers and letters have been swapped: whereas under the 5x5x5 model source reliability was communicated as a capital letter, it is now communicated as a number, and vice versa for the grading of the information. Interestingly, the new model appears to have been introduced somewhat haphazardly, as the (detailed) instructions for filling out the 5x5x5 form are (at the time of writing) still available online on the College of Policing website (archived copy), as is the pdf template of the 5x5x5 form (archived copy).
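Purely as an illustration of the structure of the current 3x5x2 record, here is a minimal sketch. The names are my own, only the structure described above is modelled (source reliability 1 to 3, information confidence A to E, dissemination with or without conditions), and the definitions behind the individual codes are deliberately omitted:

```python
from dataclasses import dataclass

# Codes in the current 3x5x2 model as described above: source reliability 1-3,
# information confidence A-E, and two dissemination options (with or without conditions).
SOURCE_CODES = {1, 2, 3}
INFORMATION_CODES = set("ABCDE")


@dataclass
class IntelligenceReport:
    source_reliability: int          # 1-3 (a capital letter under the old 5x5x5)
    information_confidence: str      # A-E (a number under the old 5x5x5)
    conditional_dissemination: bool  # True = may only be disseminated under conditions

    def __post_init__(self) -> None:
        if self.source_reliability not in SOURCE_CODES:
            raise ValueError("source reliability must be 1, 2 or 3")
        if self.information_confidence not in INFORMATION_CODES:
            raise ValueError("information confidence must be a letter from A to E")


# Example: source graded 2, information graded B, disseminated without conditions
report = IntelligenceReport(2, "B", conditional_dissemination=False)
```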
If we go back a bit further in history, we see that the 5x5x5 form and process were formally introduced under the National Intelligence Model (ACPO 2007, ‘Introduction to intelligence-led policing’). Nonetheless, I believe the system, or at least something similar, had been in use long before that, although I have not yet found a publication to which I can refer.
The 4×4 Europol system
At Europol, information is evaluated using a 4×4 system in which, just like in the other grading systems, both the source and the information are assessed independently.
With Europol being headquartered in the Netherlands, it does not come as a surprise that it adopted a 4×4 system which looks much like the old Dutch system. There are, however, quite a few questions to ask about the scores for the information evaluation, as well as about the notions of ‘confirmed’ and ‘unconfirmed’ information.
The descriptions of the information evaluation actually mix three (or four, depending on how you look at it) different elements, i.e. accuracy, knowledge of the source, knowledge of the officer passing the information on and, lastly, corroboration. I also believe the designation of ‘confirmed’ and ‘unconfirmed’ is tricky. Say a piece of information is evaluated as X1, meaning that the source cannot be assessed but ‘the accuracy of the information is not in doubt’. If there is certainty about the accuracy of the information, why could that not be ‘confirmed’ information? In this system, too, corroboration seems to have little influence on the grading, again ignoring the triangulation principle.
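To make the X1 example tangible, here is a tiny illustrative sketch. The naming is my own, not Europol’s; only ‘X’ (source cannot be assessed) and ‘1’ (accuracy not in doubt) come from the text, and the remaining A/B/C source codes are an assumption on my part:

```python
# Illustrative sketch of the Europol-style 4x4 grading discussed above.
# Only 'X' (source cannot be assessed) and '1' (accuracy not in doubt) are taken
# from the text; the other source codes A/B/C are assumed for the example.
SOURCE_GRADES = {"A", "B", "C", "X"}
INFORMATION_GRADES = {1, 2, 3, 4}


def is_x1_combination(source: str, information: int) -> bool:
    """Flags the awkward case discussed above: the source cannot be assessed,
    yet the accuracy of the information is said not to be in doubt."""
    if source not in SOURCE_GRADES or information not in INFORMATION_GRADES:
        raise ValueError("unknown grading code")
    return source == "X" and information == 1


print(is_x1_combination("X", 1))  # -> True
```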
Conclusion
I realise that using only three European examples may not provide a complete overview of the systems used by law enforcement for the grading of source and information. Nonetheless, according to a 2011 UNODC manual on criminal intelligence analysis, the 4×4 system ‘is now widely accepted as common practice for law enforcement agencies’ (p. 25), and a 2017 OSCE Guidebook on Intelligence-Led Policing notes that ‘the 4×4 or 5x5x5 are the most widely used systems’ (p. 37). Both publications are a bit light on references, so there might be other systems as well (NB: if you are aware of systems that work differently, I’m happy to learn about them!).
Still, even these three examples already show that the systems differ between organisations and over time. In particular, in relation to the evaluation of the information, we sometimes see multiple variables being mixed. We have seen a) accuracy, b) the distance between the source and the origin of the information, and c) whether or not the information is corroborated, although the latter is in my opinion undervalued. Differences are also visible in the terminology used for the highest grading of information, which is described as either ‘accuracy not in doubt’, ‘known to be true without reservation’ or ‘known directly to the source’. If you dissect these terms, all three have a different meaning.
Another observation relates to the type of information to which the evaluation system applies. In the Netherlands the system is aimed exclusively at information obtained from covert human intelligence sources. The instructions for the UK 5x5x5 model, however, show that in the UK information obtained by, for example, observation or technical means could also be included in the form and disseminated without disclosing the (type of) source. That is not allowed in the Dutch criminal justice system, nor in many others.
That brings me to the last observation in this post. The use of evaluation systems by the military, intelligence services and law enforcement agencies shows that these systems partly fulfil the function of replacing the identity of the source. In relation to open sources there may be no need for such confidentiality, or certainly much less, and therefore other evaluation systems could be used. However, that is something for a next post. I hope you enjoyed this one, and feedback is always welcome!
(Photo credit to @csolorzanoe)