This is part two of my case against the use of self-assessment as a reliable means of measuring staff performance.
The Latin maxim “ignoramus et ignorabimus”, meaning "we do not know and will not know", summed up a nineteenth-century position on the limits of scientific knowledge.
In September 1930, the mathematician David Hilbert pronounced his disagreement in a celebrated address to the Society of German Scientists and Physicians in Königsberg:
“We must not believe those, who today, with philosophical bearing and deliberative tone, prophesy the fall of culture and accept the ignorabimus. For us there is no ignorabimus, and in my opinion none whatever in natural science. In opposition to the foolish ignorabimus our slogan shall be: We must know — we will know!”
In 2002, then United States Secretary of Defense Donald Rumsfeld, whilst defending his country's position on the Iraq war, made the following (now infamous) statement: “There are ‘known knowns’. These are things we know that we know. There are ‘known unknowns’. That is to say, there are things that we now know we don’t know. But there are also ‘unknown unknowns’. These are things we do not know we don’t know”.
There are four recognised stages of competence:
1) Unconscious Incompetence
The individual neither understands nor knows how to do something, nor recognizes the deficit, nor has a desire to address it. The person must become conscious of their incompetence before development of the new skill or learning can begin.
2) Conscious Incompetence
Though the individual does not understand or know how to do something, he or she does recognize the deficit, without yet addressing it.
3) Conscious Competence
The individual understands or knows how to do something. Demonstrating the skill or knowledge requires a great deal of consciousness or concentration.
4) Unconscious Competence
The individual has had so much practice with a skill that it becomes “second nature” and can be performed easily. He or she may or may not be able to teach it to others, depending upon how and when it was learned.
"The Invisible Gorilla" experiment is one of the most famous psychological demonstrations in modern history. Subjects are shown a video, about a minute long, of two teams, one in white shirts, the other in black shirts, moving around and passing basketballs to one another. They are asked to count the number of aerial and bounce passes made by the team wearing white, a seemingly simple task. Halfway through the video, a woman wearing a full-body gorilla suit walks slowly to the middle of the screen, pounds her chest, and then walks out of the frame. If you are just watching the video, it’s the most obvious thing in the world. But when asked to count the passes, about half the people miss it. It is as though the gorilla is completely invisible.
(http://www.theinvisiblegorilla.com/gorilla_experiment.html).
In his popular KM blog, Nick Milton (http://www.knoco.co.uk/Nick-Milton.htm) writes in detail about the impact of this experiment and picks up on a number of key trends discussed in the book of the same name, authored by Christopher Chabris and Daniel Simons (the guys behind the original experiment).
The subtitle of the book is "and other ways our intuitions deceive us", and the authors talk about a number of human traits (they call them illusions) which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.
To paraphrase Milton, the illusions which have most impact on Knowledge Management are:
• The illusion of memory
• The illusion of confidence
• The illusion of knowledge
Our memory of events fades over time, to the point that even firm documentary evidence to the contrary doesn't change what we remember. The implication is that if you will need to re-use tacit knowledge in the future, then you can't rely on people to remember it. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, the facts will have been rewritten to be closer to "what feels right".
Tacit knowledge is fine for sharing knowledge on what's happening now, but for sharing knowledge with people in the future, it needs to be written down quickly while memory is still reliable.
Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right.
Any approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.
The illusion of confidence represents the way that people value knowledge from a confident person. This would be fine if confidence and knowledge went hand in hand, but in fact there is almost an inverse relationship: a lack of knowledge tends to breed overconfidence, and that confidence in turn gets you seen as knowledgeable.
Chess provides a telling example. Each competitive chess player is given a points rating based on their competition results, which is in fact a very effective and reliable measure of their ability. Yet 75% of chess players believe they are underrated, despite the evidence to the contrary. They are overconfident in their own ability.
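The rating referred to here is presumably the Elo system used in competitive chess. As a minimal sketch of why such ratings are a reliable yardstick (assuming the standard 400-point logistic formula; the k-factor below is an illustrative choice, not from the text):

```python
def elo_expected(rating_a, rating_b):
    """Expected score (win probability plus half the draw probability)
    of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating, expected, actual, k=20):
    """Adjust a rating after a game: actual result (1 win, 0.5 draw,
    0 loss) compared against the expected score."""
    return rating + k * (actual - expected)

# A 200-point rating gap predicts roughly a 76% expected score
# for the stronger player.
edge = elo_expected(1800, 1600)
```

Because the rating is continually corrected by actual results, a player's self-belief that they are "underrated" is directly testable against the scoreboard, which is precisely why the 75% figure above is so striking.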
In studies of groups of people coming together to solve a maths problem, you would expect the group to defer to the person with the greatest maths knowledge, wouldn't you? In fact, the group deferred to the most confident person, regardless of their knowledge. In trials, in 94% of the cases, the final answer given by the group was the first answer suggested, by the most confident person present, regardless of whether it was right or wrong.
In a Harvard study of confidence vs knowledge in a trivia test, they certainly saw overconfidence in individuals - people were confident of their answer 70% of the time, while being correct only 54% of the time! When people were put together in pairs, the counterintuitive outcome was that the pairs were no more successful than the individuals, but they were a lot more confident! When two low-confidence people were put together, their overall confidence increased by 11%, even though their success rate was no higher than before.
The illusion of knowledge is behind the way we overestimate how much we know. The authors refer to how people think they know how long a project will take, and how much it will cost, despite the fact that projects almost always overrun in both cost and time. "We all experience this sort of illusory knowledge, even for the simplest projects" they write. "We underestimate how long they will take or how much they will cost, because what seems simple and straightforward in our mind typically turns out to be more complex when our plans encounter reality. The problem is that we never take this limitation into account. Over and over, the illusion of knowledge convinces us that we have a deep understanding of what a project will entail, when all we really have is a rough and optimistic guess based on shallow familiarity".
"To avoid this illusion of knowledge, start by admitting that your personal views of how expensive and time-consuming your own seemingly unique project will be are probably wrong. If instead, you seek out similar projects that others have completed, you can use the actual time and cost of these projects to understand how long yours will take. Taking such an outside view of what we normally keep in our own minds dramatically changes how we see our plans".
If we are unaware of these three illusions, we can feel confident in our knowledge, based on our memories of the past, without realising that the confidence is false, the knowledge is poor, and the memories are unreliable and partially fictitious. Awareness of these illusions also allows us to challenge the individual who confidently declares "I know how to do this. I remember how we did it 5 years ago", because we recognise the shaky nature of confidence, knowledge and memory.
A natural human tendency is to think that we know more than we do and that we are better than we are. We suffer from what psychologists call the “Lake Wobegon effect”, after Garrison Keillor’s fictional town where “all the women are strong, all the men are good-looking and all the children are above average.” According to the authors’ own survey, 63% of Americans consider themselves more intelligent than the average American, while 70% of Canadians said they considered themselves smarter than the average Canadian. In a survey of engineers, 42% thought their work ranked in the top 5% among their peers. A survey of college professors revealed that 94% thought they do “above average” work, a figure that defies mathematical plausibility! A survey of sales people found that the average self-assessment score (for sales demos) was 76%, while the average percentage of demos that actually achieved their objectives (for the same group) was 57%. The list goes on...
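The professors' figure can be sanity-checked with a quick simulation, reading "average" as the median: at most half of any group can sit above its own median, so a 94% self-rating implies that at least roughly 44% of respondents must be overestimating their standing. A minimal sketch, using illustrative numbers rather than the original survey data:

```python
import random

random.seed(1)

# Simulated "true" skill scores for a population (illustrative only).
skills = [random.gauss(50, 10) for _ in range(10_000)]
median = sorted(skills)[len(skills) // 2]

# By definition, only about half the population sits above the median.
share_above = sum(s > median for s in skills) / len(skills)

# If 94% rate themselves above the median, the shortfall is the minimum
# share of people who must be overestimating their own standing.
overconfident_floor = 0.94 - share_above
```

Whatever the shape of the underlying skill distribution, that floor cannot be argued away, which is what makes the 94% figure so damning for self-assessment.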
So, in summary, any strategy for capturing user skills data that relies solely on an individual's ability to self-rate on a given subject is simply doomed to fail. I leave the last word to David Dunning: “In essence, our incompetence masks our ability to recognize our incompetence”.
R