Tuesday 14 December 2010

SECI Model for Organizational Learning

How does an organization learn?

In 1995, Professors Nonaka & Takeuchi (at Hitotsubashi University, Tokyo) developed a four-stage spiral model of organizational learning.

SECI:
  • Socialization
  • Externalization
  • Combination
  • Internalization

Tacit knowledge is personal, context-specific, subjective knowledge, whereas explicit knowledge is codified, systematic, formal, and easy to communicate. The tacit knowledge of key personnel within an organization can be made explicit, codified in manuals, and incorporated into new systems and processes. This process is called "externalization".

The reverse process (from explicit to tacit) is called "internalization" because it involves employees internalizing an organization's formal rules, procedures, standards and other forms of explicit knowledge.

"Socialization" denotes the sharing of tacit knowledge and the term "combination" denotes the dissemination of codified knowledge.

According to Nonaka & Takeuchi, knowledge creation and organizational learning take a path of socialization, externalization, combination, internalization, socialization, externalization, combination . . . and so on, in an infinite spiral.

R

So what exactly is an 'expert'?

The term 'expert' is defined as 'somebody with a great deal of knowledge about, or skill, training, or experience in, a particular field or activity'.

In CAD & BIM circles, the labels AutoCAD 'expert', Revit 'expert', MicroStation 'guru', and so on, are used pretty liberally. In some cases, the moniker is perfectly justified. In others, however...

It's often interesting to capture a user's perception of how good they think they are with a particular software application, prior to taking a skills assessment. We like to ask the following question:

Please rate yourself on your ability to use the software, for which you are about to take a test, on a scale of 1 to 5:

1 = Very basic knowledge; not enough to work confidently on a project
2 = Basic knowledge; can get by working on a project but could do better
3 = Good knowledge; can produce a good standard of work on projects
4 = Advanced knowledge; can teach the basics to others
5 = Expert knowledge; can perform and teach others at an advanced level


Interestingly, we are mirroring a recognised format for gauging skill levels. In the fields of education and operations research, the Dreyfus model of skill acquisition describes how students acquire skills through formal instruction. Brothers Stuart and Hubert Dreyfus proposed the model in 1980, in an influential report on their research at the University of California, Berkeley, Operations Research Center, for the United States Air Force Office of Scientific Research.

The model proposes that a student passes through five distinct stages: novice, advanced beginner, competent, proficient and expert.

1. Novice
"rigid adherence to taught rules or plans"
no exercise of "discretionary judgment"

2. Advanced beginner
limited "situational perception"
all aspects of work treated separately with equal importance

3. Competent
"coping with crowdedness" (multiple activities, accumulation of information)
some perception of actions in relation to goals
deliberate planning
formulates routines

4. Proficient
holistic view of situation
prioritizes importance of aspects
"perceives deviations from the normal pattern"
employs maxims for guidance, with meanings that adapt to the situation at hand

5. Expert
transcends reliance on rules, guidelines, and maxims
"intuitive grasp of situations based on deep, tacit understanding"
has "vision of what is possible"
uses "analytical approaches" in new situations or in case of problems


In the worlds of music and dance, becoming an expert requires an investment of approximately 10,000 hours in practice and execution!

Consider how this breaks down:

Hours per week    Hours per year    Years to achieve 'expert'
      4                 200                  50
      8                 400                  25
     12                 600                  16.7
     16                 800                  12.5
     20               1,000                  10
     24               1,200                   8.3
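The arithmetic behind that table can be sketched in a few lines (a minimal sketch; the 50 practice weeks per year is an assumption chosen so the numbers match the table above):

```python
# Years to reach the oft-quoted 10,000-hour 'expert' threshold.
HOURS_TO_EXPERT = 10000
WEEKS_PER_YEAR = 50  # assumption: 50 practice weeks per year

for hours_per_week in (4, 8, 12, 16, 20, 24):
    hours_per_year = hours_per_week * WEEKS_PER_YEAR
    years = HOURS_TO_EXPERT / hours_per_year
    print(f"{hours_per_week:2d} h/week -> {hours_per_year:5d} h/year -> {years:.1f} years")
```

Even at a very committed 24 hours of deliberate practice per week, 'expert' is the better part of a decade away.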


So the next time someone professes to be an expert, ask yourself, is that what they really mean?

R

KnowledgeSmart wish list

We've been holding a series of web review meetings with customers in recent weeks, discussing how their assessment programs are coming along, and what tools they would like to see next in the KS admin dashboard.

Here are the most requested items, the majority of which will feature in the next point release (scheduled for Feb 2011):
  • More charting; would like an easier way to tell (at a glance) what the main problem areas/training topics are
  • Would like to see industry comparisons for most popular test scores
  • Want to display more results per page
  • Would like ability to invite interview candidates more easily
  • Would like to control level of test report feedback for interview candidates
  • Ability to delete test scores
  • View results for linked accounts
  • Notify admin if test expiry date passes (but test not taken)
  • Change status of users (i.e. ex-employee, interview candidate to employee, etc.)
We'll also be adding basic branding options for firms, new chart styles in the dashboard and more content management options for test modules.

In addition, personal dashboards for users, basic survey tools, a community area for sharing test content and a new coaching mode are all coming up in the next few months.

Our rolling program of new test content authoring is ongoing. This week sees the release of our first Adobe InDesign module. More RMEP content will be out shortly, plus the first in a series of Civil 3D modules goes live in January. Intermediate RAC content, a Revit families module and basic Photoshop content will follow in early 2011.

If you have additional suggestions for new features or tools, we'd welcome your feedback.

R

Unskilled and unaware; why self-assessment is fundamentally flawed (part two)

This is part two of my case against the use of self-assessment, as a reliable means of measuring staff performance.

The Latin maxim “ignoramus et ignorabimus”, meaning "we do not know and will not know", stood for a position on the limits of scientific knowledge in nineteenth-century thought.

In September 1930, mathematician David Hilbert pronounced his disagreement in a celebrated address to the Society of German Scientists and Physicians in Königsberg:
“We must not believe those, who today, with philosophical bearing and deliberative tone, prophesy the fall of culture and accept the ignorabimus. For us there is no ignorabimus, and in my opinion none whatever in natural science. In opposition to the foolish ignorabimus our slogan shall be: We must know — we will know!”

In 2002, former United States Secretary of Defense Donald Rumsfeld, whilst defending his country's position on Iraq, made the following (now infamous) statement: “There are ‘known knowns’. These are things we know that we know. There are ‘known unknowns’. That is to say, there are things that we now know we don’t know. But there are also ‘unknown unknowns’. These are things we do not know we don’t know”.

There are four recognised stages of competence.

1) Unconscious Incompetence
The individual neither understands nor knows how to do something, nor recognizes the deficit, nor has a desire to address it. The person must become conscious of their incompetence before development of the new skill or learning can begin.

2) Conscious Incompetence
Though the individual does not understand or know how to do something, he or she does recognize the deficit, without yet addressing it.

3) Conscious Competence
The individual understands or knows how to do something. Demonstrating the skill or knowledge requires a great deal of consciousness or concentration.

4) Unconscious Competence
The individual has had so much practice with a skill that it becomes “second nature” and can be performed easily. He or she may or may not be able to teach it to others, depending upon how and when it was learned.


"The Invisible Gorilla" experiment is one of the most famous psychological demonstrations in modern history. Subjects are shown a video, about a minute long, of two teams, one in white shirts, the other in black shirts, moving around and passing basketballs to one another. They are asked to count the number of aerial and bounce passes made by the team wearing white, a seemingly simple task. Halfway through the video, a woman wearing a full-body gorilla suit walks slowly to the middle of the screen, pounds her chest, and then walks out of the frame. If you are just watching the video, it’s the most obvious thing in the world. But when asked to count the passes, about half the people miss it. It is as though the gorilla is completely invisible.
(http://www.theinvisiblegorilla.com/gorilla_experiment.html).

In his popular KM blog, Nick Milton (http://www.knoco.co.uk/Nick-Milton.htm) writes in detail about the impact of this experiment and picks up on a number of key trends discussed in the book of the same name, authored by Christopher Chabris and Daniel Simons (the guys behind the original experiment).

The subtitle of the book is "ways our intuition deceives us", and the authors talk about a number of human traits (they call them illusions) which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

To paraphrase Milton, the illusions which have most impact on Knowledge Management are:

The illusion of memory
The illusion of confidence
The illusion of knowledge

Our memory of events fades over time, to the point that even firm documentary evidence to the contrary doesn't change what we remember. The implication is that if you will need to re-use tacit knowledge in the future, then you can't rely on people to remember it. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, the facts will have been rewritten to be closer to "what feels right".

Tacit knowledge is fine for sharing knowledge on what's happening now, but for sharing knowledge with people in the future then it needs to be written down quickly while memory is still reliable.

Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right.

Any approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

The illusion of confidence represents the way that people value knowledge from a confident person. This would be fine if confidence and knowledge went hand in hand, but in fact there is almost an inverse relationship: a lack of knowledge is allied to overconfidence. Lack of knowledge leads to confidence, which leads to being seen as knowledgeable.

In competitive chess, each player is given a points rating based on their results, which is in fact a very effective and reliable measure of their ability. Yet 75% of chess players believe they are underrated, despite the evidence to the contrary. They are overconfident in their own ability.

In studies of groups of people coming together to solve a maths problem, you would expect the group to defer to the person with the greatest maths knowledge, wouldn't you? In fact, the groups deferred to the most confident person, regardless of their knowledge. In trials, in 94% of cases, the final answer given by the group was the first answer suggested, by the most confident person present, regardless of whether it was right or wrong.

In a Harvard study of confidence vs knowledge in a trivia test, they certainly saw overconfidence in individuals - people were confident of their answer 70% of the time, while being correct only 54% of the time! When people were put together in pairs, the counterintuitive outcome was that the pairs were no more successful than the individuals, but they were a lot more confident! When two low-confidence people were put together, their overall confidence increased by 11%, even though their success rate was no higher than before.

The Illusion of Knowledge is behind the way we overestimate how much we know. The authors refer to how people think they know how long a project will take, and how much it will cost, despite the fact that projects almost always overrun in both cost and time. "We all experience this sort of illusory knowledge, even for the simplest projects", they write. "We underestimate how long they will take or how much they will cost, because what seems simple and straightforward in our mind typically turns out to be more complex when our plans encounter reality. The problem is that we never take this limitation into account. Over and over, the illusion of knowledge convinces us that we have a deep understanding of what a project will entail, when all we really have is a rough and optimistic guess based on shallow familiarity".

"To avoid this illusion of knowledge, start by admitting that your personal views of how expensive and time-consuming your own seemingly unique project will be are probably wrong. If, instead, you seek out similar projects that others have completed, you can use the actual time and cost of these projects to understand how long yours will take. Taking such an outside view of what we normally keep in our own minds dramatically changes how we see our plans".

If we are unaware of these three illusions, we can feel confident in our knowledge, based on our memories of the past, without realising that the confidence is false, the knowledge is poor, and the memories are unreliable and partially fictitious. Awareness of these illusions also allows us to challenge the individual who confidently declares "I know how to do this. I remember how we did it 5 years ago", because we recognise the shaky nature of confidence, knowledge and memory.


A natural human tendency is to think that we know more than we do and that we are better than we are. We suffer from what psychologists call the “Lake Wobegon effect”, after Garrison Keillor’s fictional town where “all the women are strong, all the men are good-looking and all the children are above average.” According to the authors' own survey, 63% of Americans consider themselves more intelligent than the average American.

In contrast, 70% of Canadians said they considered themselves smarter than the average Canadian. In a survey of engineers, 42% thought their work ranked in the top 5% among their peers. A survey of college professors revealed that 94% thought they do “above average” work – a figure that defies mathematical plausibility! A survey of sales people found that the average self-assessment score (for sales demos) was 76%, while the average percentage of demos that achieved their objectives (for the same group) was 57%. The list goes on...

So, in summary, any strategy for capturing user skills data which relies solely on an individual's ability to rate themselves on a given subject is simply doomed to fail. I leave the last word to David Dunning: “In essence, our incompetence masks our ability to recognize our incompetence”.

R

Unskilled and unaware; why self-assessment is fundamentally flawed (part one)

In October, I travelled to Boston, to present a paper to the AIA HR Large Firm Roundtable. I decided to make the case against the use of self-assessment, as a reliable means of capturing management metrics for staff performance.

Over the years, many AEC firms have confidently stated to me, 'We don't need independent skills testing, we already know how good our users are'. When one enquires further, what they actually mean, is that they sent out a user survey, asking staff to rate themselves (usually out of 5) on a range of different skills topics, including AutoCAD, Revit, InDesign, et al. What they end up with is a spreadsheet (why is it always a spreadsheet??) with a bunch of names down one side, a list of skills categories across the top - and a sheet filled with 3's and 4's. Why 3's and 4's, I hear you ask? Simply because people don't want to risk the personal penalties that might go along with admitting they're a 1 or a 2. And conversely, they don't want to stick their head above the parapet by admitting to a 5 (even if they are a 5) because this can cause all sorts of new issues (more work, more people pestering them for answers to the same questions, you get the picture). So it's 3's and 4's all the way. Congratulations XYZ Architects, you have your completed spreadsheet, so you are now totally self-aware, as an organization. (Not). And the real rub here is that, more often than not, people have no clue how good they are, relative to the rest of the team, or wider industry!

So I decided to put this argument to bed, once and for all. Here's my evidence...

Let me begin with a story. Earlier this year, NY Times Online posted a series of articles by filmmaker Errol Morris. He tells the tale of bank robbery suspect McArthur Wheeler, who was recognized by informants who tipped detectives to his whereabouts after his picture was telecast one Wednesday night, during the Pittsburgh Crime Stoppers segment of the 11 o’clock news. At 12:10 am, less than an hour after the broadcast, he was arrested. Wheeler had walked into two Pittsburgh banks and attempted to rob them in broad daylight. What made the case peculiar is that he made no visible attempt at disguise. The surveillance tapes were key to his arrest. There he is with a gun, standing in front of a teller demanding money. Yet, when arrested, Wheeler was completely disbelieving. “But I wore the juice,” he said. Apparently, he was under the deeply misguided impression that rubbing one’s face with lemon juice rendered it invisible to video cameras.

Pittsburgh police detectives who had been involved in Wheeler’s arrest explained that Wheeler had not gone into “this thing” blindly but had performed a variety of tests prior to the robbery. Although Wheeler reported the lemon juice was burning his face and his eyes, and he was having trouble (seeing) and had to squint, he had tested the theory, and it seemed to work. He had snapped a Polaroid picture of himself and wasn’t anywhere to be found in the image. There are three possibilities:

(a) the film was bad;
(b) Wheeler hadn’t adjusted the camera correctly; or
(c) Wheeler had pointed the camera away from his face at the critical moment when he snapped the photo

Pittsburgh Police concluded that, 'If Wheeler was too stupid to be a bank robber, perhaps he was also too stupid to know that he was too stupid to be a bank robber — that is, his stupidity protected him from an awareness of his own stupidity.'

Now, this sorry tale might have been just another footnote in history, were it not for the fact that it came to the attention of David Dunning, a Cornell professor of social psychology. After reading this story in 1996, Dunning wondered whether it was possible to measure one’s self-assessed level of competence against something a little more objective – say, actual competence.

Over the next three years, Dunning (assisted by colleague Justin Kruger) undertook a major academic study and, in 1999, published the paper “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”.

Dunning’s epiphany was: “When people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden; not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, like Mr. Wheeler, they are left with the erroneous impression they are doing just fine. In essence, our incompetence masks our ability to recognize our incompetence”.

Dunning & Kruger also quote the “above-average effect”, or the tendency of the average person to believe he or she is above average, a result that defies the logic of statistics. Participants scoring in the bottom quartile on tests grossly overestimated their performance and ability: although test scores put them in the 12th percentile, they estimated themselves to be in the 62nd.

Conversely, because top performers find the tests they confront to be easy, they mistakenly assume that their peers find the tests to be equally easy. As such, their own performances seem unexceptional. In studies, the top 25% tended to think that their skills lay in the 70th–75th percentile, although their performances fell roughly in the 87th percentile.
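Putting those figures side by side makes the miscalibration easy to see (a minimal sketch using the percentiles quoted above; the 73rd is simply the midpoint of the 70–75 range):

```python
# Self-assessed vs actual test percentile, per the figures quoted above
# (Kruger & Dunning, 1999). A positive gap means overestimation.
groups = {
    "bottom quartile": {"actual": 12, "estimated": 62},
    "top quartile": {"actual": 87, "estimated": 73},  # midpoint of 70-75
}

for name, g in groups.items():
    gap = g["estimated"] - g["actual"]
    verb = "overestimate" if gap > 0 else "underestimate"
    print(f"The {name} {verb} themselves by {abs(gap)} percentile points")
```

The weakest performers misjudge themselves by fifty percentile points; even the strongest are off by around fourteen, in the opposite direction.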

Dunning and Kruger proposed that, for a given skill, incompetent people will:

  • tend to overestimate their own level of skill;
  • fail to recognize genuine skill in others;
  • fail to recognize the extremity of their inadequacy;
  • recognize and acknowledge their own previous lack of skill, if they can be trained to substantially improve.

A follow-up study, “Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent”, was published in 2006 (Joyce Ehrlinger, Kerri Johnson, Matthew Banner, David Dunning and Justin Kruger).

In part two, I'll take a look at the four stages of competence - and how the combined illusions of memory, confidence and knowledge can impact on a firm's knowledge management strategy.

R

Friday 10 December 2010

UK BIM Roundtable - meeting notes

Thanks again to those of you who attended and contributed to the UK BIM Roundtable meeting, last month.

The meeting notes have been posted online, here: http://www.bimroundtable.com/.

Please feel free to provide any comments or feedback, as appropriate.

We anticipate running a follow-up session in the Spring, focusing on one or two of the key discussion points from meeting one. More details to follow in the New Year...

R

Testing Adobe InDesign skills


Another day, another milestone! We’ve had quite a few requests this year, mostly from our Architectural customers, to create some assessments for Adobe’s Creative Suite of tools.

To that end, the first module – Adobe InDesign for occasional users (CS5) – goes live this month.

In the same style as our assessment content for Autodesk and Bentley Systems' technical software apps, our system provides Adobe users with a live online skills test experience, presenting a mix of theory and task-based questions. (Theory questions can be answered without opening InDesign; task-based questions require a copy of the software). A test takes on average 30-60 mins, depending on the experience of the user. At the end, candidates can view a detailed test report, complete with feedback, coaching notes and suggestions for further training workshops.

Having spent the past seven years or so involved solely in the Autodesk and Bentley space, it's quite a treat to be adding a third major vendor to our portfolio of assessment offerings!

Over the coming months, we have the following additional assessments planned:

Adobe Photoshop for occasional users (CS5) – early 2011
Adobe InDesign fundamentals (CS5) – Spring 2011
Adobe Illustrator for occasional users (CS5) – Spring 2011
Adobe Photoshop fundamentals (CS5) – Spring/Summer 2011
Adobe Acrobat 9 fundamentals – Summer 2011

If you'd like to trial the new InDesign test, drop us a line or use the contact form on the main KS website. Enterprise customers will see the new modules appear in their dashboards, as soon as we go live.

R