Thursday, 14 May 2015

Why self-assessment is (probably) fundamentally flawed (part two)

This is part two of my case against the use of self-assessment as a reliable, exclusive means of measuring staff performance.

The Latin maxim “ignoramus et ignorabimus”, meaning "we do not know and will not know", stood for a position on the limits of scientific knowledge in the nineteenth century.

In September 1930, mathematician David Hilbert pronounced his disagreement in a celebrated address to the Society of German Scientists and Physicians, in Königsberg:

“We must not believe those, who today, with philosophical bearing and deliberative tone, prophesy the fall of culture and accept the ignorabimus. For us there is no ignorabimus, and in my opinion none whatever in natural science. In opposition to the foolish ignorabimus our slogan shall be: We must know - we will know!”

In 2002, then United States Secretary of Defense Donald Rumsfeld, whilst defending his country's position on Iraq, made the following (now infamous) statement: “There are ‘known knowns’. These are things we know that we know. There are ‘known unknowns’. That is to say, there are things that we now know we don’t know. But there are also ‘unknown unknowns’. These are things we do not know we don’t know”.
 
There are four recognised stages of competence.

1) Unconscious Incompetence
The individual neither understands nor knows how to do something, nor recognizes the deficit, nor has a desire to address it. The person must become conscious of their incompetence before development of the new skill or learning can begin.

2) Conscious Incompetence
Though the individual does not understand or know how to do something, he or she does recognize the deficit, without yet addressing it.

3) Conscious Competence
The individual understands or knows how to do something. Demonstrating the skill or knowledge requires a great deal of consciousness or concentration.

4) Unconscious Competence
The individual has had so much practice with a skill that it becomes “second nature” and can be performed easily. He or she may or may not be able to teach it to others, depending upon how and when it was learned.

"The Invisible Gorilla" experiment is one of the most famous psychological demo's in modern history. Subjects are shown a video, about a minute long, of two teams, one in white shirts, the other in black shirts, moving around and passing basketballs to one another. They are asked to count the number of aerial and bounce passes made by the team wearing white, a seemingly simple task. Halfway through the video, a woman wearing a full-body gorilla suit walks slowly to the middle of the screen, pounds her chest, and then walks out of the frame. If you are just watching the video, it’s the most obvious thing in the world. But when asked to count the passes, about half the people miss it. It is as though the gorilla is completely invisible.
(http://www.theinvisiblegorilla.com/gorilla_experiment.html).

In his popular KM blog, Nick Milton (http://www.knoco.co.uk/Nick-Milton.htm) writes in detail about the impact of this experiment and picks up on a number of key trends discussed in the book of the same name, authored by Christopher Chabris and Daniel Simons (the guys behind the original experiment).

The subtitle of the book is "and other ways our intuitions deceive us", and the authors talk about a number of human traits (they call them illusions) which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

To paraphrase Milton, the illusions which have most impact on Knowledge Management are:

• The illusion of memory
• The illusion of confidence
• The illusion of knowledge

Our memory of events fades over time, to the point that even firm documentary evidence to the contrary doesn't change what we remember. The implication is that if you will need to re-use tacit knowledge in the future, you can't rely on people to remember it. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, and the facts will have been rewritten to be closer to "what feels right".

Tacit knowledge is fine for sharing knowledge about what's happening now, but for sharing knowledge with people in the future, it needs to be written down quickly, while memory is still reliable.

Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right.

Any approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

The illusion of confidence represents the way that people value knowledge from a confident person. This would be fine if confidence and knowledge went hand in hand, but in fact there is almost an inverse relationship: a lack of knowledge is allied to overconfidence. Lack of knowledge leads to confidence, which leads to being seen as knowledgeable.

In chess, each player is given a points rating based on their competition results, which is in fact a very effective and reliable measure of their ability. Yet 75% of chess players believe they are underrated, despite the evidence to the contrary. They are overconfident in their own ability.

In studies of groups of people coming together to solve a maths problem, you would expect the group to defer to the person with the greatest maths knowledge, wouldn't you? In fact, the group deferred to the most confident person, regardless of their knowledge. In trials, in 94% of cases the final answer given by the group was the first answer suggested, by the most confident person present, regardless of whether it was right or wrong.

In a Harvard study of confidence vs knowledge in a trivia test, the researchers certainly saw overconfidence in individuals - people were confident of their answer 70% of the time, while being correct only 54% of the time! When people were put together in pairs, the counterintuitive outcome was that the pairs were no more successful than the individuals, but they were a lot more confident! When two low-confidence people were put together, their overall confidence increased by 11%, even though their success rate was no higher than before.

The Illusion of Knowledge is behind the way we overestimate how much we know. The authors refer to how people think they know how long a project will take, and how much it will cost, despite the fact that projects almost always overrun in both cost and time. "We all experience this sort of illusory knowledge, even for the simplest projects," they write. "We underestimate how long they will take or how much they will cost, because what seems simple and straightforward in our mind typically turns out to be more complex when our plans encounter reality. The problem is that we never take this limitation into account. Over and over, the illusion of knowledge convinces us that we have a deep understanding of what a project will entail, when all we really have is a rough and optimistic guess based on shallow familiarity".

"To avoid this illusion of knowledge, start by admitting that your personal views of how expensive and time-consuming your own seemingly unique project will be are probably wrong. If instead, you seek out similar projects that others have completed, you can use the actual time and cost of these projects to understand how long yours will take. Taking such an outside view of what we normally keep in our own minds dramatically changes how we see our plans"

If we are unaware of these 3 illusions, we can feel confident in our knowledge, based on our memories of the past, without realising that the confidence is false, the knowledge is poor, and the memories are unreliable and partially fictitious. Awareness of these illusions also allows us to challenge the individual who confidently declares "I know how to do this. I remember how we did it 5 years ago", because we recognise the shaky nature of confidence, knowledge and memory.

A natural human tendency is to think that we know more than we do and that we are better than we are. We suffer from what psychologists call the “Lake Wobegon effect”, named after Garrison Keillor’s fictional town where “all the women are strong, all the men are good-looking and all the children are above average.” According to the authors’ own survey, 63% of Americans consider themselves more intelligent than the average American.

In contrast, 70% of Canadians said they considered themselves smarter than the average Canadian. In a survey of engineers, 42% thought their work ranked in the top 5% among their peers. A survey of college professors revealed that 94% thought they do "above average" work - a figure that defies mathematical plausibility! A survey of sales people found that the average self-assessment score (for sales demos) was 76%, while the average percentage of demos that actually achieved their objectives (for the same group) was 57%. The list goes on...

So, in summary, any strategy for capturing user skills data which relies solely on an individual's ability to self-rate themselves on a given subject is simply doomed to fail. I leave the last word to David Dunning: “In essence, our incompetence masks our ability to recognize our incompetence”.

R

Tuesday, 12 May 2015

Why self-assessment is (probably) fundamentally flawed (part one)

A while back, we wrote a paper about the pros and cons of using self-assessment as an exclusive means of gaining useful corporate intelligence and capturing management metrics for staff performance.

Over the years, many AEC firms have confidently stated, 'We don't need independent skills testing, we already know how good our teams are'. When one enquires further, what they actually mean is that they sent out a user survey, asking staff to rate themselves (usually out of 5) on a range of different skills topics, including AutoCAD, Revit, BIM, etc. What they end up with is a spreadsheet (why is it always a spreadsheet?) with a list of names down one side, a list of skills categories across the top - and a sheet filled with 3's and 4's. Why 3's and 4's, I hear you ask? Simply because people don't want to risk the personal penalties that might go along with admitting they're a 1 or a 2. And conversely, they don't want to stick their head above the parapet by admitting to a 5 (even if they are a 5), because this can cause all sorts of new issues (more work, more people pestering them for answers to the same questions, you get the picture). So it's 3's and 4's all the way.

Congratulations XYZ Engineers, you have your completed spreadsheet, so you are now totally self-aware, as an organization. (Not really). And the real rub here is that, more often than not, people have no clue how good they are, relative to the rest of the team, or wider industry!

So, we decided to explore this concept in greater detail. Here's the evidence...

Let us begin with a story. A few years back, NY Times Online posted a series of articles by filmmaker Errol Morris. He tells the tale of bank robbery suspect McArthur Wheeler, who was recognized by informants who tipped detectives to his whereabouts after his picture was telecast one Wednesday night, during the Pittsburgh Crime Stoppers segment of the 11 o’clock news. At 12:10 am, less than an hour after the broadcast, he was arrested. Wheeler had walked into two Pittsburgh banks and attempted to rob them in broad daylight.

What made the case peculiar is that he made no visible attempt at disguise. The surveillance tapes were key to his arrest. There he is with a gun, standing in front of a teller demanding money. Yet, when arrested, Wheeler was completely disbelieving. “But I wore the juice,” he said. Apparently, he was under the deeply misguided impression that rubbing one’s face with lemon juice rendered it invisible to video cameras.

Pittsburgh police detectives who had been involved in Wheeler’s arrest explained that Wheeler had not gone into “this thing” blindly but had performed a variety of tests prior to the robbery. Although Wheeler reported that the lemon juice was burning his face and his eyes, and he was having trouble seeing and had to squint, he had tested the theory, and it seemed to work. He had snapped a Polaroid picture of himself and wasn't anywhere to be found in the image.

There are three possibilities:
(a) the film was bad;
(b) Wheeler hadn’t adjusted the camera correctly; or
(c) Wheeler had pointed the camera away from his face at the critical moment when he snapped the photo.

Pittsburgh Police concluded that 'if Wheeler was too stupid to be a bank robber, perhaps he was also too stupid to know that he was too stupid to be a bank robber - that is, his stupidity protected him from an awareness of his own stupidity.'

Now, this sorry tale might have been just another footnote in history, were it not for the fact that it came to the attention of David Dunning, a Cornell professor of social psychology. After reading this story in 1996, Dunning wondered whether it was possible to measure one’s self-assessed level of competence against something a little more objective – say, actual competence.

Over the next 3 years, Dunning (assisted by colleague Justin Kruger) undertook a major academic study and, in 1999, published the paper “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”.

Dunning’s epiphany was: “When people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, like Mr. Wheeler, they are left with the erroneous impression they are doing just fine. In essence, our incompetence masks our ability to recognize our incompetence”.

Dunning & Kruger also quote the “above-average effect”, or the tendency of the average person to believe he or she is above average, a result that defies the logic of statistics. Participants scoring in the bottom quartile on tests grossly overestimated their performance and ability. Although test scores put them in the 12th percentile they estimated themselves to be in the 62nd.

Conversely, because top performers find the tests they confront to be easy, they mistakenly assume that their peers find the tests to be equally easy. As such, their own performances seem unexceptional. In studies, the top 25% tended to think that their skills lay in the 70th–75th percentile, although their performances fell roughly in the 87th percentile.

Dunning and Kruger proposed that, for a given skill, incompetent people will:

  tend to overestimate their own level of skill;
  fail to recognize genuine skill in others;
  fail to recognize the extremity of their inadequacy;
  recognize and acknowledge their own previous lack of skill, if they can be trained to substantially improve.

A follow-up study, “Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent”, was published in 2006 (David Dunning, Justin Kruger, Joyce Ehrlinger, Kerri Johnson, Matthew Banner).

In part two, we'll take a look at the 4 stages of competence - and how the combined illusions of memory, confidence and knowledge can impact on a firm's knowledge management strategy.

R

Wednesday, 22 April 2015

Information is Beautiful

Skills assessments generate a lot of useful data. We have always been in the business of gathering information. And we have thousands of records to analyse. A key theme for KS this year is discovering new and interesting ways to anonymously present the findings of our global data capture.

As well as test topic, score & time values, we also capture optional additional background information about KS users, including:

Primary Industry/Discipline
Primary Role
Country
State (NB if US/CAN/Aus selected in 'Country' field)
Self rating (1-5)
How many years have you used BIM/CAD/Engineering software?
How often do you use BIM/CAD/Engineering software?
How did you primarily learn to use BIM/CAD/Engineering software?
Where did you first learn to use BIM/CAD/Engineering software? (Country/State)
What BIM/CAD/Engineering software do you regularly use?
Please specify any other software.

Our goal in the coming months is to create a series of topical benchmarking stories from the information captured.

For example:

Average Revit Architecture test score/time for Architects based in California, using the software for 5 years or more, self-taught, part-time users, who learned in the USA.

Average Revit Structure test score/time for Structural Engineers based in NSW, Australia, using the software for 2-5 years, formally trained, full-time users, who learned in Australia.

Average MicroStation score/time for self-taught vs formally trained users.

Average AutoCAD score/time for < 5 year users vs > 10 year users.

And so on.

As you can see, there are dozens of possible permutations for interesting themes and stories buried in the data.  We just need to analyse and identify the best ones!
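
By way of example, here is a minimal sketch (in Python, using the pandas library) of how one of these benchmark queries might be run against a flat export of test records. The file name and column names are hypothetical stand-ins for illustration, not our actual schema:

    # A sketch of one benchmark query, assuming a flat table of results.
    # The CSV file and column names below are hypothetical illustrations.
    import pandas as pd

    results = pd.read_csv("ks_results.csv")  # one row per completed test

    # "Average Revit Architecture test score/time for Architects based in
    # California, using the software for 5 years or more, self-taught."
    subset = results[
        (results["test_topic"] == "Revit Architecture")
        & (results["primary_role"] == "Architect")
        & (results["state"] == "California")
        & (results["years_used"] >= 5)
        & (results["how_learned"] == "Self-taught")
    ]

    print("Records found:", len(subset))
    print("Average score: %.1f%%" % subset["score"].mean())
    print("Average time: %.1f mins" % subset["time_mins"].mean())

Each new story is then just a different set of filters over the same table.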

Here is one of our inspirations for creating amazing infographics:

Information is Beautiful by David McCandless.




Over the coming months, we'll share with you the most interesting stories, data-mined from our industry-leading skills assessment software.

R

Sunday, 29 March 2015

Creating a Company Skills Matrix



A skills matrix, or competency framework, is a table that displays all required skills, tasks or abilities for a team or organisation, in one easy-to-view interface.

A training/competency matrix is a tool used to document and compare required competencies for a position with the current skill level of the people performing the roles. It is used in a gap analysis for determining where individuals have important training needs and as a tool for managing people development.

Benefits

• Provides a comprehensive list of key skills the company has identified as worth cultivating.
• Aids in managing training budgets because it identifies skill gaps across your organization.
• Assists with planning by helping identify and target new skill areas that you might need.
• Helps managers with development planning by identifying required skills.
• Allows managers to add skills to employee profiles and set, monitor and change their level of expertise.
• Allows employees to view the skills and skill levels their manager has attached to their profile.
• Provides managers with clear areas for employee development that are directly connected to the organization’s identified HR priorities.
• Enables managers to automatically include skills in review meetings.

Skill levels can be labelled in different ways, but irrespective of what names they are given, the levels broadly mean: beginner, learner, skilled, and expert (or master) level.


How to Build a Skills Matrix

1.    List the key roles in your organization.
2.    List the competencies required for each role.
3.    List the names of each individual for each role.
4.    Determine how you want to code the matrix to indicate skill level, training needed, etc.

Possibly use a colour coding system where:
Red = No skills in this area (1)
Amber = Partly trained in this area (2,3)
Green = Fully trained in this area (4,5)
Blank = N/A

5.    Fill out the matrix for each person to indicate current skills and any skill gaps.
6.    Look for patterns, opportunities, and areas of need.
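
To illustrate steps 4 and 5, here is a minimal sketch in Python of a colour-coded matrix, using the RAG scheme above and assuming ratings on a 1-5 scale. The names, skills and ratings are hypothetical:

    # A sketch of a colour-coded skills matrix. The names, skills and
    # ratings are hypothetical illustrations.

    def rag_code(rating):
        """Map a 1-5 skill rating to the colour coding described above."""
        if rating is None:
            return "Blank"  # N/A - skill not required for this role
        if rating == 1:
            return "Red"    # no skills in this area
        if rating in (2, 3):
            return "Amber"  # partly trained in this area
        return "Green"      # fully trained in this area (4, 5)

    skills = ["AutoCAD", "Revit", "BIM Management"]

    # One row per person: current rating for each skill (None = N/A)
    matrix = {
        "A. Jones": [4, 2, None],
        "B. Smith": [1, 5, 3],
    }

    # Step 5: fill out the matrix for each person
    print("Name".ljust(12) + "".join(s.ljust(16) for s in skills))
    for name, ratings in matrix.items():
        print(name.ljust(12) + "".join(rag_code(r).ljust(16) for r in ratings))

Patterns and gaps (step 6) then show up as clusters of Red and Amber cells.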

In our summer release, we will be adding skills matrix functionality to our user pages area, enabling KS customers to create an easy-to-access database of searchable user skills and abilities.

R

Friday, 13 February 2015

KS Spring Release - Individual User Pages



The most popular KS wishlist request in recent months has been to make it easier for individuals and admins to view assessment history and results data on a per user basis.

This also ties in with our wider goal to make the KS assessment tools more integrated with other Learning and HR systems, where data is shared across two or more platforms.

So we have created a new personal dashboard for individual users to log in and monitor their personal KS history, within their current organisation.

This resource will make it easier for individuals to manage their KS user profile, invite history, learning evidence and assessment results, including benchmarking data. Annual appraisals and performance reviews will be enhanced by the ability to capture multiple HR, training and learning resources in a single place.

The dashboard login page now has a new option, called My KnowledgeSmart.


Here is the KS user homepage, split into 4 main areas: My Details, My Achievements, My Assessments and My Scores.


The My Details page allows users to set their own password, edit personal information and update the 5 data fields which KS admins use to filter wider user and results data. They can also add a picture, to personalise their KS page.


The My Achievements page offers an opportunity to upload any useful documents and records relating to their personal development and learning history. For example, training completion certificates, exam results, certification evidence, CPD records, resume or professional record of achievement data.


Users can make documents 'public' or 'private' and create an individual URL for sharing learning resources online:


The My Assessments page enables users to view KS assessments they have taken or have been invited to take. They can view summary reports for completed sessions by clicking on the assessment name, and resume open sessions by clicking on the name of a 'Not started' or 'In progress' session and entering their KS user name and password at the following login screen.


Clicking on the 'Rosette' icon displays a new certificate, which users can share using social media or via email.



The My Scores page is a place for users to view all KS scores in one place.


Available data includes: benchmark comparisons, performance charts and skills gap summaries.



KS admins can access individual user pages via their main KS dashboard > Users page. There is a new icon which opens up the user page in a separate window.


KS admins can control access to user pages on a per user basis:


KS admins can also control user page access on a per account basis:


KS admins can also disable the 'sharing' tool, which appears when the certificate is viewed:


If a user record is deleted or set to 'Ex-employee' status, their personal page will no longer be accessible. User data is not portable from one company to another. KS data is account-specific and owned by the organisation, not the user.

We anticipate this area of the KS system developing further, as the year progresses. If you have any ideas for integration with other systems, or if you have suggestions for new tools, charts, reports, and so on, please let us know and we'll add them to the summer wishlist.

R

Sunday, 18 January 2015

KS Library Topics - Assessing Revit Architecture Skills



KnowledgeSmart skills assessment software helps firms of architects and engineers to measure software capability.

We have created an extensive range of assessments, covering the most popular design, engineering and BIM software titles, used by firms working in the global construction industry.

Assessments cover a range of popular vendors, including: Autodesk, Bentley Systems, Adobe, McNeel, Trimble and Graphisoft.

We work with a global network of world-class authors. They are subject matter experts, published thought leaders, and working consultants to industry, so they understand the practical application of the tools, as well as the menus and mouse-clicks. Popular KS authors include: CASE Inc, Chris Senior, Darryl McClelland, Eric Chappell, Envision CAD, Evolve, John Evans, Josh Modglin, Joel Harris, Paul Aubin, Paul Woddy, Revit Factory, Robert Manna, Scott Moyse, Scott Onstott, Tony Tedder, Thomas Weir and White Frog.

For our most popular assessments, we tend to create a variety of different modules, covering a range of different levels and abilities.

The current number one topic in our library is Autodesk Revit Architecture. Here is a brief list of assessments available in our 'RAC Bundle'. All of the following titles are available as part of a KS Professional 1 license:

Revit Architecture for occasional users
Revit Architecture fundamentals
Revit Architecture – Xpress
Revit Architecture for Interiors
Revit Architecture advanced
Revit Content Creation
Revit Project Process
Revit Architecture – Extra qu's 1
Revit Architecture – Extra qu's 2
KS Community – Revit Architecture (White Frog)
KS Community – Revit Process & Workflow
KS Community – BIM Management (NB this one is software neutral, but included in the bundle)

So how does it all work? In simple terms, it is a web-based, practical test of knowledge and ability, so users will need access to a copy of the software on which they are being assessed. They are presented with a series of task-based and knowledge-based questions, entering answers into the KS browser screen. At the end, a summary report with detailed feedback and coaching notes can be displayed to the user. KS system administrators can access detailed reports, highlighting results, benchmark statistics and training recommendations.

Let's take a look at an actual example of our Revit Architecture fundamentals assessment, in this short video:

video

KS customers have full editorial control over all KS library material, so they are ultimately in the driving seat, as far as the material that is presented to their teams goes. And they have the option of writing custom questions from scratch, using the KS authoring tools, if they want to capture knowledge on in-house workflows, standards and processes.

R

Sunday, 14 December 2014

Top DAUG 2014 - The Results



This year, KnowledgeSmart and AUGI again teamed up to provide an interactive skills assessment, across 10 popular Autodesk software tracks: 3ds Max, AutoCAD, AutoCAD Civil 3D, AutoCAD Plant 3D, Inventor, Navisworks, Revit Architecture, Revit MEP, Revit Structure and Vault.

Once again, we had some great prizes up for grabs. Track winners walked away with a $100 USD Amazon voucher (a combination of high scores and fastest times determined the winner of each track) and a glass trophy, and the overall winner won an HP laptop and a free pass to AU2015.


We spent several hours on Tuesday setting up 16 networked PCs in the exhibition hall, at the AUGI stand.

Next, 144 copies of Autodesk software needed checking, then all the KS sample data sets had to be uploaded to each machine and checked again. Big thanks to Tony Brown for his assistance in accomplishing this task.




Here's how we looked when everything was finished:


The main competition ran over 3 days (1 x 3 hour slot on day one, 2 x 3 hour slots on day two and a final 1 hour slot on day three). Contestants had to answer 8-10 questions, using the 2015 version of each software title. Each session was limited to just 12 minutes, so people had to work fast! Special thanks to awesome AUGI team members, Kristin, Bob, Michael, Donnia and Richard, for briefing users and making sure everything ran smoothly.

Here are the Top DAUGs in action:


By the end of the contest, we posted 215 results.

Throughout the competition, we displayed a rolling list of the top 10 contestants for each track, on the AUGI big screen.

The Results

Congratulations to the following contestants, who won their respective tracks:


And a special mention to the overall winner of AUGI Top DAUG 2014:

Tracy Chadwick



Analysis

So, let's take a detailed look at the results of this year's Top DAUG competition.

Overall

No. of Tests Completed:  215
Overall Average:  59% in 9 mins 14 secs
(NB the average score for 2013 was 48%).



Track 1 - 3ds Max

Track Winner: Herb Wagner
Winning Score: 55% in 11 mins 35 secs

Top 2 Contestants:


No. Completed: 2
Group Average: 35% in 11 mins 45 secs



Track 2 - AutoCAD

Track Winner: Tracy Chadwick
Winning Score: 100% in 3 mins 50 secs

Top 10 Contestants:


No. Completed: 68
Group Average: 63% in 8 mins 2 secs



Track 3 - AutoCAD Civil 3D

Track Winner: Christopher Fugitt
Winning Score: 88% in 5 mins 55 secs

Top 10 Contestants:


No. Completed: 21
Group Average: 66% in 9 mins 12 secs



Track 4 - AutoCAD Plant 3D

Track Winner: Patrick Alonzo
Winning Score: 72% in 11 mins 30 secs

Top 4 Contestants:


No. Completed: 4
Group Average: 52% in 10 mins 38 secs



Track 5 - Inventor

Track Winner: JD Mather
Winning Score: 81% in 11 mins 25 secs

Top 10 Contestants:


No. Completed: 21
Group Average: 41% in 11 mins 9 secs



Track 6 - Navisworks

Track Winner: Michael Doty
Winning Score: 60% in 6 mins 0 secs

Top 10 Contestants:


No. Completed: 13
Group Average: 49% in 9 mins 45 secs



Track 7 - Revit Architecture

Track Winner: Erik Eriksson
Winning Score: 99% in 10 mins 40 secs

Top 10 Contestants:


No. Completed: 47
Group Average: 57% in 10 mins 51 secs



Track 8 - Revit MEP

Track Winner: Ken Fields
Winning Score: 100% in 9 mins 15 secs

Top 10 Contestants:


No. Completed: 24
Group Average: 64% in 7 mins 32 secs



Track 9 - Revit Structure

Track Winner: Rebecca Frangipane
Winning Score: 85% in 8 mins 30 secs

Top 9 Contestants:


No. Completed: 9
Group Average: 70% in 9 mins 51 secs



Track 10 - Vault

Track Winner: Bryson Anderson
Winning Score: 60% in 6 mins 50 secs

Top 6 Contestants:


No. Completed: 6
Group Average: 53% in 6 mins 17 secs



Most Popular Tracks

The most popular tracks, in order of completed tests, were as follows:

AutoCAD - 68 results
Revit Architecture - 47 results
Revit MEP - 24 results
Inventor - 21 results
AutoCAD Civil 3D - 21 results
Navisworks - 13 results
Revit Structure - 9 results
Vault - 6 results
AutoCAD Plant 3D - 4 results
3ds Max - 2 results

Total = 215 results


Honourable Mentions

100% Club

The following people achieved a maximum score of 100% in their track:

Tracy Chadwick - AutoCAD
Glen Sullivan - AutoCAD
Christine Noble - AutoCAD
Ken Fields - Revit MEP
(Plus Erik Eriksson scored a tantalisingly close 99% in the Revit Architecture track!)


Top Performers

The following people demonstrated impressive versatility by achieving above average scores in multiple tracks:

Chris Brown - AutoCAD, Inventor and Navisworks
Tracy Chadwick - AutoCAD, Revit Architecture, Inventor and Vault
Seth Cohen - AutoCAD and Civil 3D
Jason Dupree - AutoCAD and Inventor
Ken Fields - AutoCAD and Revit MEP
John Fout - Navisworks, Revit Architecture and Revit Structure
Rebecca Frangipane - Navisworks, Revit Architecture and Revit Structure
Brent McAnney - AutoCAD, Civil 3D and Vault
Kate Morrical - AutoCAD and Revit Structure
Mohaimie Mosmin - AutoCAD and Inventor
David Rushforth - Navisworks, Revit Architecture and Revit MEP
Maxime Sanschagrin - Revit Architecture, Revit MEP and Revit Structure
Jim Swain - AutoCAD and Inventor
Nick Tanner - AutoCAD, Navisworks and Revit Architecture


Boys vs Girls

Finally, let's take a look at the demographic breakdown of the competition. Out of 215 contestants, 185 were male and 30 female. The average overall performance for each group breaks down like this:

Girls: 64% in 9 mins 25 secs
Boys: 58% in 9 mins 11 secs
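
As a quick sanity check, these group averages are consistent with the overall figure reported above: (185 × 58% + 30 × 64%) ÷ 215 ≈ 58.8%, which rounds to the 59% overall average.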




So, that's Top DAUG finished for another year. A thoroughly enjoyable 3 days at AU2014. 215 completed tests, across 10 popular tracks. Once again, the overall standard was extremely high, with some outstanding individual performances from our track winners and top 10 contestants.

Congratulations to all our winners. Thanks to the AUGI team for their support. Lastly, a big thank you to everyone who took part in this year's contest.

See you all in 2015 at the Venetian for more fun & games!

R