Thursday, 10 March 2011

New charting options

This month sees the launch of the latest release of the KS tools, with a host of new features to be found in the admin dashboard.

One of the key updates relates to the range of charting and management reporting options now available.

Here's a brief summary:

Search & Group Results
The first task for an admin is to carve up the raw results data into more manageable blocks.  There is a range of searching and grouping options available, including: score ranges, date ranges, elapsed time, test topic, user names, training keywords, plus five additional custom data fields.
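To make that concrete, here's a rough sketch (in Python, purely for illustration; the field names and sample values are invented, not the actual KS data model) of the kind of filtering involved:

```python
from datetime import date

# Invented sample records, for illustration only.
results = [
    {"user": "alice", "score": 82, "completed": date(2011, 2, 14), "topic": "AutoCAD 2D"},
    {"user": "bob",   "score": 45, "completed": date(2011, 2, 20), "topic": "Revit Architecture"},
    {"user": "carol", "score": 91, "completed": date(2011, 3, 1),  "topic": "AutoCAD 2D"},
]

def filter_results(rows, min_score=0, max_score=100, start=None, end=None, topic=None):
    """Return the subset of rows matching the score range, date range and topic."""
    out = []
    for r in rows:
        if not (min_score <= r["score"] <= max_score):
            continue
        if start is not None and r["completed"] < start:
            continue
        if end is not None and r["completed"] > end:
            continue
        if topic is not None and r["topic"] != topic:
            continue
        out.append(r)
    return out

# e.g. all AutoCAD 2D results scoring 50% or better
group = filter_results(results, min_score=50, topic="AutoCAD 2D")
```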



Performance Scatter
This chart displays group results in performance quadrants.  The upper left quadrant (Q1) contains results where users completed their assessment accurately and quickly.  The upper right quadrant (Q2) shows results with high accuracy, but slower completion times.  Bottom left (Q3) represents lower scores achieved in a fast time.  Lastly, the bottom right quadrant (Q4) shows test scores which are both inaccurate and slow.  Hint: you don't want your core users anywhere near Q4!
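Under the hood, the quadrant split boils down to a two-way test on accuracy and speed. A minimal sketch follows; the cut-off values are assumptions for illustration, as the post doesn't state the chart's actual thresholds:

```python
# Assumed cut-offs: the real chart may derive these differently.
def quadrant(score, elapsed, score_cutoff=70, time_cutoff=30):
    accurate = score >= score_cutoff   # high accuracy
    fast = elapsed <= time_cutoff      # quick completion (e.g. minutes)
    if accurate and fast:
        return "Q1"  # accurate and fast (upper left)
    if accurate:
        return "Q2"  # accurate but slow (upper right)
    if fast:
        return "Q3"  # lower score, fast time (bottom left)
    return "Q4"      # inaccurate and slow (bottom right)

print(quadrant(score=85, elapsed=20))  # -> Q1
```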



Training Requirements
This chart highlights priority training topics for a given group of test results. The logic analyzes all of the results for a group, references the training tags assigned to the questions presented during a test, and lists those tags in priority order.
Red indicates the topics which have been answered incorrectly by most people in a given group. Orange is the next highest priority, followed by Yellow. Green training tags are the topics which have been answered correctly by most of the group, so they represent the least urgent issues.
For example, 10 people are presented with one or more questions which include the tag ‘Lines’. If 7 of the 10 answer one or more ‘Lines’ questions incorrectly, then the keyword ‘Lines’ will be flagged at 70% and appear in Red on the chart.
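Here's a minimal sketch of that tag-flagging logic, assuming one record per user/tag/question outcome. The colour bands are assumed cut-offs; only the 70% ‘Lines’ example comes from the text above:

```python
from collections import defaultdict

def flag_tags(outcomes):
    """outcomes: (user, tag, answered_correctly) tuples.
    Returns {tag: % of users who missed at least one question with that tag}."""
    saw = defaultdict(set)
    missed = defaultdict(set)
    for user, tag, correct in outcomes:
        saw[tag].add(user)
        if not correct:
            missed[tag].add(user)
    return {tag: 100 * len(missed[tag]) / len(saw[tag]) for tag in saw}

def colour(pct_incorrect):
    # Assumed banding; the actual chart's cut-offs aren't given in the post.
    if pct_incorrect >= 70: return "Red"
    if pct_incorrect >= 50: return "Orange"
    if pct_incorrect >= 25: return "Yellow"
    return "Green"

# The 'Lines' example: 7 of 10 users miss a 'Lines' question -> 70% -> Red.
outcomes = [(f"user{i}", "Lines", i >= 7) for i in range(10)]  # 7 wrong, 3 right
pct = flag_tags(outcomes)["Lines"]
print(pct, colour(pct))  # -> 70.0 Red
```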



Question Performance
This chart looks at how each individual question in a library has performed in any given test. The logic analyzes all of the results for each test and presents an aggregate percentage score for each question: the sum of the individual scores attained on that question, divided by the total number of results for that test on the account.
For example, 10 people answer a question called ‘Lines’. If 7 of the 10 people answer the question 100% correctly, 1 person scores 50% and the other 2 score 0%, then the question scores an aggregate of 75% and appears in Yellow on the chart.
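The same worked example as a short calculation; the 75% aggregate and Yellow banding come straight from the figures above:

```python
# Aggregate score for one question: the mean of the per-user scores.
def question_aggregate(scores):
    return sum(scores) / len(scores)

scores = [100] * 7 + [50] + [0] * 2    # 7 x 100%, 1 x 50%, 2 x 0%
print(question_aggregate(scores))      # -> 75.0, which appears Yellow
```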



Group Scores
This chart displays user performance for any given group, in descending order. The X-axis shows the % score attained and the Y-axis displays user names.
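The ordering itself is a straightforward descending sort on score, along these lines (names and scores invented):

```python
# Rank a group's results highest-first, matching the chart's layout.
group = [("alice", 91), ("bob", 45), ("carol", 82)]
ranked = sorted(group, key=lambda result: result[1], reverse=True)
# -> [('alice', 91), ('carol', 82), ('bob', 45)]
```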


Group Comparisons
This chart allows firms to compare performance from one group to another, across up to 9 sets of data at a time.  Group vs group comparisons can be used to compare a range of results data: for example, pre- and post-training performance, different project teams, offices in different geographic locations, data representing different job titles or industry disciplines, in-house data vs interview candidates, and so on.
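As a sketch, comparing groups amounts to computing a summary figure per group and plotting them side by side. The group labels and scores below are invented:

```python
# Mean score per group, for up to 9 groups plotted side by side.
def group_means(groups):
    if len(groups) > 9:
        raise ValueError("the chart compares at most 9 sets of data")
    return {label: sum(s) / len(s) for label, s in groups.items()}

print(group_means({
    "Pre-training":  [48, 55, 61],
    "Post-training": [72, 80, 77],
}))
```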


Global Comparisons
This chart allows AEC firms to compare in-house performance against a wider set of anonymous, aggregated industry results data for selected tests. Currently, we are publishing results for AutoCAD 2D, MicroStation 2D and Revit Architecture.  When some of our other test titles accumulate a sufficient body of results data, we will publish additional benchmark figures.
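In spirit, the comparison is simply your in-house average set against the published industry figure. The benchmark value below is made up for illustration; the post doesn't quote the actual aggregates:

```python
# How far the in-house mean sits above (or below) the industry benchmark.
def vs_benchmark(in_house_scores, benchmark_pct):
    mean = sum(in_house_scores) / len(in_house_scores)
    return mean - benchmark_pct

print(vs_benchmark([68, 74, 71], benchmark_pct=70.0))  # -> 1.0 point above
```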


This release is scheduled to go live in mid-March.  The new range of charting and reporting options should make identifying skills gaps and training needs much easier for CAD & BIM administrators and learning & development professionals.

We'll be revisiting charting options again later in the year, so if you have any additional requests for chart styles and reports, please let us know and we'll add them to the Summer/Fall roadmap.

R
