Remember the old days when any professional could attend a face-to-face workshop and upskill without being tested? They could focus on lively discussions with peers, networking and solving current issues in their sector. No one bothered with assessment, unless they were learning some kind of trade or the training was aimed at compliance (which is just a tiny part of what’s out there). Tests were considered school stuff and unwelcome in most professional development contexts.
E-learning has replaced many of these workshops. Now every single action those same professionals take online is scrutinised: the number of pages viewed, time spent online, the number of forum contributions, quiz questions answered correctly… They are even graded!
Why? Because we’ve got to use those tools that are available to us, right?
Nope, just ignore them.
Our job is to listen, empower, guide, support, nurture… How? By showing respect and providing a safe environment in which people’s experiences really matter.
Go ahead and evaluate training, but I suggest you use Brinkerhoff’s Success Case Method rather than Kirkpatrick’s model; Brinkerhoff’s approach is much more useful in most adult learning contexts. Evaluating training isn’t the same as assessing learners. Evaluation focuses on whether learners were able to make a difference after the training, and that is what really matters. Everything else we might be tracking is a waste of time and puts people off e-learning.
1 Comment
Thanks for sharing your thoughts, Anouk. I believe that the numbers collected to check learners’ progress aren’t that valuable on their own. Such statistics have to be read in context, otherwise they don’t add any value.