I don’t know about you, but I often hardly take notice of the learning objectives when I start a course. If the course theme inspires me and I decide to register, I take the attitude of ‘take me along on your journey and I’ll decide for myself what I take home…’
Yet, just like any learning designer out there, I spend a fair amount of time getting the learning objectives right at the start of a new project. This usually requires a few focused conversations with the client.
A contradiction that invites some reflection.
Learning objectives (or learning outcomes if you prefer) serve a number of purposes:
(1) They help the learning designer to get clarity around what to include in courses and how to construct the learning activities.
(2) They are meant to help the learner achieve the objectives, since stating them up front is said to focus the learner’s attention.
(3) And they are useful when we evaluate courses: have the learners achieved the objectives we have set for the course?
I can see how objectives help in design (1), but I’m not sure about (2) and (3). Let’s look at the learner’s perspective first. At the risk of generalising anecdotal evidence, I doubt that learners continuously refer back to the objectives to check whether their focus is right. Focus comes with engagement and inspiration: get my attention and I’ll listen and respond.
Point (3) is a tricky one in a training world ruled by ROI obsession. It’s useful to take a step back.
The predetermined objectives approach was first introduced in a school-based setting in 1932 by Ralph Tyler, an influential American educator who specialised in assessment and evaluation. It’s interesting to note that his view of evaluation as the measurement of predetermined behaviours – tried and tested in American high schools – is still the model we use to evaluate adult learning today.
Yet, as Stephen Brookfield writes (page 267 of his book ‘Understanding and Facilitating Adult Learning’), “It is school based, it does not allow for unintended outcomes, it is authoritarian […] it discourages flexibility, and it takes no account of differences in students’ experiences, interests and abilities. To this extent it is unsuitable as a general purpose model of evaluation for adult learning.”
He also adds, “Depending on how it is employed, it may contradict many of what are deemed to be central principles of effective practice of facilitating learning – a democratic ethos, the collaborative determination of curricula, flexibility of format and direction, and the encouragement of self-direction among learners.”
If we were to accept the predetermined objectives approach uncritically, any result outside of what was intended by the course sponsor and designer would be dismissed as irrelevant.
This rings so true looking at the bulk of conversations across the sector. Are we giving the learning objectives too much weight?
When, at the end of eWorkshops (asynchronous collaborative problem-based learning events), we ask the learners open questions about what they do differently after completing the workshop and invite them to share how the learning affected them, our reaction has more than once been “Oh, really? That’s interesting. We wouldn’t have thought…”
I’m not talking about a typical compliance course, but referring to training topics that go much deeper. Take human trafficking, gender-based violence, teen mental health and so many more. These are difficult subjects, and applying the skills depends on people’s situations. There aren’t right and wrong answers, and people’s experiences are an important component of the learning process. These are courses we cannot possibly ‘package’, unless all we want is concept mastery (and of course we don’t).
We are lucky to work with clients who get this. They focus their evaluation on the stories shared by their target group rather than blindly ticking the boxes of the preset objectives. It would be a shame to ignore the sometimes eye-opening reflections and results that were not intended.
Yet, the evaluation models most talked about in the e-learning sector (Kirkpatrick and Phillips) do exactly that. At level 1, learners are asked whether they (dis)liked the training and why (the smile sheet). At level 2, learners are tested before and after the training to measure the resulting increase in knowledge and capability (the tests are based on the objectives). At level 3, the evaluator assesses the change in behaviour against the preset objectives, and at level 4, the results are measured against those same objectives.
Where in this are the learners’ voices?
This brings us to a range of interesting evaluation models, including ‘goal-free evaluation’, first proposed in 1972 by Michael Scriven (the same philosopher who made the distinction between formative and summative evaluation). He criticised the equating of the unplanned aspects of a programme with unimportant side effects, incidental outcomes, or unanticipated achievements, arguing that goals are only a subset of anticipated effects.
And more recently, Brinkerhoff’s Success Case Method brought a breath of fresh air to the training evaluation landscape. It is especially useful for assessing soft-skill, social and other subtle outcomes and impacts that are hard to measure. The Success Case Method mixes storytelling with rigorous evaluation methods to combine the credibility of scientific findings with the emotional impact of stories. There is plenty of scope here to capture the unplanned results of our training.
I’d love to hear your take on this!