Improve Training with Evidence-informed Learning Design

Imagine if the Burj Khalifa, the Taj Mahal, or the Eiffel Tower had been built based on what people believed would stand, on what they felt was pretty, or on what made them feel good. That would have been insane, and the buildings surely would not still be standing. Yet as learning professionals, we often work exactly that way, designing learning experiences for our clients based on aesthetics, intuition, feelings, and beliefs.

It’s time for change. Like the designers of these famous buildings, which by the way are all still standing, we as learning professionals need a solid foundation of evidence, and in our case that’s evidence from the learning sciences. Only then can we develop an evidence-informed practice and design and build quality things that last.

After all, we’re here to help organisations and the people who work in them. The last thing we should do is spend time and money on things that we don’t know work, or that we know don’t work, such as learning experiences based on myths and misconceptions. From an ethical point of view, we hurt learners and waste our clients’ money when we incorporate myths, anecdotes from ‘gurus’, hypes, and misconceptions into our designs or, worse, when we dismiss the evidence that is available to us.

Although training isn’t always the answer to people’s performance problems, it’s still highly relevant in organisations, and if it’s needed, then it had better be effective! For those who frown upon training and associate it with an ‘information dump’: that’s not what we mean when we say ‘training’.

What is training?

Training encompasses the whole process of solving a performance challenge, from needs analysis (including analysing the business problem) to evaluation. This automatically implies asking whether training is the best solution at all! Salas and his colleagues (2012) make crystal clear that training is not a one-off event. It includes the whole kit and caboodle, involving:

planned and systematic activities designed to promote the acquisition of knowledge, skills, and attitudes. Effective training takes place when trainees are intentionally provided with pedagogically sound opportunities to learn targeted knowledge, skills, and attitudes through instruction, demonstration, practice, and timely diagnostic feedback on their performance. (p 77)

What about ‘evidence-informed’? And how does it differ from ‘evidence-based’?

Evidence-based and evidence-informed practice

In short, evidence-based practice is grounded in medicine and health care. It integrates (1) the best available research evidence on whether and why a treatment works, (2) the clinical expertise of the health care professional, and (3) patient values (Sackett et al, 1996; see the left-hand image).

Evidence-informed practice (see the right-hand image) is also ‘based on scientific research’. However, the learning sciences usually can’t deliver the same clarity of evidence as clinical practice can. This is because in learning environments we’re dealing with many different, often hard-to-control variables that interact with each other. Also, we often use more qualitative data, which provides less generalisable evidence than quantitative data. Lastly, we sometimes can’t measure what we need to measure and have to rely on a proxy. For example, we ask learners to report how much mental effort a task required because we can’t measure that mental effort directly.

[Image: the evidence-based practice model (left) and the evidence-informed practice model (right)]

These caveats about scientific evidence in our field might suggest that working in an evidence-informed way isn’t possible, or isn’t worth trying. That is definitely NOT the case. On the contrary: we should combine the evidence we do have with our practical wisdom and with the context we work in. That combination helps us improve training effectiveness. The question is how?

Determine the quantity and quality of evidence

While our book offers extensive support on how to ‘sieve’ the quantity and quality of scientific evidence, we’ll reveal the tip of the iceberg here, using Dan Willingham’s (2012) four steps. They help prevent you from getting fooled, or confirm that something might indeed be true.

Step 1: Strip it and flip it

The first part, ‘strip it’, means that you look critically at the language used (is it vague, is it emotional, is it ‘hyped up’?). Take this claim (it’s made up, but similar statements are out there):

This training helps people learn effectively as it provides an unusual, fun, and memorable experience. Research shows that an unusual, fun, and/or memorable experience stimulates the release of dopamine in learners’ brains, giving them feelings of pleasure, satisfaction, and motivation.

First, this statement is vague: even if it’s true, how exactly is the release of dopamine related to the effectiveness of the training? It’s also emotional, in that it uses words that we associate with ‘positive’ things (fun, pleasure, satisfaction). Finally, it’s hyped up, in the sense that it’s currently hip and trendy to use neuroscience to make claims about learning (in our book, Daniel Ansari explains why this is a dangerous thing).

The second part, ‘flip it’, means that you try to turn the argument upside down. For example, could it be that a) the release of dopamine is also associated with negative emotions, or that b) mundane, boring, and unmemorable situations also stimulate the release of dopamine?

Step 2: Trace it

This comes down to: don’t just trust what people say. This doesn’t mean you have to research everything extensively, but you do need to dig a bit deeper and ask yourself what kind of evidence there actually is for the claim. What sources were used? Are there references to empirical research that you can check? In short, take a critical look.

Step 3: Analyse it

This step requires some basic methodological and statistical knowledge, but a critical eye can take you quite a long way. Willingham’s rule of thumb: if something sounds too good to be true, it probably is.

Step 4: Should I do it?

For training, this usually comes down to: should I apply this method, implement this strategy, buy this tool? And so forth…

Following these steps helps us spot myths and identify evidence that is worth using in our designs, so that we don’t waste resources and so that the training we provide proves to be effective, efficient, and enjoyable for learners. And buying (and reading!) the book will take you even further.

References

Sackett, D L, Rosenberg, W M C, Gray, J A M, Haynes, R B and Richardson, W S (1996) Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71-72.

Salas, E, Tannenbaum, S I, Kraiger, K and Smith-Jentsch, K A (2012) The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74-101.

Willingham, D T (2012) Measured approach or magical elixir? How to tell good science from bad. American Educator, 36(3), 4-12.
