Many moons ago I was an English teacher in rural Japan. It was an amazing experience for many reasons, not the least of which was the intensive exposure to the Japanese language (fellow ‘gaijin’ were tough to come by!).
Whenever I meet someone who had a similar immersive language learning experience, our conversation inevitably unearths a common challenge – our ability to understand vastly outpaced our ability to speak!
The intuitive rationale for this discrepancy is that ‘understanding’ can be achieved by merely knowing a few key words and paying attention to context, whereas ‘speaking’ carries the additional burden of expression.
But buried within the understanding/speaking gap is a deeper insight about the wiring of our brains – an insight that is essential to constructing effective learning experiences.
Human memory is optimized for ‘recognition’ not ‘recall’.
Try recalling the name of your 8th grade Math teacher – it may take a few seconds (FYI mine was Mr. Outerbridge – took about 10 seconds). However, if I were to read out a list of names that included your Math teacher’s, research suggests you’re likely to recognize the correct one almost instantly.
Why? Our memories are not stored in a lifeless filing system – they are fused together with all the “surface features” of the situation in which they were created. With respect to names, these features not only include the sound of the name read aloud, but also how it looks written-down, the owner’s voice, face, and more.
Case in point: as you were recalling the name of your Math teacher, did you automatically try to create a mental picture of him or her? Perhaps you remembered him or her in context – in a classroom, at the front, with you seated somewhere among the masses?
Environmental cues, therefore, are a key mechanism for triggering memory. The implications for learning are far reaching. In brief, the closer a learning environment resembles the application environment, the easier it is for participants to recognize opportunities to use what they know.
Because simulations can reflect the environment of real-world application scenarios better than other instructional methods, there is a clear case for their role in promoting legitimate performance improvement.
The caveat is that not all simulations are created equal. There are specific designs that improve transfer and others that can frustrate it. Indeed, the instincts of many simulation game designers tend to yield more of the latter than the former.
Such instincts include:
- Generous use of fantasy: To create instant learner engagement, some simulation game designers opt to teach management lessons in metaphorical worlds (such as underwater kingdoms or Hollywood-esque terror plots). Unfortunately, these worlds’ salient characteristics are strikingly different from most learners’ daily routine and therefore can limit subsequent transfer.
- Generous use of abstractions: In the tradition of Entertainment Titles, some simulation designers will minimize complexity by rolling up concrete concepts into abstract representations. My least favourite is the “CEO dashboard” – an approach inspired by flight simulators. Typically such dashboards enable manipulation of levers to increase “employee satisfaction” or “innovation” but say very little about how that is actually accomplished.
- Generous use of competition: Competition is a simulation designer’s cheap parlour trick for building engagement. It has its place; however, if the learning topic does not inherently involve competition back-on-the-job (e.g. communication skills), then the designer is once again unintentionally widening the gap between the learning and application environments.
Aiding recognition through matching the surface features of the learning and application environments is just one of several techniques that can increase the likelihood learners will use what they know. In future posts, I’ll address two other methods: 1) the purposeful use of multiple examples, and 2) allowing individuals to struggle with data.