In my last post, I described how the “U-shaped” development curve applies to effective leadership training. We have observed this learning curve in our simulations. Individuals may exhibit confidence in their decisions in the first round, when the situation they face is relatively straightforward. Then, as we provide new information and present the impacts of their previous decisions, those same individuals start to miss simple decisions.

I believe this new information and these results cause them to question what they thought they knew and understood. Initially, they assumed that by doing X, Y would happen. However, when they experienced an entirely different outcome, a gap in their mental model became exposed. During this period, participants search to fill this gap and miss seemingly simple decisions.

If we test the business simulation participants in the first hour of the round, it will appear that they “get it.” However, as they go through progressive rounds and the situation becomes more complex, they begin to experience cognitive disconnects that separate them from the new material, as well as from the information they thought they knew. Measurements taken at this point often show subpar thinking—not because the participant doesn’t get it, but because he is in the middle of constructing a new mental model. The danger within a business simulation is that if a participant’s gap grows too large, or too many gaps are exposed at once, he may become deeply frustrated and give up.

It is for this very reason that we created new measurement techniques for effective leadership training. These techniques focus more on providing useful feedback and less on taking snapshots of what participants know at that moment. Issues with traditional leadership development assessment techniques include:

  •    They don’t provide the organization or the business simulation participant with any practical feedback. The most typical “test” is a classic “smile sheet,” which simply measures whether the participants liked the course and felt that it met the stated objectives.
  •    They merely measure an employee’s ability to recall information, which doesn’t indicate integration of knowledge. These tests may even report misleading information if they are given while the participant is at the low point in her U-shaped learning, where she is reconstructing her mental model.
  •    They rarely take into account an employee’s natural abilities and preferences, and instead focus on the easy-to-measure “snapshot in time.”

The goal is not just to test what participants know, but also to give them insightful feedback that will help them continue to develop their self-awareness and thinking abilities. In other words, leadership development assessments should not be integrated into a course to check whether the participant “got it.” Assessments should be integrated within a business simulation to surface and illuminate limiting beliefs.

The key then is to provide participants with various forms of feedback and with enough time to incorporate their new thinking and experiences into their newly forming mental models.

As we work with learners in our professional development training programs, we continue to evolve three assessment techniques: cognitive analysis, mode analysis, and behavioral analysis. Each of these techniques captures various data points as the participant progresses through the simulation. This approach reduces the impact of the U-shaped curve as participants’ thinking and behavior are charted over time.

In my next post, I’ll describe how each of these three assessment techniques leads to effective leadership training.

Michael Vaughan is the CEO of The Regis Company, a global provider of custom business simulations and experiential learning programs. Michael is the author of the books The Thinking Effect: Rethinking Thinking to Create Great Leaders and the New Value Worker and The End of Training: How Business Simulations Are Reshaping Business.
