
What Happened to the “E” in ADDIE?

by John Cecil

Instructional designers hear about ADDIE a lot: it’s the easiest to remember of the ID models taught in school, and probably the only one asked about in job interviews. Occasionally, we see ADDIE in action in the real world: I once worked at a company with project folders for Analysis, Design, Development, Implementation, and Evaluation. That last one – the Evaluation folder – was empty.

Most companies do have a step called “evaluation,” however, and it usually takes one of two forms: the survey and client feedback.

Survey Says…

In live training, a survey is typically a list of rating questions with general statements like “This training helped me in my work,” followed by the choices “Very Much, Some, Not Very Much, Not at All.”

The learners complete the survey at the end of the day. They’re in a hurry, and they don’t want to offend anyone – after all, the trainer let them out 10 minutes early! So the “evaluations” are returned, stacked, and compiled. Almost always, the trainer gets a positive, and fairly useless, review.

The online version of this survey – emailed out after live training, or popping up at the end of an online course – is similar. Now at their desks in front of their computers, the learners are in even more of a hurry, and are usually given only the same vague questions to click through. Even if they wanted to say more, there is rarely a short-answer option.

Client Feedback 

The other type of evaluation you may get is client feedback. Obviously, this is valuable: you want the client to like the training, and you want them to ask for more. But a “thumbs-up” from the client does not really tell you much about the effectiveness of the training. After all, the client probably didn’t even take the whole course.

What We Really Need to Know 

If authentic, meaningful evaluation doesn’t usually take place, how do we know that the course is any good? How do we know if the learners paid attention?  How do we know if people learned what they were supposed to learn?

As instructional designers, what we really need to know is simple and obvious: Did the training do any good?

  • Did the tutorial on proper wing nut rotation actually lead to wing nuts being rotated properly?
  • Did the module on sexual harassment lead to fewer incidents of sexual harassment?
  • Did the safety training help to create a safer workplace?

These questions can’t be answered by the in-course assessment. We don’t just need to know whether learners went through the training. We need to know whether the training actually had an effect on their behavior.

Two Better Ways to Evaluate

  1. Ideally, a real evaluation should take place months after the completion of the training. Substantive questions should be addressed: Did the training help you, and how? Did you return to view parts of the training? Were there parts that were especially helpful? Was there a project that you initiated based on the training? Did behaviors change?
  2. Also, surveys should have more meaningful and relevant questions. Create questions that are specific, and don’t be afraid of compiling long, open-ended responses. Ask fewer rating questions and more short-answer ones.

ADDIE doesn’t always have to have the “E”. In fact, authentic evaluation is frequently left out of the real-world development process. Training built this way may still meet the learning objectives. It may still teach something, and it may even change behaviors.

But how will you know?

About the Author

John Cecil is an experienced instructional designer and project manager with particular skills in software training. You can connect with John through Writing Assistance, Inc.