A full-cycle instructional design project — from needs analysis through Kirkpatrick evaluation — built for real classroom deployment using ADDIE and Articulate 360.
Overview
This is a full-cycle instructional design project for a STEM-based early childhood education center — a school that uses discovery learning, the Engineering Design Process, and scientific inquiry to prepare children ages 2–5 to think like engineers and scientists.
When new lead teachers were hired, there was no structured way to bring them into the environment. They arrived, observed for a day or two, and figured the rest out as they went — which meant classrooms varied widely in how the school's philosophy was actually implemented, and teachers spent their first weeks guessing at expectations that should have been made explicit from the start.
I came to this project as both the designer and a practicing STEM educator at this center, which gave me direct access to the gaps and the context to understand why they existed. I took the project through four of ADDIE's five phases, from analysis through evaluation, producing a complete training architecture and eleven professional deliverables built for Articulate 360.
The built Articulate Rise 360 course — all four modules with branching scenarios and knowledge checks.
The Problem
My starting point wasn't "what should the training cover." It was "what is actually failing, and why." Before I touched any content, I spent time in the environment — observing, taking notes, and asking hard questions about what new teachers were walking into on day one.
The core problem: New lead teachers at this STEM early childhood center lacked any standardized onboarding experience — which meant every new hire arrived with different assumptions about the school's philosophy, daily structure, and facilitation approach. Some defaulted to direct instruction. Others were uncertain how to operate the school's coding tools. Most had never worked in a discovery-learning environment before. The result was classrooms that looked and felt inconsistent — and children who were not getting the experience their families enrolled them for.
The Phase 1 analysis surfaced seven distinct performance gaps — each traced to a specific root cause, not just a surface behavior. The most urgent: new teachers had no orientation to the school's discovery learning philosophy, no model of what a complete instructional day should look like, and no practice with the coding tools before they were expected to use them with children.
This wasn't a hiring problem or a motivation problem. Teachers weren't failing — the system was. There was simply no structured way to get someone from "new hire" to "ready to teach" in this specific environment. That's a training design problem, and it has a training design solution.
My Process
I used ADDIE as the structural framework, but treated the phases as iterative rather than strictly sequential. Decisions made in Phase 1 shaped Phase 2. Phase 2 constrained Phase 3. By the time I reached evaluation, every instrument traced back to a specific gap documented at the start.
I began with a full needs analysis rather than jumping to content. I conducted a learner analysis to understand who lead teachers actually are — their backgrounds, experience levels, prior training in ECE, and comfort with STEM facilitation. Then I built a task analysis mapping exactly what a lead teacher does across each of the four daily learning blocks.
The most valuable output was the gap analysis — a structured document that named each performance gap, described what it looked like in the classroom, and traced it to a specific root cause. This became the foundation for every subsequent design decision. The phase concluded with a one-page design brief to align with the school director before any content was created.
Before writing a single screen of content, I built a Learning Blueprint — the working document that captures every instructional decision before development starts. This included writing all learning objectives in Bloom's Taxonomy behavioral language (19 objectives across 4 modules), sequencing the course, and mapping out the assessment strategy from the beginning.
Two theories drove the design: Knowles' Andragogy, because lead teachers are adults who bring real classroom experience and disengage quickly from content that doesn't feel relevant to their actual job; and Kolb's Experiential Learning Cycle, because skills like inquiry facilitation can't be learned passively — they need to be practiced, not just explained.
Phase 3 produced the documents a developer (or I myself) would use to actually build the course. I wrote a complete screen-by-screen storyboard for Module 1, including exact on-screen text, narration scripts, visual and media direction, interaction specifications, and designer rationale notes for every screen.
I also produced an Articulate 360 Tool Selection Guide documenting which tool handles each interaction type and why — including the critical distinction between what Rise 360 handles natively and what requires branching scenario logic. A full Style Guide covers colors, typography, voice, interaction standards, and WCAG 2.1 accessibility requirements. The phase concluded with an 8-week Development Plan with a Gantt-style timeline, master asset list, SME review workflow, and full accessibility compliance checklist.
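To make the WCAG 2.1 color requirement concrete, here is a minimal sketch of the standard contrast-ratio check (WCAG success criterion 1.4.3) that a style guide's color pairs would be held to. The Python helper and the hex values are illustrative assumptions, not the project's actual palette or tooling.

```python
# Illustrative only: the standard WCAG 2.1 contrast-ratio check (SC 1.4.3).
# The hex values below are placeholders, not the course's actual palette.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.1 formula."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.1 AA requires at least 4.5:1 for normal body text.
print(contrast_ratio("#1A1A2E", "#FFFFFF"))          # ~17:1 -> comfortably passes AA
print(contrast_ratio("#767676", "#FFFFFF") >= 4.5)   # True (a borderline AA gray)
```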
Evaluation was scoped in Phase 2 — not added at the end. By the time I reached Phase 4, I wasn't designing instruments from scratch; I was operationalizing decisions I had already made. I built tools for all four Kirkpatrick levels: a 12-item Reaction Survey (Level 1), a knowledge baseline pre-assessment paired with scenario-based knowledge checks (Level 2), a 15-item behavioral observation checklist for Day 30 classroom visits (Level 3), and an Organizational Impact Tracker measuring retention, director coaching load, and instructional consistency over time (Level 4).
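To show how the Level 2 pieces connect, here is a minimal sketch of one common way to turn the pre-assessment baseline and the post-module knowledge-check scores into a learning-gain figure. The normalized-gain formula and the sample scores are illustrative assumptions, not the project's actual scoring rules or data.

```python
# Illustrative sketch: converting Level 2 pre/post scores into a learning gain.
# The normalized-gain formula and the sample scores are assumptions for
# illustration; they are not the project's actual data or scoring rules.

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Gain achieved as a share of the gain that was possible: (post - pre) / (100 - pre)."""
    if pre_pct >= 100:
        return 0.0
    return (post_pct - pre_pct) / (100 - pre_pct)

# Hypothetical cohort: % correct on the baseline vs. the module knowledge checks.
cohort = [(40, 80), (60, 90), (20, 70)]

gains = [normalized_gain(pre, post) for pre, post in cohort]
print(f"Average raw gain: {sum(post - pre for pre, post in cohort) / len(cohort):.0f} points")
print(f"Average normalized gain: {sum(gains) / len(gains):.2f}")
```

Normalized gain is useful here because new hires start from very different baselines; it reports how much of each learner's possible improvement was actually achieved rather than rewarding those who simply started lower.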
The piece I'm most deliberate about is the Continuous Improvement Protocol — a set of specific decision rules that tell the director and designer exactly what to do when data signals a problem. I also built an Evaluation Report Template so that after each cohort, stakeholders receive a clean summary of what the data showed and what changes are recommended.
Deliverables
Each deliverable was built to a professional standard, the kind I'd walk through in a stakeholder review. They're organized by phase so you can see how the project moved from analysis through evaluation.
Phase 1 — Analysis
Learner analysis, task analysis, 7-gap analysis with root causes, and design brief.
Phase 2 — Design
19 Bloom's-level objectives, course map, assessment strategy, modality rationale, and theoretical framework.
Phase 3 — Development
Four modules with branching scenarios, scenario knowledge checks, reflection prompts, and interactive activities. Published as SCORM 1.2.
Screen-by-screen production document: on-screen text, narration scripts, interaction specs, visual direction, and designer rationale.
Component-by-component tool decisions, Rise block types, WCAG 2.1 color specs, typography, voice & tone, and interaction standards.
8-week Gantt timeline, master asset list, SME review workflow, and a 30-item WCAG 2.1 accessibility checklist.
Phase 4 — Evaluation
Full Kirkpatrick four-level plan with measurement methods, success indicators, governance structure, and continuous improvement protocol.
12-item rated survey across Relevance, Quality, Engagement, and Confidence — plus 2 open-ended questions and 3 live-session items.
5-question knowledge baseline administered before Module 1 to calculate measurable learning gain.
15 behavioral indicators across 5 domains, administered at Day 30. Includes pass threshold and agreed next steps.
Stakeholder-ready report with Executive Summary, all-level data tables, and a recommendation plan paired with an organizational impact tracker.
Theory & Tools
Design decisions without rationale are just preferences. These are the four frameworks I leaned on most heavily — and specifically how each one shaped this project.
Designer's Reflection
This project sits at the intersection of two things I know well — early childhood education and instructional design — and that made it harder, not easier. When you're inside a system, you normalize things that an outside designer would immediately flag. I had to work against that instinct constantly.
The phase I'm most satisfied with is the gap analysis. Resisting the urge to jump straight to content and instead asking "why is this happening?" for each gap changed the entire shape of the project. The learning objectives I wrote in Phase 2 were sharper because of it, and the evaluation instruments in Phase 4 were more targeted because of it.
If I were doing this again, I'd run structured SME interviews earlier — before finalizing the gap analysis, not after. My observations were accurate, but observation alone has blind spots. A direct conversation with the school director and a current lead teacher would have surfaced things I had absorbed as normal because I was part of the environment.
I'd also pull in a learner for storyboard review before going to the SME. The person most qualified to tell you whether a module will land is the person who would actually take it.