Instructional Design · Full ADDIE Cycle · Early Childhood Education · Atlanta, GA

Lead Teacher Onboarding for a STEM-Based Early Childhood Education Center

A full-cycle instructional design project — from needs analysis through Kirkpatrick evaluation — built for real classroom deployment using ADDIE and Articulate 360.

Role: Sole Instructional Designer
Framework: ADDIE + Kirkpatrick
Tool: Articulate Rise 360
Deliverables: 11 professional documents
Seat Time: ~6 hours, blended

Overview

The project

This is a full-cycle instructional design project for a STEM-based early childhood education center — a school that uses discovery learning, the Engineering Design Process, and scientific inquiry to prepare children ages 2–5 to think like engineers and scientists.

When new lead teachers were hired, there was no structured way to bring them into the environment. They arrived, observed for a day or two, and figured the rest out as they went — which meant classrooms varied widely in how the school's philosophy was actually implemented, and teachers spent their first weeks guessing at expectations that should have been made explicit from the start.

I came to this project as both the designer and a practicing STEM educator at this center — which gave me direct access to the gaps and the context to understand why they existed. I took the project through ADDIE's Analysis, Design, Development, and Evaluation phases, producing a complete training architecture and eleven professional deliverables built for Articulate 360.

4 ADDIE phases fully documented
11 professional deliverables
4 Rise 360 eLearning modules
~6h estimated seat time

Live eLearning Course

The built Articulate Rise 360 course — all four modules with branching scenarios and knowledge checks.


The Problem

No onboarding. No consistency. No baseline.

My starting point wasn't "what should the training cover." It was "what is actually failing, and why." Before I touched any content, I spent time in the environment — observing, taking notes, and asking hard questions about what new teachers were walking into on day one.

The core problem: New lead teachers at this STEM early childhood center lacked any standardized onboarding experience — which meant every new hire arrived with different assumptions about the school's philosophy, daily structure, and facilitation approach. Some defaulted to direct instruction. Others were uncertain how to operate the school's coding tools. Most had never worked in a discovery-learning environment before. The result was classrooms that looked and felt inconsistent — and children who were not getting the experience their families enrolled them for.

The Phase 1 analysis surfaced seven distinct performance gaps — each traced to a specific root cause, not just a surface behavior. The most urgent: new teachers had no orientation to the school's discovery learning philosophy, no model of what a complete instructional day should look like, and no practice with the coding tools before they were expected to use them with children.

This wasn't a hiring problem or a motivation problem. Teachers weren't failing — the system was. There was simply no structured way to get someone from "new hire" to "ready to teach" in this specific environment. That's a training design problem, and it has a training design solution.


My Process

Phase by phase — how I got there

I used ADDIE as the structural framework, but treated each phase as iterative rather than sequential. Decisions made in Phase 1 shaped Phase 2. Phase 2 constrained Phase 3. By the time I reached evaluation, every instrument traced back to a specific gap documented at the start.

Phase 1

Analysis — Understanding before designing

I began with a full needs analysis rather than jumping to content. I conducted a learner analysis to understand who lead teachers actually are — their backgrounds, experience levels, prior training in ECE, and comfort with STEM facilitation. Then I built a task analysis mapping exactly what a lead teacher does across each of the four daily learning blocks.

The most valuable output was the gap analysis — a structured document that named each performance gap, described what it looked like in the classroom, and traced it to a specific root cause. This became the foundation for every subsequent design decision. The phase concluded with a one-page design brief to align with the school director before any content was created.

Learner Analysis · Task Analysis · Gap Analysis · Design Brief
Phase 2

Design — Blueprint before build

Before writing a single screen of content, I built a Learning Blueprint — the working document that captures every instructional decision before development starts. This included writing all learning objectives in Bloom's Taxonomy behavioral language (19 objectives across 4 modules), sequencing the course, and mapping out the assessment strategy from the beginning.

Two theories drove the design: Knowles' Andragogy, because lead teachers are adults who bring real classroom experience and disengage quickly from content that doesn't feel relevant to their actual job; and Kolb's Experiential Learning Cycle, because skills like inquiry facilitation can't be learned passively — they need to be practiced, not just explained.

Apply · Analyze · Evaluate
Bloom's Taxonomy · 19 Objectives · Course Map · Knowles' Andragogy · Kolb's ELC · Kirkpatrick L1–L4
Phase 3

Development — Designing for production in Articulate 360

Phase 3 produced the documents a developer — or I myself — would use to actually build the course. I wrote a complete screen-by-screen storyboard for Module 1 including exact on-screen text, narration scripts, visual and media direction, interaction specifications, and designer rationale notes for every screen.

I also produced an Articulate 360 Tool Selection Guide documenting which tool handles each interaction type and why — including the critical distinction between what Rise 360 handles natively and what requires branching scenario logic. A full Style Guide covers colors, typography, voice, interaction standards, and WCAG 2.1 accessibility requirements. The phase concluded with an 8-week Development Plan with a Gantt-style timeline, master asset list, SME review workflow, and full accessibility compliance checklist.

Articulate Rise 360 · Review 360 · Full Storyboard · Style Guide · WCAG 2.1 AA · SCORM 1.2
Phase 4

Evaluation — Measuring what actually matters

Evaluation was scoped in Phase 2 — not added at the end. By the time I reached Phase 4, I wasn't designing instruments from scratch; I was operationalizing decisions I had already made. I built tools for all four Kirkpatrick levels: a 12-item Reaction Survey (Level 1), a knowledge baseline pre-assessment paired with scenario-based knowledge checks (Level 2), a 15-item behavioral observation checklist for Day 30 classroom visits (Level 3), and an Organizational Impact Tracker measuring retention, director coaching load, and instructional consistency over time (Level 4).
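To make the Level 2 logic concrete: the pre/post comparison behind "measurable learning gain" can be sketched as a normalized-gain calculation. This is an illustration only, not one of the project deliverables, and the function name and example scores are hypothetical:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Normalized gain: the share of possible improvement actually achieved.

    A learner who starts at 40% and finishes at 85% has realized
    45 of the 60 points available to them, a gain of 0.75.
    """
    if pre_pct >= 100.0:
        return 0.0  # no room to improve, so no gain to measure
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical example: 40% on the 5-question baseline, 85% after Module 1.
print(round(normalized_gain(40.0, 85.0), 2))  # 0.75
```

Reporting gain as a share of possible improvement (rather than raw point difference) keeps cohorts comparable even when baseline scores differ.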

The piece I'm most deliberate about is the Continuous Improvement Protocol — a set of specific decision rules that tell the director and designer exactly what to do when data signals a problem. I also built an Evaluation Report Template so that after each cohort, stakeholders receive a clean summary of what the data showed and what changes are recommended.
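The protocol itself is a written document, but its shape — data thresholds mapped to specific actions — is easy to illustrate. The thresholds and actions below are hypothetical stand-ins, not the project's actual decision rules:

```python
# Hypothetical sketch of threshold-triggered decision rules,
# in the spirit of a continuous improvement protocol.
def review_cohort(l1_avg: float, l2_gain: float, l3_pass_rate: float) -> list[str]:
    """Map cohort-level evaluation data to recommended follow-up actions."""
    actions = []
    if l1_avg < 4.0:        # Level 1: reaction survey mean on an assumed 1-5 scale
        actions.append("Revisit module relevance with a current lead teacher")
    if l2_gain < 0.5:       # Level 2: normalized pre/post learning gain
        actions.append("Audit knowledge checks against the Phase 1 gap analysis")
    if l3_pass_rate < 0.8:  # Level 3: share of teachers passing the Day 30 observation
        actions.append("Schedule targeted coaching before the next cohort")
    return actions

print(review_cohort(4.4, 0.62, 0.7))
# -> ['Schedule targeted coaching before the next cohort']
```

The point of encoding rules this way — whether in a document or a script — is that the director never has to interpret the data cold: each signal already has an agreed response.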

Kirkpatrick L1–L4 · Reaction Survey · Observation Checklist · Org. Impact Tracker · Evaluation Report · Continuous Improvement

Deliverables

11 deliverables. One complete project.

Each deliverable was built to a professional standard — the kind I'd walk through in a stakeholder review. Organized by phase so you can see how the project moved from analysis through evaluation.

Phase 1 — Analysis

📋 Needs Analysis Report
Learner analysis, task analysis, 7-gap analysis with root causes, and design brief.

Phase 2 — Design

🗺️ Learning Blueprint
19 Bloom's-level objectives, course map, assessment strategy, modality rationale, and theoretical framework.

Phase 3 — Development

🖥️ Rise 360 eLearning Course — All 4 Modules
Four modules with branching scenarios, scenario-based knowledge checks, reflection prompts, and interactive activities. Published as SCORM 1.2.

📄 Module 1 Full Storyboard
Screen-by-screen production document: on-screen text, narration scripts, interaction specs, visual direction, and designer rationale.

🎨 Articulate 360 Tool Selection Guide & Style Guide
Component-by-component tool decisions, Rise block types, WCAG 2.1 color specs, typography, voice and tone, and interaction standards.

📅 Development Plan & Accessibility Checklist
8-week Gantt timeline, master asset list, SME review workflow, and a 30-item WCAG 2.1 accessibility checklist.

Phase 4 — Evaluation

📊 Evaluation Framework
Full Kirkpatrick four-level plan with measurement methods, success indicators, governance structure, and continuous improvement protocol.

📝 Level 1 Reaction Survey
12-item rated survey across Relevance, Quality, Engagement, and Confidence — plus 2 open-ended questions and 3 live-session items.

🔍 Level 2 Pre-Assessment
5-question knowledge baseline administered before Module 1 to calculate measurable learning gain.

Level 3 Classroom Observation Checklist
15 behavioral indicators across 5 domains, administered at Day 30. Includes a pass threshold and agreed next steps.

📈 Evaluation Report Template + Level 4 Impact Tracker
Stakeholder-ready report with an Executive Summary, all-level data tables, and a recommendation plan paired with an organizational impact tracker.


Theory & Tools

Why I made the choices I made

Design decisions without rationale are just preferences. These are the four frameworks I leaned on most heavily — and specifically how each one shaped this project.

Knowles' Andragogy
Adult learners bring prior experience and need to understand the relevance of content before engaging with it. This drove the learner-promise framing in Module 1, the use of reflection prompts throughout, and the scenario-first approach to all knowledge checks.
Kolb's Experiential Learning Cycle
Learning happens through four stages: Concrete Experience → Reflective Observation → Abstract Conceptualization → Active Experimentation. Every module cycles through all four stages, with eLearning handling the first three and the live practice session serving as Active Experimentation.
Bloom's Taxonomy
All 19 learning objectives are written at the Apply, Analyze, or Evaluate level — not Remember or Understand. The performance gap wasn't knowledge, it was application. A teacher can know what discovery learning is and still deliver direct instruction on Monday morning.
Kirkpatrick Four-Level Model
Evaluation was scoped before development started. Every objective was written with a measurable outcome already in mind, every assessment was designed to generate usable data, and every evaluation instrument in Phase 4 traces directly back to a gap named in Phase 1.
Articulate Rise 360
Review 360
SCORM 1.2
WCAG 2.1 AA
Kirkpatrick Model
ADDIE Framework

Designer's Reflection

What I'd do differently next time

This project sits at the intersection of two things I know well — early childhood education and instructional design — and that made it harder, not easier. When you're inside a system, you normalize things that an outside designer would immediately flag. I had to work against that instinct constantly.

The phase I'm most satisfied with is the gap analysis. Resisting the urge to jump straight to content and instead asking "why is this happening?" for each gap changed the entire shape of the project. The learning objectives I wrote in Phase 2 were sharper because of it, and the evaluation instruments in Phase 4 were more targeted because of it.

If I were doing this again, I'd run structured SME interviews earlier — before finalizing the gap analysis, not after. My observations were accurate, but observation alone has blind spots. A direct conversation with the school director and a current lead teacher would have surfaced things I had absorbed as normal because I was part of the environment.

I'd also pull in a learner for storyboard review before going to the SME. The person most qualified to tell you whether a module will land is the person who would actually take it.