Workflow: 90-Day AI Fluency Program Implementation

A week-by-week implementation workflow for rolling out the 90-Day AI Fluency Program, from pre-launch preparation through post-program scaling.

Based on the 90-Day AI Fluency Program framework --- Chapter 8

When to Use This Workflow

  • When you have executive sponsorship and are ready to begin the 90-day program
  • When scaling from a successful pilot to the next cohort
  • When adapting the program for a specific department or function

Time to Complete

2 weeks pre-launch + 12 weeks execution + 2 weeks post-program evaluation = 16 weeks total.

Pre-Launch: Weeks -2 to 0

Week -2: Provision AI tool licenses. Select 20--50 participants across 3+ functions. Assign ground troops (1 per 15--20 participants). Create a dedicated cohort channel.

Week -1: Measure baselines (tool usage, output per person, cycle times, satisfaction). Prepare materials. Brief ground troops. Confirm executive sponsor for kickoff.
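One lightweight way to keep Week -1 baselines comparable across functions and cohorts is a single record per participant, captured before kickoff. A minimal sketch in Python; the field names, units, and file name are illustrative, not prescribed by the program:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Baseline:
    """Week -1 snapshot for one participant; all fields are illustrative."""
    participant: str
    function: str            # e.g. "engineering", "finance"
    ai_uses_per_week: float  # current AI tool usage
    outputs_per_week: float  # units depend on the function (tickets, docs, ...)
    cycle_time_days: float   # average time from start to finished output
    satisfaction_1to5: int   # self-reported

baselines = [
    Baseline("alice", "engineering", 0, 12, 3.5, 3),
    Baseline("bob", "finance", 1, 8, 5.0, 4),
]

# Persist the snapshot so the Week 8 and Week 12 reviews compare against it.
with open("baseline_week_minus_1.json", "w") as f:
    json.dump([asdict(b) for b in baselines], f, indent=2)
```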

Phase 1: Foundation (Weeks 1--4)

Week 1 --- First Win: 60-minute kickoff. Ensure every participant completes a basic AI task on day one. Success: 100% of participants complete at least one task.

Week 2 --- Daily Practice: Low-stakes daily tasks (email drafting, meeting summaries). Ground troops check in with each participant. Success: 80%+ used AI 3+ times.
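The "80%+ used AI 3+ times" criterion is easy to check mechanically if tool usage is logged. A minimal sketch, assuming a simple per-task usage log; the names and log format are hypothetical:

```python
from collections import Counter

# Hypothetical usage log: one entry per AI task completed this week.
usage_log = ["alice", "alice", "alice", "bob", "bob", "carol",
             "carol", "carol", "carol", "dana"]
participants = ["alice", "bob", "carol", "dana", "evan"]

uses = Counter(usage_log)
active = [p for p in participants if uses[p] >= 3]
rate = len(active) / len(participants)
print(f"Used AI 3+ times: {rate:.0%} (target: 80%+)")  # 40% here, short of target
```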

Week 3 --- Expanding Comfort: Longer document analysis, brainstorming, information synthesis. Share explicit examples of AI limitations. Success: Participants articulate what AI is bad at.

Week 4 --- Phase Review: Launch recurring office hours. Executive sponsor hears early wins. Measure adoption rate and confidence. Success: 90%+ adoption; reported time savings.

Phase 2: Integration (Weeks 5--8)

Week 5 --- Domain Use Cases: Function-specific tasks (engineering: code review; finance: analysis; marketing: content; customer service: response templates; operations: process docs). Success: 2+ use cases per participant.

Week 6 --- Prompt Libraries: Each participant saves 5+ working prompts. Teams share within functions, then cross-functionally. Success: Shared library with 80%+ contributions.
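A shared library works best when each entry records not just the prompt but where it works and where it fails. A sketch of one possible entry format, plus the Week 6 contribution check; all field names and the cohort size are assumptions:

```python
# Illustrative schema for one shared-library entry; field names are assumptions.
prompt_entry = {
    "title": "Meeting summary to action items",
    "author": "bob",
    "function": "operations",
    "prompt": ("Summarize the meeting notes below into decisions, "
               "action items with owners, and open questions:\n\n{notes}"),
    "works_best_for": "notes under ~2 pages",
    "known_failure_modes": "invents owners when none are named",
}

# Week 6 success check: 80%+ of participants contributed at least one entry.
library = [prompt_entry]  # plus everyone else's entries
contributors = {entry["author"] for entry in library}
print(f"Contribution rate: {len(contributors) / 25:.0%} of a 25-person cohort")
```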

Week 7 --- Peer Learning: Structured show-and-tell (30 min, biweekly). Launch communities of practice. Optional half-day hackathon. Success: Unprompted tip sharing.

Week 8 --- Phase Review: Measure output changes and cycle time improvements against baselines. Identify future ground troops. Adjust use cases based on what drove adoption. Success: Measurable improvement in 1+ metric per function.
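The comparison itself is simple arithmetic against the Week -1 snapshot. A sketch with illustrative numbers:

```python
# Compare Week 8 measurements against the Week -1 baseline (values illustrative).
baseline = {"cycle_time_days": 5.0, "outputs_per_week": 8}
week8    = {"cycle_time_days": 3.8, "outputs_per_week": 10}

cycle_gain  = (baseline["cycle_time_days"] - week8["cycle_time_days"]) / baseline["cycle_time_days"]
output_gain = (week8["outputs_per_week"] - baseline["outputs_per_week"]) / baseline["outputs_per_week"]
print(f"Cycle time: {cycle_gain:.0%} faster; output: {output_gain:+.0%}")
# Cycle time: 24% faster; output: +25%
```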

Phase 3: Mastery (Weeks 9--12)

Week 9 --- Complex Workflows: Multi-step tasks requiring AI + human judgment. Technical teams start agent pipelines. Success: Completed multi-step workflow.
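A Week 9 workflow in skeleton form: two chained AI steps with a human checkpoint as the final gate. `call_model` and `human_flags_issue` are placeholders for the cohort's actual tool and review process, not part of the program:

```python
def call_model(prompt: str) -> str:
    """Placeholder for the cohort's AI tool; returns canned text so the
    sketch runs end to end. Swap in the real client here."""
    return f"[model output for: {prompt[:40]}...]"

def human_flags_issue(draft: str) -> bool:
    """Placeholder for the human review step (checklist, sign-off, etc.)."""
    return False

def summarize_then_draft(source_doc: str) -> str:
    # Step 1: AI extracts the key points.
    summary = call_model(f"List the key points of:\n{source_doc}")
    # Step 2: AI drafts a response from the summary.
    draft = call_model(f"Draft a client update based on:\n{summary}")
    # Step 3: a human, not the model, is the final gate.
    if human_flags_issue(draft):
        draft = call_model(f"Revise for accuracy and tone:\n{draft}")
    return draft

print(summarize_then_draft("Q3 project notes..."))
```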

Week 10 --- Autonomy: Reduce ground troop intervention. Focus on "when not to use AI." Participants document top 3 use cases with time savings. Success: Can explain when AI is the wrong tool.

Week 11 --- Teaching: Pair each participant with 2--3 future cohort members. Create short guides of best use cases. Success: Taught one colleague a specific technique.

Week 12 --- Conclusion: Final measurement against baselines. Executive review with ROI data. Graduate recognition. Confirm ground troops for Cohort 2. Success: Documented ROI supporting expansion.

Post-Program: Weeks 13--14

Week 13: Compile program report. Document lessons learned. Collect testimonials.

Week 14: Refine materials. Graduates begin Cohort 2 preparation as ground troops, each supporting 15--20 new participants.

Break-Even Expectations

Deployment Type       Break-Even     Expected ROI
Focused single-team   3--5 months    300%+ over 3 years
Multi-team rollout    6--9 months    300%+ over 3 years
Enterprise-wide       9--12 months   300%+ over 3 years
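The break-even windows and the 300%+ figure follow from straightforward arithmetic once you estimate program cost and monthly savings. A sketch with assumed placeholder numbers, not figures from the chapter:

```python
# Assumed figures for illustration only; plug in your own cost and savings.
program_cost = 60_000      # licenses, facilitation, participant time
monthly_savings = 15_000   # value of time saved across the cohort per month

break_even_months = program_cost / monthly_savings     # 4.0 months
three_year_gain = monthly_savings * 36 - program_cost  # 480,000
roi = three_year_gain / program_cost                   # 8.0 -> 800%
print(f"Break-even: {break_even_months:.1f} months; 3-year ROI: {roi:.0%}")
```

With these assumed inputs, a focused single-team deployment breaks even in 4 months and clears the 300% threshold comfortably; larger rollouts push break-even later because costs land earlier than savings.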

Full chapter: Chapter 8: Teams for AI-First Companies