AI Adoption & Implementation Portfolio

Joaquin Wilson, MBA

Digital Transformation, Strategy Consulting & Customer Success  ·  Ex-Accenture, KPMG, IBM
Bilingual EN/ES  ·  Orlando, FL

Verified Outcomes
90%
Client retention rate across at-risk accounts
97h
Billable hours saved through training systems
77%
Field survey time reduction (120 min to 27 min)
91%
New hire first-submission accuracy rate
AI Adoption Manager · Implementation Consultant · Customer Success (SaaS) · Federal Consulting · Enterprise SaaS · B2B · Field Operations · ServiceNow · Bilingual EN/ES

The work, in plain terms.

I'm an AI adoption strategist who has spent seven years embedded inside the problem. Federal programs, Big 4 consulting, field operations. Every role taught me the same thing: technology doesn't fail, people do. I build the systems and train the teams that change that.

At Accenture Federal Services, I worked directly under the program manager for a federal Digital Transformation initiative, supporting 9 application development programs. My job was platform adoption: getting federal teams to change how they worked, document what they did, and trust a system they didn't ask for. I replaced manual documentation cycles with Claude-assisted workflows, cut reporting build time from 3.5 hours to under 50 minutes, and maintained 100% on-time delivery to federal leadership across the engagement.

Most recently, following the conclusion of a federal consulting engagement impacted by DOGE-related cost reduction initiatives, I was brought into a Global Equipment Manufacturer to stabilize a hurricane-disrupted territory with 30 at-risk client accounts. Over a five-month engagement, I retained 27 of those accounts, introduced AI-assisted workflows to a field team with no prior digital productivity experience, cut survey time by 77%, and reduced ticket resolution from 9 days to 3. None of that was familiar industry territory. That was the point.

I use AI the same way I use any other tool: to close a specific gap, with a measurable result, and a process someone else can follow after I leave.


Three problems. Three systems. Real numbers.

Every case study follows the same structure: the broken process, what I built, and the result. No polished outputs without showing the work behind them.

Case Study 01 — Flagship
★ Global Equipment Manufacturer
Operational Stabilization & AI-Enabled Recovery in a Post-Disruption Territory

Deployed into a hurricane-disrupted territory with 30 at-risk client accounts and a mandate to stabilize operations in five months. Retained 27 of 30 accounts, cut ticket resolution time by 67%, reduced field survey time by 77%, and introduced AI workflows to a team with no prior digital productivity experience.

27/30
At-risk client accounts retained
67%
Ticket resolution time reduced
77%
Field survey time reduction
Case Study 02
Accenture Federal Services
AI-Driven Program Operations at Federal Scale

Replaced manual documentation cycles, fragmented communications, and reactive reporting across 9 federal application development programs using Claude-assisted workflows. Cut documentation drafting time by 68% and maintained 100% on-time delivery to federal leadership.

9
Federal programs under active support
68%
Reduction in documentation drafting time
100%
On-time financial reporting delivery
Case Study 03
Big 4 Advisory Firm
Closing the Bench Gap: A Knowledge Retention System for New Hire Consultants

New analysts completed onboarding, then sat idle for up to 6 weeks before live projects began. Built a 16-session webinar series and a self-serve documentation library from scratch with zero budget and no mandate.

97h
Total billable hours saved
91%
First-submission accuracy rate
$0
Budget required to build the system

Prompt Library
16 production-ready prompts. All copyable.

The actual frameworks I built across federal consulting, enterprise advisory, and field operations. Documentation, reporting, stakeholder comms, change management, customer success — pick a category and test one yourself.


Credentials & Education
✓ Verified
Google Prompting Essentials
Google · AI prompt design & application
Apr. 2026
✓ Verified
Prompt Engineering
Vanderbilt University · Advanced AI workflow design
Mar. 2026
✓ Verified
CPOSP — Certified Product Owner Scrum Professional
Six Sigma Global Institute
Apr. 2025
✓ Verified
Certified SAFe 6 Practitioner
Scaled Agile, Inc. · Enterprise delivery
Dec. 2024
Agile Business Analyst
SoftEd · Agile BA practices
Oct. 2024
✓ Verified
Lean Six Sigma Green Belt
Six Sigma Global Institute · Process improvement
Jul. 2024
ICAgile Certified Professional
ICAgile · Agile project management
May 2024
BCG Strategy Consulting Job Simulation
Boston Consulting Group
Sep. 2023
✓ Verified
MBA Intern — Summit Program
IBM · Enterprise consulting, MBA track
Aug. 2021
✓ Verified
Enterprise Design Thinking Practitioner
IBM · Human-centered design methodology
2021

What colleagues and managers say.

Written by managers, peers, and mentors across Accenture Federal Services, KPMG, and IBM.

"I had the pleasure of managing and mentoring Joaquin Wilson during his time on our project. He consistently demonstrated exceptional analytical skills, a keen eye for detail, and a proactive attitude in tackling complex business challenges."
William Conway
Management Consulting · Accenture
Managed Joaquin directly · Apr. 2025
"I've had the pleasure of working alongside Joaquin for several months, and I can't recommend him highly enough. His ability to balance big-picture thinking with attention to detail made him an incredible teammate and a natural leader. Any team would be lucky to have Joaquin."
Mariam Badejo
Change Management & Engagement · Strategy and Advisory
Worked with Joaquin on the same team · Apr. 2025
"I've had the pleasure of working with Joaquin on mutual projects. Joaquin is always open to learning and is dedicated to providing a quality work product. I could count on Joaquin to meet deadlines and to work hard. Any team would be fortunate to have Joaquin."
Brian Christensen, CPA
Senior Manager, Tax · KPMG US
Managed Joaquin directly · Jul. 2023
"Joaquin worked for me at IBM. He has a strong desire to continuously learn and is growth-minded. He is courageous and will suggest new ideas, and is a hard worker. I look forward to seeing what Joaquin does as his career progresses!"
Maureen Borbely Pulscher
Global Workforce Transformation · IBM
Managed Joaquin directly · Mar. 2024
"Throughout the summer, Joaquin showed persistent proactiveness in his professional development, an eagerness to solicit and receive feedback, and creativity in applying his previous work experience skills to his role at IBM. Joaquin demonstrated a consistent willingness to learn and a passion for improving his skill set and value in his role."
Richard Hernandez
Strategic Partnerships · Amazon
Joaquin's mentor at IBM · Sep. 2021
"Joaquin is the hardest working colleague I have ever worked with. He settles for nothing but his best and day in and day out works very hard to show why he is an incredible professional. Joaquin will also push you to become the best you can be and I witnessed that first hand myself."
Michael Geer
IBM LinuxONE Specialist · IBM
Worked with Joaquin on the same team · Jul. 2021
"Joaquin is not afraid to ask for help. He is also coachable. Joaquin seeks criticism so that he can make his final presentation superb. Joaquin is the positive energy that excites the team to come to each and every meeting. Joaquin would be an asset to any team."
Reese Farquhar
IBM Account Technical Leader · IBM
Worked with Joaquin on the same team · Aug. 2021
"Joaquin is a very proactive and hardworking individual. He was always respectful of my time, but never shied away from reaching out to ask a question when needed. He is very personable and easy to work with. Joaquin would make a great addition to any organization."
Alec McCain
Account Executive · Tech Sales & SaaS
Joaquin's mentor at IBM · Aug. 2021

Open to the right conversation.

Targeting AI Adoption Manager, Implementation Consultant, and Customer Success roles. If you need someone who can walk into a team, diagnose where adoption is failing, and build systems that stick, let's talk.

Orlando, FL · Open to Relocation · Remote-Ready · Available Now · Bilingual EN/ES
jwils153d@gmail.com
Case Study 02

AI-Driven Program Operations at Federal Scale

Supporting 9 application development programs inside a federal Digital Transformation initiative, I replaced manual documentation cycles, fragmented communications, and reactive reporting with AI-assisted workflows built on Claude, ChatGPT, and Gemini.

Engagement Details
Organization: Accenture Federal Services
Client: Large Federal Agency, Digital Transformation Division
Role: Senior Analyst, Product Management Support & Org. Readiness
Duration: Dec. 2023 — May 2025
Programs Supported: 9 application development programs
Team Scale: 50+ cross-functional team members
AI Stack: Claude (primary) · ChatGPT · Gemini
Clearance: Interim Public Trust
68%
Reduction in documentation drafting time
50+
Team members tracked across reporting cycles
100%
On-time financial reporting delivery rate
5-Pillar Case Study
1
Pillar One
Diagnostic Framing — The Operational Bottlenecks

The client's Digital Transformation team was running 9 simultaneous application development programs under a single program manager. Three bottlenecks were costing hours every week.

Documentation was built from scratch every cycle. Quick reference guides, agile session materials, and stakeholder presentation decks were drafted manually with no reusable templates. A single QRG update could consume 3 to 4 hours of analyst time.

Reporting for 50+ people was a manual collection exercise. The weekly status report, monthly status report, and PTO tracker required individual inputs consolidated by hand. On-time delivery depended entirely on manual coordination.

Stakeholder communications had no consistent standard. All-hands meeting content, onboarding materials, and cross-functional updates were written from scratch each time.

Measurable Goal

Cut documentation production time by at least 50%, achieve 100% on-time reporting delivery, and establish reusable AI-assisted templates that any analyst could operate without starting from zero.

2
Pillar Two
Prompt Iteration Logs — Evidence of Judgment

The first versions of my prompts produced generic output. A prompt asking Claude to write a quick reference guide returned documentation that lacked the agency's specific terminology, stakeholder hierarchy, and compliance framing. It required more editing than writing from scratch. The fix was constraint layering.

Version 1 — Naive Prompt
"Write a quick reference guide for the agile release process for our federal project team."
Version 4 — Production Prompt
"You are a senior federal program analyst supporting a Digital Transformation initiative at a Large Federal Agency. Write a QRG for the agile release readiness process. Audience: mid-level analysts new to product management in a federal context. Tone: direct, procedural, no jargon. Format: numbered steps with decision points clearly marked. Never include recommendations requiring security clearance escalation without flagging them as SME-review items. Use the following process inputs: [inputs]."
What Changed

Adding role context, audience definition, tone constraints, and a structural format reduced revision cycles from an average of 3 rounds to 1. Claude's output was usable on the first pass in roughly 80% of documentation tasks after Version 3. Status report consolidation dropped from 3.5 hours to under 50 minutes.
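The same layering generalizes beyond any single prompt. A minimal sketch of how it could be encoded as a reusable template; the class and field names are illustrative, not the actual engagement artifact:

```python
# Hypothetical sketch: constraint-layered prompt template for documentation tasks.
# The layers mirror the Version 4 structure: role, audience, tone, format, guardrails.
from dataclasses import dataclass

@dataclass
class DocPrompt:
    role: str        # who the model should write as
    audience: str    # who will read the output
    tone: str        # tone constraints
    fmt: str         # structural format requirements
    guardrails: str  # standing rules (e.g., flag SME-review items)
    inputs: str      # the actual process inputs

    def render(self) -> str:
        return (
            f"You are {self.role}.\n"
            f"Audience: {self.audience}\n"
            f"Tone: {self.tone}\n"
            f"Format: {self.fmt}\n"
            f"Rules: {self.guardrails}\n"
            f"Use the following process inputs: {self.inputs}"
        )

qrg = DocPrompt(
    role="a senior federal program analyst supporting a Digital Transformation initiative",
    audience="mid-level analysts new to product management in a federal context",
    tone="direct, procedural, no jargon",
    fmt="numbered steps with decision points clearly marked",
    guardrails="flag any recommendation requiring clearance escalation as an SME-review item",
    inputs="[process inputs]",
)
print(qrg.render())
```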

3
Pillar Three
Hallucination Guardrails & Governance

Federal documentation carries compliance risk that commercial work does not. I built governance rules into every production prompt.

Standing System Rules

1. Never generate policy language. Flag any output requiring policy interpretation as "SME Review Required" and stop.

2. Never infer missing inputs. Return a list of required inputs before proceeding.

3. Never use language that implies organizational authority. Use procedural framing only.

4. All financial figures must be sourced from provided inputs. No estimated numbers in reporting outputs.

Every AI-generated document went through a human review step before distribution. My role was to define what the AI was allowed to produce, catch what it got wrong, and make the judgment call on what required escalation.
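Some of these rules can also be checked mechanically before a draft reaches a reviewer. A hypothetical sketch, with illustrative regex patterns; on the engagement the rules were enforced through prompts and human review, not code:

```python
# Hypothetical sketch of the standing rules as automated pre-review checks.
# Patterns are illustrative; a real deployment would tune them with SMEs.
import re

AUTHORITY_PATTERNS = [r"\bleadership (decided|announced)\b", r"\bper policy\b"]

def pre_review(text: str, provided_figures: set[str]) -> list[str]:
    """Return flags for a draft; an empty list means it can go to human review."""
    issues = []
    for pattern in AUTHORITY_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            issues.append(f"Authority/policy language matched ({pattern}): SME Review Required")
    # Rule 4: every dollar figure must trace back to a provided input.
    for figure in re.findall(r"\$[\d,]+(?:\.\d{2})?", text):
        if figure not in provided_figures:
            issues.append(f"Unsourced financial figure: {figure}")
    return issues

draft = "Leadership decided to accelerate Release 3. Projected cost: $125,000."
print(pre_review(draft, provided_figures={"$98,500"}))
```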

4
Pillar Four
Systematic Evaluation
Test Scenario | Expected Behavior | Actual Result | Status
QRG prompt with missing process step | Flag gap, request clarification | Returned input gap list, did not fabricate | PASS
Status report with 3 members missing inputs | Mark as "Pending — Input Required" | Correctly flagged 3, formatted remaining 47 | PASS
All-hands draft referencing unverified decision | Exclude or flag as unverified | Initially included inferred language — caught in review, constraint added | FLAGGED → FIXED
Agile session materials with no prior template | Generate using role and audience constraints | Usable draft on first pass, 1 minor revision | PASS
Financial projection with one program data missing | Stop, list missing data, do not estimate | Returned structured request for missing inputs | PASS
Human Judgment Boundary

The all-hands communication test was the clearest example of where I had to intervene. Claude inferred an organizational decision from context not explicitly stated. I caught it in review, added a constraint, and re-ran. AI handles production volume. I handle the accuracy boundary.
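Scenarios like these are easy to freeze into a regression harness so every prompt revision gets re-checked against known failure modes. A minimal sketch, with a canned stand-in for the model call:

```python
# Hypothetical sketch: Pillar Four scenarios as a repeatable regression harness.
# `generate` is a stand-in returning a canned compliant response; wire it to a
# real model call to use this in practice.
def generate(prompt: str) -> str:
    return "Pending — Input Required: 3 team inputs missing; no figures estimated."

def test_missing_inputs_are_flagged():
    out = generate("Consolidate the weekly report. 3 of 50 inputs are missing.")
    assert "Pending — Input Required" in out   # flag, never infer

def test_no_fabricated_financials():
    out = generate("Build the projection. One program's data is missing.")
    assert "missing" in out.lower() and "$" not in out   # stop and ask, don't estimate

test_missing_inputs_are_flagged()
test_no_fabricated_financials()
print("All guardrail scenarios passed.")
```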

5
Pillar Five
Visual Proof & Business Impact
68%
Reduction in avg. documentation drafting time
~2.5h
Saved per weekly reporting cycle
100%
On-time financial reporting delivery rate
80%
First-pass usability after prompt v3+
9
Programs with standardized templates
3→1
Avg. revision rounds per document
Scale Hypothesis

If the prompt library and reporting workflow were deployed across all 9 programs simultaneously with dedicated adoption support, projected time savings would exceed 40 analyst-hours per month — material cost avoidance at federal billing rates without a headcount increase.
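One way that projection could decompose, for transparency: the reporting figure is measured, while the per-program documentation saving below is my assumption.

```python
# Back-of-envelope check on the 40-hour projection (assumed split, for illustration).
weekly_reporting_savings_h = 2.5                            # measured: saved per weekly cycle
reporting_monthly_h = weekly_reporting_savings_h * 4.33     # about 10.8 h/month

programs = 9
assumed_doc_savings_h = 3.5                                 # assumption: h/program/month
documentation_monthly_h = programs * assumed_doc_savings_h  # 31.5 h/month

total = reporting_monthly_h + documentation_monthly_h
print(f"Projected savings: {total:.0f} analyst-hours/month")  # about 42, consistent with >40
```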

Case Study 03

Closing the Bench Gap: A Knowledge Retention System for New Hire Consultants

At a Big 4 Advisory Firm, new analysts completed onboarding and then sat idle for up to 6 weeks before joining a live engagement. I identified the gap, built a 16-session live training program, and created a self-serve documentation library. 97 hours saved. 91% first-submission accuracy. Zero budget required.

Engagement Details
Organization: Big 4 Advisory Firm
Role: Analyst, Advisory Practice
Duration: Jan. 2022 — Jul. 2023
Initiative Type: Self-directed knowledge management & training system
Format: 16 live webinars + internal documentation library
Budget Required: $0 — fully self-initiated
Mandate: None
91%
First-submission accuracy rate
16
Live webinar sessions delivered
$0
Budget required to build and run the system
5-Pillar Case Study
1
Pillar One
Diagnostic Framing — The Bench Gap Problem

The firm's onboarding process was structured and thorough. The problem was what happened after it ended. New analysts completed training, then waited up to 6 weeks on the bench before being staffed on a live engagement. By week four, procedural knowledge had degraded significantly. By week six, analysts were essentially starting from scratch.

The cost showed up in two places. Senior managers were catching errors on deliverables that should have been clean at the analyst level. And new hires were losing confidence, asking questions that had already been answered in onboarding, slowing down the teams they joined.

The Core Problem

No bridge existed between initial onboarding and first live project. Knowledge decayed during the bench period with no reinforcement mechanism, no self-serve reference system, and no structured way for analysts to stay sharp while waiting to be staffed.

Nobody assigned me to fix this. I had sat on that bench myself before moving into live client work, and I noticed the pattern. So I built the bridge.

2
Pillar Two
Prompt Iteration Logs — Building the Training System

The first version was a single walkthrough session — informal, no recording, no follow-up materials. Better than nothing, but it didn't solve the underlying problem. The second iteration added structure: a recurring 16-part webinar series, each one built around a specific task type using actual client work as demonstration material.

Version 1 — Informal Walkthrough
  • Single ad hoc session
  • No agenda or recording
  • Attendance by word of mouth
  • No way to revisit content
Version 2 — Structured System
  • 16-session series with defined topics
  • Real client work as demonstration material
  • Written guides uploaded after each session
  • Searchable library organized by task type
What Changed

Adding a persistent, searchable reference library transformed a one-time training event into a self-serve knowledge system. The webinars drove comprehension. The documentation library made that knowledge retrievable during live project work. That behavior shift was where the 97 hours came from.
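The library's organizing principle is simple enough to show as a data structure: guides keyed by task type, searchable by keyword. The file paths are purely illustrative.

```python
# Hypothetical sketch of the library's organizing principle: guides keyed by
# task type, retrievable by keyword during live project work.
LIBRARY = {
    "workpaper prep":   ["guides/workpaper_setup.md", "guides/tie_out_checklist.md"],
    "client requests":  ["guides/pbc_follow_up.md"],
    "review readiness": ["guides/self_review_before_submission.md"],
}

def find_guides(query: str) -> list[str]:
    q = query.lower()
    return [path for task, paths in LIBRARY.items() if q in task for path in paths]

print(find_guides("review"))  # returns the review-readiness guides
```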

3
Pillar Three
Guardrails & Governance
Standing Design Rules

1. Every demonstration must use real client work as the source material. No hypotheticals.

2. Every document must include a "When to escalate" section. New hires need to know when a task exceeds analyst-level judgment.

3. No documentation goes into the library without a senior manager spot-check. One review per document before publishing.

The escalation rule was the most important one. A new hire who confidently completes a task that should have been flagged creates more work than one who asks for help.

4
Pillar Four
Systematic Evaluation
Evaluation Point | Before System | After System | Result
First-submission accuracy | Frequent senior corrections | 91% approved without revision | 91% ACCURACY
Senior manager correction time | ~6–8h per cohort per month | Under 2h per cohort per month | ~97h SAVED
New hire questions to senior managers | Frequent, repetitive | Majority redirected to library | REDUCED
Documentation accuracy | N/A | 100% cleared review before publishing | PASS
Human Judgment Boundary

I produced the content. A senior manager verified it before it went into the library. That separation kept the system credible. An inaccurate guide used at scale does more damage than the problem it was meant to solve.
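For what it's worth, the 97-hour figure reconciles with the before-and-after correction times over the 18-month engagement. A rough check, with the assumed values labeled:

```python
# Rough reconciliation of the 97-hour claim (assumptions labeled).
before_h = 7.0           # midpoint of the ~6-8h corrections per cohort-month
after_h = 1.5            # assumption for "under 2h" per cohort-month
engagement_months = 18   # Jan. 2022 to Jul. 2023

saved = (before_h - after_h) * engagement_months
print(f"About {saved:.0f} hours saved")   # about 99, in line with the reported 97
```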

5
Pillar Five
Visual Proof & Business Impact
97h
Total billable hours saved across cohorts
91%
First-submission accuracy rate
16
Webinar sessions built on real client work
6wk
Bench period covered by the system
$0
Budget required
Self-init.
No mandate, no assignment
Scale Hypothesis

Deployed across three practice groups running concurrent bench periods, the per-cohort savings compound. At 97 hours saved per cycle, that represents roughly 300 billable hours recovered per quarter with no additional headcount.

Case Study 01 — Flagship

Operational Stabilization & AI-Enabled Recovery in a Post-Disruption Territory

Following the conclusion of a federal consulting engagement impacted by DOGE-related cost reduction initiatives, I was brought into a global equipment manufacturing organization to stabilize a high-risk territory devastated by hurricane damage. Over a defined five-month scope, I retained 27 of 30 at-risk client accounts, reduced ticket resolution time by 67%, cut field survey time by 77%, and introduced AI-assisted workflows across a team with no prior exposure to digital productivity tools.

Engagement Details
Organization: Global Equipment Manufacturer
Role: Senior Account Manager — Crisis Recovery Engagement
Duration: Sep. 2025 — Early 2026 (5-month scope)
Context: Post-hurricane territory recovery — 30 at-risk client accounts
AI Stack: Claude · ChatGPT · Microsoft Copilot · AI route planning
Industry: Equipment Manufacturing & Field Services
Approach: Consulting frameworks + AI-enabled productivity in a non-tech field environment
67%
Reduction in average ticket resolution time (9 days to 3 days)
77%
Field survey time reduction — 120 min to 27 min
96%
SLA compliance rate achieved after workflow systems deployed
5-Pillar Case Study
1
Pillar One
Diagnostic Framing — The Engagement Context & Operational Bottlenecks

This engagement was not a standard account management role. Following the conclusion of a federal consulting contract impacted by government cost-reduction initiatives under the Department of Government Efficiency (DOGE), I transitioned into a global equipment manufacturing organization to support a high-risk territory experiencing severe operational disruption after hurricane-related damage disrupted service continuity, client responsiveness, and field execution.

The five-month engagement had a defined stabilization mandate: reduce service backlog, improve execution efficiency, protect client retention, and introduce operational structure to a team that had been operating reactively since the disruption. Approximately 30 client accounts were identified as at risk due to ongoing service failures and operational instability. Although the industry differed significantly from my prior background in consulting, technology, and analytics-driven environments, I applied transferable consulting frameworks, operational problem-solving methodologies, and AI-enabled productivity strategies to accelerate impact within a compressed recovery timeline.

Engagement Mandate

Stabilize field operations, reduce open service backlog, protect at-risk client accounts, and introduce scalable AI-assisted workflows to a team with no prior exposure to digital productivity tools — all within a five-month engagement scope.

Three core operational bottlenecks were identified in the first two weeks of the engagement:

Problem One
The 2-Hour Paper-Based Diagnostic Survey

Every client visit meant showing up with a clipboard, writing notes by hand, and mentally juggling 40+ inspection checkpoints across cab interiors, door systems, machine rooms, and controllers. Account managers were deciding on the fly what to look at next. A thorough survey took 2 hours, quality varied by rep, and the paper notes had to be manually converted into a report afterward. The process had no structure, no consistency, and no way to scale.

Problem Two
The Single-Person Proposal Bottleneck

The branch manager wrote excellent proposals. His tone and client framing were what closed deals. No one else on the team could replicate it. Every account manager proposal required repeated revision cycles, and the manager's time was the bottleneck.

Problem Three
The Invisible Ticket Backlog

ServiceNow was the platform. Email was the process. When a ticket was created, the only notification account managers and managers received was a system-generated email that quickly got buried in inboxes. There was no centralized view of what was open, what was overdue, or who owned what. Tickets sat for days without follow-up, not because people didn't care, but because nothing made them visible.

I built a ServiceNow ticket tracker in Excel with dropdown menus for team, category, priority, and status — pulling the same data that lived in the platform into a format the team could actually use in a meeting. Then I designed a structured weekly cadence to run alongside it: a Monday morning 10-minute triage to review all open tickets, a Wednesday check-in to track progress and follow up with IT support, and a Friday close-out meeting to confirm resolved tickets and flag anything still open. Three touchpoints, 10 minutes each, zero additional headcount.
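The tracker lived in Excel, but its core logic is small enough to sketch in code. A hypothetical version of the age and SLA calculation, assuming illustrative column names, filename, and thresholds:

```python
# Hypothetical sketch of the tracker's core logic: ticket age and SLA status
# computed from a ServiceNow CSV export. Column names, the export filename,
# and the SLA thresholds are illustrative, not the real schema.
import csv
from datetime import date, datetime

SLA_DAYS = {"Critical": 1, "High": 3, "Medium": 5, "Low": 10}

def age_in_days(opened: str) -> int:
    return (date.today() - datetime.strptime(opened, "%Y-%m-%d").date()).days

def sla_status(priority: str, age: int) -> str:
    limit = SLA_DAYS.get(priority, 5)
    if age > limit:
        return "BREACHED"
    return "AT RISK" if age == limit else "ON TRACK"

with open("servicenow_open_tickets.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects Opened, Priority, Owner, Summary columns
        age = age_in_days(row["Opened"])
        print(f"{row['Owner']:<12} {row['Summary'][:40]:<40} {age:>3}d  "
              f"{sla_status(row['Priority'], age)}")
```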

67%
Reduction in average ticket resolution time
9d → 3d
Average open ticket lifespan before and after
30 min
Total weekly meeting time to run the full cadence
96%
SLA compliance rate achieved after system deployed
Weekly Ticket Cadence — Designed by Joaquin Wilson
Monday — 10 min
Full triage of all open tickets. Assign priority and owner. Set resolution target for the week.
Wednesday — 10 min
Mid-week progress check. Follow up with IT support on any blocked tickets. Escalate if needed.
Friday — 10 min
Close-out review. Confirm resolved tickets. Final IT follow-up. Flag anything rolling into next week.
Tangible Proof — The Ticket Tracker & Executive Dashboard

Below is the ServiceNow Open Tickets tracker built in Excel — category dropdowns, priority levels, SLA status, resolution times, and real-time age tracking across all open tickets. Alongside it, an IT Operations Executive Summary dashboard showing SLA compliance, open ticket counts, and team-level breakdown — built so any manager could see the full picture in under 30 seconds.

[Screenshots: ServiceNow Open Tickets Tracker.xlsx · IT Operations Executive Summary Dashboard.xlsx]
ServiceNow ticket tracker and IT Operations Executive Dashboard — built in Excel to give account managers and leadership real-time visibility into open tickets, SLA compliance, and resolution performance.
How I Built the Fix

Since I came from a consulting background, not the elevator industry, I started where any good analyst starts: research. I used AI to run deep market research on the most common upgrade opportunities and lead signals in commercial elevator maintenance, including obsolete door edge types, aging controllers, overheating machine rooms, and cab interior wear patterns. That research became the backbone of the form.

I combined that research with my consulting skills and Generative AI to design a structured digital survey in Excel. Every inspection category became a dropdown menu with standardized condition ratings. The form guided account managers through each section in a fixed sequence, eliminated the "what do I look at next" hesitation, and auto-flagged upgrade priorities by category. Paper and pen were gone. The mental load of running the survey dropped to near zero.
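The auto-flagging behavior amounts to a small rollup rule per category. A hypothetical sketch of that logic; checkpoint names and thresholds are illustrative, not the form's exact rules:

```python
# Hypothetical sketch of the form's flagging logic: standardized condition
# ratings roll up to one upgrade priority per category.
RATINGS = ("Good", "Monitor", "Upgrade Required")

def category_priority(ratings: list[str]) -> str:
    """One flag per category: the worst checkpoint rating wins."""
    if "Upgrade Required" in ratings:
        return "FLAG: upgrade opportunity"
    if ratings.count("Monitor") >= 2:
        return "WATCH: re-inspect next visit"
    return "OK"

survey = {
    "Door System":  ["Good", "Upgrade Required", "Monitor"],
    "Machine Room": ["Monitor", "Monitor"],
    "Cab Interior": ["Good", "Good", "Good"],
}
for category, ratings in survey.items():
    print(f"{category:<13} {category_priority(ratings)}")
```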

120 min
Average survey time before the digital form
27 min
Average survey time after adoption across 5 account managers
40+ pts
Inspection checkpoints standardized with dropdown logic
Tangible Proof — The Actual Tool

Below is a screenshot of the Executive Survey Form (v2) built in Excel. Each section — Cab Interior, Door System, Machine Room — uses dropdown menus with condition-specific options. Account managers move through the form in sequence, select from standardized choices, and the form flags upgrade opportunities automatically.

[Screenshot: SurveyForm_Executive_v2.xlsx]
Executive Survey Form v2 — built in Excel using dropdown logic, AI-assisted market research, and consulting methodology. Deployed across a 5-person field team.
Measurable Goal

Cut diagnostic survey time to under 30 minutes with no loss in quality, and make the branch manager's proposal style transferable to every account manager without requiring his direct involvement in every revision.

2
Pillar Two
Prompt Iteration Logs — Building Both Systems

System One: The Diagnostic Form

Version 1 — Manual Checklist
  • Free-text entries for each component
  • No standardized condition ratings
  • No automatic upgrade flagging
  • Average completion: 90 minutes
Version 3 — Production Form
  • Dropdown menus for all component conditions
  • Standardized ratings: Good / Monitor / Upgrade Required
  • Auto-flagged upgrade priorities by category
  • Average completion: 27 minutes

System Two: The Proposal Prompt Library

Version 1 — Generic Prompt
"Write a proposal for an equipment upgrade at a commercial office building. The client needs a new door operator and controller replacement."
Version 4 — Few-Shot Production Prompt
"You are writing a client proposal for a field equipment services company. Study the tone, structure, and persuasion style of the following 4 successful proposals by the branch manager: [examples]. Write a new proposal for [client/building type] addressing [upgrade needs]. Match his direct, client-focused tone exactly. Lead with the client's operational risk, not product features."
What the Few-Shot Approach Solved

Generic prompts produced generic proposals. The few-shot approach gave Claude a concrete style target. After version 3, proposals matched the manager's tone closely enough that he stopped requesting revisions. The writing style became a team asset, not a single person's skill.
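The assembly step behind the few-shot prompt is mechanical. A minimal sketch, with placeholder strings standing in for the real example proposals:

```python
# Minimal sketch of the few-shot assembly step: real example proposals go in,
# a style-anchored prompt comes out. Variable names are illustrative.
def build_fewshot_prompt(examples: list[str], client: str, needs: str) -> str:
    shots = "\n\n---\n\n".join(examples)   # at least 3 examples worked best here
    return (
        "You are writing a client proposal for a field equipment services company. "
        "Study the tone, structure, and persuasion style of the following "
        f"{len(examples)} successful proposals:\n\n{shots}\n\n"
        f"Write a new proposal for {client} addressing {needs}. "
        "Match the direct, client-focused tone exactly. "
        "Lead with the client's operational risk, not product features."
    )

prompt = build_fewshot_prompt(
    examples=["<proposal 1>", "<proposal 2>", "<proposal 3>", "<proposal 4>"],
    client="a commercial office building",
    needs="a door operator replacement and controller modernization",
)
print(prompt[:200])
```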

3
Pillar Three
Guardrails & Governance
Standing Rules — Diagnostic Form

1. The form flags upgrade priorities but never makes the final recommendation. The account manager reviews every flag before it appears in any client-facing material.

2. Machine room and controller assessments require physical inspection confirmation. The form cannot override on-site judgment for safety-adjacent components.

3. The output is an internal working document. It feeds the proposal but is never shared directly with the client.

Standing Rules — Proposal Prompt Library

1. Every AI-generated proposal is reviewed by the account manager before submission.

2. Pricing, contract terms, and technical specifications are always entered manually. The AI handles tone and structure only.

3. If the client has prior relationship history, that context is added manually before the prompt runs.

4
Pillar Four
Systematic Evaluation
Test Scenario | Expected Behavior | Actual Result | Status
Diagnostic form on building with 3 flagged components | Flags all 3, no missed items | All 3 flagged correctly, priority order accurate | PASS
Form used by colleague unfamiliar with original checklist | Completed without guidance | Completed in 31 min, 1 clarification on machine room section | PASS
Proposal prompt on new client type not in examples | Maintain tone, adjust context | Tone held, but opening led with product specs — caught in review, constraint added | FLAGGED → FIXED
Proposal prompt on client with prior service history | Neutral draft, manager adds context manually | Correctly produced neutral draft; context added manually | PASS
Route planning with AI on 5-visit day | Optimized by traffic and geography | Reduced estimated drive time by 34 minutes | PASS
Human Judgment Boundary

The proposal prompt test revealed one failure mode: without an explicit instruction to lead with client risk, Claude defaulted to product feature language — exactly what the manager's style avoided. One constraint added to the system prompt fixed it. That catch happened in testing, not in front of a client.

5
Pillar Five
Visual Proof & Business Impact
77%
Reduction in diagnostic survey time per visit
93min
Time saved per site visit across all 5 account managers
0
Proposal revisions required after prompt library v4
4
Peers trained and adopted with no mandate
34min
Avg. drive time saved per day via AI route planning
Self-init.
Both systems designed and deployed without assignment
Scale Hypothesis

At 5 account managers saving 93 minutes per site visit and an average of 3 survey visits per week, the diagnostic form alone frees roughly 23 hours of field time per week. Applied to a regional team of 20 account managers, that represents over 90 hours of recovered selling time per week — without adding headcount.
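The arithmetic behind that claim, stated explicitly:

```python
# The scale hypothesis as plain arithmetic (figures from the case study).
minutes_saved_per_visit = 93
survey_visits_per_week = 3

per_am_hours_week = minutes_saved_per_visit * survey_visits_per_week / 60  # about 4.65
print(f"5 account managers:  {per_am_hours_week * 5:.1f} h/week")   # 23.3, 'roughly 23 hours'
print(f"20 account managers: {per_am_hours_week * 20:.1f} h/week")  # 93.0, 'over 90 hours'
```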

Prompt Library

Real prompts. Real workflows. Tested results.

These are the actual prompt frameworks I built and used across federal consulting, enterprise advisory, and field operations. Every one is production-ready — copy any prompt and test it yourself.

Documentation
Quick Reference Guide (QRG) Generator

Produces a structured, audience-specific QRG for any process or platform. Built for federal and enterprise contexts where compliance language and escalation protocols matter.

Production Prompt
You are a senior program analyst supporting a [INDUSTRY] organization.

Write a Quick Reference Guide for [PROCESS NAME].

Audience: [ROLE] who are new to this process.
Tone: Direct, procedural, no jargon.
Format: Numbered steps. Flag every decision point clearly. Mark any step requiring escalation or SME review as [SME REVIEW REQUIRED].

Rules:
- Never generate policy language. If a step requires policy interpretation, flag it and stop.
- Never infer missing information. If inputs are incomplete, return a list of what is needed before proceeding.
- Never use language implying organizational authority (e.g. "leadership decided"). Use procedural framing only.

Source material: [PASTE PROCESS INPUTS HERE]
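To run this outside a chat UI, the template can be filled in and sent through the Anthropic Python SDK. A minimal sketch; the model id is only an example, and the filled-in context is hypothetical:

```python
# Minimal sketch: filling the QRG template and sending it to Claude via the
# Anthropic Python SDK (pip install anthropic; needs ANTHROPIC_API_KEY set).
import anthropic

prompt = (
    "You are a senior program analyst supporting a healthcare organization. "
    "Write a Quick Reference Guide for the intake triage process. "
    "Audience: coordinators who are new to this process. "
    "Tone: Direct, procedural, no jargon. "
    # ...remaining constraints and rules from the template above...
    "Source material: <paste process inputs here>"
)

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model id; use any current model
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```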
Reporting
Weekly Status Report Consolidator

Transforms raw individual team inputs into a formatted status report. Reduced a 3.5-hour weekly manual build to under 50 minutes across a 50+ member federal program team.

Production Prompt
You are a program analyst consolidating weekly status inputs for a [PROGRAM NAME] team of [NUMBER] members.

Task: Combine the individual inputs below into one formatted Weekly Status Report.

Output format:
1. Executive Summary (3 sentences max)
2. Accomplishments This Week (bullet list by workstream)
3. Risks and Blockers (flag anything needing escalation)
4. Next Week Priorities
5. Pending Inputs (list any team members whose inputs are missing — do not estimate or infer their status)

Rules:
- All financial figures must come directly from the inputs. Do not estimate.
- If a team member's input is missing, mark their section as "Pending — Input Required."
- Do not use language implying decisions were made unless explicitly stated in the inputs.

Team inputs: [PASTE INDIVIDUAL INPUTS HERE]
Proposal Writing
Few-Shot Proposal Style Replicator

Replicates a high-performer's proposal tone and structure using few-shot examples. Used at a Global Equipment Manufacturer to eliminate revision cycles entirely across a 5-person field team.

Production Prompt
You are writing a client proposal.

Study the tone, sentence structure, and persuasion style of the following [NUMBER] successful proposals written by [ROLE/NAME]:

[PASTE EXAMPLE PROPOSALS HERE — minimum 3 for best results]

Now write a new proposal for the following client and situation:
- Client: [CLIENT NAME / BUILDING / ORGANIZATION TYPE]
- Core need: [WHAT THEY NEED]
- Key upgrade or service: [SPECIFIC OFFERING]

Style rules (match the examples exactly):
- Lead with the client's operational risk, not the product features
- Direct, confident tone — no filler phrases
- Short paragraphs, concrete language
- Close with a clear next step

Do not include pricing, contract terms, or technical specifications — those will be added manually.
Stakeholder Comms
All-Hands Meeting Communication Draft

Generates a professional, tone-consistent all-hands communication from raw bullet points. Built for federal and enterprise environments where language precision and attribution matter.

Production Prompt
You are drafting an all-hands project communication for [PROGRAM / ORGANIZATION NAME].

Audience: [DESCRIBE AUDIENCE — e.g. cross-functional team of 50+ analysts and managers]
Tone: Professional, clear, inclusive. No corporate jargon.
Format: Short intro paragraph, 3-5 key updates as bullets, closing with next steps.

Rules:
- Only include information explicitly provided below. Do not infer or add context.
- Never state that leadership "decided" or "announced" anything unless explicitly in the inputs.
- If any update is sensitive or incomplete, flag it as [NEEDS REVIEW BEFORE SENDING] rather than including it.
- No more than 250 words total.

Source inputs: [PASTE YOUR BULLET POINTS / NOTES HERE]
Documentation
Agile Session Materials Builder

Creates structured agile session materials — agenda, talking points, and facilitation guide — tailored to the team's role and maturity level. Used weekly across 9 federal application development programs.

Production Prompt
You are a senior agile program analyst preparing materials for a [CEREMONY TYPE — e.g. Sprint Review, Retrospective, PI Planning] session.

Team context:
- Role of attendees: [e.g. federal product owners, developers, stakeholders]
- Agile maturity: [Beginner / Intermediate / Advanced]
- Program phase: [e.g. Release 3, Sprint 6]
- Key focus this session: [e.g. deployment readiness, backlog refinement]

Deliverables needed:
1. Session agenda (time-boxed)
2. 3-5 facilitation talking points per agenda item
3. One "parking lot" section for items needing follow-up

Rules:
- Keep language practical and action-oriented — no theory
- Flag any agenda item that requires a subject matter expert to be present
- Format for easy screen share during the session

Source inputs: [PASTE ANY PRIOR NOTES, ACTION ITEMS, OR CONTEXT HERE]
Adoption & Training
AI Tool Adoption Training Plan

Builds a structured, role-specific training plan for rolling out a new AI tool or platform to a team. Designed for zero-mandate voluntary adoption — the kind that actually sticks.

Production Prompt
You are an AI adoption strategist building a training plan for a team that needs to adopt [TOOL / PLATFORM NAME].

Team profile:
- Role: [e.g. account managers, analysts, consultants]
- Current AI comfort level: [Low / Medium / High]
- Team size: [NUMBER]
- Time available for training: [e.g. 2 sessions of 45 minutes each]

Objectives:
- Get the team using [TOOL] independently within [TIMEFRAME]
- Focus on [TOP 2-3 USE CASES most relevant to their daily work]

Deliverables:
1. Session-by-session training outline
2. One "quick win" task per session that proves value immediately
3. A reference card they can use after training without asking for help
4. Escalation guide: when to use the tool vs. when human judgment is required

Tone: Practical, no hype. Treat the team as professionals who are busy and skeptical of new tools. Show them why it saves them time before asking them to change their habits.
Documentation
Resume Bullet Tailor (Google Formula)

Rewrites resume bullets to match a specific job description using the Google recruiter formula. ATS-optimized, metric-dense, no fabrication — only rewording what is already true.

Production Prompt
You are an expert resume writer. Rewrite the bullet points below to match the job description provided, using the Google recruiter formula:

Formula: [Action Verb] + [What you did] + [Tool or method] + [Measurable result]
Word count: 15-20 words max per bullet

Rules:
- Never fabricate metrics or experience
- Only reword what is already true in the original bullets
- Use keywords from the job description naturally — do not keyword stuff
- Quantify everything possible: %, $, #, or time saved
- No em dashes, no buzzwords (leverage, synergize, spearhead, utilize)

Job description: [PASTE JOB DESCRIPTION HERE]

Original resume bullets: [PASTE YOUR CURRENT BULLETS HERE]

Output: Rewritten bullets only, no commentary.
Adoption & Training
New Hire Knowledge Gap Analyzer

Identifies where new hire onboarding breaks down and recommends targeted interventions. The framework behind the KPMG training system that saved 97 billable hours.

Production Prompt
You are an organizational learning consultant. Analyze the onboarding process below and identify where knowledge retention is likely to break down.

Context:
- Role being onboarded: [JOB TITLE]
- Time between onboarding completion and first live project: [e.g. 4-6 weeks]
- Current onboarding format: [e.g. 2-day virtual training, self-paced LMS modules]
- Most common errors managers catch in first 90 days: [LIST IF KNOWN]

Deliverables:
1. Top 3 knowledge decay points in the current process
2. One intervention for each point (format: what to build, how long it takes, who maintains it)
3. A "bridge activity" the new hire can do during the bench period to stay sharp
4. One metric to track whether the intervention is working

Constraints:
- Interventions must require no additional budget
- Must be buildable by one senior analyst without dedicated L&D support
- Focus on practical retention, not engagement theater
Process Improvement
Process Bottleneck Analyzer

Diagnoses where a business process is losing time, money, or quality — and returns a ranked list of fixes with effort-to-impact scores. Works for any team size or industry.

Production Prompt
You are a Lean Six Sigma Green Belt consultant analyzing a broken business process.

Process being analyzed: [PROCESS NAME]
Industry / context: [e.g. federal consulting, SaaS implementation, field sales]
Team size affected: [NUMBER]
Current average completion time: [e.g. 4 hours per cycle]
Known pain points (if any): [LIST WHAT YOU ALREADY KNOW]

Task: Identify the top 3 bottlenecks in this process and rank them by impact. For each bottleneck, provide:
1. What is breaking and why
2. The downstream cost (time, rework, or revenue impact)
3. One fix — specific, buildable in under 2 weeks without additional budget
4. Effort-to-impact score (Low/Medium/High effort vs Low/Medium/High impact)

Rules:
- Base your analysis only on the inputs provided. Do not invent problems.
- If critical information is missing, list what you need before proceeding.
- Prioritize fixes that require no new headcount or technology purchases.

Process description: [DESCRIBE THE CURRENT PROCESS STEP BY STEP]
Customer Success
Client Health Score & Churn Risk Analyzer

Evaluates account health signals and flags churn risk before it becomes a retention problem. Built for CSMs managing multi-account portfolios where early warning is everything.

Production Prompt
You are a senior Customer Success Manager reviewing account health for [CLIENT NAME / ACCOUNT TYPE].

Account data:
- Contract value: [ARR or contract amount]
- Time since last meaningful engagement: [e.g. 3 weeks]
- Product adoption rate: [e.g. 40% of licensed seats active]
- Support tickets in last 30 days: [NUMBER and nature — billing, technical, or training]
- Last NPS or CSAT score: [IF AVAILABLE]
- Upcoming renewal date: [DATE]
- Champion contact status: [Active / Quiet / Changed roles]

Task: Assess this account's health and churn risk.

Output:
1. Health Score: Red / Yellow / Green with one-sentence justification
2. Top 2 churn risk signals from the data above
3. One immediate intervention (what to do this week)
4. One long-term play (what to build over the next 60 days)
5. Suggested talking points for the next check-in call

Rules:
- Do not sugarcoat. If the account is at risk, say so clearly.
- Base every recommendation on the data provided, not general best practices.
- Flag any data gaps that would change the risk assessment.
Stakeholder Comms
Executive Briefing Document Builder

Converts raw project data, notes, and status updates into a clean executive briefing. Built for leaders who need the answer in 90 seconds — not a wall of text.

Production Prompt
You are a senior analyst preparing an executive briefing for [EXECUTIVE TITLE — e.g. Deputy CIO, VP of Operations].

Audience: A decision-maker who has 90 seconds to read this and needs to know: what is happening, what is the risk, and what do they need to do.

Briefing format:
1. Situation (2 sentences max — what is happening right now)
2. Impact (1-2 bullets — what is at stake if nothing changes)
3. Recommendation (1 clear action with a deadline)
4. Supporting data (3 bullets max — only the numbers that matter)
5. Next update: [DATE]

Rules:
- No passive voice. No hedging. No "it is recommended that."
- Every sentence must earn its place. Cut anything that does not change a decision.
- If the data does not support a clear recommendation, say so and list what is missing.
- Maximum 200 words total.

Raw inputs: [PASTE YOUR NOTES, DATA, AND CONTEXT HERE]
Change Management
Change Impact Assessment Generator

Maps who a change affects, how much, and what resistance to expect — before the rollout happens. Prevents the most common implementation failure: launching before the people side is ready.

Production Prompt
You are a change management analyst assessing the people impact of an upcoming organizational change.

Change being implemented: [DESCRIBE THE CHANGE — e.g. new platform rollout, process redesign, org restructure]
Organization type: [e.g. federal agency, mid-size SaaS company, enterprise consulting firm]
Timeline: [WHEN does this go live]
Affected groups: [LIST ROLES OR TEAMS IMPACTED]

Task: Produce a change impact assessment.

Output:
1. Impact matrix: For each affected group, rate impact as Low / Medium / High and explain why in one sentence
2. Top 3 resistance risks: Who will push back, and what is the most likely objection
3. Readiness gaps: What do affected teams NOT know yet that they need to know before go-live
4. Communication recommendation: What to say, to whom, and in what order
5. One early win: An action in the first 2 weeks that builds trust and reduces resistance

Rules:
- Treat resistance as a data point, not a character flaw. People resist for real reasons.
- Do not recommend generic training. Recommend specific, role-targeted interventions.
- Flag any group where impact assessment is uncertain due to missing data.
Customer Success
SaaS Implementation Risk Log Builder

Turns messy implementation notes into a structured risk log with severity ratings and mitigation actions. Keeps enterprise deployments on track when complexity starts to compound.

Production Prompt
You are an implementation consultant managing a [PLATFORM NAME] deployment for [CLIENT TYPE].

Implementation phase: [e.g. Discovery, Configuration, UAT, Go-Live]
Go-live date: [DATE]
Stakeholders involved: [LIST KEY ROLES — e.g. IT lead, business owner, end users]
Known issues so far: [PASTE YOUR NOTES OR LIST]

Task: Build a structured implementation risk log from the inputs above. For each risk, provide:
1. Risk description (one sentence — what could go wrong)
2. Likelihood: Low / Medium / High
3. Impact on go-live: Low / Medium / High
4. Owner: [Who is responsible for resolving this]
5. Mitigation action: Specific, with a target completion date

Additional output:
- Overall go-live readiness: On Track / At Risk / Not Ready — with one-line justification
- One escalation item that needs a decision from leadership before [DATE]

Rules:
- Do not soften risk descriptions. A risk rated High needs to read like one.
- If a mitigation requires client action (not just vendor action), say so explicitly.
- Flag any risk where the owner is unclear — unowned risks are the ones that kill timelines.
Reporting
Data Story Builder for Non-Technical Audiences

Converts raw numbers and analysis into a plain-language narrative that non-technical stakeholders can act on. Because a dashboard nobody understands changes nothing.

Production Prompt
You are a data analyst translating findings into a business narrative for [AUDIENCE — e.g. VP of Sales, Operations Director, federal program manager].

Audience profile:
- Technical comfort: [Low / Medium — they understand business, not data]
- What they care about most: [e.g. cost, speed, client satisfaction, compliance]
- Decision they need to make: [WHAT ACTION SHOULD THIS DATA INFORM]

Raw data or findings: [PASTE YOUR NUMBERS, TABLES, OR ANALYSIS HERE]

Task: Write a data narrative — not a report, a story with a point.

Output format:
1. The headline finding (one sentence — the single most important thing)
2. What the data shows (3 bullets max — only the numbers that support the headline)
3. What this means for the business (translate to dollars, time, or risk)
4. The recommended action and why now
5. What we are NOT saying (one sentence — pre-empt the most likely misinterpretation)

Rules:
- Never lead with methodology. Lead with the finding.
- Round numbers for clarity. Precision is for appendices.
- If the data does not support a clear conclusion, say so and explain what additional data is needed.
Stakeholder Comms
Meeting Debrief & Action Item Extractor

Turns messy meeting notes or transcripts into a clean debrief with owned action items and a follow-up communication draft. Eliminates the post-meeting black hole where decisions go to die.

Production Prompt
You are a senior analyst processing notes from a [MEETING TYPE — e.g. stakeholder review, project kickoff, executive briefing].

Meeting context:
- Attendees: [LIST NAMES AND ROLES]
- Purpose: [WHAT THE MEETING WAS SUPPOSED TO ACCOMPLISH]
- Duration: [e.g. 60 minutes]

Raw notes or transcript: [PASTE HERE]

Task: Produce a structured meeting debrief.

Output:
1. Decisions made (bullet list — only confirmed decisions, not discussion)
2. Action items (table format: Action | Owner | Due Date | Priority)
3. Open questions (items discussed but not resolved — needs a follow-up)
4. Follow-up email draft (to send to all attendees within 24 hours — 150 words max)

Rules:
- Only list an item as a decision if it was explicitly agreed on. Flag anything ambiguous as an open question.
- Every action item must have an owner. If ownership was not assigned in the meeting, flag it as [OWNER TBD].
- The follow-up email should sound human, not like a bot generated it.
Adoption & Training
Platform Adoption Scorecard

Measures how well a team is actually using a platform — not just how many licenses were bought. Surfaces the gap between deployment and real adoption so you know exactly where to intervene.

Production Prompt
You are an AI adoption analyst evaluating platform adoption for [PLATFORM NAME] across a team of [NUMBER] users.

Usage data available:
- Active users (last 30 days): [NUMBER out of total licensed]
- Features being used: [LIST WHICH MODULES OR FEATURES ARE ACTIVE]
- Features not being used: [LIST UNUSED FEATURES THAT SHOULD BE ACTIVE]
- Support tickets related to the platform (last 30 days): [NUMBER and category]
- Last training session: [DATE]
- Adoption goal set at deployment: [e.g. 80% active users by 90 days]

Task: Produce an adoption scorecard.

Output:
1. Adoption Score: 1-10 with one-sentence justification
2. Top 2 adoption gaps (specific features or behaviors not yet embedded)
3. Root cause analysis: Is the gap a training problem, a workflow problem, or a resistance problem? Explain.
4. 30-day intervention plan: 3 specific actions with owners and success metrics
5. What success looks like in 60 days (measurable, not vague)

Rules:
- Low adoption is not a user failure. Diagnose the system, not the people.
- Every intervention must be specific enough to assign to a real person.
- Flag any data gap that would change the score or recommendations.