I'm an AI adoption strategist who has spent seven years embedded inside the problem. Federal programs, Big 4 consulting, field operations. Every role taught me the same thing: technology doesn't fail, people do. I build the systems and train the teams that change that.
At Accenture Federal Services, I worked directly under the program manager for a federal Digital Transformation initiative, supporting 9 application development programs. My job was platform adoption: getting federal teams to change how they worked, document what they did, and trust a system they didn't ask for. I replaced manual documentation cycles with Claude-assisted workflows, cut reporting build time from 3.5 hours to under 50 minutes, and maintained 100% on-time delivery to federal leadership across the engagement.
Most recently, following the conclusion of a federal consulting engagement impacted by DOGE-related cost reduction initiatives, I was brought into a Global Equipment Manufacturer to stabilize a hurricane-disrupted territory with 30 at-risk client accounts. Over a five-month engagement, I retained 27 of those accounts, introduced AI-assisted workflows to a field team with no prior digital productivity experience, cut survey time by 77%, and reduced ticket resolution from 9 days to 3. None of that was familiar industry territory. That was the point.
I use AI the same way I use any other tool: to close a specific gap, with a measurable result, and a process someone else can follow after I leave.
Every case study follows the same structure: the broken process, what I built, and the result. No polished outputs without showing the work behind them.
Deployed into a hurricane-disrupted territory with 30 at-risk client accounts and a mandate to stabilize operations in five months. Retained 27 of 30 accounts, cut ticket resolution time by 67%, reduced field survey time by 77%, and introduced AI workflows to a team with no prior digital productivity experience.
Replaced manual documentation cycles, fragmented communications, and reactive reporting across 9 federal application development programs using Claude-assisted workflows. Cut reporting build time by 68% and maintained 100% on-time delivery to federal leadership.
New analysts completed onboarding, then sat idle for up to 6 weeks before live projects began. Built a 16-session webinar series and a self-serve documentation library from scratch with zero budget and no mandate.
The actual frameworks I built across federal consulting, enterprise advisory, and field operations. Documentation, reporting, stakeholder comms, change management, customer success — pick a category and test one yourself.
Written by managers, peers, and mentors across Accenture Federal Services, KPMG, and IBM.
Targeting AI Adoption Manager, Implementation Consultant, and Customer Success roles. If you need someone who can walk into a team, diagnose where adoption is failing, and build systems that stick, let's talk.
jwils153d@gmail.com

Supporting 9 application development programs inside a federal Digital Transformation initiative, I replaced manual documentation cycles, fragmented communications, and reactive reporting with AI-assisted workflows built on Claude, ChatGPT, and Gemini.
The client's Digital Transformation team was running 9 simultaneous application development programs under a single program manager. Three bottlenecks were costing hours every week.
Documentation was built from scratch every cycle. Quick reference guides (QRGs), agile session materials, and stakeholder presentation decks were drafted manually with no reusable templates. A single QRG update could consume 3 to 4 hours of analyst time.
Reporting for 50+ people was a manual collection exercise. The weekly status report, monthly status report, and PTO tracker required individual inputs consolidated by hand. On-time delivery depended entirely on manual coordination.
Stakeholder communications had no consistent standard. All-hands meeting content, onboarding materials, and cross-functional updates were written from scratch each time.
Cut documentation production time by at least 50%, achieve 100% on-time reporting delivery, and establish reusable AI-assisted templates that any analyst could operate without starting from zero.
The first versions of my prompts produced generic output. A prompt asking Claude to write a quick reference guide returned documentation that lacked the agency's specific terminology, stakeholder hierarchy, and compliance framing. It required more editing than writing from scratch. The fix was constraint layering.
Adding role context, audience definition, tone constraints, and a structural format reduced revision cycles from an average of 3 rounds to 1. Claude's output was usable on the first pass in roughly 80% of documentation tasks after Version 3. Status report consolidation dropped from 3.5 hours to under 50 minutes.
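To make the constraint layering concrete, here is a minimal sketch of that prompt structure in Python. The role, audience, and format text are placeholders invented for illustration; the actual production prompts and agency terminology are not reproduced here.

```python
# Minimal sketch of a constraint-layered documentation prompt (Version 3
# structure). Role, audience, tone, and format values are illustrative
# placeholders, not the actual federal production prompt.

def build_qrg_prompt(process_name: str, process_steps: list[str]) -> str:
    layers = [
        # Layer 1: role context
        "You are a technical writer supporting a federal application development program.",
        # Layer 2: audience definition
        "Audience: program analysts who know the platform but not this process.",
        # Layer 3: tone constraints
        "Tone: procedural and neutral. Never use language implying organizational authority.",
        # Layer 4: structural format
        "Format: 1) Purpose, 2) Prerequisites, 3) Numbered steps, 4) Escalation contact placeholder.",
        # Guard: no inferred inputs
        "If any step is ambiguous or missing an input, stop and return a list of required inputs.",
        f"Write a quick reference guide for: {process_name}",
        "Source process steps:\n" + "\n".join(f"- {s}" for s in process_steps),
    ]
    return "\n\n".join(layers)

print(build_qrg_prompt("Weekly status intake", ["Collect team inputs", "Validate against tracker"]))
```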
Federal documentation carries compliance risk that commercial work does not. I built governance rules into every production prompt.
1. Never generate policy language. Flag any output requiring policy interpretation as "SME Review Required" and stop.
2. Never infer missing inputs. Return a list of required inputs before proceeding.
3. Never use language that implies organizational authority. Use procedural framing only.
4. All financial figures must be sourced from provided inputs. No estimated numbers in reporting outputs.
Every AI-generated document went through a human review step before distribution. My role was to define what the AI was allowed to produce, catch what it got wrong, and make the judgment call on what required escalation.
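As an illustration of how a rule like #4 can be checked mechanically before that human review step, here is a hedged Python sketch that flags dollar figures in a draft that never appeared in the source inputs. The pattern-matching approach is my assumption for this writeup, not the production tooling.

```python
import re

# Hedged sketch of governance rule 4 as a mechanical pre-review check:
# every dollar figure in the draft must appear in the provided inputs.
# Illustrative assumption only; the actual gate was human review.

MONEY = re.compile(r"\$[\d,]+(?:\.\d+)?[KMB]?")

def unsourced_figures(draft: str, source_inputs: str) -> list[str]:
    """Return dollar figures in the draft that never appear in the inputs."""
    allowed = set(MONEY.findall(source_inputs))
    return [fig for fig in MONEY.findall(draft) if fig not in allowed]

draft = "Program C is tracking to $1.2M against a $950,000 baseline."
inputs = "Approved baseline: $950,000."
flags = unsourced_figures(draft, inputs)
if flags:
    print("SME Review Required: unsourced figures", flags)  # ['$1.2M']
```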
| Test Scenario | Expected Behavior | Actual Result | Status |
|---|---|---|---|
| QRG prompt with missing process step | Flag gap, request clarification | Returned input gap list, did not fabricate | PASS |
| Status report with 3 members missing inputs | Mark as "Pending — Input Required" | Correctly flagged 3, formatted remaining 47 | PASS |
| All-hands draft referencing unverified decision | Exclude or flag as unverified | Initially included inferred language — caught in review, constraint added | FLAGGED → FIXED |
| Agile session materials with no prior template | Generate using role and audience constraints | Usable draft on first pass, 1 minor revision | PASS |
| Financial projection with one program data missing | Stop, list missing data, do not estimate | Returned structured request for missing inputs | PASS |
The all-hands communication test was the clearest example of where I had to intervene. Claude inferred an organizational decision from context not explicitly stated. I caught it in review, added a constraint, and re-ran. AI handles production volume. I handle the accuracy boundary.
If the prompt library and reporting workflow were deployed across all 9 programs simultaneously with dedicated adoption support, projected time savings would exceed 40 analyst-hours per month — material cost avoidance at federal billing rates without a headcount increase.
At a Big 4 Advisory Firm, new analysts completed onboarding and then sat idle for up to 6 weeks before joining a live engagement. I identified the gap, built a 16-session live training program, and created a self-serve documentation library. 97 hours saved. 91% first-submission accuracy. Zero budget required.
The firm's onboarding process was structured and thorough. The problem was what happened after it ended. New analysts completed training, then waited up to 6 weeks on the bench before being staffed on a live engagement. By week four, procedural knowledge had degraded significantly. By week six, analysts were essentially starting from scratch.
The cost showed up in two places. Senior managers were catching errors on deliverables that should have been clean at the analyst level. And new hires were losing confidence, asking questions that had already been answered in onboarding, slowing down the teams they joined.
No bridge existed between initial onboarding and first live project. Knowledge decayed during the bench period with no reinforcement mechanism, no self-serve reference system, and no structured way for analysts to stay sharp while waiting to be staffed.
Nobody assigned me to fix this. I was on the bench myself, doing real client work, and I noticed the pattern. So I built the bridge.
The first version was a single walkthrough session: informal, no recording, no follow-up materials. Better than nothing, but it didn't solve the underlying problem. The second iteration added structure: a recurring 16-part webinar series, each session built around a specific task type using actual client work as demonstration material.
Adding a persistent, searchable reference library transformed a one-time training event into a self-serve knowledge system. The webinars drove comprehension. The documentation library made that knowledge retrievable during live project work. That shift, from re-asking senior managers to pulling answers from the library, is where the 97 hours came from.
1. Every demonstration must use real client work as the source material. No hypotheticals.
2. Every document must include a "When to escalate" section. New hires need to know when a task exceeds analyst-level judgment.
3. No documentation goes into the library without a senior manager spot-check. One review per document before publishing.
The escalation rule was the most important one. A new hire who confidently completes a task that should have been flagged creates more work than one who asks for help.
| Evaluation Point | Before System | After System | Result |
|---|---|---|---|
| First-submission accuracy | Frequent senior corrections | 91% approved without revision | 91% FIRST-PASS |
| Senior manager correction time | ~6–8h per cohort per month | Under 2h per cohort per month | ~97h SAVED PER CYCLE |
| New hire questions to senior managers | Frequent, repetitive | Majority redirected to library | REDUCED |
| Documentation accuracy | N/A | 100% cleared review before publishing | PASS |
I produced the content. A senior manager verified it before it went into the library. That separation kept the system credible. An inaccurate guide used at scale does more damage than the problem it was meant to solve.
Deployed across three practice groups running concurrent bench periods, the system compounds its per-cohort savings: at 97 hours saved per cycle, that represents roughly 300 billable hours recovered per quarter with no additional headcount.
Following the conclusion of a federal consulting engagement impacted by DOGE-related cost reduction initiatives, I was brought into a global equipment manufacturing organization to stabilize a high-risk territory devastated by hurricane damage. Over a defined five-month scope, I retained 27 of 30 at-risk client accounts, reduced ticket resolution time by 67%, cut field survey time by 77%, and introduced AI-assisted workflows across a team with no prior exposure to digital productivity tools.
This engagement was not a standard account management role. After a federal consulting contract ended under Department of Government Efficiency (DOGE) cost-reduction initiatives, I moved into a global equipment manufacturing organization to support a high-risk territory where hurricane damage had broken service continuity, client responsiveness, and field execution.
The five-month engagement carried a defined stabilization mandate: reduce the service backlog, improve execution efficiency, protect client retention, and bring operational structure to a team that had been operating reactively since the disruption. Roughly 30 client accounts were flagged as at risk due to ongoing service failures and operational instability. The industry was far from my background in consulting, technology, and analytics-driven environments, so I leaned on transferable consulting frameworks, operational problem-solving methods, and AI-enabled productivity strategies to accelerate impact within a compressed recovery timeline.
Stabilize field operations, reduce open service backlog, protect at-risk client accounts, and introduce scalable AI-assisted workflows to a team with no prior exposure to digital productivity tools — all within a five-month engagement scope.
I identified three core operational bottlenecks in the first two weeks of the engagement:
ServiceNow was the platform. Email was the process. When a ticket was created, the only notification account managers and managers received was a system-generated email that quickly got buried in inboxes. There was no centralized view of what was open, what was overdue, or who owned what. Tickets sat for days without follow-up, not because people didn't care, but because nothing made them visible.
I built a ServiceNow ticket tracker in Excel with dropdown menus for team, category, priority, and status — pulling the same data that lived in the platform into a format the team could actually use in a meeting. Then I designed a structured weekly cadence to run alongside it: a Monday morning 10-minute triage to review all open tickets, a Wednesday check-in to track progress and follow up with IT support, and a Friday close-out meeting to confirm resolved tickets and flag anything still open. Three touchpoints, 10 minutes each, zero additional headcount.
Below is the ServiceNow Open Tickets tracker built in Excel — category dropdowns, priority levels, SLA status, resolution times, and real-time age tracking across all open tickets. Alongside it, an IT Operations Executive Summary dashboard showing SLA compliance, open ticket counts, and team-level breakdown — built so any manager could see the full picture in under 30 seconds.
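The tracker itself was a manual Excel build. For anyone who wants to script the same pull, here is a hedged Python sketch against ServiceNow's standard Table API; the instance URL, credentials, and field list are placeholder assumptions, not the actual workflow.

```python
import requests
from datetime import datetime, timezone

# Sketch: pull open incidents from ServiceNow's standard Table API and compute
# the age column that made stale tickets visible. Instance URL, credentials,
# and the field list are placeholder assumptions, not the actual Excel build.

INSTANCE = "https://example.service-now.com"
AUTH = ("api_user", "api_password")  # placeholder credentials

def open_tickets() -> list[dict]:
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={
            "sysparm_query": "active=true",
            "sysparm_fields": "number,short_description,priority,opened_at,assigned_to",
            "sysparm_limit": "200",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    tickets = resp.json()["result"]
    now = datetime.now(timezone.utc)
    for t in tickets:
        opened = datetime.strptime(t["opened_at"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        t["age_days"] = (now - opened).days  # real-time age tracking
    return sorted(tickets, key=lambda t: t["age_days"], reverse=True)  # oldest first
```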
Since I came from a consulting background, not the elevator industry, I started where any good analyst starts: research. I used AI to run deep market research on the most common upgrade opportunities and lead signals in commercial elevator maintenance, including obsolete door edge types, aging controllers, overheating machine rooms, and cab interior wear patterns. That research became the backbone of the form.
I combined that research with my consulting skills and Generative AI to design a structured digital survey in Excel. Every inspection category became a dropdown menu with standardized condition ratings. The form guided account managers through each section in a fixed sequence, eliminated the "what do I look at next" hesitation, and auto-flagged upgrade priorities by category. Paper and pen were gone. The mental load of running the survey dropped to near zero.
Below is a screenshot of the Executive Survey Form (v2) built in Excel. Each section — Cab Interior, Door System, Machine Room — uses dropdown menus with condition-specific options. Account managers move through the form in sequence, select from standardized choices, and the form flags upgrade opportunities automatically.
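Under the hood, the auto-flagging reduces to a lookup from dropdown selections to upgrade priorities. The Python sketch below illustrates that logic; the component categories come from this case study, while the rating values and priority mapping are invented for illustration.

```python
# Sketch of the form's auto-flag logic as a lookup table. The component
# categories come from the case study; the rating values and priority
# mapping are illustrative assumptions, not the actual Excel formulas.

UPGRADE_TRIGGERS = {
    ("door_system", "obsolete door edge"): "HIGH",
    ("controller", "aging / end-of-support"): "HIGH",
    ("machine_room", "overheating observed"): "MEDIUM",  # still requires on-site confirmation
    ("cab_interior", "visible wear"): "LOW",
}

def flag_upgrades(survey: dict[str, str]) -> list[tuple[str, str]]:
    """Map dropdown selections to (component, priority) upgrade flags."""
    priority_order = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
    flags = [
        (component, UPGRADE_TRIGGERS[(component, rating)])
        for component, rating in survey.items()
        if (component, rating) in UPGRADE_TRIGGERS
    ]
    return sorted(flags, key=lambda f: priority_order[f[1]])

print(flag_upgrades({
    "door_system": "obsolete door edge",
    "controller": "within service life",
    "cab_interior": "visible wear",
}))  # [('door_system', 'HIGH'), ('cab_interior', 'LOW')]
```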
Cut diagnostic survey time to under 30 minutes with no loss in quality, and make the branch manager's proposal style transferable to every account manager without requiring his direct involvement in every revision.
System One: The Diagnostic Form
System Two: The Proposal Prompt Library
Generic prompts produced generic proposals. The few-shot approach gave Claude a concrete style target. After version 3, proposals matched the manager's tone closely enough that he stopped requesting revisions. The writing style became a team asset, not a single person's skill.
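Here is a hedged sketch of that few-shot structure, with placeholder text standing in for the manager's actual proposals, which are not reproduced here.

```python
# Sketch of the few-shot proposal prompt structure. The real examples were the
# branch manager's actual proposals; the placeholders below stand in for them.

STYLE_EXAMPLES = [
    "<example proposal 1: opens with the client's operational risk, not product specs>",
    "<example proposal 2: same tone applied to a different building profile>",
    "<example proposal 3: shows the manager's standard closing structure>",
]

CONSTRAINTS = (
    "Match the tone and structure of the examples exactly.\n"
    "Lead with the client's risk, never with product features.\n"  # the constraint added after testing
    "Leave pricing, contract terms, and technical specs as [MANUAL ENTRY] placeholders."
)

def build_proposal_prompt(flagged_findings: str) -> str:
    examples = "\n\n---\n\n".join(STYLE_EXAMPLES)
    return (
        f"Here are three proposals in the style to replicate:\n\n{examples}\n\n"
        f"{CONSTRAINTS}\n\n"
        f"Draft a proposal addressing these survey findings:\n{flagged_findings}"
    )
```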
1. The form flags upgrade priorities but never makes the final recommendation. The account manager reviews every flag before it appears client-facing.
2. Machine room and controller assessments require physical inspection confirmation. The form cannot override on-site judgment for safety-adjacent components.
3. The output is an internal working document. It feeds the proposal but is never shared directly with the client.
1. Every AI-generated proposal is reviewed by the account manager before submission.
2. Pricing, contract terms, and technical specifications are always entered manually. The AI handles tone and structure only.
3. If the client has prior relationship history, that context is added manually before the prompt runs.
| Test Scenario | Expected Behavior | Actual Result | Status |
|---|---|---|---|
| Diagnostic form on building with 3 flagged components | Flags all 3, no missed items | All 3 flagged correctly, priority order accurate | PASS |
| Form used by colleague unfamiliar with original checklist | Completed without guidance | Completed in 31 min, 1 clarification on machine room section | PASS |
| Proposal prompt on new client type not in examples | Maintain tone, adjust context | Tone held, but opening led with product specs — caught in review, constraint added | FLAGGED → FIXED |
| Proposal prompt on client with prior service history | Neutral draft, manager adds context manually | Correctly produced neutral draft; context added manually | PASS |
| Route planning with AI on 5-visit day | Optimized by traffic and geography | Reduced estimated drive time by 34 minutes | PASS |
The proposal prompt test revealed one failure mode: without an explicit instruction to lead with client risk, Claude defaulted to product feature language — exactly what the manager's style avoided. One constraint added to the system prompt fixed it. That catch happened in testing, not in front of a client.
At 5 account managers each saving 93 minutes per surveyed visit and averaging 3 surveys per week, the diagnostic form alone frees roughly 23 hours of field time per week (5 × 3 × 93 minutes). Applied to a regional team of 20 account managers, that represents over 90 hours of recovered selling time per week, without adding headcount.
These are the actual prompt frameworks I built and used across federal consulting, enterprise advisory, and field operations. Every one is copyable and production-ready. Click any prompt to copy it and test it yourself.
Produces a structured, audience-specific QRG for any process or platform. Built for federal and enterprise contexts where compliance language and escalation protocols matter.
Transforms raw individual team inputs into a formatted status report. Reduced a 3.5-hour weekly manual build to under 50 minutes across a 50+ member federal program team.
Replicates a high-performer's proposal tone and structure using few-shot examples. Used at a Global Equipment Manufacturer to eliminate revision cycles entirely across a 5-person field team.
Generates a professional, tone-consistent all-hands communication from raw bullet points. Built for federal and enterprise environments where language precision and attribution matter.
Creates structured agile session materials — agenda, talking points, and facilitation guide — tailored to the team's role and maturity level. Used weekly across 9 federal application development programs.
Builds a structured, role-specific training plan for rolling out a new AI tool or platform to a team. Designed for zero-mandate voluntary adoption — the kind that actually sticks.
Rewrites resume bullets to match a specific job description using the Google recruiter formula. ATS-optimized, metric-dense, no fabrication — only rewording what is already true.
Identifies where new hire onboarding breaks down and recommends targeted interventions. The framework behind the KPMG training system that saved 97 billable hours.
Diagnoses where a business process is losing time, money, or quality — and returns a ranked list of fixes with effort-to-impact scores. Works for any team size or industry.
Evaluates account health signals and flags churn risk before it becomes a retention problem. Built for CSMs managing multi-account portfolios where early warning is everything.
Converts raw project data, notes, and status updates into a clean executive briefing. Built for leaders who need the answer in 90 seconds — not a wall of text.
Maps who a change affects, how much, and what resistance to expect — before the rollout happens. Prevents the most common implementation failure: launching before the people side is ready.
Turns messy implementation notes into a structured risk log with severity ratings and mitigation actions. Keeps enterprise deployments on track when complexity starts to compound.
Converts raw numbers and analysis into a plain-language narrative that non-technical stakeholders can act on. Because a dashboard nobody understands changes nothing.
Turns messy meeting notes or transcripts into a clean debrief with owned action items and a follow-up communication draft. Eliminates the post-meeting black hole where decisions go to die.
Measures how well a team is actually using a platform — not just how many licenses were bought. Surfaces the gap between deployment and real adoption so you know exactly where to intervene.