10+ years leading enterprise technology operations, platform migrations, and digital transformation across federal and non-profit organizations. I turn operational complexity into repeatable, scalable outcomes.
I bridge the gap between technical teams and business leadership, owning complex programs from first principles through to scalable, documented systems.
ITIL-based service management, incident and escalation ownership, SLA governance, and 24/7 operational continuity across distributed teams and platforms.
End-to-end product and platform delivery — from requirements and vendor management through UAT, phased rollout, and post-launch support operations.
Cross-functional program coordination, stakeholder alignment, release planning, and Agile/DevOps delivery across complex, multi-system environments.
Every project in my portfolio tells the full story — problem, decisions, execution, and impact.
I'm Jevon Butler — an IT Operations and Project Manager based in Colorado Springs with over a decade of experience leading technology programs that matter. From maintaining 24/7 communications infrastructure for the Air Force Satellite Control Network, to building USADA's digital testing ecosystem for Olympic athletes, to scaling TRAILS' technology operations to serve 30,000+ mental health professionals nationwide — my career has been defined by operational ownership in high-stakes environments.
I'm at my best when a program is complex, the stakes are real, and there isn't a pre-built playbook to follow. I thrive in the space between technical teams and business leadership — translating requirements into systems, and systems into outcomes.
I started my career in the United States Air Force, where I led IT operations for global satellite control missions — managing 18,000+ assets, directing Tier 1–3 incident response, and owning compliance reporting for 627+ regulatory requirements. That environment taught me operational discipline, documentation rigor, and how to lead technical teams under real pressure.
After the Air Force, I brought that foundation into the non-profit and mission-driven technology space. At USADA, I owned the full enterprise application ecosystem for the U.S. Anti-Doping program — leading platform migrations, building integrations with WADA's global systems, and maintaining 99.9% on-time delivery across 100+ production deployments. At TRAILS, I've led the technology operations scaling effort for a national mental health platform, implementing Salesforce Nonprofit Cloud, building hardware lifecycle programs from scratch, and establishing the data governance infrastructure the organization needed to grow responsibly.
I'm based in Colorado Springs, CO. I'm currently pursuing my PMP certification and continuing to develop my SQL skills for data-informed operations work. I'm passionate about mission-driven technology — the idea that the right systems, built well, can meaningfully amplify the impact of organizations doing important work.
Each project tells the full story — the problem, the decisions, the execution, and the measurable impact. Click any card to read the full case study.
Designed and implemented an end-to-end device management framework — replacing fragmented manual processes with a secure, repeatable system that cut provisioning effort in half.
Led the end-to-end platform transition from a proprietary testing app to a globally standardized system — integrating with WADA's ADAMS while keeping live athlete testing operations running without interruption.
Migrated 41,354 records from a fragmented Airtable system to Salesforce — establishing a unified system of record for a national network of 12,000+ schools and 30,000+ mental health professionals.
Led cross-functional delivery of a modern athlete compliance platform — improving Whereabouts reporting accuracy by 30% while replacing a low-adoption legacy system under strict WADA regulatory requirements.
Led delivery of a custom mobile platform that replaced paper-based doping control workflows — eliminating manual data entry, accelerating lab coordination, and establishing legally defensible digital records.
Served as central IT coordination point for live satellite launch operations — maintaining 100% system availability across 10+ command and control networks with zero critical failures during launch windows.
A visual summary of my professional history. Download the full resume for the complete picture.
Senior operational leader for enterprise applications and IT service delivery at a fast-scaling national mental health non-profit. Responsible for platform modernization, vendor coordination, data governance, and building the IT infrastructure from the ground up to support a national workforce and 12,000+ partner schools. Full project case studies are in the portfolio.
Owned the full enterprise application ecosystem for the U.S. Anti-Doping program — including a proprietary digital testing platform supporting Olympic, Paralympic, and 45+ National Governing Bodies. Led multiple platform builds and migrations, maintained 99.9% on-time delivery across 100+ production deployments, and managed multi-vendor technology relationships and SLA oversight.
Led IT operations and systems sustainment for global Air Force missions, overseeing infrastructure supporting an $8.2B Satellite Control Network across 10+ geographically separated units. Directed lifecycle management of 18,000+ IT assets and led incident response across Tier 1–3 support. Program recognized as Best Practice by Air Force Space Command Inspector General.
Whether you're exploring a role, a collaboration, or just want to talk about IT operations and project delivery — I'd love to hear from you.
Fill out the form and I'll get back to you within 24 hours.
How I designed and implemented an end-to-end device management framework at a fast-scaling national non-profit — replacing fragmented, manual processes with a secure, repeatable system that cut provisioning effort in half.
TRAILS was growing fast — from a small regional initiative into a national mental health platform serving 12,000+ schools and 30,000+ professionals. With that growth came a problem that quietly compounded every week: the organization had no formal system for managing the devices its employees used.
Devices were tracked inconsistently, if at all. When someone joined, IT provisioning was improvised. When someone left, device recovery was informal — creating real data security exposure. As a remote-first, distributed workforce, the stakes of getting this wrong were higher than they would be in a traditional office environment.
The underlying risk: Without standardized offboarding and device recovery, departing employees could retain access to sensitive program data, partner information, and organizational systems — a compliance and security liability that grew with every new hire.
When I assessed the landscape, the gaps broke down into five distinct problem areas:
Individually, each gap was manageable. Together, they represented a meaningful operational and security risk for an organization scaling nationally with sensitive student mental health data in scope.
This wasn't a case of applying a standard playbook. TRAILS was an early-stage non-profit with lean resources, a distributed workforce, and no existing IT infrastructure to build on. Every design decision had to balance security rigor against operational simplicity.
Rather than documenting a manual setup checklist, I prioritized implementing MDM (JAMF for macOS, Intune via Entra ID for Windows) so that device configuration, security policy enforcement, and software deployment happened automatically at enrollment. The upfront implementation cost was higher, but it eliminated human error from every future provisioning event and made the program scalable without adding headcount.
TRAILS was primarily a macOS environment. Connecting procurement through Apple Business Manager meant new devices could be automatically enrolled in MDM out of the box — no physical access required. This decision directly enabled remote onboarding at scale, which was non-negotiable for a distributed workforce.
I made the deliberate choice to tackle offboarding and access revocation workflows first, before completing the full procurement-to-reissuance cycle. The security exposure from weak offboarding was the highest-urgency risk. Getting that right before optimizing the rest of the lifecycle was the right call for a scaling non-profit handling sensitive data.
Rather than building a parallel IT-owned process, I partnered with HR to embed device workflows directly into their onboarding and offboarding checklists. This created shared accountability, improved follow-through on device returns, and reduced the coordination overhead that had previously caused delays and missed steps.
Audited existing device inventory, interviewed stakeholders across HR, operations, and leadership, and documented the full scope of risk and operational gaps before designing anything.
Designed and implemented secure device recovery, data wiping, and access revocation processes integrated with Entra ID and Google Workspace. Partnered with HR to embed these steps into the formal offboarding checklist.
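To make the access-revocation step concrete, here is a minimal illustrative sketch of what that part of the runbook can look like in code, assuming Microsoft Graph for Entra ID and the Admin SDK Directory API for Google Workspace. The identifiers, credential handling, and surrounding device-recovery steps are placeholders rather than the production implementation.

```python
"""Illustrative sketch of the access-revocation step in an offboarding runbook.

Assumes an Entra ID tenant reachable via Microsoft Graph and a Google Workspace
domain reachable via the Admin SDK Directory API. Credentials and identifiers
are placeholders, not the production setup.
"""
import requests
from google.oauth2 import service_account
from googleapiclient.discovery import build

GRAPH = "https://graph.microsoft.com/v1.0"


def revoke_entra_access(user_id: str, token: str) -> None:
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    # Block interactive sign-in on the account.
    requests.patch(f"{GRAPH}/users/{user_id}",
                   headers=headers, json={"accountEnabled": False}).raise_for_status()
    # Invalidate existing refresh tokens so active sessions cannot be renewed.
    requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions",
                  headers=headers).raise_for_status()


def suspend_google_account(user_email: str, sa_keyfile: str, admin_email: str) -> None:
    creds = service_account.Credentials.from_service_account_file(
        sa_keyfile,
        scopes=["https://www.googleapis.com/auth/admin.directory.user"],
    ).with_subject(admin_email)  # domain-wide delegation to an admin account
    directory = build("admin", "directory_v1", credentials=creds)
    # Suspending the account cuts off mail, Drive, and third-party SSO access.
    directory.users().update(userKey=user_email, body={"suspended": True}).execute()
```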
Implemented JAMF for macOS and Intune for Windows. Configured enrollment profiles, baseline security policies, and automated software deployment. Connected Apple Business Manager for zero-touch macOS provisioning.
Established a centralized asset register with ownership records, lifecycle status, and assignment history. Defined standards for intake, tagging, and record-keeping that any team member could maintain consistently.
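As a rough illustration of what each register entry tracked, a record along these lines captures ownership, lifecycle status, and assignment history; the field names are illustrative rather than the actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleStatus(Enum):
    IN_STOCK = "in_stock"
    ASSIGNED = "assigned"
    IN_REPAIR = "in_repair"
    RETIRED = "retired"


@dataclass
class DeviceRecord:
    asset_tag: str                  # physical tag applied at intake
    serial_number: str
    model: str
    status: LifecycleStatus
    assigned_to: str | None = None  # current owner, if any
    purchase_date: date | None = None
    assignment_history: list[tuple[str, date]] = field(default_factory=list)
```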
Produced SOPs, onboarding/offboarding runbooks, and vendor coordination guidelines to ensure the program would run consistently without depending on any single individual.
The biggest organizational challenge wasn't technical — it was getting consistent follow-through on device returns during offboarding. Solving that required embedding IT steps into HR's process rather than maintaining a separate IT checklist. Once that accountability was shared, completion rates improved significantly.
The program launched on schedule and delivered measurable improvements across every dimension we tracked.
Beyond the numbers, this program eliminated a category of security risk that was genuinely concerning for an organization handling sensitive student mental health data. It also gave leadership reliable visibility into the device fleet for the first time — which directly informed budgeting and procurement planning going forward.
How I led the end-to-end platform transition from a proprietary testing application to a globally standardized system — integrating with WADA's international data infrastructure while keeping live athlete testing operations running without interruption.
USADA's in-house testing application, DCO Mobile, was purpose-built for domestic doping control operations — and it worked well within those boundaries. But as anti-doping operations became increasingly international, the platform's limitations became a real operational liability. DCO Mobile wasn't designed for direct data exchange with ADAMS, the centralized anti-doping management system governed by the World Anti-Doping Agency (WADA), so every instance of cross-border coordination required manual reconciliation to bridge the gap.
MODOC — a globally adopted testing platform used by anti-doping organizations worldwide — offered a path to standardization. Making the switch meant gaining native ADAMS integration and alignment with international agency workflows, but it also meant migrating mission-critical operations for a program where disruption to live athlete testing wasn't just an inconvenience — it was a compliance risk with real consequences for athletes and governing bodies.
The operational constraint: USADA tests athletes year-round, including during major international competitions. The migration could not introduce gaps in testing continuity, data integrity, or chain-of-custody documentation — standards that are legally and reputationally binding under the World Anti-Doping Code.
The core challenge wasn't just replacing software — it was replacing software that sat at the center of a tightly coupled ecosystem of internal systems, external agencies, and field operations.
A simultaneous switch across all field teams would have introduced unacceptable risk. I designed a phased rollout by region, starting with lower-volume testing areas before moving to high-activity programs. This allowed us to identify and resolve integration issues in a controlled environment before they could affect major competition testing cycles.
Rather than decommissioning DCO Mobile at migration start, I maintained both systems in parallel during the rollout window. This preserved a fallback for active testing workflows and allowed us to validate MODOC data output against known DCO Mobile records before fully committing.
I ran formal UAT cycles designed around real-world testing scenarios and edge cases — including ADAMS sync behavior under data conflicts, SFTP pipeline failures, and concurrent multi-region submissions. The investment surfaced several integration issues that would have been significantly more disruptive to resolve post-launch.
Doping Control Officers are often working in the field with limited connectivity and no time to troubleshoot software during a test. I prioritized training, documentation, and hands-on readiness sessions before each regional go-live rather than providing support reactively.
Led detailed requirements sessions with the MODOC vendor to define USADA-specific configurations. Translated existing DCO Mobile workflows and internal system dependencies into formal integration requirements.
Coordinated integration work across SIMON, PostgreSQL, SFTP pipelines (laboratory data exchange), and ADAMS. Served as the operational bridge between the vendor's engineering team and USADA's internal systems.
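As an illustration of the SFTP leg of that exchange, a transfer along these lines uploads to a temporary name and returns a checksum so the lab never processes a partial file; the host, paths, and library choice (paramiko) are assumptions, not the actual pipeline.

```python
import hashlib
from pathlib import Path

import paramiko


def push_lab_batch(local_file: Path, host: str, username: str, key_path: str,
                   remote_dir: str = "/inbound") -> str:
    """Upload one batch file to the laboratory SFTP endpoint and return its
    SHA-256 checksum so the receiving side can verify integrity out of band."""
    checksum = hashlib.sha256(local_file.read_bytes()).hexdigest()

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # known hosts only
    client.connect(host, username=username, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        # Upload under a temporary name, then rename, so the lab's intake job
        # never picks up a partially transferred file.
        tmp_name = f"{remote_dir}/{local_file.name}.part"
        sftp.put(str(local_file), tmp_name)
        sftp.rename(tmp_name, f"{remote_dir}/{local_file.name}")
        sftp.close()
    finally:
        client.close()
    return checksum
```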
Designed and executed UAT cycles covering standard workflows, edge cases, and integration failure scenarios. Validated ADAMS sync behavior, chain-of-custody documentation, and multi-region concurrent operations.
Executed go-lives by region in sequenced waves, maintaining DCO Mobile in parallel during each transition window. Used learnings from early regions to refine training materials ahead of higher-volume rollouts.
Delivered hands-on training sessions and operational debriefs for DCOs across regions. Developed SOPs and quick-reference guides tailored to field conditions.
Managed the structured sunset of DCO Mobile once all regions had transitioned. Established Zendesk-based support workflows to track and resolve post-launch issues through the stabilization period.
The most complex moment in the project was coordinating the final DCO Mobile decommission during an active international testing period. Timing the sunset required aligning with competition schedules, lab processing windows, and ADAMS submission deadlines across multiple governing bodies simultaneously.
The migration completed on schedule across all regions with no disruption to live athlete testing operations.
How I led the end-to-end Salesforce implementation that transformed TRAILS' program operations — migrating 41,354 records, onboarding 120 users, and establishing a unified platform capable of supporting a national network of 12,000+ schools and 30,000+ mental health professionals.
TRAILS had grown from a regional pilot into a national mental health program serving thousands of schools across the country. That growth had exposed a structural problem with how the organization managed its operational data. Program delivery, training operations, and stakeholder engagement each ran through their own tools and processes, with Airtable serving as the closest thing to a shared system of record. In practice, data lived in disconnected places, reporting was unreliable, and leadership was making expansion decisions without a clear view of what was happening on the ground.
The scale constraint: TRAILS was actively expanding during the implementation. The data model and workflows I designed had to accommodate not just current operations, but a program footprint that was growing week over week throughout the project.
Salesforce Nonprofit Cloud ships with a standard model optimized for donor management. TRAILS' primary use cases were program delivery and training operations. Rather than adapting our processes to fit the standard model, I defined custom object relationships grounded in how TRAILS actually operated.
Migrating 41,354 records required extensive pre-migration cleaning, deduplication, and validation. I invested heavily in data quality before migration rather than moving data quickly and cleaning post-launch. A Salesforce system loaded with dirty data would have undermined adoption immediately.
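For a sense of what that pre-migration pass involved, a cleaning step along these lines normalizes the matching keys, collapses duplicates, and flags rows for manual review instead of loading them; the column names and pandas-based approach are illustrative only.

```python
import pandas as pd


def clean_contacts(raw: pd.DataFrame) -> pd.DataFrame:
    """Normalize and deduplicate an exported Airtable table before any load."""
    df = raw.copy()
    # Normalize the fields used as a matching key.
    df["email"] = df["email"].str.strip().str.lower()
    df["last_name"] = df["last_name"].str.strip().str.title()
    # Drop exact duplicates, then collapse records sharing the same email,
    # keeping the most recently modified row.
    df = df.drop_duplicates()
    df = (df.sort_values("last_modified")
            .drop_duplicates(subset=["email"], keep="last"))
    # Flag rows that still fail basic validation for manual review
    # rather than loading them into the new system.
    df["needs_review"] = df["email"].isna() | ~df["email"].str.contains("@", na=False)
    return df
```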
Access controls were designed into the data model from the beginning rather than added at the end. With sensitive program data and student-adjacent records in scope, the security architecture had to be deliberate. Retrofitting role-based access control (RBAC) onto a live system is significantly harder and riskier than building it in from day one.
I scoped a dedicated post-launch stabilization period into the project plan — a window for monitoring adoption, resolving workflow friction, and refining configurations based on real-world use. Treating go-live as a transition point rather than the finish line produced a smoother landing and higher long-term adoption.
Conducted cross-functional requirements sessions with program delivery, training operations, and stakeholder engagement teams. Designed the Salesforce data model — object relationships, fields, and workflows — to reflect actual business operations.
Audited existing Airtable data for duplicates, inconsistencies, and structural mismatches with the target Salesforce model. Cleaned and transformed records to meet migration quality standards before any data was moved.
Executed the migration of 41,354 records in staged batches with validation checks between each load. Confirmed record integrity, relationship linkages, and field mapping accuracy before proceeding to subsequent stages.
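One way to structure staged loads with validation between batches is sketched below, assuming the simple_salesforce client; the object, batch size, and field names are placeholders rather than the actual migration tooling.

```python
from simple_salesforce import Salesforce

BATCH_SIZE = 2000  # illustrative; chosen to stay well inside API limits


def migrate_in_stages(sf: Salesforce, records: list[dict]) -> None:
    """Load records in fixed-size batches and stop at the first batch whose
    results do not fully validate, so a bad mapping never propagates."""
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        results = sf.bulk.Contact.insert(batch)

        failures = [r for r in results if not r["success"]]
        if failures:
            raise RuntimeError(
                f"Batch starting at row {start}: {len(failures)} failed inserts; "
                "halting migration for review before loading further batches."
            )

        # Cross-check the running record count against what the org reports.
        loaded = sf.query("SELECT COUNT() FROM Contact")["totalSize"]
        print(f"Batch {start // BATCH_SIZE + 1} loaded; org now reports {loaded} contacts.")
```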
Designed and implemented role-based access controls aligned to organizational hierarchy and data sensitivity levels. Ensured staff access was appropriately scoped by role across program data, partner information, and reporting.
Conducted comprehensive QA cycles covering standard user workflows for each team, cross-functional data flows, and edge cases including bulk operations and reporting accuracy.
Managed the go-live transition for 120 internal users. Supported onboarding through training and documentation. Maintained a stabilization period post-launch to monitor adoption and address workflow friction.
The hardest part of this implementation wasn't technical — it was managing the workflow changes for staff who had built daily habits around Airtable. Solving that required ensuring the new system immediately felt more helpful than the old one, not just more complex. That demanded careful onboarding quality and rapid response to post-launch friction.
For the first time, TRAILS' executive team had reliable, real-time reporting on program delivery, training completion, and stakeholder engagement at a national scale — directly informing expansion decisions in a way that fragmented Airtable data never could.
How I led the cross-functional delivery of a modern athlete compliance platform that replaced a low-adoption legacy system — improving Whereabouts reporting accuracy by 30% while maintaining full alignment with World Anti-Doping Agency standards.
USADA's athlete compliance operations ran on a legacy platform that had aged out of step with the demands placed on it. Whereabouts reporting — the process by which athletes in the Registered Testing Pool must file their daily location availability for no-advance-notice testing — requires precision and consistency. A system that's hard to use produces errors. Errors produce compliance risk. And in anti-doping, compliance risk has real consequences for athletes.
Athlete Connect was USADA's answer: a modern, purpose-built web application designed around how athletes actually work. My role was to lead the delivery of that platform across engineering, QA, and compliance teams — from requirements definition through production launch.
The regulatory constraint: Every workflow had to satisfy WADA's technical standards and the platform's usability goals at the same time. Building for athletes and building for regulators was never an either/or; both were non-negotiable.
Before any workflow was designed, I mapped WADA's technical standards directly to system behaviors and validation rules. Getting the compliance boundaries established first meant usability improvements happened within a well-defined regulatory envelope, rather than creating gains that required painful rework later.
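To show what mapping standards to validation rules looks like in practice, here is a simplified, illustrative rule for a single day's Whereabouts entry. The 60-minute daily testing slot reflects WADA's International Standard for Testing and Investigations, but the field names and exact checks are placeholders, not Athlete Connect's actual rule set.

```python
from dataclasses import dataclass
from datetime import time


@dataclass
class DailySlot:
    location: str
    start: time          # start of the declared 60-minute testing window
    overnight_address: str


def validate_daily_filing(slot: DailySlot) -> list[str]:
    """Return validation errors for one day's Whereabouts entry.
    Simplified illustration of turning a regulatory requirement into a system rule."""
    errors = []
    if not slot.location.strip():
        errors.append("A testing location must be declared for the 60-minute slot.")
    if not slot.overnight_address.strip():
        errors.append("An overnight accommodation address is required for each day.")
    # ISTI-style constraint: the 60-minute slot must fall between 05:00 and 23:00.
    if not (time(5, 0) <= slot.start <= time(22, 0)):
        errors.append("The 60-minute slot must begin between 05:00 and 22:00 "
                      "so that it ends by 23:00.")
    return errors
```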
Rather than validating individual features in isolation, I designed UAT cycles around complete athlete workflows. Testing across full workflows surfaced integration gaps and edge cases that feature-level testing would have missed.
Athletes in the Registered Testing Pool have stricter compliance obligations. I carved out RTP-specific UAT scenarios to validate the platform correctly enforced additional requirements for this group — treating them as a distinct cohort rather than a generic athlete profile.
I made integration validation across SIMON, PostgreSQL, and Global DRO a hard gate in the release process. No sprint was considered complete without confirmed data integrity across all three systems — slowing some cycles but preventing data consistency issues from reaching production.
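A release-gate consistency check of that kind can be as simple as comparing record counts across systems before sign-off; the sketch below uses PostgreSQL via psycopg2 with illustrative table names, since the actual SIMON and Global DRO interfaces are internal.

```python
import psycopg2


def athlete_count_matches(pg_dsn: str, downstream_count: int) -> bool:
    """Release-gate check: the number of active athletes in the operational
    PostgreSQL store must match what the downstream system reports.
    Table and column names here are illustrative."""
    with psycopg2.connect(pg_dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM athletes WHERE status = 'active'")
            (pg_count,) = cur.fetchone()
    if pg_count != downstream_count:
        print(f"Gate failed: PostgreSQL reports {pg_count}, "
              f"downstream reports {downstream_count}.")
        return False
    return True
```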
Translated WADA's technical standards into concrete system workflows, validation rules, and acceptance criteria — the foundation all delivery decisions were measured against.
Drove sprint planning and backlog refinement across engineering, QA, and compliance teams. Prioritized features based on regulatory criticality and athlete workflow impact.
Coordinated integration across SIMON, PostgreSQL, and Global DRO. Validated data consistency, real-time sync behavior, and edge case handling across all three systems before each sprint release.
Designed and executed structured UAT cycles covering standard workflows, RTP-specific compliance scenarios, and edge cases. Tracked defects through to resolution before production sign-off.
Identified and resolved high-frequency workflow friction points during UAT — particularly in the Whereabouts filing flow — before launch. Coordinated the production launch and retirement of the legacy platform.
The most nuanced delivery challenge was the tension between the compliance team's requirement for strict validation rules and the product goal of reducing user friction. Resolving that required sitting in the middle — understanding both the regulatory intent behind each requirement and the usability cost of implementing it literally.
Athlete Connect established a centralized, integrated compliance platform that gave USADA teams real-time visibility into athlete status for the first time — shifting from reactive case management to proactive outreach, which is a fundamentally more effective operating model.
How I led the delivery of a custom mobile platform that replaced paper-based doping control workflows — eliminating manual data entry errors, accelerating lab coordination, and establishing legally defensible digital records for 40,000+ athletes.
When a Doping Control Officer conducts an athlete test in the field, every step — from notification to sample collection to chain-of-custody documentation — must be recorded with precision. The data generated is legally significant: it forms the evidentiary basis for anti-doping rule violation proceedings and must be defensible under international arbitration standards.
For years, USADA managed this process on paper. Officers completed forms by hand, scanned them, emailed them to headquarters, and then staff re-entered the data into internal systems. As testing operations scaled, this workflow became an operational liability — delays accumulated, transcription errors introduced data integrity risk, and the paper trail was slower and harder to audit than a digital record.
The legal constraint: DCO Mobile wasn't just a productivity tool — it was producing records that could be scrutinized in international arbitration. Every design decision about data capture, chain-of-custody documentation, and audit trails had to meet the evidentiary standards of the World Anti-Doping Code.
Before any technical requirements were written, I worked with field teams to map the actual end-to-end testing workflow from officer notification through to sample handoff at the lab. This grounded the system design in operational reality, surfacing field-specific constraints that wouldn't have been obvious from a headquarters perspective.
I translated USADA's legal and compliance requirements into explicit UAT acceptance criteria that could not be waived. A feature that improved usability but compromised legal defensibility was not acceptable. This created productive tension with the vendor but produced a more defensible system.
I implemented a CI/CD process with deliberate release gates rather than continuous deployment. Each release required confirmed UAT sign-off and integration validation before promotion to production. The small cost in deployment velocity was worth the risk reduction in a legally sensitive system.
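A minimal sketch of that kind of promotion gate is below: a script that blocks a release unless UAT sign-off and integration validation are both recorded. The manifest format and pipeline wiring are illustrative, not the Azure DevOps configuration that was actually used.

```python
import json
import sys
from pathlib import Path

REQUIRED_APPROVALS = {"uat_signoff", "integration_validation"}


def gate(release_manifest: Path) -> int:
    """Exit non-zero unless every required approval is recorded for this release,
    so the pipeline step that promotes to production cannot proceed without them."""
    manifest = json.loads(release_manifest.read_text())
    recorded = {a["check"] for a in manifest.get("approvals", []) if a.get("approved")}
    missing = REQUIRED_APPROVALS - recorded
    if missing:
        print(f"Blocking promotion of {manifest.get('version', '?')}: missing {sorted(missing)}")
        return 1
    print(f"All gates satisfied for {manifest['version']}; promotion may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(gate(Path(sys.argv[1])))
```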
The ability to transmit testing data to laboratories before the sample physically arrived was scoped as a core delivery requirement, not a phase-two enhancement. Building it in from the start meant labs could begin preparation workflows in parallel with sample transit — a meaningful operational gain.
Mapped the end-to-end testing workflow with DCOs, translating operational steps into system requirements and legal constraints into formal acceptance criteria.
Served as the primary interface between USADA and the external software vendor. Managed sprint planning, backlog prioritization, and release coordination via Azure DevOps.
Coordinated integration across SIMON, PostgreSQL, and SFTP pipelines for secure lab data transmission. Validated data consistency and lab format compatibility before each sprint sign-off.
Designed UAT cycles based on actual DCO field scenarios — standard tests, no-advance-notice events, and edge cases including connectivity loss and sample exceptions. Validated chain-of-custody documentation and audit log completeness against legal criteria.
Coordinated the production launch and retirement of paper-based workflows. Developed training materials and SOPs for DCOs. Monitored post-launch data quality through the stabilization period.
The most complex part wasn't the technology — it was the handoff from paper to digital in a legally charged context. Convincing field teams and compliance stakeholders that a digital record was as defensible as a signed paper form required demonstrating through structured UAT that the digital audit trail was actually more complete and traceable than paper had ever been.
How I served as the central IT coordination point for live satellite launch operations — owning readiness across infrastructure, applications, and distributed support teams to maintain 100% system availability during zero-tolerance launch windows.
The Air Force Satellite Control Network (AFSCN) is the global ground-based infrastructure that commands and controls U.S. military satellites. It operates across geographically distributed sites, with each location dependent on communications infrastructure, command and control systems, and IT support that must be available continuously — but especially during launch windows, when new satellites are being brought into operational status.
Launch events are operationally compressed and unforgiving. Everything runs on a precise timeline. Any gap in IT infrastructure, communications availability, or systems support during that window doesn't produce a minor inconvenience; it produces a mission impact. My role was to ensure that never happened.
The operational reality: There is no "we'll fix it after the launch" in satellite operations. Problems during a launch window have to be resolved in real time, with no ability to pause or reschedule. The entire value of pre-launch preparation is reducing the probability of that scenario to as close to zero as possible.
Rather than relying on experienced operators to informally confirm system status, I implemented a structured validation checklist that systematically stepped through every critical system, communication channel, and escalation path before the window opened. This took longer but caught issues early enough to resolve them — rather than discovering them at T-minus-zero.
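The shape of such a checklist runner is simple to sketch: every item is explicit, evaluated in order, and the window only opens on a clean pass. The individual checks below are placeholders for the real infrastructure, application, and communications validations.

```python
from collections.abc import Callable

# Each entry: (description, check function returning True when the item is ready).
Check = tuple[str, Callable[[], bool]]


def run_readiness_checklist(checks: list[Check]) -> bool:
    """Step through every item and report a consolidated go/no-go.
    All items are evaluated even after a failure, so nothing is masked."""
    failures = []
    for description, check in checks:
        ok = check()
        print(f"[{'PASS' if ok else 'FAIL'}] {description}")
        if not ok:
            failures.append(description)
    if failures:
        print(f"NO-GO: {len(failures)} item(s) unresolved.")
        return False
    print("GO: all checklist items validated.")
    return True


if __name__ == "__main__":
    # Example usage with placeholder checks.
    checklist: list[Check] = [
        ("Primary and backup voice loops verified", lambda: True),
        ("Command-and-control application heartbeat within threshold", lambda: True),
        ("Escalation roster acknowledged by all remote sites", lambda: True),
    ]
    run_readiness_checklist(checklist)
```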
In a geographically distributed operation, ad-hoc escalation during a live event is a recipe for confusion and delay. Before each launch, I established and communicated explicit escalation paths: who gets contacted for which category of issue, in what sequence, through which channel.
I developed contingency plans specifically for the failure modes that history and operational analysis indicated were most likely — communications link degradation, application availability issues, and coordination gaps between distributed sites. Pre-thought responses meant faster reaction times when they occurred.
After each launch event, I conducted structured reviews to identify what had worked, what had required improvisation, and what should change. Treating these as genuine operational inputs — not administrative paperwork — produced improvements that made each subsequent launch more reliable.
Executed structured validation of all critical systems, applications, and communication channels before each launch window. Confirmed readiness across all geographically distributed sites and documented status before proceeding.
Defined and communicated explicit escalation paths for each category of potential issue. Briefed distributed support teams on contingency plans before each event so responses were pre-understood, not improvised.
Served as the central IT coordination point during live events — monitoring system status across sites, directing real-time troubleshooting, managing communications between distributed teams, and making rapid escalation decisions.
Conducted structured post-event reviews after each launch to capture what had worked, what had required improvisation, and what process changes would improve the next event.
The most valuable thing I learned wasn't a technical skill — it was the discipline of pre-thinking. Every scenario thought through before the launch window is a scenario that can be responded to calmly and correctly during it. The operators who perform best under pressure aren't the best improvisers; they're the ones who've done the most preparation. That principle has shaped how I approach every high-stakes delivery since.
The operational processes developed through these launch events — structured pre-launch validation, explicit escalation paths, and disciplined post-event review — became repeatable standards that improved consistency across subsequent events. The broader IT operations program was recognized as a Best Practice by the Air Force Space Command Inspector General.