90-Day Execution Plan for an Engineering Organization (Part 3/3)

This is the last of three posts based on a consulting engagement I did for a LATAM payments fintech; names, data, and details have been changed. The first post covered the organizational diagnosis; the second, the metrics strategy. This one is the plan to put everything into action.

A diagnosis without execution is just a nice document. The 90-day plan is where decisions become concrete actions.

Squad rituals

Rituals aren’t meetings for the sake of meetings — they’re the minimum touchpoints to keep the team aligned without wasting time.

Before the sprint

  • Refinement (45min) — describe tickets, clarify scope, estimate
  • Sprint Planning (1h) — decide what goes into the sprint

Weekly

  • Sync Monday, Wednesday, and Friday (15min) — three times a week, not five. The days without syncs are for developers to have long, uninterrupted focus blocks. A 15-minute daily seems harmless, but five in a row fragment the week more than you’d think.

Bi-weekly

  • 1:1 with each engineer (30min)

End of sprint

  • Squad retro (30min)
  • Engineering-wide retro (30min) — in my experience, this is where the team truly aligns. Problems, solutions, and context are shared across squads. It generates more collaboration than any document.
  • Demo for business stakeholders (30min)

Metrics-driven planning

Estimation

The scale follows the Fibonacci sequence (1, 2, 3, 5, 8) — the gaps between numbers reflect that the bigger something is, the more uncertainty it carries.

| Points | Meaning | Example |
|--------|---------|---------|
| 1 | Trivial, < 2 hours | Fix typo, config change |
| 2 | Simple, half a day | Clear bug, minor adjustment |
| 3 | Moderate, 1-2 days | Small feature, scoped refactor |
| 5 | Complex, 3-4 days | Full feature, integration |
| 8 | Very complex, 1 week | New system, high uncertainty |
| > 8 | Must be split | Too large |

Voting in refinement matters: context from whoever has worked on the area before and senior expertise both carry weight. If votes differ by more than 3 points, discuss the gap. That's where hidden dependencies and risks surface.
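The spread rule can be sketched as a quick check (a hypothetical helper, not part of any tool the team uses):

```python
def needs_discussion(votes: list[int], max_spread: int = 3) -> bool:
    """Flag a refinement vote when estimates differ by more than max_spread points."""
    return max(votes) - min(votes) > max_spread

# Votes of 2, 3, and 8 span 6 points: likely hidden complexity, discuss before settling.
print(needs_discussion([2, 3, 8]))  # True
print(needs_discussion([3, 5, 5]))  # False
```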

Capacity

  • Historical velocity (average of last 3 sprints) as ceiling
  • Commitment at 80% of capacity
  • 20% buffer for interruptions
  • WIP limit: maximum 2 tickets per person
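The capacity rules above reduce to a small calculation. A minimal sketch (the velocity numbers are illustrative, not from the engagement):

```python
def sprint_commitment(last_velocities: list[float], buffer: float = 0.20) -> float:
    """Commit at 80% of the average velocity of the last 3 sprints.

    The remaining 20% is the buffer reserved for interruptions.
    """
    recent = last_velocities[-3:]
    ceiling = sum(recent) / len(recent)
    return round(ceiling * (1 - buffer), 1)

# Example: last three sprints closed 30, 34, and 26 points.
print(sprint_commitment([30, 34, 26]))  # 24.0
```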

Target sprint composition

  • Minimum 50% features
  • Maximum 25% bugs
  • Maximum 15% interruptions

These numbers are a guide, not a rigid rule. They adjust based on squad context and timing.
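For a retro, the composition targets can be checked against a sprint's actual point mix. A sketch, assuming points are tagged by work type (hypothetical helper):

```python
def check_composition(features: float, bugs: float, interruptions: float) -> list[str]:
    """Compare a sprint's point mix against the target composition (a guide, not a rule)."""
    total = features + bugs + interruptions
    warnings = []
    if features / total < 0.50:
        warnings.append("features below 50%")
    if bugs / total > 0.25:
        warnings.append("bugs above 25%")
    if interruptions / total > 0.15:
        warnings.append("interruptions above 15%")
    return warnings

# A sprint dominated by reactive work trips all three checks.
print(check_composition(features=10, bugs=8, interruptions=6))
```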

Prioritization system

For the backlog (in refinement)

Score = (Revenue Impact Ă— Confidence) / Effort
  • Revenue Impact (1-5) = how much it affects revenue
  • Confidence (1-5) = how clear the scope is
  • Effort (1-5) = estimated effort
| Ticket | Revenue Impact | Confidence | Effort | Score | Priority |
|--------|----------------|------------|--------|-------|----------|
| Fix Payout Service crash | 4 | 5 | 2 | 10 | P1 |
| Migrate endpoint | 2 | 5 | 3 | 3.3 | P2 |
| Reporting feature | 3 | 3 | 5 | 1.8 | P3 |
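The scoring formula is simple enough to keep in a spreadsheet, but as a sketch it reproduces the example values above:

```python
def backlog_score(revenue_impact: int, confidence: int, effort: int) -> float:
    """Score = (Revenue Impact × Confidence) / Effort, each rated 1-5."""
    return round(revenue_impact * confidence / effort, 1)

print(backlog_score(4, 5, 2))  # 10.0 → fix Payout Service crash, P1
print(backlog_score(2, 5, 3))  # 3.3  → migrate endpoint, P2
print(backlog_score(3, 3, 5))  # 1.8  → reporting feature, P3
```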

For the sprint

  • P0: Production incidents → interrupts sprint, handle immediately
  • P1: Direct revenue impact → enters next sprint
  • P2: Improvements → prioritized backlog
  • P3: Nice to have → backlog

For unplanned work

  • P0: Production down → interrupts sprint
  • P1: Client blocked → same day
  • P2: Everything else → goes to backlog, prioritized in next planning

P0 and P1 fit into the 20% buffer reserved in the sprint for interruptions.

Process fixes

Work intake

  • Every request enters through a single channel
  • EM triages the same day
  • If urgent (P0/P1): enters current sprint
  • If not urgent: goes to backlog for next planning

Scoping

  • Tickets without acceptance criteria don’t enter the sprint
  • Tickets larger than 8 points get split
  • Dependencies identified before planning

Escalations

  • Engineer → Senior/EM: when blocked or needs a technical decision
  • Senior/EM → CTO: when there’s business risk (client affected, revenue at risk)

Outage management

  • Blameless postmortem within 48 hours
  • Template: what happened, timeline, impact, root cause, action items
  • Share learnings in engineering retro
  • Track action items until closure

Releases

  • Diagnose architecture and tech debt across projects
  • Improve release testing — QA by the same developers who built the feature
  • Validate production deployments for 15-30 minutes
  • Create alerts for service behavior and errors

Reporting

Weekly to CTO

  • Status: On Track / At Risk / Blocked
  • Metrics vs targets
  • Decisions that need input
  • Forecast for next 2 weeks

End of sprint to business stakeholders

  • What was completed
  • What was moved and why
  • Next sprint

Monthly

  • OKR review
  • Capacity planning for next month
  • Adjustments based on learnings

Improvement targets with DORA Metrics

To know if you’re improving you need clear targets. DORA is a framework from Google’s DevOps Research and Assessment team that classifies engineering team performance into four levels (Elite, High, Medium, Low) based on four delivery metrics.

DORA Benchmark

| Metric | Elite | High | Medium | Low |
|--------|-------|------|--------|-----|
| CFR (Change Failure Rate) | <5% | 5-10% | 10-15% | >15% |
| MTTR (Mean Time to Recovery) | <1 hour | <1 day | 1 day-1 week | >1 week |
| Deploy Frequency | Multiple/day | Daily-weekly | Weekly-monthly | Monthly+ |
| Lead Time | <1 hour | 1 day-1 week | 1 week-1 month | 1-6 months |
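As a sketch, the CFR column of the benchmark maps to a simple classifier (thresholds taken from the table; the function name is mine):

```python
def cfr_level(cfr: float) -> str:
    """Map a change failure rate (0.0-1.0) to a DORA performance level."""
    if cfr < 0.05:
        return "Elite"
    if cfr <= 0.10:
        return "High"
    if cfr <= 0.15:
        return "Medium"
    return "Low"

print(cfr_level(0.23))  # "Low"  -> Payout Service today
print(cfr_level(0.09))  # "High" -> Split Engine today
```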

Targets for this organization

| Metric | Current | Target 30d | Target 90d | Goal |
|--------|---------|------------|------------|------|
| CFR Payout Service | 23% (Low) | 15% | <10% | Low → Medium → High |
| CFR Split Engine | 9% (High) | 8% | <5% | High → Elite |
| Flow Eff. Payouts | 13% | 20% | >30% | Reduce wait time |
| Flow Eff. Payments Core | 27% | 30% | >35% | More productive time |
| Cycle Time P75 Payouts | 19.7d | 14d | <10d | Fit in a sprint |
| Cycle Time P75 Payments Core | 10.2d | 9d | <8d | Sprint margin |
| Sprint Completion | ~60% | 70% | >80% | Predictability |
| % Interruptions | ~30% | 20% | <15% | Protect roadmap |

Team health and culture

Metrics measure the system. But behind the system there are people. If you don’t take care of the team, no metric improves.

Month 1 — listen and diagnose

  • 1:1 with each engineer to get to know them
  • Identify burnout risk, especially in Payouts (2 people with 57% reactive work)
  • Implement “Maker Time” — 4-hour blocks with no meetings so developers can focus

Month 2-3 — act

  • Optional tech talks to share knowledge
  • Share learnings across squads
  • Evaluate whether improvements are working with concrete data
  • Propose next steps based on what was learned

Career ladder

A team needs to know where it can grow. Without a clear career ladder, retention becomes a problem.

Levels

| Level | Technical Scope | Impact Scope | Leadership Scope |
|-------|-----------------|--------------|------------------|
| Junior (L1) | Defined tasks, guided code | Individual feature | Learns from others |
| Mid (L2) | Complete features, design with guidance | Squad | Active mentee, seeks feedback |
| Senior (L3) | End-to-end systems, defines approach | Multi-squad | Mentor, makes technical decisions |
| Staff (L4) | Organizational architecture, tech strategy | Company | Multiplier, sets hiring bar |

Evaluation dimensions

  • Technical Excellence — code quality, architectural decisions, domain knowledge
  • Delivery — follow-through, predictability, ability to unblock
  • Teamwork & Communication — collaboration, documentation, clear communication
  • Business Acumen — understanding the business impact of technical decisions

Implementation

  • Assess each engineer to know where they stand
  • Define clear roles per level
  • Identify what each person needs to grow
  • Build an individual development plan, coordinated with People, Finance, and the CTO to evaluate budget, conditions, and timing

This is a first approach with limited context. Once you’re inside the organization and know the teams, processes, and culture up close, you can adjust every part of this plan. But as a starting point, it already gives you a structure to move from diagnosis to action.

The key isn’t following the plan to the letter — it’s having a system that lets you measure, adjust, and improve continuously.

If you want to dig deeper into the metrics I used as reference, check out the DORA framework.