Quarterly OKR Review Workflow

Real OKR review in 2 hours — not 2 days of pulling data, guessing scores, and arguing about wording.

Q4 is ending. You set 3 company-level OKRs 12 weeks ago. You remember 2. The third had a KR about 'improve NPS' and you have no idea where NPS ended up. Scoring is 90% gut feel because the data lives in 5 systems. Q1 OKRs get set Friday over Slack messages and by Feb everyone's operating on different assumptions again.

Free to start · No credit card required · Updated Apr 2026
Tycoon solution

Astra + AI COO run a structured quarterly OKR cycle. Last-quarter KRs scored from real data (Stripe, Mixpanel, HubSpot, Gusto, GitHub), wins/losses synthesized with root causes, next-quarter OKRs drafted from strategic priorities + prior-quarter learnings, and the review meeting becomes a 2-hour discussion with everyone looking at the same numbers. No more debate about what 'partially achieved' means.

How it runs

  1. Pull last-quarter scoring data

    Two weeks before quarter end, AI COO pulls each KR's metric from source: revenue from Stripe, activation from Mixpanel/PostHog, hiring from Gusto/Ashby, shipping velocity from Linear/GitHub. Calculates achievement % against target. No manual data pulling.

  2. Generate scoring with rationale

    Each KR gets scored 0.0-1.0 per Google's OKR playbook (1.0 = target fully hit, 0.7 = strong result on an ambitious target, 0.3 = fell well short). Astra writes a 2-sentence rationale for each score: what was achieved and where the gap came from. Scoring + rationale are reviewed with you before publishing.

  3. Win/loss synthesis

    For each OKR, AI COO synthesizes: what went right (specific wins with evidence), what didn't (specific misses), root cause (under-investment, wrong assumption, external factor, good-execution-wrong-goal). Output is a 1-page retro doc — more useful than 'we didn't quite hit it, moving on'.

  4. Strategic priorities for next quarter

    Astra reviews: last-quarter learnings, current runway + cash position, board-level priorities, competitive landscape shifts, customer conversation themes. Drafts 3-5 strategic priorities with trade-offs explicit — not a wishlist of 12 things, a forced-choice of 3-5.

  5. Draft next-quarter OKRs

    From the strategic priorities, AI COO drafts 3 OKRs, each with 3-4 KRs. KRs are measurable (specific number, timeframe), outcome-based (not 'build feature X' but 'feature X drives 1000 weekly actives'), and ambitious (0.7 is a strong quarter). Each KR has named metric sources and baseline values.

  6. Team review and alignment

    Draft shared 5 days before quarter-end with team + board. Feedback collected in Notion comments. AI COO consolidates feedback, surfaces disagreements, drafts final version. No 4-hour whiteboarding session — structured async review that converges.

  7. Publish and set up tracking

    Final OKRs published to your OKR tracker (Notion, Lattice, Mooncamp, or ClickUp). Each KR gets an automated weekly update pulling from source metrics. Slack/chat weekly digest: 'KR #3 is at 34% of target at week 4 of 13 — pace ≈ 1.1x, on track'. No end-of-quarter surprise.
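The achievement, scoring, and pacing arithmetic in the steps above can be sketched in a few lines. This is illustrative only — the data model, field names, and thresholds are assumptions, not Tycoon's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    name: str
    baseline: float   # metric value at quarter start
    target: float     # metric value committed to
    actual: float     # latest value pulled from the source system

def score(kr: KeyResult) -> float:
    """0.0-1.0 score per Google's playbook: fraction of the gap closed, capped at 1.0."""
    gap = kr.target - kr.baseline
    if gap == 0:
        return 1.0
    return round(max(0.0, min((kr.actual - kr.baseline) / gap, 1.0)), 2)

def pace_digest(kr: KeyResult, week: int, total_weeks: int = 13) -> str:
    """Weekly digest line: % of target reached vs. linear expected pace."""
    pct = (kr.actual - kr.baseline) / (kr.target - kr.baseline)
    expected = week / total_weeks          # linear pace: fraction of quarter elapsed
    pace = pct / expected
    status = "on track" if pace >= 1.0 else "behind pace"
    return (f"{kr.name} is at {pct:.0%} of target at week {week} of "
            f"{total_weeks} — pace = {pace:.1f}x, {status}")

kr = KeyResult("KR #3", baseline=0, target=1000, actual=340)
print(pace_digest(kr, week=4))
# → KR #3 is at 34% of target at week 4 of 13 — pace = 1.1x, on track
```

The same `score` function backs the 0.0-1.0 grading in step 2: a KR that closed 70% of the gap scores 0.7, which the playbook treats as a strong quarter.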

Who runs it

hire/ai-ceo · hire/ai-coo · hire/ai-cfo

What you get

  • Quarterly OKR review takes 2 hours instead of 2 days
  • Scoring grounded in real data, not gut feel
  • Root cause analysis per KR — compounding learning
  • Next-quarter OKRs drafted with explicit trade-offs, not wishlist
  • Weekly KR tracking throughout the quarter (no end-of-quarter surprise)
  • Team alignment via async review, not marathon meetings
  • OKR cadence becomes a rhythm, not a crisis

Frequently asked questions

We're 1-5 people. OKRs feel like enterprise theater at our stage.

OKRs at 1-5 people work if you simplify: 1 OKR (not 3), 3 KRs, quarterly (not OKRs-within-OKRs). The value isn't the formalism; it's the forcing function of 'pick 3 measurable things for the next 90 days and don't drift'. Tycoon's 1-5-person mode skips the team-review step and makes it a founder-plus-AI-COO conversation — 45 minutes to set them, 45 minutes at quarter-end to review. If that sounds like theater, skip OKRs entirely and use /workflows/weekly-review instead.

How is this different from just using Lattice or Mooncamp?

Lattice and Mooncamp are tracking tools — they store OKRs and let you update status. They don't pull data from source systems automatically (you still manually update progress), they don't draft OKRs from prior-quarter learnings, they don't synthesize win/loss root causes, and they don't surface 'you're behind pace on this KR' mid-quarter. Tycoon orchestrates across Lattice (or whatever tracker you use) + your real data sources. Tracking tool stays; the work of reviewing + planning moves to something purpose-built.

What if our KRs aren't measurable — we have qualitative goals like 'improve culture'?

Qualitative KRs get operationalized into proxies. 'Improve culture' becomes 'eNPS > 60 in Q2 survey' + 'zero voluntary departures' + 'Glassdoor rating stays ≥ 4.5'. Astra walks you through the conversion: what concrete evidence would convince you the KR succeeded? Those become the metrics. If you can't name any evidence, the KR isn't a KR — it's a direction, and directions go into the 'strategic priorities' bucket, not the OKR bucket.

Our Q3 KRs were wrong in retrospect — should we abandon mid-quarter or grind through?

Astra's default guidance: finish the quarter, honestly score the KRs you couldn't hit, and use the retrospective to understand why they were wrong. Killing OKRs mid-quarter creates a bad habit ('we can just drop what's hard'). The exception: if the business context has fundamentally changed (pivot, new funding, existential competitor move) and the KRs are now actively harmful to pursue, you change them with a written explanation that serves as your retro for the quarter. Tycoon has a 'pivot mode' that handles this formally.

How does this work for multi-team orgs — company OKRs that cascade to team OKRs?

Same workflow, added layer. Company OKRs set first, then each team (product, marketing, ops, eng) sets 2-3 team OKRs that ladder up to company OKRs. AI COO enforces the cascade: every team KR must link to a company KR it supports. Team leads review the cascade with their managers, AI COO checks for gaps (company KR with no team owner) and overlaps (two teams committing to the same number). Cascade takes 1 week instead of 3 weeks of alignment meetings.
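The two cascade checks — a company KR with no team owner, and two teams committing to the same number — can be sketched as follows. The data model here is hypothetical, not Tycoon's API:

```python
from collections import defaultdict

# Hypothetical cascade: company KR ids, and team KRs tagged with the company KR they ladder up to.
company_krs = {"C1": "ARR to $2M", "C2": "activation to 40%", "C3": "hire 4 engineers"}
team_krs = [
    ("marketing", "C1", "500 qualified leads"),
    ("product",   "C2", "onboarding completion to 70%"),
    ("eng",       "C2", "onboarding completion to 70%"),  # same number as product: overlap
]

links = defaultdict(list)
for team, company_kr, kr in team_krs:
    links[company_kr].append((team, kr))

# Gap: a company KR no team has committed to.
gaps = [cid for cid in company_krs if cid not in links]

# Overlap: two or more teams committing to the same KR under one company KR.
overlaps = [
    (cid, kr, [t for t, k in owners if k == kr])
    for cid, owners in links.items()
    for kr in {k for _, k in owners}
    if sum(1 for _, k in owners if k == kr) > 1
]

print("gaps:", gaps)          # → gaps: ['C3']
print("overlaps:", overlaps)
```

In this toy cascade, 'hire 4 engineers' (C3) surfaces as a gap and the duplicated onboarding KR surfaces as an overlap — exactly the two failure modes AI COO is described as checking before the cascade is published.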

Related resources

Run your one-person company.

Hire your AI team in 30 seconds. Start for free.

Free to start · No credit card required · Set up in 30 seconds