Ask Astra

Write quarterly performance reviews for my team

Your AI CEO writes the reviews you keep pushing to next week.

Operations · People · Quarterly per review cycle.
Free to start · No credit card required · Updated Apr 2026

You'd think this needs a clear Saturday and a strong coffee — Astra hands you 8 drafts before lunch.

The short answer

Astra writes performance reviews by reading each report's actual work output and turning it into a balanced, specific draft you can polish in 10 minutes. For each direct report she pulls Linear (issues shipped, time per ticket), GitHub (PRs reviewed and authored, code quality signals), Google Calendar (meeting load, focus time), 1:1 notes from your Notion or Lattice, and any 360 feedback collected. She drafts a review structured as: top 3 wins with specific examples, top 2 growth areas with concrete next steps, alignment with role expectations, and one stretch question for the conversation. The draft is honest — neither sycophantic nor harsh — and grounded in observable evidence. Output: one Lark draft per person, queued for your 10-min edit pass. A 6-person team that used to eat your weekend now takes 60 minutes total.

How Astra actually does it

  1. Pull work output per person

    Linear (issues, cycle time), GitHub (PRs authored/reviewed, lines changed), calendar (meeting load), 1:1 notes from Notion/Lattice, peer 360 if available.

  2. Identify wins and growth areas

    3 wins per person with specific examples (project name, impact, behavior). 2 growth areas with concrete next-quarter actions, not generic advice.

  3. Draft balanced review

    Structure: wins, growth areas, role expectation alignment, stretch question. Tone: honest, specific, neither sycophantic nor harsh.

  4. Cross-check against role rubric

    Compare to your role expectations doc (or industry-standard SDR/IC/Manager rubric if you don't have one). Flag any expectation gaps.

  5. Queue in Lark for edit

    One draft per person in Lark. Inline edits supported. Once you approve, ships to Lattice/Notion as the official review record.

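If you want a mental model of what steps 2–3 produce, here's a minimal sketch of the draft's shape as plain Python. The class and field names are illustrative, not Astra's internal format; the fixed counts (3 wins, 2 growth areas) come from the structure described above.

```python
from dataclasses import dataclass


@dataclass
class Win:
    title: str
    example: str  # grounded evidence: project name, PR number, or observed behavior


@dataclass
class GrowthArea:
    observation: str
    next_quarter_action: str  # concrete next step, not generic advice


@dataclass
class ReviewDraft:
    person: str
    wins: list             # top 3, each with a specific example
    growth_areas: list     # top 2, each with a concrete action
    role_alignment: str    # compared against the role rubric
    stretch_question: str  # one open question for the conversation

    def validate(self):
        # Enforce the structure from steps 2-3 before queuing in Lark.
        assert len(self.wins) == 3, "need top 3 wins"
        assert len(self.growth_areas) == 2, "need top 2 growth areas"
        return self
```

One `ReviewDraft` per direct report is what lands in Lark for your edit pass.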

What it looks like in chat

Write Q2 performance reviews for my 6-person eng team.
Astra
Pulling data for all 6 reports. Drafts in Lark in ~90 min. Quick preview of one (Devon, Senior Backend, 9 months tenure):

**Wins:**
1. Owned the rate-limiting feature end-to-end — shipped 3 days early, zero incidents post-launch. PR #847.
2. Mentored Aiyana through her first production deploy. Lattice peer feedback: 5/5 from her.
3. Reduced p99 latency on /api/checkout by 340ms (PR #821) — directly correlated with the conversion lift in week 14.

**Growth areas:**
1. PR review turnaround averaged 38 hours (team median: 14h). Suggest committing to <24h for blocker-tagged PRs next quarter.
2. Skipped 3 of 12 weekly 1:1s — twice due to deep work, once unexplained. Suggest defending these as non-negotiable.

**Role alignment:** Meeting Senior IC bar; tracking toward Staff if mentorship pattern continues.

**Stretch question for the convo:** "Where do you feel underutilized right now?"

All 6 drafts done by 2pm. Edit pass should take you ~10 min each. Start with Devon's?
What you get back

One Lark draft per direct report — wins with specific examples, growth areas with concrete next steps, role-rubric alignment, and a stretch question — ready for your 10-min edit and ship to Lattice.

Cadence

Quarterly per review cycle.

Ask Astra this right now

We'll spin up your workspace, hand the prompt to Astra, and you'll see the answer in 60 seconds. Free.

Try this with Astra

Frequently asked questions

What if I have non-engineers on the team?

Astra adapts the data sources: for sales reps she pulls Salesforce (closed-won, pipeline coverage, activity); for designers Figma (files shipped, comment threads); for marketers GA4 + content output. She'll ask if she's not sure which sources matter for a given role.

Will the reviews sound generic and AI-written?

No — she grounds every claim in a specific example (PR number, project name, observed behavior). Generic praise ("great team player") is replaced with specific evidence ("Lattice peer feedback from 4/5 reports cited her unblocking them on the migration"). The 10-min edit pass adds your voice.

What if I haven't kept 1:1 notes?

She works with what's there. Without 1:1 notes she leans more heavily on observable work output (Linear, GitHub) and skip-level feedback if available. She'll explicitly note what data was missing so you can supplement from memory in the edit pass.

Can she also rate or rank performance?

She maps each person to your role rubric ("meeting bar / exceeding bar / below bar") with reasoning. If you use a numeric calibration (like Lattice 1-5), she'll propose a score with justification — but final calibration stays with you and your leadership team. She never publishes ratings without your sign-off.
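The rubric-to-score mapping described above can be sketched like this. The bands and justifications are hypothetical examples, not Lattice's semantics or Astra's actual logic; the point is that the proposal always carries its reasoning, and the final call stays human.

```python
def propose_score(verdict: str) -> tuple:
    """Map a role-rubric verdict to a proposed 1-5 calibration score.

    Illustrative bands only -- final calibration stays with you
    and your leadership team.
    """
    bands = {
        "below bar": (2, "gaps against role expectations; see growth areas"),
        "meeting bar": (3, "solid against role expectations"),
        "exceeding bar": (4, "consistently above role expectations"),
    }
    if verdict not in bands:
        raise ValueError(f"unknown verdict: {verdict!r}")
    return bands[verdict]
```

So `propose_score("meeting bar")` returns a score of 3 plus its justification, which you can accept, adjust, or discard during calibration.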

Run your one-person company.

Hire your AI team in 30 seconds. Start for free.

Free to start · No credit card required · Set up in 30 seconds