Ask Astra

Which feature requests are asked for most?

Real demand signal from real customer voices. Scored by revenue impact.

Business insight · Product
Weekly Monday 9am. Quarterly deep-dive correlating built features with retention/expansion lift.
Free to start · No credit card required · Updated Apr 2026

You'd think this needs a product manager and a survey — Astra mines every customer touchpoint and ranks requests by revenue impact, not vote count.

The short answer

Astra mines feature requests from 5 sources weekly and ranks them by revenue impact, not just request count. The sources:

1. Intercom conversations tagged as, or mentioning, feature requests
2. Linear tickets in your 'feature requests' or 'icebox' projects
3. Granola sales call transcripts where prospects asked for missing features
4. Twitter mentions of your brand containing 'wish you had', 'need', or 'add'
5. Cancellation surveys mentioning specific gaps

She clusters requests into themes via semantic similarity (so 'Notion sync' and 'integrate with Notion' merge), then scores each theme by: total mentions × average MRR of requesters × % of mentioners who explicitly tied the request to a renewal/expansion decision. The Lark report ranks the top 10 requests with the score, the customer list, sample quotes, and a build-effort estimate.

Most teams find their #1 request by vote count sits at rank 4-5 by revenue impact, while the actual #1 is something a few high-MRR customers have been quietly asking about for months.
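The scoring formula above can be sketched in a few lines. This is an illustrative model only, assuming each mention record carries the requester's MRR and a flag for whether the request was tied to a renewal/expansion decision; the `Mention` type and field names are hypothetical, not Astra's actual schema.

```python
from dataclasses import dataclass


@dataclass
class Mention:
    theme: str             # clustered theme, e.g. "Notion integration"
    requester_mrr: float   # monthly recurring revenue of the requester
    renewal_tied: bool     # explicitly linked to a renewal/expansion decision


def revenue_impact(mentions: list[Mention]) -> float:
    """Total mentions x avg requester MRR x renewal-tied fraction."""
    n = len(mentions)
    if n == 0:
        return 0.0
    avg_mrr = sum(m.requester_mrr for m in mentions) / n
    renewal_pct = sum(m.renewal_tied for m in mentions) / n
    return n * avg_mrr * renewal_pct


def rank_themes(all_mentions: list[Mention]) -> list[tuple[str, float]]:
    """Group mentions by theme and sort by revenue-impact score, descending."""
    themes: dict[str, list[Mention]] = {}
    for m in all_mentions:
        themes.setdefault(m.theme, []).append(m)
    return sorted(
        ((t, revenue_impact(ms)) for t, ms in themes.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Note how the renewal-tied multiplier means a theme with few mentions but strong renewal stakes can outrank a high-vote theme, which is exactly the rank-4-5 inversion described above.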

How Astra actually does it

  1. Mine 5 source channels

    Intercom conversations, Linear feature-request projects, Granola sales call transcripts, Twitter mentions matching keywords, Stripe cancellation reason surveys.

    Intercom · Linear · Granola · Twitter API · Stripe
  2. Cluster by theme via semantic similarity

    'Notion sync' + 'integrate with Notion' + 'export to Notion' all merge to one theme. Avoids vote-splitting that hides true demand.

  3. Score by revenue impact

    Per theme: total mentions × avg MRR of requesters × renewal-tied % (mentioned in cancellation/renewal context). The score, not raw vote count, is the signal.

  4. Add build-effort estimate

    For each top-ranked request she pulls existing Linear tickets/RFCs and estimates eng weeks (T-shirt size: S=1wk, M=2-4wk, L=6-12wk, XL=quarter+).

  5. Weekly report + customer list

    Monday Lark: top 10 requests by revenue impact, customer list per request (so sales/CS can follow up), sample quotes, build-effort tag.

    Lark
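Step 2, the theme merging, can be sketched as greedy clustering. A production system would use embedding cosine similarity; here a simple token-overlap (Jaccard) measure stands in so the example is self-contained, and the threshold value is illustrative.

```python
def tokens(text: str) -> set[str]:
    return set(text.lower().replace("-", " ").split())


def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets, a stand-in for embedding similarity."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def cluster(requests: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedily merge each request into the first cluster it resembles."""
    clusters: list[list[str]] = []
    for req in requests:
        for c in clusters:
            if any(similarity(req, member) >= threshold for member in c):
                c.append(req)
                break
        else:
            clusters.append([req])
    return clusters
```

With this sketch, 'notion sync', 'integrate with notion', and 'export to notion' land in one cluster while 'slack notifications' stays separate, which is the vote-splitting fix the step describes.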

What it looks like in chat

Which feature requests should I actually build next?
Astra
Top 5 from last 90 days, ranked by **revenue impact** (not vote count):

1. **Notion integration** (M, 3-4wk eng). 47 mentions, avg requester MRR $182. **Tied to 4 renewal decisions ($28K ARR at risk).** Top quote: 'We'd 3x our usage if we could push outputs to our Notion docs.'
2. **Custom rate limits per API key** (S, 1wk). 18 mentions but **all from Enterprise tier** ($890+ MRR). Tied to 2 deals stuck in pipeline ($96K ARR). 'Can't ship to prod without per-key controls.'
3. **SSO for Pro tier** (M, 2wk). 31 mentions. **3 cancellations cited 'no SSO at our price point.'** $14K ARR lost.
4. **Bulk export to CSV** (S, 3 days). 89 mentions (highest vote count) but mostly free-tier users. Avg MRR $24. Low impact.
5. **Slack notifications** (M, 2wk). 22 mentions, mid-tier. $8K ARR opportunity.

**Build order recommendation:**
- Ship #2 (custom rate limits) this sprint: 1wk for a $96K pipeline unstick.
- Then #1 (Notion): 4wk for $28K ARR retention + expansion.
- Then #3 (SSO Pro): 2wk to plug the churn leak.
- **Skip #4 (CSV export) for now**: vocal but low-value users.

Want me to draft the Linear epics for #2 + #1?
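The build order in that reply falls out of a simple value-per-engineering-week sort over the scored requests. A minimal sketch, where the ARR figures are taken from the example above and the size-to-weeks midpoints are assumptions based on the T-shirt ranges in step 4:

```python
# Assumed midpoints of the T-shirt ranges (S=1wk, M=2-4wk, L=6-12wk, XL=quarter+).
SIZE_WEEKS = {"S": 1, "M": 3, "L": 9, "XL": 13}


def build_order(requests: list[tuple[str, float, str]]) -> list[str]:
    """requests: (name, ARR opportunity in dollars, T-shirt size).

    Sort by dollars of opportunity per engineering week, descending."""
    return [
        name
        for name, arr, size in sorted(
            requests,
            key=lambda r: r[1] / SIZE_WEEKS[r[2]],
            reverse=True,
        )
    ]


scored = [
    ("Notion integration", 28_000, "M"),
    ("Custom rate limits", 96_000, "S"),
    ("SSO for Pro tier", 14_000, "M"),
    ("Bulk export to CSV", 2_000, "S"),
]
```

Running `build_order(scored)` puts custom rate limits first and CSV export last, matching the ship/skip recommendation in the chat.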
What you get back

Weekly Monday Lark report: top 10 feature requests by revenue impact, customer list per request, sample quotes, build-effort estimates.

Cadence

Weekly Monday 9am. Quarterly deep-dive correlating built features with retention/expansion lift.

Ask Astra this right now

We'll spin up your workspace, hand the prompt to Astra, and you see the answer in 60 seconds. Free.

Try this with Astra

Frequently asked questions

What if I don't track feature requests in Linear?

She'll create a Linear project for you on day 1 and start tagging requests as they come in. For backfill, she scans the last 6 months of Intercom conversations and Granola transcripts to seed the database. After 30 days you'll have the full picture even if you've never tracked requests formally before.

Vote count seems important too — am I ignoring it?

No — she shows both. Vote count tells you what people will tweet about; revenue impact tells you what they'll pay for. Often they're the same; when they differ, revenue impact wins for build prioritization. The features vocal free-tier users ask for can ship later as part of broader releases.

Can she predict which features will actually retain customers?

After 6 months of data she can — she correlates 'features shipped' with 'retention/expansion of customers who requested them.' Early on the prediction is rough; by month 6+ she'll tell you 'features in cluster X have an 80% retention lift, features in cluster Y have 0% lift, prioritize X.'
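The retention-lift comparison described here reduces to comparing retention rates between customers who requested a shipped feature and those who didn't. A hedged sketch of that arithmetic (the function name and inputs are illustrative, not Astra's API):

```python
def retention_lift(requesters_retained: int, requesters_total: int,
                   others_retained: int, others_total: int) -> float:
    """Relative lift in retention rate for requesters of a shipped feature.

    E.g. requesters retained at 90% vs a 50% baseline is a 0.8 (80%) lift.
    """
    req_rate = requesters_retained / requesters_total
    base_rate = others_retained / others_total
    return (req_rate - base_rate) / base_rate
```

This is also why the early prediction is rough: with few shipped features per cluster, both rates are computed from small samples.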

What if my CEO/PM has different priorities?

She informs, doesn't decide. The report is data; you weigh it against strategy, technical debt, and team capacity. She'll flag if a roadmap item has zero customer demand signal (a 'we think this is cool' build) — useful for healthy debate, not a veto.

Run your one-person company.

Hire your AI team in 30 seconds. Start for free.

Free to start · No credit card required · Set up in 30 seconds