
ROAS cohort analysis for mobile games:
D7, D14, D30 guide (2026)

Most UA managers look at ROAS every day. Far fewer look at clean ROAS: the number left after whale distortions are removed and revenue sources are reconciled. The gap between those two numbers is where bad scaling decisions live.

IN THIS GUIDE
  • What ROAS cohort analysis actually measures and why it's not the same as the number in your MMP
  • Why raw D7 ROAS from AppsFlyer or Adjust is almost always wrong
  • How to calculate clean ROAS at D7, D14, and D30
  • ROAS benchmarks by genre: hypercasual, mid-core, RPG, casino
  • How to turn cohort ROAS data into scale, hold, or pause decisions

What ROAS cohort analysis actually measures

Raw ROAS is simple: revenue divided by spend. If you spent $10,000 acquiring a group of users and they generated $8,000 in revenue, your ROAS is 0.8x.

ROAS cohort analysis is more specific. It measures the revenue generated by a defined group of users, a cohort, typically grouped by install date, over a fixed time window after acquisition. D7 ROAS means the revenue that cohort generated in their first seven days, divided by what you spent to acquire them.

This distinction matters for one reason: cohort ROAS is the only number that tells you whether a specific campaign, network, or creative is actually profitable. Blended ROAS across all active users mixes new cohorts with old ones, high-LTV users with low-LTV ones, and profitable campaigns with money-losing ones. It tells you what happened overall. It doesn't tell you what to do next.

For a deeper foundation on cohort methodology, see our cohort analysis guide. This post focuses specifically on ROAS calculation and benchmarks.

ROAS cohort analysis answers one question: for the users I acquired with this campaign, on this date, from this network, did the money come back? That's the question every scaling decision should start from.

Why raw D7 ROAS from your MMP is almost always wrong

Your MMP (AppsFlyer, Adjust, or equivalent) calculates ROAS from the data it has access to: attributed installs and in-app purchase events reported through its SDK. That sounds complete. In practice, it misses several things that materially affect the number.

Problem 1: Store revenue doesn't reconcile

Your MMP reports IAP revenue based on purchase events fired through its SDK. Your App Store Connect and Google Play Console report actual settled revenue, after platform fees, refunds, and processing. These two numbers are never identical, and the gap is rarely random. It tends to skew in a consistent direction for your specific game and market mix.

If you're making scale decisions on MMP revenue without reconciling against store data, you're working with a number that's systematically off, often by 5-15%.

Problem 2: Attribution window gaps

Most MMPs attribute revenue to a campaign within a 1-7 day click window, depending on your settings. Revenue that comes in after the attribution window closes doesn't get credited to the campaign, even if those users installed directly because of it. For games with meaningful D14+ monetisation, this means your D7 ROAS from the MMP understates true campaign performance.

Problem 3: Whale distortion

This is the most expensive problem, and the least visible. A single high-value spender (a whale) in a cohort of 2,000-5,000 users can inflate that cohort's ROAS by 40-100%. Your MMP dashboard reports the number with the whale included. There's no flag, no warning. It just looks like a strong campaign.

The result: you scale into a cohort that would have shown 0.6x clean ROAS, because the reported number was 1.3x. By D30, the whale effect has diluted, the true ROAS is visible, and the budget is already spent. This is the core problem that mobile UA analytics at most studios fails to catch.

cohortful · roas_analysis · Meta_Lookalike_v3 ● live
analyst> run roas_analysis --campaign="Meta_Lookalike_v3" --interval=D7
↳ Installs: 3,841 · MMP revenue: $48,200 · Store revenue: $44,600
↳ Revenue gap: -7.5% · applying store-aligned figure
Whale detected: user_id #2274 · D7 spend $6,100 · cohort distortion +94%
D7 ROAS (MMP raw): 1.38x
D7 ROAS (store-aligned): 1.27x
D7 ROAS (clean): 0.65x // whale + store adjusted
Recommendation: PAUSE · does not meet payback threshold

The same campaign. Three different numbers. Only the clean ROAS leads to the right decision.


How to calculate clean ROAS at D7, D14, D30

Clean ROAS requires three adjustments to the raw MMP number, applied in order:

Step 1: Align revenue to store data

Pull settled revenue from App Store Connect and Google Play for the same install cohort and date range. Calculate the ratio between MMP-reported revenue and store-settled revenue, and apply it as a correction factor in your ROAS calculation. For most studios this ratio is stable within a narrow band; once you've measured it, you can apply it systematically.
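As a minimal sketch of this step in Python, using the figures from the Meta_Lookalike_v3 demo above (the spend figure is back-calculated from that example's ROAS output and is illustrative):

```python
def store_alignment_factor(mmp_revenue: float, store_revenue: float) -> float:
    """Correction factor that scales MMP-reported revenue to store-settled revenue."""
    if mmp_revenue <= 0:
        raise ValueError("MMP revenue must be positive")
    return store_revenue / mmp_revenue

def aligned_roas(mmp_revenue: float, store_revenue: float, spend: float) -> float:
    """ROAS after applying the store correction factor to MMP revenue."""
    return mmp_revenue * store_alignment_factor(mmp_revenue, store_revenue) / spend

# Figures from the example above; spend ($35,000) is a hypothetical
# back-calculated value consistent with the 1.38x raw ROAS shown.
factor = store_alignment_factor(48_200, 44_600)   # ~0.925, i.e. a -7.5% gap
roas = aligned_roas(48_200, 44_600, 35_000)       # ~1.27x store-aligned
```

Once measured, the factor can be applied systematically to daily MMP exports rather than recomputed per cohort.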

Step 2: Remove whale outliers

For each cohort, calculate the mean and standard deviation of per-user revenue. Flag any user whose D7 revenue exceeds the cohort mean by more than 3 standard deviations, and remove them from the cohort revenue total before calculating ROAS. Store the raw and clean numbers separately; you'll need both.

The 3-standard-deviation threshold is a starting point. For games with high natural revenue variance (casino and RPG), you may want to tighten it to 2.5. For hypercasual games with low IAP, 3 is usually sufficient.
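A minimal sketch of that outlier rule, assuming per-user revenue is already store-aligned; all figures are illustrative:

```python
from statistics import mean, stdev

def remove_whales(user_revenues: list[float], k: float = 3.0) -> tuple[list[float], list[float]]:
    """Split per-user revenue into clean users and flagged whales.

    A user is flagged when their revenue exceeds the cohort mean by more
    than k standard deviations (k=3 default; tighten toward 2.5 for
    high-variance genres like casino and RPG).
    """
    mu, sigma = mean(user_revenues), stdev(user_revenues)
    threshold = mu + k * sigma
    clean = [r for r in user_revenues if r <= threshold]
    whales = [r for r in user_revenues if r > threshold]
    return clean, whales

# Illustrative cohort: mostly non-payers, a few small spenders, one whale.
cohort = [0.0] * 950 + [4.99] * 40 + [49.99] * 9 + [6100.0]
clean, whales = remove_whales(cohort)   # flags only the $6,100 user
```

Note that a single extreme whale inflates the standard deviation itself; on very small cohorts a median-based rule can be more stable, but for cohorts in the thousands the simple rule above behaves well.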

Step 3: Calculate at each interval

With clean revenue, the calculation is straightforward:

Interval  | Revenue window        | Formula               | Primary use
D1 ROAS   | Day 0-1 post-install  | Clean D1 rev ÷ Spend  | Hypercasual early signal
D7 ROAS   | Day 0-7 post-install  | Clean D7 rev ÷ Spend  | Primary scale/pause trigger
D14 ROAS  | Day 0-14 post-install | Clean D14 rev ÷ Spend | Mid-core confirmation
D30 ROAS  | Day 0-30 post-install | Clean D30 rev ÷ Spend | Full payback assessment

Always store both raw and clean values. The delta between them is diagnostic data: a consistently large whale distortion in a specific channel is a signal about traffic quality, not just a data artefact.
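Putting steps 2 and 3 together, a sketch that stores raw and clean ROAS (and their delta) for each window; the revenue vectors are illustrative, and step 1's store alignment is assumed to have already been applied:

```python
from statistics import mean, stdev

def interval_roas(per_user_revenue: list[float], spend: float, k: float = 3.0) -> dict[str, float]:
    """Raw and whale-adjusted ROAS for one cohort at one revenue window."""
    mu, sigma = mean(per_user_revenue), stdev(per_user_revenue)
    clean_rev = sum(r for r in per_user_revenue if r <= mu + k * sigma)
    raw_rev = sum(per_user_revenue)
    return {
        "raw": raw_rev / spend,
        "clean": clean_rev / spend,
        "whale_delta": (raw_rev - clean_rev) / spend,  # diagnostic, not discardable
    }

# Illustrative cohort of 2,000 installs: cumulative per-user revenue
# at each window, with one whale growing over time.
spend = 10_000.0
windows = {
    "D7":  [0.0] * 1900 + [9.99] * 99 + [5500.0],
    "D14": [0.0] * 1880 + [9.99] * 115 + [6200.0],
    "D30": [0.0] * 1850 + [9.99] * 145 + [6800.0],
}
table = {w: interval_roas(rev, spend) for w, rev in windows.items()}
```

Run daily, a table like this per campaign is the raw material for the scale/pause framework later in this guide.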


D7 ROAS benchmarks by genre: what good actually looks like

There is no single good D7 ROAS. The right number depends entirely on your genre's LTV curve: specifically, how much of total D30 and D90 revenue is captured by D7. A genre with front-loaded monetisation (hypercasual or puzzle) has a very different D7 target than one with long-tail revenue (RPG, strategy, or casino).

The benchmarks below are based on industry patterns across mobile game genres. They reflect clean ROAS, after whale removal and store alignment. Raw ROAS from your MMP will typically read 20-100% higher.

Note: The figures below are illustrative benchmarks based on industry patterns, not guarantees. Real numbers vary significantly by game, market mix, monetisation model, and traffic source. Use these as a starting framework, then calibrate against your own historical cohort data.

HYPERCASUAL / CASUAL (ad-monetised, fast payback)
  • D1 ROAS target: 0.25-0.45x
  • D7 ROAS target: 0.65-0.90x
  • D7→D30 multiplier: 1.3-1.5x
  • Scale threshold (D7): ≥ 0.75x

MID-CORE (SHOOTER / STRATEGY) (IAP-driven, medium payback)
  • D1 ROAS target: 0.05-0.15x
  • D7 ROAS target: 0.45-0.75x
  • D7→D30 multiplier: 1.8-2.4x
  • Scale threshold (D7): ≥ 0.55x

RPG / 4X STRATEGY (long-tail monetisation)
  • D1 ROAS target: 0.03-0.10x
  • D7 ROAS target: 0.30-0.55x
  • D7→D30 multiplier: 2.5-4.0x
  • Scale threshold (D7): ≥ 0.35x

CASINO / SLOTS (high LTV, whale-heavy)
  • D1 ROAS target: 0.10-0.25x
  • D7 ROAS target: 0.50-0.80x
  • D7→D30 multiplier: 2.0-3.5x
  • Scale threshold (D7): ≥ 0.60x
Important: Casino and RPG genres have the highest whale distortion risk. A single high-value user in these genres can account for 10-30% of a cohort's total D7 revenue. Never make scale decisions on raw ROAS in these categories; clean ROAS is non-negotiable.
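The D7→D30 multiplier bands above can be used to project a rough D30 range from a clean D7 reading. A sketch, treating the table's illustrative bands as given:

```python
# D7 -> D30 multiplier bands (low, high), taken from the genre table above.
GENRE_MULTIPLIERS = {
    "hypercasual": (1.3, 1.5),
    "midcore":     (1.8, 2.4),
    "rpg":         (2.5, 4.0),
    "casino":      (2.0, 3.5),
}

def projected_d30(clean_d7_roas: float, genre: str) -> tuple[float, float]:
    """Project a (low, high) D30 ROAS range from clean D7 ROAS."""
    lo, hi = GENRE_MULTIPLIERS[genre]
    return clean_d7_roas * lo, clean_d7_roas * hi

# An RPG title at 0.40x clean D7: projected 1.0x-1.6x at D30,
# i.e. profitable even at the low end of the band.
lo, hi = projected_d30(0.40, "rpg")
```

These are range projections, not forecasts; your own measured D7→D30 curve should replace the generic bands as soon as you have enough cohorts.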

How to calibrate your own thresholds

The benchmarks above are starting points. Your real D7 scale threshold should be derived from your own historical data: take 6-12 months of cohorts where you know the D30 outcome, plot D7 clean ROAS against D30 clean ROAS, and find the D7 number that consistently predicts D30 profitability for your game. That number is your threshold, not a benchmark from a blog post.
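One simple way to run that calibration, sketched with hypothetical (D7, D30) clean-ROAS pairs; a regression or quantile fit over more cohorts would be more robust than this minimal rule:

```python
# Hypothetical historical cohorts: (clean D7 ROAS, clean D30 ROAS) pairs.
history = [
    (0.35, 0.80), (0.48, 0.95), (0.55, 1.05), (0.62, 1.20),
    (0.70, 1.35), (0.42, 0.88), (0.58, 1.10), (0.51, 0.98),
]

def calibrate_d7_threshold(history: list[tuple[float, float]],
                           d30_target: float = 1.0) -> float:
    """Smallest observed D7 value above which every cohort hit the D30 target."""
    candidates = sorted(d7 for d7, _ in history)
    for t in candidates:
        if all(d30 >= d30_target for d7, d30 in history if d7 >= t):
            return t
    return max(candidates)

threshold = calibrate_d7_threshold(history)   # for this data: 0.55
```

With this fabricated data the rule lands on 0.55x, because the 0.51x cohort finished at 0.98x D30 while everything at 0.55x and above cleared 1.0x; your own history will give a different number.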


Using cohort ROAS data to make scale and pause decisions

Clean cohort ROAS data is only useful if it drives a decision. Here's a practical framework for turning numbers into actions, applied daily, not at the end of the week.

Clean D7 ROAS | Signal                             | Default action                 | Next check
≥ 1.0x        | Above payback threshold            | Scale: increase budget 20-40%  | D14 to confirm trajectory
0.75x-1.0x    | On track for genre target          | Scale moderate: +10-20%        | D14 required before further scale
0.55x-0.75x   | Below threshold, possible recovery | Hold: no budget change         | D14 is the decision point
0.35x-0.55x   | Weak signal                        | Reduce: cut budget 30-50%      | D14 as final check before pause
< 0.35x       | Losing campaign                    | Pause: reallocate immediately  | Post-mortem: creative, audience, CPI

A few things this framework assumes that are worth making explicit:

  • Thresholds are genre-adjusted. A 0.65x D7 ROAS means different things for hypercasual vs. RPG. Calibrate the table to your game before applying it.
  • All ROAS values are clean. Never apply this framework to raw MMP numbers; the thresholds don't hold.
  • Decisions are made at campaign level, not blended. A blended 0.8x across 8 campaigns might hide 3 campaigns at 0.3x and 2 campaigns at 1.4x. Look at each one separately.
  • The decision is made the same day the data is available. A correct decision made three days late is a bad decision. Daily clean ROAS is what makes this framework operational.
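The framework above translates directly into a lookup. A sketch using the table's generic thresholds (calibrate them to your genre before using anything like this in production):

```python
def scale_decision(clean_d7_roas: float) -> str:
    """Map clean D7 ROAS to a default action, per the framework table.

    Thresholds are the table's generic values, not genre-calibrated.
    """
    if clean_d7_roas >= 1.0:
        return "SCALE: increase budget 20-40%; confirm at D14"
    if clean_d7_roas >= 0.75:
        return "SCALE MODERATE: +10-20%; D14 required before further scale"
    if clean_d7_roas >= 0.55:
        return "HOLD: no budget change; D14 is the decision point"
    if clean_d7_roas >= 0.35:
        return "REDUCE: cut budget 30-50%; D14 as final check before pause"
    return "PAUSE: reallocate immediately; post-mortem creative, audience, CPI"

decision = scale_decision(0.65)   # lands in the HOLD band
```

Run per campaign, per day, on clean numbers only; a blended input defeats the whole point of the table.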

What D14 tells you that D7 doesn't

D14 ROAS is the confirmation layer. If a campaign is in the hold zone at D7 (0.55x-0.75x), D14 resolves the ambiguity: either the cohort is monetising on a delayed curve and the trajectory is positive, or it's flat and the campaign is a slow loser. For mid-core and RPG games, D14 is often a more reliable primary signal than D7.

D30 is the final verdict. By D30, almost all meaningful monetisation from the cohort has occurred, except in very long-tail RPG and strategy games. If a campaign is still below 1.0x at D30 clean ROAS, it's definitively unprofitable, and the post-mortem should focus on why D7 didn't predict it.


How Cohortful automates clean ROAS cohort analysis

Run manually, the calculation above (store alignment, whale removal, cohort-level ROAS at D1/D7/D14/D30, every day) takes a UA analyst 15-20 hours a week. That assumes a working pipeline, clean MMP exports, and no edge cases. In practice, it takes longer.

Cohortful automates all of it. Upload your CSV exports from AppsFlyer, Adjust, Meta, Google, App Store, or Google Play, or connect via API, and your team gets clean cohort ROAS for every active campaign every morning. Whale distortions flagged and adjusted. Store revenue reconciled. Scale/pause recommendations surfaced before your standup.

No analyst queue. No spreadsheet that breaks when Meta changes an export format. No decision made on a number that was quietly wrong.

Clean D7 ROAS. Whale detection. Store revenue aligned. Scale/pause recommendations. Ready every morning.

Stop scaling on numbers
you can't fully trust.

Cohortful automates clean ROAS cohort analysis for mobile game studios: D7, D14, D30, whale-adjusted, store-aligned, every day.

See how Cohortful works