Cohort Analysis for Mobile Games:
Complete UA Playbook (2026)
Most UA managers at mobile game studios are making five- and six-figure weekly budget decisions based on cohort data they don't fully trust. The numbers come from three different sources that don't align, and by the time someone cleans them up, the campaign window is gone. This playbook covers:
- What cohort analysis actually means for mobile game UA and what it doesn't
- How to read D7, D14, and D30 ROAS without getting burned by data distortions
- Why whale users silently destroy your cohort accuracy and how to catch them
- How to turn cohort data into a clear scale or pause decision
- What a clean cohort analysis workflow looks like end-to-end
What cohort analysis actually means in mobile UA
A cohort is a group of users who installed your game during the same time window, usually the same day or same week. Cohort analysis for mobile games means tracking how that group behaves over time: how much revenue they generate on Day 1, Day 7, Day 14, Day 30, and beyond.
For UA managers, this answers one question: did the money I spent acquiring these users come back, and when?
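Mechanically, that's all cohorting is: group users by install date, then sum each cohort's revenue by days since install. Here's a minimal sketch of that grouping in Python; the install and purchase events are illustrative, not a specific export format.

```python
from collections import defaultdict
from datetime import date

# Group users into daily install cohorts, then roll up each cohort's revenue
# by days since install. Events below are illustrative.
installs = {"u1": date(2026, 1, 10), "u2": date(2026, 1, 10), "u3": date(2026, 1, 11)}
purchases = [  # (user_id, purchase_date, amount)
    ("u1", date(2026, 1, 10), 2.99),
    ("u1", date(2026, 1, 16), 9.99),
    ("u2", date(2026, 1, 12), 4.99),
    ("u3", date(2026, 1, 11), 1.99),
]

cohort_revenue = defaultdict(lambda: defaultdict(float))  # install_date -> day index -> revenue
for user, purchase_date, amount in purchases:
    day = (purchase_date - installs[user]).days
    cohort_revenue[installs[user]][day] += amount

for install_date, by_day in sorted(cohort_revenue.items()):
    print(install_date, dict(sorted(by_day.items())))
```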
That sounds simple. In practice it isn't, because:
- Revenue data lives in the App Store and Google Play
- Install and attribution data lives in your MMP, such as AppsFlyer or Adjust
- Ad spend data lives in Meta, Google, TikTok, and other ad networks
- None of these systems agree on the numbers by default
Before you can analyze anything, you have to align three separate data sources, clean out attribution errors, handle currency and timezone mismatches, and remove statistical outliers. Most teams spend 15-20 hours a week doing exactly this, and by the time they're done, the data is already stale.
D7, D14, D30 ROAS: what each number tells you
ROAS measured at different cohort intervals tells you different things. Using the wrong interval for a decision is one of the most common and most expensive mistakes in mobile game UA.
| Interval | What it measures | Best for | Typical target |
|---|---|---|---|
| D1 ROAS | Immediate monetisation, Day 1 revenue vs. CPI | Hypercasual, ad-heavy games with fast payback | 0.3-0.6x |
| D7 ROAS | First week revenue signal, your most reliable early indicator | Most UA decisions, the primary scale/pause threshold | 0.8-1.2x |
| D14 ROAS | Mid-term monetisation, captures returning and paying users | Mid-core games, subscription models | 1.0-1.5x |
| D30 ROAS | Full payback signal, where most campaigns break even or profit | RPGs, strategy, casino, longer payback cycles | 1.2-2.0x |
The most important number for most UA managers is D7 ROAS. It's early enough to act on: campaigns are still running and you can still reallocate budget. And it's late enough to be meaningful: the Day 1 spike has settled and the first returning players have shown up.
The payback threshold problem
There's no universal "good" D7 ROAS number. A 0.8x D7 ROAS is catastrophic for a hypercasual game with no long-term monetisation. For a mid-core RPG with strong D30 LTV, the same number might mean you're on track for a 1.8x D30 return.
Your payback threshold depends on your genre, your LTV curve, and your target payback window. Before you set scale/pause rules, you need to know how D7 ROAS typically maps to D30 ROAS in your specific game. That calibration step is what most teams skip, and it's where most bad decisions come from.
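One way to do that calibration, sketched below on the assumption that you can export historical cohorts with both their D7 and D30 ROAS, is to take the median D30/D7 multiplier your game has actually produced and back out the D7 threshold it implies. The cohort figures are illustrative.

```python
from statistics import median

# Historical cohorts for one game as (D7 ROAS, D30 ROAS) pairs. Illustrative numbers.
historical = [
    (0.85, 1.45), (0.92, 1.60), (0.70, 1.15), (1.05, 1.80), (0.78, 1.30),
]

# Median D30/D7 multiplier observed in your own history.
multiplier = median(d30 / d7 for d7, d30 in historical)

def projected_d30(d7_roas: float) -> float:
    """Project D30 ROAS from an observed D7 ROAS using the historical multiplier."""
    return d7_roas * multiplier

# Back out the D7 ROAS you need today to hit a 1.2x D30 payback target.
d30_target = 1.2
d7_threshold = d30_target / multiplier

print(f"Median D7->D30 multiplier: {multiplier:.2f}")
print(f"Projected D30 for a 0.90x D7 cohort: {projected_d30(0.90):.2f}x")
print(f"D7 ROAS needed for a {d30_target}x D30 target: {d7_threshold:.2f}x")
```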
The biggest reason your cohort data is wrong: whale distortion
Here's a scenario that plays out constantly in mobile game studios.
You launch a campaign. D7 ROAS comes back at 1.4x. Strong signal, you scale. Two weeks later, D30 ROAS is 0.6x. The campaign was actually a disaster. What happened?
One user, or three: high-value spenders, whales, who installed from that campaign and spent hundreds or thousands of dollars in the first week. They inflated the cohort's revenue so dramatically that the ROAS looked profitable when it wasn't.
Put illustrative numbers on it: 2,000 installs at a $2.00 CPI is $4,000 in spend. Day 7 cumulative revenue of $5,640 reads as a 1.41x ROAS, but if a single whale contributed $2,880 of that, the other 1,999 users generated only $2,760. That's a real D7 ROAS of 0.69x, not 1.41x. The whale made a losing campaign look like a winner. Without whale detection in your cohort analysis, you'd have scaled into a money pit.
How to identify whale distortion manually
If you're doing this without automation, the process is:
- Export per-user revenue data for the cohort from your MMP
- Rank users by revenue and flag anyone generating more than 5-10x the cohort average
- Calculate ROAS with those users excluded
- If the gap between raw and clean ROAS is more than 15-20%, your reported number is unreliable
In a cohort of 5,000 users, this takes 30-45 minutes per campaign. If you're running 10 campaigns across 3 networks, you understand why it doesn't get done at the frequency it needs to.
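If you'd rather script the check than rebuild it in a spreadsheet, here's a minimal sketch of the same steps. It assumes a per-user D7 revenue export from your MMP in the form of user-to-revenue pairs; the whale multiple, gap threshold, and toy cohort are illustrative and should be tuned to your own data.

```python
# Sketch of the manual whale check: rank users by revenue, flag outliers,
# recompute ROAS without them, and compare the gap. Thresholds are illustrative.

def whale_check(user_revenue: dict[str, float], spend: float,
                whale_multiple: float = 7.0, gap_threshold: float = 0.15) -> dict:
    """Compare raw vs whale-excluded ROAS and flag the cohort if the gap is too large."""
    total = sum(user_revenue.values())
    avg = total / len(user_revenue)

    # Flag anyone generating more than `whale_multiple` times the cohort average.
    whales = {u: r for u, r in user_revenue.items() if r > whale_multiple * avg}

    raw_roas = total / spend
    clean_roas = (total - sum(whales.values())) / spend
    gap = (raw_roas - clean_roas) / raw_roas if raw_roas else 0.0

    return {
        "raw_roas": round(raw_roas, 2),
        "clean_roas": round(clean_roas, 2),
        "whales": list(whales),
        "unreliable": gap > gap_threshold,
    }

# Toy cohort: 20 ordinary users at ~$5 each plus one whale, $150 spend.
cohort = {f"u{i}": 5.0 for i in range(20)}
cohort["whale"] = 120.0
print(whale_check(cohort, spend=150.0))
```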
Aligning revenue data across MMPs, ad networks, and stores
Whale detection is only one part of the data quality problem. The other, and arguably more time-consuming one, is revenue alignment across sources.
Your MMP reports revenue attributed to campaigns. The App Store and Google Play report actual IAP revenue. Your ad networks report spend. These three numbers will never match exactly, for several reasons:
- Attribution windows: MMPs attribute installs within a 1-7 day click window. Revenue that comes in after the window closes doesn't get attributed to the campaign
- Currency conversion: stores settle in local currency, MMPs report in USD, and your reporting currency may differ again
- Organic overlap: some users would have installed without your ad. MMPs can attribute them to paid anyway
- Store reporting delays: App Store Connect can lag 24-48 hours on revenue confirmation
The result: a cohort's true D7 ROAS only becomes visible when you've pulled data from all three sources, resolved the discrepancies, and applied a consistent methodology. Without that, you're comparing different numbers and calling them the same thing.
Most teams have a spreadsheet for this. It takes hours. And it breaks every time Meta or Apple changes an export format.
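A scripted reconciliation pass doesn't have to be elaborate. The sketch below, with illustrative figures and an assumed 10% tolerance, just compares daily MMP-attributed revenue against store-reported revenue and flags days where the two diverge enough to distrust the cohort's ROAS.

```python
# Daily reconciliation check, assuming you can export daily cohort revenue from
# your MMP and from the store consoles into simple date -> amount mappings.
# Figures, field names, and the tolerance are illustrative.

mmp_revenue = {"2026-01-10": 480.0, "2026-01-11": 510.0, "2026-01-12": 390.0}
store_revenue = {"2026-01-10": 455.0, "2026-01-11": 620.0, "2026-01-12": 402.0}

TOLERANCE = 0.10  # flag any day where the two sources differ by more than 10%

for day in sorted(set(mmp_revenue) | set(store_revenue)):
    mmp = mmp_revenue.get(day, 0.0)
    store = store_revenue.get(day, 0.0)
    base = max(mmp, store) or 1.0  # avoid division by zero on empty days
    gap = abs(mmp - store) / base
    status = "FLAG" if gap > TOLERANCE else "ok"
    print(f"{day}  MMP={mmp:8.2f}  store={store:8.2f}  gap={gap:5.1%}  {status}")
```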
Building a cohort analysis workflow that actually scales
Here's what a clean, repeatable cohort analysis process looks like for a mobile game studio running UA across 3-5 channels.
Step 1: Standardise your data schema
Before any analysis, define a single schema that all sources feed into: install date, campaign ID, network, geo, platform, revenue by day, D1 through D30 minimum, and spend by day. Every source maps into this schema. Discrepancies get flagged, not silently merged.
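Here's a minimal sketch of what that schema can look like in code, assuming Python. The field names are an example, not a standard; the point is simply that every source maps into one shape before any analysis happens.

```python
from dataclasses import dataclass, field
from datetime import date

# One illustrative way to pin down the shared schema every source feeds into.
@dataclass
class CohortRow:
    install_date: date
    campaign_id: str
    network: str           # e.g. "meta", "google", "tiktok"
    geo: str               # ISO country code
    platform: str          # "ios" or "android"
    spend_by_day: dict[int, float] = field(default_factory=dict)    # day index -> spend in base currency
    revenue_by_day: dict[int, float] = field(default_factory=dict)  # day index -> revenue (D1..D30 minimum)

    def cumulative_revenue(self, through_day: int) -> float:
        """Total cohort revenue from install through the given day index."""
        return sum(v for d, v in self.revenue_by_day.items() if d <= through_day)

row = CohortRow(date(2026, 1, 10), "cmp_123", "meta", "US", "ios",
                spend_by_day={0: 4000.0},
                revenue_by_day={1: 900.0, 3: 1200.0, 7: 1500.0})
print(row.cumulative_revenue(7))  # 3600.0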
Step 2: Clean before you analyse
Run outlier detection before calculating any cohort metrics (a scripted sketch of this pass follows the list):
- Flag whale users: revenue more than 3 standard deviations above the cohort mean
- Identify installs with no attribution data: organic bleed into paid cohorts
- Remove test installs and internal traffic
- Resolve all currencies to a single base at the date of the event, not the date of reporting
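Here's a minimal sketch of that cleaning pass, assuming per-user records carrying a currency and an attribution flag. The FX table is simplified to one rate per currency (in practice you'd look up the rate for the event date), and the synthetic cohort is illustrative.

```python
import random
from statistics import mean, stdev

random.seed(7)

# Simplified FX table; in practice, resolve at the rate of the event date, not reporting date.
fx_to_usd = {"USD": 1.00, "EUR": 1.09}

# Synthetic cohort: 50 ordinary users, one EUR payer, one unattributed install, one likely whale.
users = [{"id": f"u{i}", "revenue": round(random.uniform(2.0, 9.0), 2),
          "currency": "USD", "attributed": True} for i in range(50)]
users.append({"id": "u50", "revenue": 4.5, "currency": "EUR", "attributed": True})
users.append({"id": "u51", "revenue": 6.0, "currency": "USD", "attributed": False})
users.append({"id": "whale", "revenue": 400.0, "currency": "USD", "attributed": True})

# 1. Drop installs with no attribution data (organic bleed into paid cohorts).
attributed = [u for u in users if u["attributed"]]

# 2. Resolve every amount to a single base currency (here USD).
for u in attributed:
    u["revenue_usd"] = u["revenue"] * fx_to_usd[u["currency"]]

# 3. Flag whales: revenue more than 3 standard deviations above the cohort mean.
amounts = [u["revenue_usd"] for u in attributed]
mu, sigma = mean(amounts), stdev(amounts)
whales = [u["id"] for u in attributed if u["revenue_usd"] > mu + 3 * sigma]

print(f"cohort mean ${mu:.2f}, flagged whales: {whales}")
```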
Step 3: Calculate ROAS at each interval
With clean data, calculate ROAS at D1, D7, D14, and D30. Store both raw and cleaned values: you'll need both for debugging and for understanding how much distortion your cohorts typically carry. Track at campaign level at minimum, and at ad set and creative level if your data volume supports it.
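A sketch of the interval calculation, assuming you already have the cohort's cumulative revenue by day (raw and whale-excluded) and its total spend. The D7 figures mirror the earlier whale example; the rest are illustrative.

```python
# ROAS at each cohort interval: cumulative revenue through day N divided by spend.
spend = 4000.0
cumulative_revenue_raw   = {1: 1400.0, 7: 5640.0, 14: 6400.0, 30: 7300.0}
cumulative_revenue_clean = {1: 1250.0, 7: 2760.0, 14: 3500.0, 30: 4400.0}

def roas_by_interval(cumulative_revenue: dict[int, float], spend: float) -> dict[str, float]:
    """Return {'D1': ..., 'D7': ..., ...} for whichever day indices are present."""
    return {f"D{day}": round(rev / spend, 2) for day, rev in sorted(cumulative_revenue.items())}

print("raw:  ", roas_by_interval(cumulative_revenue_raw, spend))
print("clean:", roas_by_interval(cumulative_revenue_clean, spend))
# Keep both: the gap between raw and clean tells you how much distortion this cohort carries.
```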
Step 4: Apply your scale/pause thresholds
Define your thresholds before you look at the numbers, not after. A practical starting framework for D7 ROAS:
| D7 ROAS (clean) | Signal | Default action |
|---|---|---|
| ≥ 1.0x | Above payback threshold | Scale, increase budget 20-40% |
| 0.8x - 1.0x | On track, monitor | Hold, check again at D14 |
| 0.6x - 0.8x | Below threshold, possible recovery | Hold or reduce, depending on your D30 LTV curve |
| < 0.6x | Losing campaign | Pause, reallocate budget immediately |
These are starting-point thresholds. Calibrate them against your game's historical D7→D30 ROAS correlation. The principle is universal: decide the rule before you look at the data.
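Expressed as code, the starting-point table becomes a simple rule function. The cutoffs below just mirror that table and should be recalibrated against your own D7→D30 history; the campaign figures are illustrative.

```python
# Map a cleaned D7 ROAS to a default action, using the starting-point thresholds above.
def default_action(clean_d7_roas: float) -> str:
    if clean_d7_roas >= 1.0:
        return "scale: increase budget 20-40%"
    if clean_d7_roas >= 0.8:
        return "hold: on track, check again at D14"
    if clean_d7_roas >= 0.6:
        return "hold or reduce: depends on your D30 LTV curve"
    return "pause: reallocate budget immediately"

for campaign, d7 in {"cmp_a": 1.12, "cmp_b": 0.86, "cmp_c": 0.69, "cmp_d": 0.41}.items():
    print(f"{campaign}: D7 {d7:.2f}x -> {default_action(d7)}")
```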
Step 5: Run daily, not weekly
Cohort analysis done weekly is nearly useless for UA budget decisions. By the time you've run the analysis and reviewed it in a Monday meeting, the campaign has been running on the wrong setting for five days.
The cadence that actually protects budget is daily, which means the cleaning and alignment process has to be automated, or it simply won't happen at the frequency it needs to.
Common cohort analysis mistakes UA managers make
Using raw ROAS without outlier removal
The most expensive mistake. Even a single whale in a mid-size cohort can swing ROAS by 40-100%. Always check for distortion before making a scale decision. If you only do one thing differently after reading this, make it this.
Comparing cohorts across different traffic quality periods
A December cohort and a January cohort are not directly comparable. CPIs shift, audience composition shifts, in-game seasonality affects early monetisation. Cohort benchmarks need to be calibrated to the same time window and traffic source.
Waiting for D30 data before acting
By D30, your campaign has been running for a month. If it was a loser, you've already burned the budget. D7 ROAS exists precisely so you can act before you have the full picture. Learn your D7→D30 correlation for your game and trust it.
Treating cohort analysis as a reporting exercise
A cohort dashboard shared in a Monday meeting is not cohort analysis; it's a report. Cohort analysis is a decision tool. If it's not driving a scale, pause, or budget reallocation action, the process isn't working.
Running analysis on blended data
A 1.0x blended D7 ROAS across all campaigns might mean five campaigns at 1.4x and three campaigns at 0.4x. You need campaign-level, ideally ad-set level, data to make real decisions. Blended numbers hide everything that matters.
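A quick illustration of how blending hides the decision: the same healthy-looking blended number can come from a mix of strong and losing campaigns, and only the per-campaign view tells you what to do. The figures are illustrative.

```python
# Blended vs per-campaign D7 ROAS on illustrative spend/revenue figures.
campaigns = {
    "cmp_a": {"spend": 3000.0, "revenue": 4200.0},   # 1.40x
    "cmp_b": {"spend": 2500.0, "revenue": 3500.0},   # 1.40x
    "cmp_c": {"spend": 2000.0, "revenue":  800.0},   # 0.40x
    "cmp_d": {"spend": 2500.0, "revenue": 1500.0},   # 0.60x
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_revenue = sum(c["revenue"] for c in campaigns.values())
print(f"blended D7 ROAS: {total_revenue / total_spend:.2f}x")  # looks fine on its own

for name, c in campaigns.items():
    print(f"{name}: {c['revenue'] / c['spend']:.2f}x")          # this is what drives the decision
```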
What good cohort analysis looks like in practice
Here's what a UA manager at a mobile game studio running $200K/month in UA should be able to see every morning, without a data ticket or a 3-hour spreadsheet session:
- D7 ROAS for every active campaign, cleaned for whale distortion, aligned across MMP and store
- Which campaigns are above threshold, on track, or below threshold
- Any anomalies flagged automatically: a whale, an attribution spike, an unusually high CPI day
- Revenue breakdown by geo, network, and creative where data volume allows
Getting to that view manually requires a data analyst running a multi-step pipeline every morning. Most studios don't have that person dedicated to UA analytics, or that person is rebuilding the same spreadsheet every week.
The studios that get this right, with clean, daily cohort data and clear recommendations, consistently make better budget decisions over time. The compounding effect of not scaling bad campaigns and not killing good ones early is significant at any meaningful UA spend level.
That's what automated cohort analysis solves: giving every UA team the analytical clarity that used to require a dedicated analyst and a full data pipeline.
Stop waiting three days for yesterday's numbers.
Cohortful cleans your data, detects whale distortions, aligns your MMP and store revenue, and delivers D7 ROAS recommendations every morning. No analyst required.
See how Cohortful works