How to achieve high ROAS of advertising campaigns for a mobile RPG game

Introduction

For most companies, Return on Ad Spend (ROAS) is the key metric for evaluating the effectiveness of ad campaigns. Often (but not always), product teams plan to recoup over 100% of ad spend by day 30, 90, or even 180. To evaluate campaign performance earlier and make the adjustments needed to reach those profit goals, ROAS is also calculated for days 1, 3, and 7, and this early ROAS becomes the main KPI for UA teams.
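As a quick illustration of how day-N ROAS is computed for a cohort, here is a minimal sketch; the spend and revenue figures are hypothetical and not taken from this project.

```python
# Minimal sketch of cohort ROAS; the numbers below are hypothetical examples.
def roas(ad_spend: float, revenue_by_day: dict[int, float], day: int) -> float:
    """Return ROAS (%) for a cohort by the given day since install."""
    revenue = sum(v for d, v in revenue_by_day.items() if d <= day)
    return revenue / ad_spend * 100

# Example: a cohort acquired for $1,000 with revenue accruing over time.
spend = 1_000.0
revenue_by_day = {1: 150.0, 3: 120.0, 7: 260.0, 30: 580.0}

for horizon in (1, 3, 7, 30):
    print(f"ROAS D{horizon}: {roas(spend, revenue_by_day, horizon):.0f}%")
# ROAS D1: 15%, D3: 27%, D7: 53%, D30: 111%
```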

This goal was also set for our team as part of ADVGO42’s collaboration with a well-known RPG mobile game: we needed to achieve at least a 52% return on ad spend by the 7th day of the cohort for each traffic source we used in the acquisition.

This article walks through our case study, detailing the methods and tools that helped us reach this goal. To learn more about our case studies, visit (link to the case studies section).

Launching and Testing First Hypotheses

We chose the geographies for the launch based on our experience, market knowledge, and historical ARPPU data. These were the US, DE, FR, and WW (worldwide) for Facebook and the US, DE, FR, Eastern Europe, and WW for Google Ads.

We decided to optimize for the first_purchase event (paying users), since the cost of this event was one of our KPIs and the account did not yet have enough statistics for ROAS optimization. On Google Ads we ran UACs, while on Facebook we started with 1% look-alike and AAA campaigns. Each campaign's budget was set to allow for at least 50 target conversions per week.
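As a rough illustration of that budget rule (the target cost per first_purchase below is a hypothetical placeholder, not the project's actual KPI):

```python
# Rough sketch: minimum budget needed to feed the algorithm ~50 target events per week.
# The target cost per first_purchase is a hypothetical placeholder, not the project's KPI.
target_cost_per_purchase = 40.0   # e.g. $40 per first_purchase (assumed)
min_weekly_conversions = 50       # rule of thumb used to size each campaign's budget

min_weekly_budget = min_weekly_conversions * target_cost_per_purchase
min_daily_budget = min_weekly_budget / 7

print(f"Minimum weekly budget per campaign: ${min_weekly_budget:,.0f}")   # $2,000
print(f"Minimum daily budget per campaign:  ${min_daily_budget:,.0f}")    # $286
```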

In the first stage, the Google Ads campaigns met all our KPIs, with the US, DE, and WW campaigns performing best. In Eastern Europe we ran into low volumes, which we addressed by increasing the bid. In France, ROAS fell short of the KPIs, so we optimized the creatives and reduced the bid.

Facebook Ads campaigns met the KPIs on small budgets (up to 200 euros per day), but at the planned spending volumes they no longer met the KPIs for cost per paying player and ROAS. We corrected performance by adjusting budgets (not bids, since we used auto-bidding on Facebook) and turning off inefficient creatives.

Campaign Optimization and Reaching the KPIs

While the first stage was running, we prepared new hypotheses: for Google Ads, aimed at scaling; for Facebook Ads, aimed at meeting the KPIs first and then scaling.

In Google Ads, we launched ROAS-optimized campaigns for WW, US, and DE alongside the existing ones, while France and Eastern Europe were discontinued due to low performance.

The WW ROAS and US ROAS campaigns reached a 52% return on ad spend by the 7th day roughly a week after launch, allowing us to buy at higher volumes while still meeting the KPIs. The DE ROAS campaigns were discontinued: their ROAS never stabilized at the KPI level, which made scaling impossible.

Due to poor performance, we discontinued the France campaigns on Facebook and launched:

  • 5% standard look-alikes for US, WW, and DE;
  • 5% look-alikes built from the top quantile of payers for US, WW, and DE;
  • 1% value-based look-alikes for WW.

A few days later, we launched three campaigns with campaign budget optimization and several ad sets that differed in the audience (broad, lal 1%, lal 5%, interests).

The WW campaigns with 1% value-based look-alikes optimized for value, as well as the campaign with mixed ad sets, achieved a 52% ROAS by the 7th day at stable spending volumes. The other campaigns were eventually stopped.

Issues We Faced and How We Solved Them

Google Ads 

During the second month, while we were meeting the KPIs, a product test was underway that required us to turn off mono (single-geo) campaigns targeting tier-1 geos and to exclude these geos from WW campaigns for the duration of the test. This directly impacted campaign performance.

To address the issue, we:

  • reduced bids on some campaigns;
  • created duplicate campaigns;
  • launched payment optimization testing in addition to payer and ROAS optimization.

This helped maintain performance and volumes at their previous levels, although we later disabled payment optimization campaigns because their effectiveness was lower than that of other campaigns.

Facebook

In the first month, the campaigns did not yet meet the KPIs at the required volumes: they generated traffic but fell short of the target ROAS.

To solve this, we tested 23 combinations of audience, geo, and optimization type before finding approaches that paid off the testing and allowed us to keep buying within the KPIs.
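For context, such a test matrix can be enumerated as a simple grid, as in the sketch below; the audiences, geos, and optimization types listed are illustrative and do not reproduce the exact 23 combinations from this case.

```python
# Illustrative test grid: audience x geo x optimization type.
# These values are examples only; they do not match the exact combinations we ran.
from itertools import product

audiences = ["broad", "lal 1%", "lal 5%", "value-based lal 1%"]
geos = ["US", "DE", "WW"]
optimizations = ["purchase", "value (ROAS)"]

combinations = list(product(audiences, geos, optimizations))
print(f"{len(combinations)} combinations to prioritize and test")  # 24 in this example
for audience, geo, optimization in combinations[:3]:
    print(audience, geo, optimization)
```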

Conclusion & Recommendations

Based on our experience, we have prepared some tips for you:

  • Continuously test different hypotheses until you find combinations that yield positive results.
  • Create hypotheses based on previous statistics, including data from other sources with similar metrics (such as CPM or registration conversion rate), user behavior, and the inventory of advertising platforms (for example, not all platforms support look-alike audiences).

    Also, when formulating hypotheses, try to define in advance the conditions under which the test can be concluded and its results evaluated. These conditions follow from an understanding of how the source's algorithm works and, therefore, how much data it needs. For example, Google's algorithm typically learns over two weeks, after which metrics such as CPI, CPA, and ROAS often stabilize and fluctuations decrease.

    You need at least 50 target events per week to train the algorithm. If, after two weeks, the metrics do not yet meet the KPIs but show a clear positive trend, you can continue the test and fine-tune the campaigns with bids, budgets, or creatives. However, if you realize that you are far from the KPIs (how far is acceptable is a judgment call based on experience), consider the test unsuccessful and turn it off. One way to encode this continue-or-kill decision is sketched after this list.
  • Scale campaigns when they consistently meet KPIs over several weeks.
  • Test creative concepts, including gameplay videos, teasers, and user-generated content (UGC). In our case, gameplay-based creatives performed the best.
  • When working with creatives, it is a good practice to allocate a larger portion of the budget to variations of the most successful creatives and a smaller portion to testing new approaches.
  • Launch tests with small budgets and bids. If you run multiple hypotheses simultaneously with high bids, there’s a high chance of overspending. Launching with lower bids and budgets is less risky.
  • In addition to standard bid/budget optimization and creative testing, experiment with non-standard settings (attribution windows, optimization events, audiences, etc.).
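To make the continue-or-kill decision mentioned above more concrete, here is a minimal sketch of one way it could be encoded; the two-week window, the trend check, and the acceptable gap to the KPI are assumptions for the example, not a universal rule.

```python
# Illustrative continue-or-kill rule for a campaign test after its learning period.
# The thresholds (14 days, trend check, max gap to KPI) are assumptions for this example.

def should_continue_test(daily_roas: list[float], kpi_roas: float,
                         max_gap: float = 0.25) -> bool:
    """Decide whether to keep fine-tuning a test campaign.

    daily_roas: daily D7 ROAS readings as fractions (0.40 = 40%).
    kpi_roas:   target ROAS, e.g. 0.52 for 52% by day 7.
    max_gap:    how far below the KPI we tolerate if the trend is clearly positive.
    """
    if len(daily_roas) < 14:          # give the algorithm ~2 weeks to learn
        return True
    latest = daily_roas[-1]
    if latest >= kpi_roas:            # already meeting the KPI: keep and consider scaling
        return True
    first_week_avg = sum(daily_roas[:7]) / 7
    last_week_avg = sum(daily_roas[-7:]) / 7
    trending_up = last_week_avg > first_week_avg
    # Continue only if the trend is positive and the gap to the KPI is not too large.
    return trending_up and (kpi_roas - latest) <= max_gap

# Example: 14 days of readings climbing from 20% toward 46% against a 52% KPI.
readings = [0.20 + 0.02 * d for d in range(14)]
print(should_continue_test(readings, kpi_roas=0.52))  # True: below KPI but trending up
```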