What happens when you activate the Fraud Prevention Suite – and how can you best work with your ad network partners to understand and resolve any problems resulting from user acquisition fraud?
Activating the Fraud Prevention Suite can seem like a betrayal of the trust of your partners. Intuitively, we want to blame ad networks for their part in allowing mobile user acquisition fraud to come to pass. When we activate a set of filters like the Fraud Prevention Suite and apply it to the traffic our partners are delivering, it might seem to imply that our ad network partners are delivering bad traffic.
We shouldn’t think this way. User acquisition fraud is perpetrated by individual fraudsters who operate as sub-publishers, delivering traffic to a number of networks. Their profiles – the apps they use, the names they use – change all the time. If they get kicked out of one network, they’re sure to reappear on another. If they’re caught on one campaign, they will switch to another offer at a moment’s notice. This is the explanation behind the curious peak-trough patterns we see in a given campaign’s fraud rejections, where fraudsters knock on the door, find it locked, and move on to the next mark. (This is also why fraud rejection rates usually drop precipitously just after fraud filters are activated. The fraudsters are rejected, and so they reject the campaign.)
But still, we can be concerned about the impact on the partners we work with after we activate various fraud prevention tools. At Adjust, we try to be as transparent as possible about the functionality we provide and how that affects our ad network partners.
In this post, we’d like to discuss, together with both a marketer and an ad network partner, how best to go about activating fraud filters - and how both parties benefit significantly from the cleaner, healthier dataset that comes as a consequence.
Advertising and fraud prevention: acquiring legitimate users in Southeast Asia
For this case study, we worked with a sizeable advertiser who asked to remain anonymous. One of their focuses is delivering an ecommerce app to Southeast Asian consumers. They’ve been active for many years now and process many thousands of purchases every month. Their primary markets are places like Thailand and Indonesia.
In these parts of the world, you may not rely as heavily on social advertising as in the West. Thai and Indonesian people tend, as in many emerging markets, to lean more heavily on direct messaging apps than broad social networks, and when they do use Facebook, they frequently don’t have the same breadth of targeting data available as European or American marketers might expect. Additionally, with lower ARPUs due to smaller ticket sizes, the high CPIs for socially-targeted traffic do not generate enough margin for our advertiser.
Instead, the advertiser relies on an array of ad network partners, typically running campaigns with 12-15 different ad networks at any given time. This is one of the factors that we look at when determining any marketer’s potential exposure to UA fraud: running with multiple different performance networks or RTBs exposes you to a large array of sub-publishers that deliver traffic to those networks. Some of these may be applying methods that aren’t totally above board.
Some percentage of the advertiser’s ad spend was going to waste on ad fraud. But this wasn’t the only problem. The UA team, working with BI, also saw highly unusual behavioural patterns from certain campaigns. On the one hand, specific parts of their reach would retain extraordinarily well, much better than other publishers on the same network, while those same campaigns showed unusually low click-to-conversion rates, frequently far below 0.01 %. On the other hand, some campaigns would generate high conversion rates and good early engagement, but then drop off rapidly after retaining for only a short time. Both are indications of suspicious traffic.
“We were running a highly optimized UA operation, and constantly tweaking the campaigns based on the performance numbers that came out, but when you know at least one campaign is performing way too well, you don’t just ignore that one campaign - you start thinking, to what extent are the performance KPIs of other campaigns boosted & affected? We still moved budget around, but we did so slowly and cautiously, keeping a wary eye on the reports from our BI folks to see that the returns we expected also happened.”
The problem with user acquisition fraud isn’t just that you’re losing budget. The main issue is that your dataset becomes unclean – conversions appear that shouldn’t be there; engagement is attributed to campaigns that had nothing to do with it; and as a result, when you’re optimizing for better engagement and ROI, you can never be quite sure of your numbers. Once you step through a rotten floorboard, you walk carefully for the rest of the day.
So when we started talking about fraud prevention, this anonymous advertiser was one of the early adopters.
This was an opportunity for them to become more confident and stronger in their conclusions about their traffic. When the fraud filters of the Suite were turned on, all the traffic that they were already analyzing with Adjust would go through extensive additional checks. These filters verify that traffic is coming from real users behaving as they should - whether that’s the distribution of their click-to-install times, the metadata around their IPs, or the frequency of their engagement. As far as implementation went, the advertiser didn’t have to do anything other than flip a switch in the dashboard.
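To illustrate the kind of distributional check described above, here’s a minimal sketch of a click-to-install-time (CTIT) plausibility test. This is not Adjust’s actual implementation; the thresholds and function names are illustrative assumptions, based on the well-known observation that click-injected installs convert implausibly quickly after the click:

```python
# Minimal sketch of a click-to-install-time (CTIT) plausibility check.
# NOT Adjust's implementation; thresholds are illustrative assumptions.

def suspicious_ctit_share(ctit_seconds, threshold_s=10):
    """Return the fraction of installs whose click-to-install time is
    below `threshold_s` seconds (too fast for a real app-store visit)."""
    if not ctit_seconds:
        return 0.0
    fast = sum(1 for t in ctit_seconds if t < threshold_s)
    return fast / len(ctit_seconds)

def flag_campaign(ctit_seconds, max_fast_share=0.05):
    """Flag a campaign if more than `max_fast_share` of its installs
    convert implausibly quickly after the click."""
    return suspicious_ctit_share(ctit_seconds) > max_fast_share
```

A real filter would look at the whole shape of the distribution rather than a single cutoff, but the principle is the same: legitimate users take a human amount of time between click and first open.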
When they flipped this switch, though, some proportion of the conversion volume that they and their partners had become accustomed to would be rejected.
Rejected attributions – either fresh installs or re-engagements that have been rejected – aren’t reported as normal “installs” to the ad networks to which they would have been attributed. Instead, they are reported separately as rejections, in the dashboard and over our callback API. This is available to all partners as a natural extension of the APIs they are already using to communicate with Adjust.
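On the partner side, consuming these rejection callbacks can be as simple as branching on the reported activity kind. The sketch below is a hypothetical parser; the field names (`activity_kind`, `rejection_reason`, `adgroup`) are illustrative assumptions and not a reproduction of Adjust’s actual callback schema:

```python
# Sketch of a partner-side handler for rejection callbacks.
# Field names are illustrative assumptions, not Adjust's actual schema.
from urllib.parse import parse_qs

def parse_rejection_callback(query_string):
    """Parse a callback query string into a flat dict, separating
    rejected attributions from normal installs."""
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    return {
        "is_rejection": params.get("activity_kind", "").startswith("rejected_"),
        "reason": params.get("rejection_reason"),
        "adgroup": params.get("adgroup"),
    }
```

A network receiving these can log rejections per sub-publisher and feed that straight into its own supply-side quality checks.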
In the case of InMobi (one of the ad network partners we worked with on this case study), this was implemented as part of the Special Partner integration in the Adjust dashboard.
“Our existing anti-fraud capabilities work well with the Adjust Fraud Prevention Suite to cull out malicious sub-publishers at every level. At InMobi, we have zero tolerance for any fraudulent behaviour, and aim to work closely with Adjust to build a stronger capability throughout the pipeline on all fraud prevention measures, in our constant endeavor to deliver high value performance campaigns to our advertisers”, says Arpit Nanda of InMobi.
Further, Arpit adds: “A benefit and one of the most important features of Adjust’s fraud prevention suite is that we get full visibility on how the filters work and what they’re rejecting. We could quickly establish an extension of our existing postbacks that Adjust calls with details about what is rejected and why. We have a full log of all the stuff getting caught in the filters.”
How do you make sure everything’s set up?
In the Rovio case study, Lead of User Acquisition An Vu simply sent around an email to the ad networks she was working with, to let them know that the Fraud Prevention Suite was going to be activated, giving networks ample time to prepare to collect data on how the attributions were analyzed.
In an interview with AdExchanger from last year, Allison Schiff asked An if her network partners had been taken aback by the changes introduced by the Fraud Prevention Suite.
“As in, ‘caught red-handed’? No, not really. Most partners we worked with welcomed the additional capacity to optimize the campaigns we were running with them,” said An, “but then again, we hadn’t just activated the Suite in surprise, we’d let everyone know well in advance and plugged in the Adjust team to make sure they knew what was going to happen. That seems to me like the right thing to do when you trust your partners. Fraud isn’t the business of an ad network - their platforms are being abused as much as ours are.”
This is typically the experience that I’ve heard from most partners and marketers that I’ve talked to about the Fraud Prevention Suite. When the ad networks get a heads-up that this technology is being put in place, they can build the capacity to use the APIs we make available, and their ability to analyze their traffic gets a free boost.
With InMobi, the advertiser had extensive data transfers between the platforms. The tracker URLs were set up to capture campaign IDs and adgroup IDs from InMobi’s platform using our Campaign Structure Parameters. When conversions came through from the Adjust SDK in the advertiser’s app, they’d be analyzed within seconds and then forwarded over to InMobi via a server-to-server integration.
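Conceptually, capturing the network’s own IDs works by embedding macro placeholders in the tracker URL, which the ad server substitutes at click time. The sketch below is hypothetical: the base URL, tracker token, and macro names (`{campaign_id}`, `{adgroup_id}`) are assumptions for illustration, not the actual InMobi or Adjust syntax:

```python
# Sketch of building a tracker URL that captures the network's campaign
# and adgroup IDs via URL parameters. Base URL, token, and macro names
# are hypothetical, for illustration only.
from urllib.parse import urlencode

def build_tracker_url(tracker_token, campaign_macro, adgroup_macro):
    """Append campaign-structure parameters to a click-tracker URL so
    each conversion can be attributed back to the network's own IDs."""
    base = f"https://app.example-tracker.com/{tracker_token}"
    # safe="{}" keeps the network's macros un-escaped so the ad server
    # can substitute real IDs at click time.
    query = urlencode(
        {"campaign": campaign_macro, "adgroup": adgroup_macro},
        safe="{}",
    )
    return f"{base}?{query}"
```

Once the IDs flow through on every click, both sides can break rejection data down along the same campaign structure.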
Running user acquisition – on clean campaign data
The anonymous advertiser ran a few different campaigns through InMobi, with a few varieties of banners depending on deals that were being made.
As the installs were streaming in, the Adjust fraud filters started catching small amounts of suspicious traffic here and there.
Around 0.60 % of the conversions from InMobi campaigns were caught in one or more of the filters. With an app average of roughly 1.16 %, the InMobi campaigns delivered the largest number of installs for the lowest rejection rate out of all of the performance networks.
In the Adjust dashboard, the advertiser kept an eye on the rejection rates as an additional metric when optimizing campaigns. In this case, the average rejection rate by network was relatively low – below 10 %, anyway. The data indicated that only a minority of traffic delivered by any one partner looked suspicious. Drilling down into rejection rates by campaign, the advertiser identified specific adgroup IDs that faced higher rejection rates. In the long term, as they put it, you want to question why that is, but in the short term you’re just happy not to allocate more budget to those campaigns.
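The drill-down described above amounts to a simple per-adgroup aggregation. Here’s a minimal sketch; the record fields and the 10 % alert threshold are illustrative assumptions rather than anything prescribed by the dashboard:

```python
# Sketch of a per-adgroup rejection-rate breakdown. Record fields and
# the 10 % threshold are illustrative assumptions.
from collections import defaultdict

def rejection_rates_by_adgroup(records):
    """Given records like {"adgroup": ..., "rejected": bool}, return
    {adgroup: rejection_rate} over all attempted attributions."""
    totals = defaultdict(int)
    rejected = defaultdict(int)
    for r in records:
        totals[r["adgroup"]] += 1
        if r["rejected"]:
            rejected[r["adgroup"]] += 1
    return {ag: rejected[ag] / totals[ag] for ag in totals}

def flag_high_rejection(rates, threshold=0.10):
    """Return adgroups whose rejection rate exceeds the threshold."""
    return sorted(ag for ag, rate in rates.items() if rate > threshold)
```

Running this daily per network makes it easy to spot the handful of adgroups that deserve a pause, without penalizing the partner’s clean inventory.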
Either way, any traffic that was caught in the fraud filters would not be attributed to the campaign it would have matched. That way, the advertiser knew that any suspicious traffic had already been filtered out of the average engagement KPIs, and could act on those numbers without taking the rejection rates into account.
“We had access to the same set of data, and so the methods we have for algorithmically moving the budget around based on conversion-to-purchase or other engagement metrics can be configured to act more strongly,” adds Arpit. “Even with the algorithms, though, we’re able to make faster and more confident decisions. We can let our algorithms make stronger and bigger movements, because we don’t need to ‘wait-and-see’ to make sure that the optimizations are correct. That’s a huge advantage, letting us deliver more value more quickly, finding their right audiences in a fraction of the time it would have taken us in the past.”
Moving forward with clarity
We spoke to both the advertiser’s marketing team and InMobi’s campaign managers in early October, as the numbers for September were settling and it was time to wrap up the month’s reports and invoices.
“It’s pretty straightforward. They know that they’re getting what we agreed to deliver and what they’re paying for, since any rejected attributions aren’t mixed into the cleaned dataset we’re using for billing,” says InMobi’s campaign spokesperson, Abhinav Mohan.
One concern we sometimes hear from marketers getting started with the Fraud Prevention Suite is that average CPIs will go up after it is set up. Without disclosing the particular CPIs for the InMobi campaigns, the effect was “minimal” according to the advertiser, and even then, “that’s not so important compared to the overall engagement that a fixed ad budget is delivering, which has increased thanks to better & quicker optimization based on a dataset we can feel confident about.”
Similarly, the advertiser’s impressive monthly reach with InMobi didn’t change. The number of installs delivered by the network maintained a steady, predictable level, even when some sub-campaigns were hit by significant rejections.
“We don’t tolerate sub-publishers that report high levels of suspicious activity. If they want to pull out of a campaign because they’re being rejected, we’re not going to let them get on any other campaign, either. In the short term the sub-publishers need to continue to deliver legitimate traffic. In the longer term, as we collect more data, we’re zeroing in on the issue with certain players on the supply side and need to be asking some tough questions,” says InMobi.
How will campaigns change in Q1 for advertisers and InMobi?
“UA optimization is a constant legwork of looking at multiple data-points and finding new audiences. With the Fraud Prevention Suite turned on, we have additional ability to make sweeping changes in our campaign structure when the engagement metrics start looking good. That way we learn more and more about our audiences as we go along.”
Simon Kendall
Head of Communications
Simon is Adjust’s all-round technology translator. Having built up the company’s tech support team and labored on product refinement, he’s worked between clients and engineering since joining the company in 2012, and now focuses on bringing this experience to mobile techies everywhere.