Even though IGG was buying what appeared to be legitimate traffic, user value had palpably degraded over time. In some cases, a user would seem to download an app, open it and then quit a second or two later, never to return. In others, fraudsters would use a VPN to make it look like their installs were coming from the US – where CPIs are higher – when they were actually coming from Southeast Asian IP addresses.
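Those two patterns – the instant-quit install and the geo-spoofed install – are simple enough to express as rules. As a rough illustration only (the field names and two-second cutoff are assumptions, not IGG's actual detection logic), a naive rule-based check might look like this:

```python
# Illustrative sketch, NOT IGG's or DataVisor's actual code: flag installs
# matching the two patterns described above -- a first session lasting only
# a second or two, or a claimed US install whose IP geolocates elsewhere
# (e.g., a VPN masking a Southeast Asian source).

def flag_suspicious_installs(installs):
    """Return install records that match either naive fraud rule.

    Each record is a dict with hypothetical fields:
      first_session_secs -- length of the first app session, in seconds
      claimed_country    -- country reported through the ad network
      ip_country         -- country derived from the install's IP address
    """
    flagged = []
    for install in installs:
        quit_immediately = install["first_session_secs"] <= 2
        geo_mismatch = install["claimed_country"] != install["ip_country"]
        if quit_immediately or geo_mismatch:
            flagged.append(install)
    return flagged

installs = [
    {"id": 1, "first_session_secs": 1, "claimed_country": "US", "ip_country": "US"},
    {"id": 2, "first_session_secs": 340, "claimed_country": "US", "ip_country": "VN"},
    {"id": 3, "first_session_secs": 600, "claimed_country": "US", "ip_country": "US"},
]

print([i["id"] for i in flag_suspicious_installs(installs)])  # → [1, 2]
```

The catch, as the next paragraphs explain, is that rules like these are exactly what fraudsters learn to route around.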
Although IGG has access to device-level information about its users, it doesn't have the ability to run the kind of analysis necessary to determine which installs are fake, partly because it's difficult to diagnose fraud by looking at an individual user's behavior.
It’s within the realm of normal behavior, for example, for a user to install an app and ignore it for a while or to download it and not use it at all. At the same time, some fraudsters write scripts that mimic retention, opening the app several times to make it look like real use. In more extreme cases, perpetrators will actually make small in-app purchases at a price lower than the CPI to throw detection companies off the scent.
And because these practices are new, it’s hard for machine learning models, which rely on specific information about past attacks, to train themselves and stay ahead of the problem, said Fang Yu, CTO and co-founder of fraud-detection and analytics company DataVisor. The company works with IGG to detect fraudulent installs in its user base.
“Rules are difficult to maintain because attack patterns change frequently,” Yu said. “And once you do train a model, the fraudsters have already changed their patterns.”
DataVisor uses a method it calls unsupervised detection to identify patterns of suspicious activity. Rather than trying to ferret out individual fake installs, DataVisor looks at the entire user population – IGG shares all of its user logs with DataVisor – to create clusters of fake installs that demonstrate similar signatures.
“It’s not economically viable for fraudsters to create one or two fake installs per user account,” Yu said. “Typically, bad actors have hundreds of thousands of accounts, and because those accounts are not controlled by humans, their activity is highly correlated.”
In other words, DataVisor looks for signals that allow it to associate multiple fake installs with a single source, whether that’s a particular ad network or publisher. IGG then uses that information to clean up its user base and collect makegoods from ad networks that charged for fraudulent installs.
Depending on the game title and the mobile ad network IGG was using, between 10% and 20% of the paid installs coming through were found to be fraudulent.
IGG also provides the data it gets back from DataVisor to its ad network partners so they can use it as ammunition to turf out shady publishers if they so choose. Some networks are receptive to IGG's efforts; others, not so much.
“They may not care or ignore it because they want income now,” Yu said. “But it will hurt their reputation – and their revenue – in the long run.”
Although some social and ecommerce apps experience install fraud, the problem is far more acute in the gaming sector, mainly because games are such prodigious buyers of installs.
And as long as the cash is flowing, the incentive to scoop it up will also be there, whether that’s by a fraudster or by an ad network unwilling to be self-reflective.
“Unlike financial fraud – hacking into someone’s PayPal account or running a Western Union phishing scheme – app-install fraud is easier to scale, and the payout is good,” Zhang said. “Also unlike financial fraud, there’s little risk it’s going to be deeply investigated. At least for now, this is an industry problem rather than a problem for the police.”