Facebook has a bot problem.
That much has been undeniable for years.
The social media giant regularly reports the sheer number of fake accounts it removes each quarter, with the figure standing at 5.6 billion last year alone. That’s the equivalent of 71% of the world’s population.
Bots pose a particularly tricky problem for social media platforms because they’re only getting smarter: humans now struggle to tell other users and bots apart.
While it’s a huge problem, it appears that Facebook has given up trying to eliminate them entirely and is, instead, working towards normalizing bots and data scraping.
So when it comes to advertising on the platform, it’s understandable that advertisers are hesitant to trust the legitimacy of the traffic they’ll receive.
Meanwhile, advertisers who have been running ads on Facebook for years have come to accept losing 20% of their budget to invalid traffic.
That’s unacceptable.
Are bots something we should get used to, and proactively protect ourselves from, or should Facebook be doing more to stop them?
A bot’s agenda
The scale of the problem is hard for users to recognize because bots are built to emulate human behavior.
Bots do everything real people do: post content, send friend requests, click ads.
Not only does this make for a poor user experience, but as an advertiser it leaves you wary of where your ad budget is going. And bots do more than leave spam comments or make scams look legitimate: they influence real-world events.
Social media has always been wielded as a political tool. It’s a place where people spend hours every day, where online discussions heavily influence voters’ behavior, and so bots are used to steer political conversations.
In one example, a single bot farm was found to contain 13,775 unique Facebook accounts, all of which were used to spread disinformation and sow division among users.
An army of realistic fake profiles gives its owner a great deal of power. Social media thrives on herd mentality, so the ability to artificially inflate the perceived support for a cause or topic has a dramatic impact on the world and can be auctioned off for a hefty price.
Is Facebook doing enough?
No.
When it comes to invalid traffic, Facebook’s policy states: “We cannot control how clicks are generated on your ads. We have systems that attempt to detect and filter certain click activity, but we are not responsible for click fraud, technological issues, or other potentially invalid click activity that may affect the cost of running ads.”
This means you’re on your own when it comes to validating the legitimacy of your ad clicks.
As for curbing political influence, Facebook has attempted to improve transparency by publicly showing how much an advertiser spends on ads related to political and social issues.
But this only covers brands that are openly paying to push a specific agenda anyway; bot-fueled conversations that sway the average user’s views aren’t going to be reported here.
The same trend can be seen in how Facebook reports on removing bots. At first glance, the huge number of fake accounts it regularly removes could suggest it’s on top of the situation.
In reality, this is just the tip of the iceberg: Facebook is good at publicizing data that paints it in a positive light while keeping less flattering figures in the shadows.
Being able to report the removal of billions of fake accounts makes great headlines, showing Facebook to be proactive and to care about users’ privacy and safety.
However, Facebook’s VP Alex Schultz claims that this isn’t the metric you need to look at: “the number for fake accounts actioned is very skewed by simplistic attacks... the prevalence of fake accounts is a more telling metric.”
Fake accounts currently make up approximately 5% of Facebook’s active users, or around 90 million profiles. So, despite its sophisticated algorithms, Facebook is still missing a huge number of fake profiles. What’s worse, the accounts being missed are those actively pursuing nefarious activities.
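To put those figures in perspective, here’s a rough, back-of-the-envelope sketch in Python using only the numbers quoted above (the 5% prevalence, the roughly 90 million live fakes, and the 5.6 billion removals). The implied totals are estimates for illustration, not Facebook’s own reporting.

```python
# Back-of-the-envelope sums using only the figures quoted in this article.
# The implied totals are illustrative estimates, not official Facebook data.

remaining_fakes = 90_000_000        # fake accounts still live (~5% prevalence)
prevalence = 0.05                   # the "prevalence" metric Schultz points to
removed_last_year = 5_600_000_000   # fake accounts removed in a single year

# The prevalence figure implies an active-account base of roughly:
implied_active_accounts = remaining_fakes / prevalence
print(f"Implied active accounts: {implied_active_accounts:,.0f}")  # ~1,800,000,000

# Removals dwarf the fakes that slip through, which is why the removal
# count alone says little about how many bots are still live on the platform.
print(f"Removed vs. still-live fakes: {removed_last_year / remaining_fakes:.0f}x")  # ~62x
```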
How Facebook deals with bots
The types of bots that target Facebook can be split into two camps, and Schultz’s comment highlights the difference.
First, there are simplistic, automated bots that attempt to create accounts and are frequently rebuffed by Facebook’s AI.
Then there are more complex ‘manual fakes’: accounts handcrafted by fraudsters to fool even the most sophisticated detection tech.
Because these are run by real people, they don’t follow a predictable ‘bot’ pattern, which makes them difficult for Facebook’s software to pick up. Their aim is to spread disinformation, create conflict, and push social and political agendas.
Facebook is missing many of these bots because it has been forced to rely on AI to detect, manage, and block them; its previous approach simply depended on genuine users reporting fake activity, a system that quickly proved inefficient.
Facebook also has a fine line to walk between blocking all illegitimate activity and mistakenly locking out real users. The margin for error is small, as the backlash Instagram faces whenever it restricts legitimate accounts shows.
As a result, even its black-and-white automated system has grey areas as it tries to stay in the public’s favor.
And as the platform leans more heavily on automated detection, it has handed fraudsters a blueprint for avoiding discovery. It also gives users a false sense of security: the high number of fake accounts Facebook reports blocking suggests it has caught them all.
Its current algorithm is reported to be 97% accurate when classifying accounts as fake or legitimate. Yet it can’t detect the narrative an account is pushing, and it doesn’t work against ‘manual fakes’. As a result, Facebook is taking its eye off the scariest bots and the influence they wield.
Wariness around advertising
If bots are being used to amplify particular messaging, advertising is the perfect space for them to gather. You can see this in the way they click on irrelevant ads and post nonsensical comments.
As an advertiser, you’ll see bot accounts inflate your ad impressions, distort your analytics, generate fake leads, and skew your retargeting.
It’s only natural for advertisers to be wary of trusting social media platforms with their ad budgets: the platforms aim to generate clicks of any description, and bots fit that description.
As its handling of bots shows, Facebook isn’t doing enough to protect your ads.
Lunio’s solution protects your paid ads from invalid traffic on all ad platforms, including Facebook, Google, Twitter, Instagram, Snapchat, Reddit, and more. This means you can be sure your ad budget is being spent on real clicks and actual customers.
Furthermore, we protect your ads from being seen by bots and other nefarious traffic sources, provide you with full transparency into every click, and even use cross-platform data to inform your other campaigns.
For example, we can detect an illegitimate source on Facebook and protect your Google ads from it.
Facebook bots are only getting smarter, so audit your Facebook ads today and see how they may be affecting your ROI.
Say goodbye to wasted ad spend
Discover how Lunio can help you eliminate invalid ad clicks and maximize paid media performance