Known as “the front page of the internet,” Reddit is a social news aggregation and discussion platform that curates hot topics and trending news.
With 430 million monthly active users, Reddit ranks as the 14th most popular website in the world, and it heavily influences the opinions of online users and how they see the current state of the world.
Like all social media sites, Reddit attracts a high number of bots. But the unique nature of the platform allows them to thrive and take control of specific narratives. This is because Reddit works on a system of upvotes and downvotes: users vote on whether they like posts or not, and the most popular content reaches the front page.
It’s an easy structure to manipulate: all it takes is a program that clicks ‘upvote’ over and over again from a pool of accounts, allowing Reddit bots to promote specific agendas and reach millions of people.
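To see why that works so well, consider the “hot” ranking formula Reddit open-sourced years ago. The sketch below is a toy calculation based on that old formula (the live algorithm has since changed, and the example numbers are invented), but the shape of the math shows why a few early fake upvotes go such a long way:

```python
from math import log10

def hot(score: int, age_seconds: float) -> float:
    """Relative 'hot' rank, following Reddit's old open-sourced formula.

    The real function counts seconds since a fixed epoch; measuring a
    post's age backwards from 'now' instead preserves the same ordering.
    """
    sign = 1 if score > 0 else -1 if score < 0 else 0
    order = log10(max(abs(score), 1))
    return sign * order - age_seconds / 45000  # 45,000 s = 12.5 hours

organic = hot(100, 12.5 * 3600)  # 100 real upvotes, earned over 12.5 hours
botted = hot(10, 0)              # 10 fake upvotes, cast at submission time
print(organic, botted)           # 1.0 1.0 -- the two posts rank equally
```

Because score is logarithmic while age is linear, ten fraudulent upvotes cast at submission time buy the same rank as a hundred genuine upvotes earned over the previous 12.5 hours.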
Bot-riddled foundations
Bots are knitted into the fabric of Reddit in a way that they aren’t on other social platforms. While some are helpful and fun (there’s a Reddit bot that replies “yum!” when someone mentions the word spaghetti), many strive to appear human in order to twist public conversations in their favor.
This is damaging for users who may not recognize their sinister motivations, for advertisers, and for the world’s political and social discourse.
When you look at how Reddit started, it’s easy to see why it still has a severe problem with fake accounts. Co-founder Steve Huffman revealed that in its early stages, the platform was purposefully pumped with fake profiles that would regularly post comments to make it appear more popular than it was, stating that “internet ghost towns are hardly inviting places.”
Huffman claims that by using fake users to post high-quality content, they could “set the tone for the site as a whole.”
And what he says is true, but perhaps not in the way he intended: Reddit’s bot-riddled foundations have leaked into what the platform has become today.
The idea that Reddit was designed for bots extends into its UX. The front page, where the most popular content sits, often serves as a list of headlines for users to scan through, rather than click on and dive further into.
As a result, readers commonly take things at face value and award upvotes to content they haven’t read. This causes other users to trust the post without reading it, fueling further fake news and misinformation.
This cycle of giving clout to things at face value makes it easy for fake users to gain traction and recognition.
Reddit bot motivations and behaviors
To ensure fair play, Reddit relies on moderators (almost all of whom are volunteers) to regulate the content and comments. However, this system makes it easy for Reddit bots to thrive; it’s a lot of work for a mod to dig into individual accounts to see if they’re real or not.
Unless there’s something obviously wrong, Reddit bots can slide under the radar with little resistance.
Some savvy moderators have noticed particular trends among Reddit bots, which they use to identify other fake accounts.
Speaking on r/technology, one moderator notes: “These bots post relative (albeit recycled) content. So usually, mods have no reason to look closer until you realize that the same content is getting recycled every ~2 weeks or so. So upon taking a closer look, you will notice all of these accounts follow the exact same trend, some obvious, some not so obvious.”
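That recycling cadence is something moderator tooling could check for mechanically. Here’s a minimal sketch (a hypothetical helper, not any real mod tool) that flags a post whose normalized title resurfaces within roughly two weeks; real detection would also need fuzzy matching, since bots often tweak recycled titles slightly:

```python
import hashlib

seen: dict[str, float] = {}   # normalized-title hash -> first time seen (epoch secs)
TWO_WEEKS = 14 * 24 * 3600    # the rough recycling cadence the moderator describes

def normalize(title: str) -> str:
    # Lowercase and collapse whitespace so trivial edits still match.
    return " ".join(title.lower().split())

def looks_recycled(title: str, posted_at: float) -> bool:
    """True if an identical (normalized) title resurfaces within ~2 weeks."""
    key = hashlib.sha256(normalize(title).encode()).hexdigest()
    if key in seen:
        return 0 < posted_at - seen[key] <= TWO_WEEKS
    seen[key] = posted_at
    return False
```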
Fake account usernames are typically two random words stitched together, as though they’re being generated automatically from a word list.
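As a rough illustration (a heuristic with invented example names, not a reliable classifier), that naming shape can be screened for with a simple pattern match:

```python
import re

# Illustrative heuristic only: it matches the "two stitched-together words,
# optional digits" shape described above (e.g. "CrispyWaffle42" or
# "Crispy_Waffle_42"). Real moderation would weigh this alongside account
# age, karma, and posting history rather than trusting the name alone.
TWO_WORD_PATTERN = re.compile(r"^[A-Z][a-z]+[_-]?[A-Z][a-z]+[_-]?\d{0,4}$")

def suspicious_username(name: str) -> bool:
    return bool(TWO_WORD_PATTERN.match(name))

for name in ["CrispyWaffle42", "Ok_Banana_9921", "dave"]:
    print(name, suspicious_username(name))  # True, True, False
```

On its own this would misfire on plenty of genuine users, which is why moderators pair it with the account signals described next.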
From there, the Reddit bot will look to age its account, build a comment history, and accumulate comment karma, as these things make an account appear more reputable.
These accounts can then be hired by whoever wants to push their own agenda, as two Redditors discovered when they set out to see how easy it was to cheat Reddit’s system.
Gaming the system
As highlighted, Reddit appears to be an easy system to take advantage of. But two Redditors wanted to find out how far they could manipulate the platform with a little bit of money and no prior knowledge of how it works.
Their results proved that with just $200, some dodgy contacts, and a willingness to mislead people, they could achieve pretty much anything, including making fake news go viral on Reddit’s front page.
The experiment also revealed insights into the type of automated technology fraudsters have developed to game Reddit’s system.
One person they spoke to showed them a program that automatically uses thousands of proxies to log in and out of bought accounts, letting him charge for guaranteed, genuine-looking upvotes on whatever content his clients want, all while evading Reddit’s bot detection systems.
Unsurprisingly, businesses have quickly caught on to the fact that they can pay to harness the power of a Reddit bot army. In fact, there’s a whole subreddit dedicated to calling out companies that create bots to advertise themselves in comment sections: r/HailCorporate.
As illegitimate advertising grows in popularity and bots continue to control the narrative, genuine users suffer the consequences.
Filters down to advertisers
Reddit isn’t taking the bots on its site seriously, and they remain very easy to create. It takes no more than one bot that looks like a legitimate Reddit account, posting and commenting across forums, to sway public opinion.

The impact bots are having on real-world conversations, and on the public’s perception of events, is damaging. That damage filters down to those trying to use the space in legitimate ways, such as paid advertisers.
If you’re not protecting your ads from the hostile forces on Reddit, you’ll likely see the problems Reddit bot armies cause firsthand.
Say goodbye to wasted ad spend
Discover how Lunio can help you eliminate invalid ad clicks and maximize paid media performance