[ELI5] Fake accounts, why is it so hard to get rid of them?

I'm no software engineer, so I'd like to know why social media platforms rely mainly on user feedback/reports to get rid of fake accounts instead of software. I feel like we're at a point where AI could be used to identify fake accounts with parameters like repetitiveness of words, number of posts per hour/day, words used, age of account, content of the account, username, etc. Why is it so hard?

4 Answers

Anonymous 0 Comments

Reddit already has this technology working. They've said as much themselves in recent updates.

Anonymous 0 Comments

How do you know what platforms MAINLY rely on? How do you know they don't already use AI TOGETHER with human feedback?

While AI has advanced to the point where it can detect certain patterns and anomalies, it isn't perfect and still makes mistakes. False positives mean real accounts get wrongly flagged as fake, while false negatives mean fake accounts slip through undetected. Social media platforms also have to weigh privacy concerns and ethical considerations, such as the risk of algorithmic bias.

Moreover, relying solely on AI would require significant resources for research, development, and implementation. User reports, on the other hand, allow for a larger and more diverse crowd of users to help identify and remove fake accounts. This combination of software and user feedback provides a more comprehensive approach to detecting and mitigating the spread of misinformation.
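To make that tradeoff concrete, here's a rough Python sketch of the kind of heuristic scoring the question imagines. The signals, weights, and threshold are all made up for illustration and are not any platform's real system:

```python
# Hypothetical fake-account scorer: combines a few signals into one score.
# All signal names, weights, and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float        # posting rate
    age_days: int                # account age
    duplicate_text_ratio: float  # fraction of posts repeating earlier posts (0..1)

def suspicion_score(acct: Account) -> float:
    """Higher score = more bot-like."""
    score = 0.0
    if acct.posts_per_hour > 20:   # humans rarely sustain this rate
        score += 0.4
    if acct.age_days < 2:          # brand-new accounts are riskier
        score += 0.3
    score += 0.3 * acct.duplicate_text_ratio  # copy-paste spam
    return score

THRESHOLD = 0.6  # raise it and more bots slip through (false negatives);
                 # lower it and more real users get flagged (false positives)

def is_probably_fake(acct: Account) -> bool:
    return suspicion_score(acct) >= THRESHOLD

# A day-old account posting near-identical text 50 times an hour gets flagged.
print(is_probably_fake(Account(posts_per_hour=50, age_days=1, duplicate_text_ratio=0.9)))
```

Wherever you set that threshold, you're choosing between annoying real users and letting some bots through, which is exactly why human reports are still part of the pipeline.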

Anonymous 0 Comments

[removed]

Anonymous 0 Comments

> I feel like we're at a point where AI could be used to identify fake accounts with parameters like repetitiveness of words, number of posts per hour/day, words used, age of account, content of the account, username, etc.

You might feel that way, but it isn’t really true. (Or rather, it’s partly true, but it’s hard to do that reliably without catching some legitimate users, and good bots can get past it. Sites do in fact ban plenty of bots automatically, but you’ll notice the ones they miss.)

If nothing else, a lot of models these days are trained *adversarially*. That means instead of building one model to talk like a person, you build *two*: one that talks like a person, and one that detects machines trying to talk like people. You then train the talker to evade the detector until the detector can't reliably spot it anymore.
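Here's a toy Python/PyTorch sketch of that idea. The feature sizes and data are invented for illustration; real bot generators and detectors are far more complex:

```python
# Toy adversarial training loop: a "bot" generator learns to produce
# feature vectors a detector cannot distinguish from real users'.
# Everything here (features, sizes, fake data) is made up for illustration.
import torch
import torch.nn as nn

FEATURES = 16  # pretend embedding of an account's writing style

generator = nn.Sequential(            # "talks like a person"
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, FEATURES))
detector = nn.Sequential(             # "detects machines talking like people"
    nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_user_batch(n=64):
    # stand-in for features extracted from genuine accounts
    return torch.randn(n, FEATURES) + 1.0

for step in range(1000):
    real = real_user_batch()
    fake = generator(torch.randn(64, 8))

    # 1) Train the detector to separate real users (label 1) from bots (label 0).
    d_opt.zero_grad()
    d_loss = (loss_fn(detector(real), torch.ones(64, 1)) +
              loss_fn(detector(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator so the detector calls its output "real".
    g_opt.zero_grad()
    g_loss = loss_fn(detector(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After enough rounds, the detector's accuracy on generator output drifts
# toward chance, which is why detection alone is never fully reliable.
```

The same arms race plays out at scale: every improvement to the detector becomes a training signal for whoever is building the bots.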