With all the high technology development, why can’t bots check boxes that say “I am not a robot”?

So I'm building tools both to stop bots and to defeat anti-bot systems, so I can walk through why that checkbox works and how.

First, it stops the most basic bots. For the checkbox to appear at all, the page has to run some JavaScript, and basic bots don't run JavaScript, so for them it never appears. Clicking the box also sends back a unique code, which the server checks again when you perform the real action you came for (logging in, posting, etc.), so a bot can't just skip the checkbox and fake the submission.
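
A minimal sketch of that flow (the endpoint and element IDs here are made up for illustration):

```javascript
// Client side: this script has to run for the checkbox to work at all,
// so bots that never execute JavaScript fail at step one.
document.getElementById('not-a-robot').addEventListener('click', async () => {
  // Ask the server for a one-time token tied to this session.
  const res = await fetch('/api/challenge-token', { method: 'POST' });
  const { token } = await res.json();
  // Stash the token in the form; the server checks it again when the
  // real action (login, post, etc.) is submitted, so a bot can't skip
  // the checkbox and forge the submission.
  document.getElementById('challenge-token').value = token;
});
```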

It stops slightly less basic bots by checking your mouse movements, as well as how you click: where, for how long, and so on. A bot would have to program in human-like randomness and also emulate the correct clicking behavior.
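
A rough sketch of that tracking; the timing-variance check is one illustrative signal out of the many a real system would combine:

```javascript
// Record recent mouse positions with timestamps. A human path has
// irregular timing and curvature; a bot that teleports the cursor or
// moves in perfectly even steps stands out.
const trail = [];
document.addEventListener('mousemove', (e) => {
  trail.push({ x: e.clientX, y: e.clientY, t: performance.now() });
  if (trail.length > 200) trail.shift(); // keep only the recent path
});

// One crude signal: variance in the time between movement samples.
// Humans are jittery; naive bots emit events at a fixed interval,
// so their variance is suspiciously close to zero.
function timingVariance(points) {
  if (points.length < 3) return 0;
  const gaps = points.slice(1).map((p, i) => p.t - points[i].t);
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  return gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length;
}
```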

Mediocre bots use what's called a "headless browser", which is essentially Firefox or Chrome without an actual UI for you to interact with; it's driven entirely by code. The problem for the bot is that even though the JavaScript now loads, the mouse-tracking checks above still apply. And once those are faked, the bot still has to convince the detector it's a real browser: headless browsers implement most features of their UI counterparts, but not all of them, and those gaps give them away.
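
For example, a few classic tells a page can probe from JavaScript (these are well-known older signals; current headless builds patch some of them):

```javascript
// None of these is conclusive on its own; real detectors combine
// dozens of such signals into a score.
function headlessSignals() {
  return {
    // Automated browsers set this flag per the WebDriver spec.
    webdriver: navigator.webdriver === true,
    // A normal desktop Chrome has plugins; early headless builds had none.
    noPlugins: navigator.plugins.length === 0,
    // Headless environments often report an empty language list.
    noLanguages: !navigator.languages || navigator.languages.length === 0,
    // The `window.chrome` object exists in real Chrome but was missing
    // in early headless versions.
    noChromeObject: /Chrome/.test(navigator.userAgent) && !window.chrome,
  };
}
```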

Stepping up even further, some bots patch those gaps, but for large-scale use they also need to rotate the UserAgent (a string sent with every request that tells the site what browser and version you're running) and hope the features they emulate behave exactly as they did in that version. Part of the detection is testing features against the claimed version to see whether they act properly. A made-up example: say Chrome 100 reliably makes "0.1 + 0.2" equal "2.9999998" while Chrome 101 makes it equal "2.2998"; if your UserAgent claims Chrome 100 but your math comes out like Chrome 101, you're spoofing.
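
A sketch of that version-consistency idea. The probe below uses `Array.prototype.toSorted`, which as far as I know shipped around Chrome 110; treat the exact cutoff as an assumption:

```javascript
// Parse the Chrome major version the UserAgent claims.
function claimedChromeVersion() {
  const m = navigator.userAgent.match(/Chrome\/(\d+)/);
  return m ? parseInt(m[1], 10) : null;
}

// If the UA claims an old Chrome but the engine has a newer feature
// (or vice versa), something is being spoofed.
function versionLooksConsistent() {
  const version = claimedChromeVersion();
  if (version === null) return true; // not claiming to be Chrome
  const hasToSorted = typeof Array.prototype.toSorted === 'function';
  return hasToSorted === (version >= 110);
}
```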

Stepping up even further is something I'm working on that detects network-level differences. It's the same idea as above, but instead of JavaScript features we fingerprint how different operating systems and browsers make network connections. With this, if someone keeps using the same program we can reliably recognize it, and we can also detect VPNs and proxies.
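
A server-side sketch of the idea, with a hypothetical `parseUserAgent()` helper and made-up hash values; the best-known public version of this technique is JA3, which hashes the TLS ClientHello (cipher suites, extensions, curves) into a fingerprint that is stable per browser/OS combination:

```javascript
// Illustrative table only: map each browser/OS combo to the TLS
// fingerprint it is known to produce.
const KNOWN_FINGERPRINTS = {
  'chrome-120-windows': 'aa56c...', // not real hashes
  'firefox-121-linux': '9f2b1...',
};

function matchesClaim(tlsFingerprint, userAgent) {
  // Hypothetical parser reducing a UA string to 'chrome-120-windows' etc.
  const claim = parseUserAgent(userAgent);
  // A mismatch means the UserAgent is lying about the network stack,
  // which is typical of bots and of traffic relayed through other machines.
  return KNOWN_FINGERPRINTS[claim] === tlsFingerprint;
}
```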

That also brings me to IP and network reputation. Services like [Maxmind.com](https://Maxmind.com) maintain databases of IP ranges, who owns them, and how they're used, along with any reports about them. An IP that belongs to a hosting provider is almost never a home user, so we can safely auto-ban those.
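
A sketch of such a gate, assuming a hypothetical `lookup()` wrapper around one of those databases:

```javascript
// Ordinary visitors come from residential or mobile ISPs. An IP owned
// by a hosting company is almost always a server, i.e. a bot.
async function shouldAutoBan(ip) {
  const info = await lookup(ip); // hypothetical helper over an IP database
  return info.usageType === 'hosting';
}
```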

Finally, something to know: getting past reCAPTCHA is possible and fairly trivial. That's why I'm developing something new that, so far, no bot maker I can find has a counter for, and that is genuinely hard to bypass. Our site uses reCAPTCHA for the time being, but attackers do get past it fairly easily, and in our own testing we can bypass it too. It's only good for stopping non-dedicated attackers; if you're being specifically targeted, the attacker will likely have a bypass solution.
