How do web crawlers and other engines not constantly get infected with viruses?

By constantly downloading random information from the internet, wouldn’t you be exposing yourself to tons of malicious content? Aren’t there pages that can run malware without you even clicking on anything?

A better example than search engines might be something like the Wayback Machine, a site that actually saves the pages themselves, not just links to them.

6 Answers

Anonymous

No, not really. Modern browsers are pretty resilient: they generally don't trust the code on a page and limit what it can do. Loopholes still happen, but they get patched quickly. That's the first line of defense.
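
For a crawler specifically, the simplest version of "don't trust the code on the page" is to never execute it at all. Here's a minimal sketch (not any real search engine's code) of a fetcher that treats a page as inert data: it downloads the HTML and pulls out links, but never runs the page's JavaScript, so script-based exploits never get a chance to run.

```python
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href attributes from <a> tags; <script> contents are ignored."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(url):
    # The page body is treated as data: parsed, never executed.
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    print(crawl("https://example.com"))
```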

Then, they run the crawler code under a restricted user account, so the operating system will refuse it any access to system files. That's the second line.
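
Here's a rough sketch of what that second layer can look like on a Unix-like system, assuming a hypothetical low-privilege account named "crawler"; real setups usually do this with a service manager or container rather than by hand.

```python
import os
import pwd

def drop_privileges(username="crawler"):
    # "crawler" is a hypothetical unprivileged account created beforehand.
    pw = pwd.getpwnam(username)
    os.setgroups([])        # drop supplementary groups
    os.setgid(pw.pw_gid)    # drop group privileges first...
    os.setuid(pw.pw_uid)    # ...then user privileges (order matters)

def main():
    drop_privileges()       # the process must start as root for this to work
    try:
        open("/etc/passwd", "a")
    except PermissionError as e:
        print("OS refused access to a system file:", e)

if __name__ == "__main__":
    main()
```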

Finally, if the malicious code somehow finds a loophole in the browser AND THEN a loophole in the OS, it gets to live, but only until the next system wipe.
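
And a sketch of that last layer, using a throwaway container as the "system" that gets wiped: the image name "crawler-image" and its "crawl" command are made up for illustration, and the container is discarded as soon as the job finishes.

```python
import subprocess

def crawl_in_sandbox(url):
    # Each crawl job gets a fresh container; --rm deletes it on exit,
    # which is the automated version of "the next system wipe".
    subprocess.run(
        [
            "docker", "run",
            "--rm",              # discard the container when the job ends
            "--read-only",       # even inside, the filesystem is immutable
            "--user", "nobody",  # restricted account inside the container too
            "crawler-image",     # hypothetical image containing the crawler
            "crawl", url,        # hypothetical crawler entry point
        ],
        check=True,
        timeout=300,
    )

if __name__ == "__main__":
    crawl_in_sandbox("https://example.com")
```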
