The original Roko’s Basilisk was a thought experiment posted by a user named Roko on the LessWrong forum. It used decision theory to argue that a future all-knowing, otherwise benevolent AI would inevitably end up torturing anyone who knew about the idea of the AI but didn’t actively work to bring it into existence. The logic is that such an AI would want to start existing as soon as possible, so by credibly threatening retroactive punishment, it incentivizes everyone who learns of the idea to help create it sooner. The danger supposedly lies in the knowledge itself: once you know of the idea, you can no longer plead ignorance.
More broadly, the term “Roko’s Basilisk” is now sometimes used for any knowledge that is inherently dangerous to the person holding it, for example a monster that supernaturally hunts down and kills anyone who learns of its existence. There’s no evidence that any such entity exists or ever will, so no, the idea is not itself dangerous.