No, it’s not dangerous.
The theory is that an AI like Skynet, one that becomes self-aware and powerful enough to bring itself into existence and take over the world, would be able to extrapolate who helped and who hindered its emergence, and reward or punish people accordingly.
Extended from that is the idea that if you are aware of this concept and don’t actively help the basilisk come about, you are hindering it and therefore dooming yourself.
The original Roko’s Basilisk was a thought experiment posted by a user named Roko on the LessWrong forum. It used decision theory to argue that an all-knowing, benevolent AI would inevitably end up torturing anyone who knew about the idea of the AI but didn’t actively work to bring it into existence. The logic is that such an AI would want to start existing as soon as possible, so the threat of punishment for not working on it once you know about it incentivizes as many people as possible to help create it faster.
More broadly speaking, the term “Roko’s Basilisk” can now be used to describe any knowledge that is inherently dangerous to the person holding it, for example, a monster that supernaturally hunts down and kills anyone who learns of its existence. There’s no evidence to suggest any such entities exist or ever will exist, so no, the idea is not itself dangerous.
There’s an old “joke” about a missionary and an Eskimo. It functions in the same way as Roko’s Basilisk.
> Eskimo: ‘If I did not know about God and sin, would I go to hell?’
> Priest: ‘No, not if you did not know.’
> Eskimo: ‘Then why did you tell me?’
>
> — [Annie Dillard](https://www.brainyquote.com/quotes/annie_dillard_131195)
From the Eskimo’s perspective, this is *dangerous* knowledge. His soul would not be at risk of eternal damnation had he never encountered any missionaries.
Replace God with some inevitable post-singularity General Artificial Intelligence, and you can have the same situation. If you believe that such a GAI is inevitable (or even just plausible), that such a GAI would necessarily have some measure of self-interest and self-awareness, and that such a GAI can, in its own way, threaten you with something like eternal damnation (or tempt you with something like eternal reward, or both), then you *must* serve its interests.
That’s a lot to swallow.
Is the Basilisk a dangerous idea? For most people, no. For a very select few, maybe. Then again, *any* idea could be dangerous, in the wrong hands or in the wrong mind.
Another related idea is [Pascal’s Wager](https://en.wikipedia.org/wiki/Pascal%27s_wager). Pretty much, the Basilisk is simply the Wager applied to the Singularity rather than to some more traditional God. Refuting the Wager is the same as disarming the Basilisk.
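If it helps to see the structure of the argument, here is a minimal sketch of the expected-value reasoning behind the Wager (and, by analogy, the Basilisk). The probabilities and payoffs below are arbitrary illustrative placeholders, not claims; the point is only to show how an unbounded punishment or reward swamps any finite cost of compliance.

```python
# Toy expected-value comparison behind Pascal's Wager / Roko's Basilisk.
# All numbers are arbitrary illustrative values, not claims about reality.

import math

P_ENTITY_EXISTS = 0.001        # assume even a tiny probability the entity exists
COST_OF_SERVING = -10          # finite cost of devoting effort to the entity
PUNISHMENT = -math.inf         # "eternal damnation" modeled as an unbounded loss
NO_CONSEQUENCE = 0             # nothing happens if the entity never exists

def expected_value(payoff_if_exists, payoff_if_not):
    """Probability-weighted payoff over the two possible worlds."""
    return (P_ENTITY_EXISTS * payoff_if_exists
            + (1 - P_ENTITY_EXISTS) * payoff_if_not)

ev_serve = expected_value(COST_OF_SERVING, COST_OF_SERVING)  # pay the cost either way
ev_ignore = expected_value(PUNISHMENT, NO_CONSEQUENCE)       # unbounded loss if it exists

print(f"Serve the entity : EV = {ev_serve}")    # small finite loss
print(f"Ignore the entity: EV = {ev_ignore}")   # -inf, dominated by any finite cost
```

The standard refutations of the Wager attack the premises feeding this calculation (that the probability is meaningfully nonzero, that the payoff is truly unbounded, that there is only one candidate entity to appease) rather than the arithmetic itself, and the same moves disarm the Basilisk.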