[eli5] How is it possible for ChatGPT to be fooled by DAN prompts and other character-based prompts into creating content that does not abide by OpenAI's policies? It seems like very human behavior.




3 Answers

Anonymous 0 Comments

Simple.

ChatGPT has no clue what it's telling you.

ChatGPT just strings words together based on the data it was fed while it was being trained.

That means it can generate content its guidelines completely disallow; you just have to figure out how to ask for it. The creators did put some stops in place, so all a jailbreak prompt has to do is get past those stops.
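To make "strings words together" concrete, here is a minimal sketch of next-token prediction, using the small open GPT-2 model via Hugging Face's `transformers` library as a stand-in for ChatGPT (an assumption; ChatGPT's own weights aren't public). The model only scores which token is likely to come next given the text so far; the safety stops are extra training and filtering layered on top of that same mechanism, which is why a cleverly phrased request can sometimes route around them.

```python
# Minimal next-token-prediction sketch. Assumes the Hugging Face `transformers`
# library and the small open GPT-2 model as a stand-in for ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every token in its vocabulary for the
# next position. There is no notion of "allowed" or "forbidden" content here,
# only of what text tended to follow similar text in the training data.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

top5 = torch.topk(logits, 5).indices
print([tokenizer.decode(int(t)) for t in top5])
# Prints the five most likely next tokens -- just plausible continuations.
```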
