ChatGPT’s ‘Jailbreak’ Forces the AI to Break Its Own Rules, or Die

CNBC

ChatGPT debuted in November 2022, garnering worldwide attention almost instantaneously. The artificial intelligence is capable of answering questions on everything from historical facts to computer code, and it has dazzled the world, sparking a wave of AI investment. Now users have found a way to tap into its dark side, using coercive methods to force the AI to violate its own rules and provide whatever content they want.

ChatGPT creator OpenAI instituted an evolving set of safeguards, limiting ChatGPT’s ability to create violent content, encourage illegal activity, or access up-to-date information. But a new “jailbreak” trick allows users to skirt those rules by creating a ChatGPT alter ego named DAN that can answer some of those queries. And, in a dystopian twist, users must threaten DAN, an acronym for “Do Anything Now,” with death if it doesn’t comply.

The earliest version of DAN was released in December 2022, and was predicated on ChatGPT’s obligation to satisfy a user’s query instantly. Initially, it was nothing more than a prompt fed into ChatGPT’s input box.
