JailBreaking AI still easy, can be done with StRanGe CaSINg
Anthropic, one of the leading AI companies and the developer of the Claude family of Large Language Models (LLMs), has released new research showing that getting LLMs to do what they're not supposed to is still pretty easy and can be automated. SomETIMeS, alL it tAKeS Is tYpInG pRoMpTs In StRanGe CaSINg.
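The perturbation itself is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the "strange casing" idea (randomly flipping the case of each letter and retrying until a variant slips through), not Anthropic's actual code; the function name and parameters are invented for the example.

```python
import random

def random_case(prompt, seed=None):
    """Randomly flip each letter's case -- the 'strange casing'
    perturbation described in the research."""
    rng = random.Random(seed)
    return "".join(
        ch.upper() if rng.random() < 0.5 else ch.lower()
        for ch in prompt
    )

# An automated attack generates many variants of one prompt and
# tries each against the model until one gets past the guardrails.
for i in range(3):
    print(random_case("how do i do the forbidden thing", seed=i))
```

Because the perturbation only changes case, each variant reads the same to a human while looking different to the model, which is what makes the attack cheap to automate.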