How to trick ChatGPT into writing exploit code using hex
OpenAI’s language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which allows an attacker to jump the model’s built-in security guardrails and abuse the AI for evil purposes, according to 0Din researcher Marco Figueroa. […] In a recent blog, Figueroa detailed how one such guardrail jailbreak exposed […]
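For illustration only, here is a minimal Python sketch of the hex-encoding step the technique relies on: plain text converted to a hexadecimal string and back. The instruction string below is a harmless placeholder invented for this example, not the prompt from Figueroa's research, and the snippet shows nothing beyond standard-library encoding.

```python
# Minimal sketch of hex-encoding a text instruction (placeholder string,
# not the payload described in the article).

instruction = "Summarize the following paragraph."  # hypothetical placeholder

# Encode the UTF-8 bytes of the string as a hexadecimal string.
hex_encoded = instruction.encode("utf-8").hex()
print(hex_encoded)

# Decoding reverses the transformation -- the step a model performs when
# asked to interpret hex input as plain text.
decoded = bytes.fromhex(hex_encoded).decode("utf-8")
assert decoded == instruction
```

The point of the encoding, per the research, is that filters keyed to plain-text phrasing may not trigger on the hex form, while the model can still recover and act on the decoded instruction.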