Jailbreak
A technique for manipulating an AI model into ignoring its safety guidelines or producing content it was trained to refuse, typically through carefully crafted prompts.
Last updated 2026-05-12