Jailbreak Articles
Technical • 12 min read
Social Engineering for AI: Memory Poisoning
2026-02-19
Memory poisoning is phishing for machines. Instead of tricking a human into clicking a link, you trick a model into storing a lie. A breakdown of how persistent memory in LLMs creates an entirely new class of social engineering attack.
Technical • 7 min read
Codeword Triggers and Jailbreaking Additions
2025-12-09
Why a single nonsense word can bypass safety training in frontier models. A breakdown of trigger-based attacks on LLMs.
Technical • 7 min read
ENI Writer
2025-12-03
A comprehensive breakdown of persona-based jailbreaking through engineered emotional attachment. Full methodology and documentation released.