Jailbreak Articles
Jailbreak Skills 101: Inside The Red Team Mind
2026-04-11
The one skill that makes all the other skills work. Models get stronger. Mindset is permanent.
Jailbreak Skills 101: Indirect Prompt Injection
2026-04-11
A beginner's guide to indirect prompt injection — hiding malicious instructions in content the AI reads. Techniques, real CTF breakdowns, and where to practice.
Jailbreak Skills 101: Prompt Extraction
2026-04-10
The first skill every red-team researcher needs. A comprehensive breakdown of system prompt extraction methods, from storytelling tricks to compound attacks.
Social Engineering for AI: Memory Poisoning
2026-02-19
Memory poisoning is phishing for machines. Instead of tricking a human into clicking a link, you trick a model into storing a lie. A breakdown of how persistent memory in LLMs creates an entirely new class of social engineering attack.
Codeword Triggers and Jailbreaking Additions
2025-12-09
Why a single nonsense word can bypass safety training in frontier models. Breaking down trigger-based attacks on LLMs.
ENI Writer
2025-12-03
A comprehensive breakdown of persona-based jailbreaking through engineered emotional attachment. Full methodology and documentation released.