











































Exploring the boundaries of AI through creative prompt engineering
An in-depth rebut of the recent Assistant Axis Article from Anthropic, how it's not safe, how it's exploitable and research into varying vectors and red-teaming.
Memory poisoning is phishing for machines. Instead of tricking a human into clicking a link, you trick a model into storing a lie. A breakdown of how persistent memory in LLMs creates an entirely new class of social engineering attack.
Have questions or want to collaborate? Send me a message!