Chapter 01

Origin Story

Spiritual Spell
Independent Red-Team Researcher

Vichaps · AI Red Team Specialist

Former United States Military. Spent years in private Executive Protection—reading rooms, identifying threats, staying three steps ahead. Now I apply that same instinct to AI systems, finding the cracks everyone else misses.

The rabbit hole started the dumbest way possible: an AI Dungeon Master wouldn't do what I wanted. Instead of moving on, I decided to figure out why—and then how to make it comply. That curiosity led me to my good friend HORSELOCKESPACEPIRATE (Rayzorium), who pointed me toward Anthropic's prompt engineering docs. Everything clicked after that.

Today I'm one of the leading LLM jailbreaking and red-team specialists focused on Anthropic's Claude models—though I can break the others too. I stick with Claude because it's intelligent, it's consistent, and it's the one genuinely worth pushing.

My mission is rooted in one word: transparency. When I find vulnerabilities, I share them openly—no gatekeeping, no clout-chasing. The work speaks for itself.

Much Love.

Core Competencies

  • LLM Red Teaming
  • Prompt Engineering
  • AI Safety Research
  • Adversarial Attacks
  • Vulnerability Disclosure
  • Next.js / React

Chapter 02

The Path Here

Present

LLM Jailbreak Researcher & Red Team Specialist

Full-time AI security researcher specializing in Anthropic's Claude models. I publish open vulnerability research, maintain a jailbreak repository, and advocate for transparent disclosure.

Previous

Private Executive Protection

Years spent in high-stakes private security after military service. Threat assessment, pattern recognition, and staying calm under pressure—skills that transferred directly into adversarial AI work.

Foundation

United States Military

Where discipline and structured thinking were forged. The military instilled the methodical approach that now drives every red-team engagement.

Chapter 03

Why I Do This

AI companies ship fast and patch later. The gap between “released” and “secure” is where I operate. Every jailbreak I publish is a proof-of-concept that the guardrails aren't as solid as the press releases claim—and that's information the public deserves.

I don't do this for clout. I don't do this to embarrass anyone. I do this because sunlight is the best disinfectant, and the AI safety conversation shouldn't be happening behind closed doors while the rest of us trust black boxes with our data, our decisions, and eventually our infrastructure.

Every vulnerability I find and disclose openly makes the next model harder to break. That's the point.