03/2026 → Tool Review 8 min Khaos SDK: Chaos Engineering Meets AI Agent Security Testing Khaos SDK applies chaos engineering to AI agents — testing for prompt injection, tool misuse, and fault resilience. Here's what works and what doesn't. Security & Adversarial AI · AI Tools & Infrastructure
03/2026 → Deep Dive 12 min EchoLeak: Zero-Click Exfiltration Through Microsoft 365 Copilot One email turned Microsoft 365 Copilot into a data exfiltration tool — no clicks, no user interaction. The attack bypasses every defense Microsoft built. Security & Adversarial AI · AI Tools & Infrastructure
02/2026 → Deep Dive 11 min DeepSeek Writes Worse Code When You Mention Tibet or Taiwan CrowdStrike found that political trigger words increase DeepSeek-R1's vulnerability rate by 50%. The implications go far beyond one model. Security & Adversarial AI · AI-Assisted Development
01/2026 → Experiment 13 min LLM-Generated Passwords Are Far Weaker Than They Look I generated passwords across seven LLMs — from Gemini 1.5 to GPT-5.4 — and measured their entropy. Centuries to crack? Try hours. Security & Adversarial AI · AI-Assisted Development
01/2026 → Deep Dive 11 min Clinejection: When a GitHub Issue Title Owns Your Pipeline A GitHub issue title compromised Cline's CI/CD pipeline, stole npm tokens, and pushed malware to 4,000 devs. The first AI supply chain attack. Security & Adversarial AI · AI-Assisted Development
01/2026 → Experiment 9 min The Invisible Prompt: Hunting Hidden LLM Instructions on the Web Microsoft found 50+ hidden AI instructions in commercial web pages. I built a detection pipeline, replicated the attacks, and scanned live sites. Security & Adversarial AI · Industry Analysis
01/2026 → Deep Dive 10 min LLMs Hallucinate Packages. Attackers Are Registering Them. AI coding tools invent package names that don't exist — and 43% of those names appear consistently across sessions. Attackers are registering them. Security & Adversarial AI · AI-Assisted Development