AI-native phishing & BEC
Per-target prompts produce flawless grammar, exact-tone impersonation, and contextual references scraped from LinkedIn and earnings calls. Detection-by-typo is dead.
Adversaries got cheaper labor: phishing kits write themselves, voice clones make themselves, and prompt-injection turns your shiny new agent into a confused deputy. The good news — defenders got the same labor, plus a chance to architect for it from day one. Here's how we help organizations think about both sides at once.
Most enterprises are still defending against 2022's threat model. The economics shifted three years ago.
Per-target prompts produce flawless grammar, exact-tone impersonation, and contextual references scraped from LinkedIn and earnings calls. Detection-by-typo is dead.
Real-time voice clones are a commodity. The "CFO calls treasury for an urgent wire" scenario has already cost individual victims tens of millions. Out-of-band verification is no longer paranoia — it's hygiene.
The OWASP LLM Top 10's #1. A malicious document, email, or web page tells your AI agent to exfiltrate data, change its instructions, or take unintended actions. Hard to detect, easy to trigger.
Agents wired to email, file shares, code repos, and payment systems can be tricked into using their privilege, not their data. The blast radius scales with the toolset, not the model.
Backdoored open-weight models, poisoned fine-tuning datasets, and hijacked Hugging Face / package-registry namespaces. Your AI stack now has a software supply chain — and most teams aren't treating it like one.
LLMs are excellent at correlating disclosed CVEs with your stack and drafting working exploits — for teams without world-class talent. Time-to-exploit on new CVEs has compressed. Patch SLAs have to follow.
Browser extensions, productivity copilots, and "free" GPT wrappers happily forward proprietary content to third parties. Shadow AI is the new shadow IT, and the data leakage is silent.
Evasion, model inversion, membership inference, training-data extraction. Niche-but-important for organizations deploying ML-driven decisioning, fraud, or biometric systems.
Adversaries running their own agents — for credential stuffing, scam-call orchestration, KYC bypass, and reputation attacks. Defense has to assume non-human attackers operating at human-equivalent persistence.
The same AI capability that makes the attacker dangerous is the largest practical force multiplier defenders have ever had — if you architect for it.
L1 alert triage, log summarization, IR ticket drafting — already cutting MTTD/MTTR in mature programs. Real value when paired with strict tool boundaries and human gate on action.
Beyond "this command is rare" — is this command strange given the user's role and recent context? AI gives signal where rule-based systems hit a ceiling.
Pattern-matching DLP misses the most expensive leaks (paraphrases, transformations, summaries). Modern DLP uses embedding-based similarity to flag the document itself, not just an SSN regex.
Auto-correlated intel — tying disclosed CVEs to your asset inventory, summarizing actor TTPs, and translating MITRE ATT&CK techniques into your environment in minutes.
Passkeys / FIDO2 deployment that survives a deepfake-CFO call. The AI threat model finally tips the ROI on getting MFA right.
Generate realistic test data for staging environments without exposing production PII. Useful for both engineering velocity and DPIA defensibility.
Continuous LLM and agent red-teaming — prompt-injection corpora, jailbreak benchmarks, agent-tool abuse simulations. Treat your AI features like the production-critical surface they are.
AI-generated decoy documents, credentials, and prompt-injection canaries. Cheap, high-signal, and increasingly necessary inside RAG / agent pipelines.
The most boring (and most valuable) defensive AI use: generating audit evidence on a schedule. The "compliance as code" trend now extends to "compliance as agent."
A short tour of the regimes that matter — and how they map to each other.
Voluntary, US-led, the de-facto operating model for many large enterprises. Four functions: Govern, Map, Measure, Manage. The Generative AI Profile (NIST-AI-600-1) is the practical companion.
Voluntary · USThe first certifiable AI management system standard. Pairs naturally with ISO 27001 — same management-system grammar, AI-specific controls. Expect this to become a B2B trust artifact.
CertifiableThe world's first horizontal AI law. Risk-tiered (unacceptable, high, limited, minimal). General-purpose AI obligations are phasing in; full applicability for high-risk systems hits in 2026–2027. Penalties echo GDPR.
Mandatory · EUPractical, engineering-friendly. Covers prompt injection, insecure output handling, training-data poisoning, model DoS, supply-chain risk, sensitive-information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
EngineeringThe ATT&CK-style threat-knowledge base for AI/ML systems. Real adversary techniques mapped to AI lifecycle stages. Indispensable for red-team / blue-team common language.
Threat-informedColorado AI Act, NYC bias-audit rule for hiring, Utah's AI consumer-protection law, and growing healthcare-specific guidance. Sectoral AI regulation is showing up faster than horizontal regulation.
US — patchworkMost "AI security" pitches are either compliance theater or model-internals research that doesn't fit the operational reality. Our engagements are scoped to the surface area an enterprise actually has — copilots, agents, RAG pipelines, vendor-supplied models, and shadow AI.