Cybersecurity Content Intelligence Agent

    Gives Claude Code the insider knowledge to write cybersecurity content that resonates with CISOs, security practitioners, and DevSecOps teams — complete with buyer personas, benchmark brand voice analysis, and threat-framework vocabulary.

    Free & open. Install in 30 seconds.

    What This Agent Does

    This agent teaches Claude Code how to write content that cybersecurity buyers actually respect. It provides three detailed buyer personas (CISO/Security Leaders, Security Practitioners, and Developer/DevSecOps Engineers) with their exact titles, what they already know, what they evaluate, the questions they ask during research, and what turns them off in vendor content.

    It also includes deep analysis of five benchmark cybersecurity brands — CrowdStrike, Palo Alto Networks/Unit 42, SentinelOne, Wiz, and Snyk — with voice profiles, depth scores, and structural patterns worth borrowing. This means Claude Code can match the sophistication level that cybersecurity buyers are already accustomed to from the best companies in the space.

    Finally, it provides 70+ table-stakes terms and 30+ precision terms organized by fluency level (table-stakes terms you must never define, precision terms where brief context is acceptable, and terms that signal outsider writing), regulated language guardrails for threat disclosure and attribution, and a depth calibration framework with insider-test examples showing exactly where credible content stops and threat research begins.

    What You Get

    • 3 detailed buyer personas (CISO/Security Leader, Security Practitioner, Developer/DevSecOps Engineer) with titles, knowledge assumptions, evaluation criteria, research questions, and turn-offs
    • 5 benchmark brand content analyses (CrowdStrike, Palo Alto Networks/Unit 42, SentinelOne, Wiz, Snyk) with voice profiles, depth scores, and structural patterns worth borrowing
    • 70+ table-stakes terms and 30+ precision terms organized by fluency level, plus 18 terms to avoid that signal outsider writing
    • Regulated language guardrails — what you can and cannot claim about cybersecurity topics, plus how benchmark brands handle threat disclosure and attribution
    • Depth calibration with insider test examples (passes, fails, and over-indexed examples showing exactly where credible content ends and threat research begins)
    • Content gap opportunities for cybersecurity-focused content
    • Writing quality checklist for cybersecurity content
    • Voice calibration with good, bad, and over-indexed examples

    Install

    Installation is a single file copy — Claude Code automatically discovers rules placed under .claude/rules in your project.

    Copy the rule below and save it as .claude/rules/cybersecurity-content.md in your project root.
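    If you prefer the terminal, the copy-and-save step can be sketched as the commands below. The .claude/rules path is the one this page specifies; the heredoc body here contains only the rule's first line as a placeholder — paste the full rule content from the block below in its place.

    ```shell
    # Create the directory Claude Code scans for project rules.
    mkdir -p .claude/rules

    # Write the rule file. Replace the placeholder line inside the
    # heredoc with the complete rule content shown below.
    cat > .claude/rules/cybersecurity-content.md <<'EOF'
    # Cybersecurity Content Intelligence Agent Rules
    EOF

    # Confirm the file is in place.
    ls .claude/rules
    ```

    The quoted 'EOF' delimiter keeps the shell from expanding anything inside the rule text, so the markdown is written verbatim.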

    .claude/rules/cybersecurity-content.md
    # Cybersecurity Content Intelligence Agent Rules
    
    When writing content for cybersecurity buyers — CISOs, security practitioners, SOC analysts, and DevSecOps teams — follow these rules. This agent provides the domain intelligence needed to write content that passes the insider test with security professionals.
    
    **Benchmark brands analyzed:**
    - CrowdStrike (https://www.crowdstrike.com/blog)
    - Palo Alto Networks / Unit 42 (https://unit42.paloaltonetworks.com/)
    - SentinelOne (https://www.sentinelone.com/blog)
    - Wiz (https://www.wiz.io/blog)
    - Snyk (https://snyk.io/blog)
    
    ---
    
    ## 1. Buyer Persona Specifics
    
    ### Primary Cybersecurity Buyer: The CISO / Security Leader
    
    **Title/role:** CISO, VP of Security, Head of Information Security, Director of Security Operations, Head of Security Architecture (at enterprise and growth-stage companies)
    
    **What they already know (don't explain these):**
    - Zero-day, CVE, MITRE ATT&CK framework, SOAR, SIEM, XDR, EDR
    - Concepts like lateral movement, privilege escalation, command and control (C2)
    - Common security frameworks (NIST, ISO 27001, SOC 2)
    - Nation-state threat actors, APTs, ransomware groups
    - Cloud security posture management (CSPM), identity threat detection and response (ITDR)
    - Supply chain attacks, software bill of materials (SBOM)
    - Incident response terminology: dwell time, breakout time, mean time to detect/respond (MTTD/MTTR)
    - The difference between detection and response, prevention vs. remediation
    - The basics of identity and access management
    - How security incidents are detected, investigated, and remediated
    
    **What they're evaluating:**
    - Platform consolidation vs. best-of-breed approaches
    - AI/ML efficacy claims (they've heard them all before)
    - False positive rates and alert fatigue reduction
    - How to demonstrate security ROI and risk reduction to the board
    - Whether their current security stack can detect advanced threats
    - Cloud security posture management and multi-cloud complexity
    - Integration complexity with existing security stack and total cost of ownership
    - Vendor lock-in risks and data portability
    - Whether "agentic" capabilities are real or rebranded automation
    - Speed metrics: 1-10-60 rule (detect in 1 min, investigate in 10, remediate in 60)
    - Real-world threat coverage against named adversaries they've seen in their environment
    - How to reduce alert fatigue and improve SOC efficiency
    - Vendor ability to detect zero-day and N-day exploits
    
    **Questions they ask during research:**
    - "How does this actually work at runtime, not in a demo?"
    - "What's the architectural philosophy — agent-based, agentless, or hybrid?"
    - "Can it see across endpoint, cloud, identity, and SaaS in a single pane?"
    - "How does it handle high-velocity environments (containers, serverless)?"
    - "What's the research team's track record on discovering novel threats?"
    - "Is this going to create more work for my already understaffed SOC?"
    - "How do we measure security effectiveness beyond compliance checkboxes?"
    - "What's the MTTR for similar organizations?"
    - "Can this platform reduce our tool sprawl without creating blind spots?"
    - "What level of automation can we actually achieve without sacrificing control?"
    
    **What turns them off in vendor content:**
    - Vague claims about "stopping cyber threats" or "protecting digital assets"
    - FUD (fear, uncertainty, doubt) without actionable guidance
    - Generic threat warnings ("hackers are getting more sophisticated")
    - Over-explaining basic concepts (if you define "phishing," you've lost them)
    - Chest-thumping without evidence (show the MITRE ATT&CK coverage, not just claim it)
    - Claiming to stop "all threats" or provide "100% protection"
    - Vendor content that just describes attacks without showing how to defend
    - Treating all CISOs as technical experts (many come from risk/compliance backgrounds)
    - Ignoring the reality of budget constraints and competing priorities
    - Conflating product features with actual security outcomes
    
    ### Secondary Cybersecurity Buyer: The Security Practitioner
    
    **Title/role:** SOC Analyst, Security Engineer, Threat Intelligence Analyst, Detection Engineer, Incident Responder, Security Architect, Threat Hunter
    
    **What they already know:**
    - How to investigate alerts and triage incidents
    - Common attacker TTPs (tactics, techniques, procedures)
    - Technical IOCs, YARA rules, Sigma rules
    - Specific malware families, implant frameworks (Cobalt Strike, Mythic, Sliver)
    - Log analysis, query languages (KQL, SPL, SQL)
    - The MITRE ATT&CK framework and how it maps to real attacks
    - Adversary naming conventions (MITRE uses generic names, vendors use animal/mythology themes)
    - Web shell types, fileless malware techniques, living-off-the-land binaries (LOLBins)
    - Container escape techniques, Kubernetes attack vectors
    - Specific tools they use daily (SIEM, EDR, SOAR platforms)
    - How threat actors operate and what "living-off-the-land" means
    - The difference between detection, prevention, and response
    - API security, GraphQL exploitation, JWT token abuse
    
    **What they're evaluating:**
    - Detection coverage and false positive rates in their specific environment
    - Whether this tool reduces manual investigation time
    - Query flexibility and customization depth
    - Integration with existing SIEM, ticketing, and SOAR workflows
    - Access to raw telemetry and logs (not just pre-digested alerts)
    - Threat hunting capabilities — can they pivot and correlate freely?
    - How much "AI" gets in the way of manual investigation
    - Quality of threat intelligence and IOCs
    - Whether automation actually works or creates more work
    - Can this help them explain risks to management
    
    **Questions they ask during research:**
    - "Can I write custom detection rules or am I locked into vendor signatures?"
    - "What's the telemetry granularity — process level, network flow, file hash?"
    - "How does it perform in Linux/macOS environments, not just Windows?"
    - "Can I export data for offline analysis or correlation with other tools?"
    - "What's the API coverage for automation?"
    - "Does this detect [specific MITRE ATT&CK technique]?"
    - "How many false positives will this generate per day?"
    - "Can I query historical data and build custom detection rules?"
    - "How does this handle encrypted traffic or obfuscated commands?"
    - "Can this correlate events across endpoint, network, and cloud?"
    
    **What turns them off:**
    - Marketing claims disconnected from technical reality
    - "AI-powered" buzzwords without explaining what the AI actually does
    - Marketing that promises "zero false positives" (they know that's impossible)
    - Black-box AI that doesn't explain *why* something was flagged
    - Oversimplifying complex attacks or assuming all threats follow patterns
    - Ignoring the operational burden of deploying and maintaining tools
    - Content written by people who've clearly never worked in a SOC
    - Content that oversimplifies attacker behavior into "bad guys do bad things"
    
    ### Tertiary Buyer: The Developer / DevSecOps Engineer
    
    **Title/role:** Software Engineer, DevOps Engineer, Application Security Engineer, Cloud Engineer
    
    **What they already know:**
    - Software development lifecycle and CI/CD pipelines
    - Basic vulnerability types (injection, XSS, CSRF, etc.)
    - How containers and Kubernetes work
    - Infrastructure-as-code (Terraform, CloudFormation)
    - The tension between security and shipping velocity
    - What SAST, DAST, and SCA tools do
    
    **What they're evaluating:**
    - Whether security tools slow down their release pipeline
    - How much effort is required to remediate findings
    - Whether security feedback is actionable and prioritized
    - Integration with their existing dev tools (GitHub, GitLab, Jenkins)
    - Can this fix issues automatically or generate remediation code
    
    **Questions they ask during research:**
    - "Will this block my deployments or just alert?"
    - "How do I prioritize which vulnerabilities to fix first?"
    - "Can this run in my CI/CD pipeline without adding 20 minutes to builds?"
    - "Does it understand my code context or just pattern-match?"
    
    **What turns them off:**
    - Security tools that treat them as the enemy rather than partner
    - Overwhelming lists of low-severity findings without context
    - "Shift-left" rhetoric without acknowledging developer experience
    - Security people who don't understand modern development practices
    
    ---
    
    ## 2. Benchmark Brand Content Analysis
    
    ### CrowdStrike
    
    **Voice profile:** Authoritative without being academic. Balances executive-level strategic framing with deep technical detail. Uses a "we've seen this in the wild" stance that constantly reinforces their frontline threat intelligence credibility. Confident and definitive — they name adversaries, state attribution assessments clearly, and don't hedge excessively. Bridges executive and technical audiences with measured urgency through specific metrics.
    
    **Depth level:** 9/10. Example: In the LABYRINTH CHOLLIMA article, they don't just say "North Korean threat actors are active." They write: "LABYRINTH CHOLLIMA has evolved into three distinct adversaries with specialized malware, objectives, and tradecraft: GOLDEN CHOLLIMA and PRESSURE CHOLLIMA now likely operate separately from the core LABYRINTH CHOLLIMA group." They split adversary groups based on operational pattern analysis, not just geography. References "hands-on-keyboard activity" vs. automated attacks, "valid account abuse," and "EDR visibility gaps" as if readers inherently understand the implications.
    
    **What makes their content work:**
    - **Proprietary naming conventions as mental models**: They've created a taxonomy (PANDA for China, BEAR for Russia, CHOLLIMA for DPRK) that becomes shorthand for understanding threat ecosystems. This is differentiation through framework ownership.
    - **"Agentic" positioning as strategic narrative**: They've introduced "agentic SOC" and "agentic defense" as the next evolution beyond automation. This isn't just about AI tools — it's about architectural philosophy. Quote: "The question isn't whether to adopt AI in security operations. It's whether the platform architecture can support AI agents that reason across unified telemetry."
    - **Concrete metrics that create urgency**: The 1-10-60 rule, breakout times measured in minutes (fastest: 51 seconds), specific CVE patch lag data. They quantify the adversary advantage to create a sense of operational reality without sensationalism.
    - **Statistical evidence sandwich**: Open with metric, explain context, close with impact.
    
    **Structural patterns worth borrowing:**
    - Lead with strategic implications, then dive into technical details. Never the reverse.
    - Use adversary naming and specific campaign references throughout — it signals insider knowledge.
    - Structure threat intelligence posts as: Executive summary with key findings, adversary background, technical analysis, detection/response guidance, IOCs/TTPs in appendix format.
    - In executive viewpoint posts: Start with a provocative thesis (e.g., "The architectural divide in cybersecurity is no longer theoretical. It's operational."), support with real-world evidence, end with a clear strategic direction.
    - Threat actor profiling format: Name, geographical nexus, specific tactics, scale/volume data.
    - "Full report" downloads that convert intelligence consumption into lead generation.
    
    ### Palo Alto Networks / Unit 42
    
    **Voice profile:** Research-forward and methodical. Unit 42 positions itself as the threat research authority — less about platform selling, more about "here's what we're seeing in global incident response engagements." Measured and evidence-based. They explain their analysis methodology transparently (e.g., "We identified additional unreported infrastructure, which is linked to this campaign"). Responsible disclosure tone that avoids creating panic while being direct about risk. Quote: "Our goal is simple. Translate research into the insights security leaders actually need to make better decisions."
    
    **Depth level:** 8.5/10. Example: In the Notepad++ supply chain article, they detail the infrastructure-level hijack: "This allowed the attackers to intercept and redirect traffic destined for the Notepad++ update server... enabled the attackers to selectively target specific users... primarily located in Southeast Asia across government, telecommunications and critical infrastructure sectors." They describe not just *what* happened but *how* attackers operationalized the compromise. References "Graph API queries," "living-off-the-land binaries," "ITDR," and "conditional access policies" without explanation, assuming readers understand these as standard enterprise security components.
    
    **What makes their content work:**
    - **"From the front lines" credibility**: Constant references to "our incident response engagements" and "cases we investigated." This isn't theoretical — it's derived from real breaches they cleaned up.
    - **Thematic trend analysis, not just individual threats**: Their Threat Bulletin format focuses on evolving attacker behaviors (e.g., "vibe coding accelerating development while expanding attack surface" or "malicious QR codes becoming harder to detect"). They connect dots across disparate incidents.
    - **Pragmatic, non-sensational threat disclosure**: They report supply chain compromises and nation-state activity with clinical precision. No dramatics, no sky-is-falling rhetoric.
    - **Emphasis on prevalence** ("36% of all incidents") rather than severity alone.
    
    **Structural patterns worth borrowing:**
    - Start threat research posts with a concise "what happened" summary, then unpack the attack chain step by step.
    - Use temporal framing to show evolution: "Between June and December 2025..." — this contextualizes the threat lifecycle.
    - Break down analysis into: Initial access, persistence mechanism, C2 infrastructure, targeting/victimology, attribution assessment, defensive recommendations.
    - In monthly/quarterly reports: Use a thematic lens (not just a list of threats). Example: "Three shifts that are quietly changing how organizations should think about risk."
    - Visual data presentation (charts showing initial access vector distribution).
    - "Key Takeaway" callout boxes translate findings into actionable insights.
    - Remediation-focused conclusions (8 specific recommendations tied to identified gaps).
    
    ### SentinelOne
    
    **Voice profile:** Technical and immediate. SentinelLABS content is written for security engineers and threat hunters who need actionable intel *now*. They assume deep technical fluency and don't slow down to explain. The tone is urgent but not alarmist — they're alerting peers to live threats. Heavy use of imperative language: "This blog post includes the critical, immediate actions recommended to secure your environment." Multiple expert voices within single content (different analysts for different threat areas). Uses academic phrasing mixed with operational directness.
    
    **Depth level:** 9/10. Example: In the React2Shell RCE post, they immediately state: "the flaw stems from insecure deserialization in the RSC 'Flight' protocol and impacts packages including react-server-dom-webpack, react-server-dom-parcel, and react-server-dom-turbopack. Exploitation is highly reliable, even in default deployments, and a single request can compromise the full Node.js process." No handholding — they jump straight to the vulnerable component and exploitation mechanics. References "AppleScript's Objective-C bridge," "JXA (JavaScript for Automation)," "Gatekeeper workarounds," "code signing," and "notarization" in macOS security discussion without explanation.
    
    **What makes their content work:**
    - **Supply chain security as a core theme**: They've positioned themselves as the authority on agent/plugin supply chain attacks (OpenClaw malicious skills, AI agent security). This is differentiation through focus on emerging attack surfaces.
    - **Speed of response**: They publish vulnerability analysis and detection guidance within hours of public disclosure. Their React2Shell post was out the same day as CVE publication.
    - **Operational security guidance embedded throughout**: They don't just explain the threat — they immediately tell you how to detect it with their platform (but in a way that's generalizable). Quote: "new and existing Platform Detection Rules designed to defend against this vulnerability."
    - **"SentinelLABS" branding** separates research from product marketing.
    - **Predictions grounded in current evidence** (not just speculation).
    
    **Structural patterns worth borrowing:**
    - Lead with vulnerability severity and affected components in the first paragraph. No preamble.
    - Structure vulnerability posts as: Vulnerability summary, technical root cause, exploitation mechanics, impact assessment, detection/mitigation guidance, references/IOCs.
    - Use bold or emphasized text for critical actions: "immediately patch," "high-severity remote code execution."
    - Include code snippets, payloads, or technical diagrams where relevant — this audience expects to see the actual exploit.
    - Month-by-month chronicle structure for annual reviews.
    - Specific actor callouts with dated observations ("July 19th, two days after first observation").
    - CVE and vulnerability disclosure integrated into broader threat narrative.
    
    ### Wiz
    
    **Voice profile:** Cloud-native and developer-aware. Wiz writes with the assumption that modern security is a cloud problem first, not an endpoint problem with cloud bolted on. They speak fluent Kubernetes, CI/CD, and cloud-native development. The tone is conversational but precise — less formal than CrowdStrike, more accessible than SentinelOne, but no less technical. They explain *why* something matters in cloud contexts. CISO-to-CISO advisory perspective. Positions cloud security as business enabler, not blocker.
    
    **Depth level:** 8/10. Example: In the Moltbook exposed database post, they contextualize the finding: "We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data. The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents." They explain the cloud misconfiguration (Supabase), the data exposure scope, and immediately note their responsible disclosure process. Uses "cloud visibility," "misconfigurations," "over-permissioned accounts," "tool sprawl," and "compliance drift" without definition.
    
    **What makes their content work:**
    - **Cloud security as first-class problem domain**: They've created frameworks like SITF (SDLC Infrastructure Threat Framework) to address the gap in MITRE ATT&CK coverage for CI/CD and build infrastructure. Quote: "attackers have realized the high ROI of targeting SDLC infrastructure. They are not just looking for vulnerabilities in code anymore; they are compromising the factories that build the code."
    - **"Vibe-coded" security failures as a theme**: Wiz highlights how rapid development (especially AI-assisted coding) creates security debt. The Moltbook case study is framed as "what happens when applications are vibe-coded into existence without proper security controls."
    - **CISO budget framing addresses real pain point**: Security yield concept (risk reduction per dollar) gives executives new language for board conversations. Reframes traditional metrics (ROI to security yield).
    - **CISO-level framing with developer empathy**: They translate technical findings into business risk, but without talking down to technical readers. They respect that CISOs often have deep technical backgrounds.
    
    **Structural patterns worth borrowing:**
    - Start cloud security posts with a "what we found" summary, immediately state the blast radius (how many records/keys exposed), and note disclosure timeline.
    - Use cloud-native terminology naturally: "misconfigured Supabase database," "SDLC infrastructure," "runtime security," "DSPM."
    - Structure research posts as: Discovery context, what was exposed, why it happened (root cause), broader pattern/trend, defensive guidance.
    - Include frameworks and mental models: SITF for SDLC attacks, references to MITRE ATT&CK for cloud, Zero Trust architecture principles.
    - Numbered prediction/trend lists (scannable, shareable).
    - Expert quote integration throughout (not just at end).
    - Downloadable resources (benchmark reports) convert thought leadership into lead generation.
    
    ### Snyk
    
    **Voice profile:** Developer-first and pragmatic. Snyk writes for security-conscious developers and DevSecOps teams who view security as a continuous integration problem, not a perimeter defense problem. The tone is collaborative, not combative — they position security as enabling velocity, not blocking it. They use developer idioms naturally: "shift left," "false positives in CI/CD," "security as code." Acknowledges friction points honestly ("Security is often seen as a roadblock") before offering solutions. Least formal of all five brands but maintains credibility through frameworks and research citations.
    
    **Depth level:** 8/10. Example: In the ToxicSkills research, they frame the problem in developer terms: "Agent skills are reusable capability packages that instruct AI agents how to interact with tools, APIs, or system resources — and they're rapidly becoming standard in AI-powered development." They immediately quantify the risk: "13.4% of all skills, or 534 in total, all contain at least one critical-level security flaw" and note the sample size (3,984 skills scanned from ClawHub and skills.sh as of February 5th, 2026).
    
    **What makes their content work:**
    - **"AI Security Fabric" as strategic framing**: They've introduced a conceptual model that positions security not as bolt-on scanning but as a continuous fabric across "humans, models, and autonomous agents working together at machine speed." This reframes the problem from "scan dependencies" to "secure the entire software creation pipeline."
    - **Developer empathy without compromising on severity**: They acknowledge operational realities ("if you've installed one in the past month, there's a 13% chance it contains a critical security flaw") without sugarcoating the risk.
    - **Shift from reactive to prescriptive**: Their "Prescriptive Path" framework tells developers *how* to secure AI/ML workflows, not just *what* the threats are. This is differentiation through actionable guidance.
    - **Three-pillar frameworks** (transparency, accountability, risk reduction) create clear structure.
    - **Problem-first framing** connects to developer pain points (flow state, cognitive load, release velocity).
    - **DevSecOps maturity model** provides roadmap from current state to desired state.
    
    **Structural patterns worth borrowing:**
    - Lead with the problem statement in developer/engineering terms, not security jargon.
    - Use data-driven framing: percentages, sample sizes, timelines. Example: "scanning 3,984 skills from ClawHub and skills.sh as of February 5th, 2026."
    - Structure DevSecOps posts as: Problem/trend, research methodology, key findings (quantified), why this matters for developers, prescriptive recommendations, tool/integration guidance.
    - Reference frameworks: "AI Security Fabric," "SBOM," "supply chain security," "dependency confusion attacks."
    - Developer-friendly formatting (code examples, tool lists, concrete steps).
    - Benefits tied to developer productivity metrics (time saved, fewer context switches).
    - Language bridges technical and business audiences (developers AND security leaders).
    
    ---
    
    ## 3. Cybersecurity Vocabulary — Required Fluency
    
    ### Table-Stakes Terms (use naturally, never define)
    
    These terms should appear in your content as if the reader already knows them. No parenthetical definitions, no "also known as" explanations. If you're writing for a cybersecurity audience and you define these, you signal outsider status.
    
    1. **Zero-day, CVE, CVSS** — vulnerability terminology
    2. **MITRE ATT&CK** — threat framework (note: it's ATT&CK, not Attack)
    3. **TTP** — tactics, techniques, procedures
    4. **IOC** — indicators of compromise
    5. **C2 / command and control** — adversary infrastructure
    6. **APT (Advanced Persistent Threat)** — sophisticated, prolonged attack campaigns
    7. **Nation-state actor / threat actor / adversary** — adversary terminology
    8. **Lateral movement** — moving from initial access point to other systems
    9. **Privilege escalation** — gaining higher-level access than initially obtained
    10. **Data exfiltration** — unauthorized data extraction from target environment
    11. **Initial access** — first entry point into target environment
    12. **Persistence** — maintaining access even after reboots or credential changes
    13. **Credential harvesting / theft** — stealing usernames, passwords, tokens, or keys
    14. **Phishing / spear phishing** — social engineering via deceptive messages
    15. **Social engineering** — manipulating people into divulging information or taking actions
    16. **Ransomware / extortion** — cybercrime tactics involving encryption or data theft
    17. **Malware / fileless malware** — malicious software, including memory-resident threats
    18. **Living off the land (LOTL) / LOLBins** — using legitimate system tools to conduct attacks
    19. **EDR (Endpoint Detection and Response)** — security tools monitoring endpoint devices
    20. **SIEM (Security Information and Event Management)** — centralized log analysis and correlation
    21. **SOC (Security Operations Center)** — team/facility monitoring and responding to threats
    22. **XDR (Extended Detection and Response)** — EDR expanded to network, cloud, and other sources
    23. **SOAR (Security Orchestration, Automation, and Response)** — automating security workflows
    24. **MFA (Multi-Factor Authentication)** — requiring multiple forms of identity verification
    25. **Cloud misconfigurations** — improperly configured cloud resources creating security gaps
    26. **Supply chain attack** — compromising software or hardware before it reaches target
    27. **CSPM / DSPM** — Cloud/Data Security Posture Management
    28. **ITDR** — Identity Threat Detection and Response
    29. **Threat intelligence / threat hunting** — proactive defense
    30. **Incident response (IR)** — breach investigation and remediation
    31. **Vulnerability management** — identifying, assessing, and remediating security weaknesses
    32. **Attack surface** — sum of all possible entry points an attacker could exploit
    33. **Security posture** — overall security strength and readiness of an organization
    34. **Web shell / backdoor / implant** — persistence mechanisms
    35. **SBOM (Software Bill of Materials)** — inventory of software components
    36. **CI/CD pipeline** — continuous integration/deployment
    37. **Container / Kubernetes (K8s)** — cloud-native infrastructure
    38. **Serverless / Lambda** — cloud compute models
    39. **API security** — application programming interface protection
    40. **Dwell time / breakout time** — incident response metrics
    41. **OPSEC** — operations security (adversary tradecraft)
    
    ### Precision Terms (use when relevant, brief context OK)
    
    These are more specialized terms. You can still use them without formal definition, but a brief contextual clue is acceptable if it flows naturally. These signal deeper expertise.
    
    1. **MITRE ATT&CK technique IDs** (e.g., T1059.001) — specific adversary behaviors in the framework
    2. **Hands-on-keyboard activity** — manual attacker operations vs. automated attacks (CrowdStrike terminology for interactive intrusions)
    3. **Breakout time** — how quickly an attacker moves from initial access to lateral movement (CrowdStrike metric)
    4. **1-10-60 rule** — CrowdStrike's metric (detect in 1 minute, investigate in 10, remediate in 60)
    5. **YARA rules / Sigma rules** — detection signatures
    6. **Cobalt Strike, Mythic, Sliver** — commercial/open-source adversary simulation frameworks often abused by real attackers
    7. **N-day** — known vulnerability that remains unpatched (vs. zero-day)
    8. **Access broker / Initial Access Broker (IAB)** — threat actor who sells access to compromised networks
    9. **RMM (Remote Monitoring and Management)** — legitimate IT tools abused by attackers
    10. **SIM swapping** — taking over phone numbers to bypass SMS-based MFA
    11. **Vishing** — voice phishing (social engineering via phone calls)
    12. **Business email compromise (BEC)** — email scams targeting financial transactions
    13. **Infostealer** — malware designed to extract credentials, browser data, crypto wallets
    14. **Graph API** — Microsoft Graph API (often abused for cloud reconnaissance)
    15. **Conditional access policies** — Azure AD/Entra ID rules governing authentication
    16. **DevSecOps** — integrating security into DevOps workflows
    17. **SAST / DAST** — Static/Dynamic Application Security Testing
    18. **SCA (Software Composition Analysis)** — scanning for open-source vulnerabilities
    19. **Container security** — securing Docker, Kubernetes, and containerized applications
    20. **Infrastructure-as-Code (IaC)** — managing infrastructure through code (Terraform, CloudFormation)
    21. **Agentic security / agentic SOC** — AI agents that autonomously investigate and respond to threats
    22. **SITF** — SDLC Infrastructure Threat Framework (Wiz's framework for build pipeline attacks)
    23. **Vibe coding** — rapid, AI-assisted development without rigorous security review
    24. **Dependency confusion / typosquatting** — supply chain attack vectors
    25. **Prompt injection** — AI/LLM security vulnerability
    26. **Model Context Protocol (MCP)** — AI agent communication standard
    27. **Platformization** — consolidating multiple security tools into a unified platform
    28. **Sensor (in EDR context)** — the agent deployed on endpoints for telemetry collection
    29. **Telemetry** — security data collected from endpoints, cloud, network
    30. **Exploitation in the wild** — vulnerability actively being used by attackers, not just theoretical
    31. **False positive rate / alert fatigue** — SOC operational challenges
    32. **DSPM into runtime** — extending data security posture from static analysis to active monitoring
    
    ### Terms to Avoid (Signal Generic/Outsider Writing)
    
    These phrases immediately identify content as written by someone unfamiliar with cybersecurity. If you catch yourself using these, rewrite:
    
    1. **"Cyber threats are on the rise"** — Vague and overused. Cite specific threat increases (ransomware up X%, vishing up Y%)
    2. **"Hackers are getting more sophisticated"** — Cliche. Describe specific sophistication (MFA bypass techniques, cloud-native attacks)
    3. **"Protect your digital assets"** — Generic marketing speak. Say what you're actually protecting (customer data, intellectual property, cloud workloads)
    4. **"In today's digital landscape"** — Corporate platitude. Get to the point.
    5. **"Stay one step ahead of cybercriminals"** — Impossible promise. Focus on detection speed, response efficacy
    6. **"Comprehensive cybersecurity solution"** — No single tool is comprehensive. Specify what it covers (endpoint, cloud, identity)
    7. **"Advanced threat protection"** — Meaningless without specifics. What attacks does it detect? What techniques?
    8. **"Enterprise-grade security"** — Empty descriptor. Cite certifications (SOC 2, FedRAMP) or capabilities
9. **"Next-generation security"** — Dated buzzword. Use it only when you are specifically contrasting with a legacy approach you name
10. **"Zero-trust security"** — Overused buzzword. Explain which zero-trust principles are actually applied (least privilege, continuous verification)
    11. **"AI-powered threat detection"** — Vague. Explain what the AI does (anomaly detection, behavioral analysis, pattern matching)
    12. **"Prevent all cyberattacks"** / **"Unhackable"** / **"100% secure"** — Impossible claim. No vendor can prevent ALL attacks
    13. **"Military-grade encryption"** — Meaningless marketing term (most modern encryption is strong)
    14. **"Cutting-edge cybersecurity"** — Empty temporal claim. Describe actual capabilities
    15. **"Secure your perimeter"** — Outdated network security concept (cloud/remote work changed this)
    16. **"Cybersecurity best practices"** — Too vague. Be specific about which practice
17. **"Dark web monitoring"** — Empty unless you explain what you actually monitor (which sources, which data types)
    18. **"Secure your data from bad actors"** — Imprecise fear-based language
    
    **Why these fail the insider test:** They're imprecise, fear-based, and don't demonstrate understanding of actual adversary behavior or defensive operations. Real cybersecurity professionals talk about *specific* threats, *measured* risks, and *operational* constraints.
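A content team can enforce this list mechanically before editorial review. The sketch below is illustrative (the function name and the phrase subset are our own, drawn from the list above); a real implementation would load the full list:

```python
# Illustrative subset of the "terms to avoid" list above; extend as needed.
BANNED_PHRASES = [
    "cyber threats are on the rise",
    "in today's digital landscape",
    "military-grade encryption",
    "enterprise-grade security",
    "comprehensive cybersecurity solution",
    "stay one step ahead",
]

def flag_outsider_phrases(draft: str) -> list[str]:
    """Return each banned phrase found in the draft (case-insensitive)."""
    text = draft.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in text]

draft = ("In today's digital landscape, our comprehensive cybersecurity "
         "solution uses military-grade encryption.")
print(flag_outsider_phrases(draft))
```

A check like this only catches verbatim phrases; paraphrased clichés ("threats keep growing more advanced") still need a human editor.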
    
    ---
    
    ## 4. Regulated Language Guardrails
    
    Cybersecurity involves vulnerability disclosure, incident reporting, and threat attribution. Your content must navigate this carefully.
    
    ### Claims You CAN Make About Your Cybersecurity Content:
    
    - **"We help cybersecurity vendors rank for high-intent keywords that buyers actually search during vendor evaluation."** — Factual service description
    - **"Our content matches the depth and vocabulary that CISOs and security engineers expect — we don't write generic 'cyber threat' content."** — Demonstrates vertical specialization
    - **"We understand the difference between writing for security buyers (CISO, VP Security) vs. technical implementers (SOC analysts, security engineers)."** — Persona-aware positioning
    - **"We track how cybersecurity brands like CrowdStrike, Palo Alto, and Wiz structure thought leadership vs. threat intelligence content."** — Competitive analysis capability
    - **"We know when to reference MITRE ATT&CK, when to cite CVEs, and when to explain cloud-native security concepts like CSPM or DSPM."** — Domain fluency demonstration
    - **"Content targeting security practitioners should reference MITRE ATT&CK techniques and CVE IDs."** — Strategic content advice demonstrating domain knowledge
    - **"We research competitor threat intelligence to identify content gaps and ranking opportunities."** — Standard competitive analysis
    
    ### Claims You CANNOT Make (Liability/Credibility Risk):
    
    - **"We guarantee first-page rankings for competitive cybersecurity keywords"** — Ranking guarantees are unprofessional; security keywords are highly competitive
    - **"Our content will make your product sound more secure"** — Implies misleading buyers
    - **"Our content will help you pass SOC 2 or ISO 27001 audits"** — Only authorized compliance firms can make audit-related promises
    - **"We perform vulnerability research or penetration testing"** — Unless you have actual security researchers, don't claim security testing capabilities
    - **"We can verify your product stops [specific threat actor]"** — Product efficacy claims require actual testing; you're writing marketing content, not running a security lab
    - **"We know as much about cybersecurity as CrowdStrike's research team"** — False equivalence; you understand how they write, not how they do threat research
    - **"Ranking higher will reduce your company's security incidents"** — Absurd claim; rankings don't affect security posture
    - **"We help you avoid regulatory scrutiny through content strategy"** — Implies helping clients evade compliance
    - **"Our threat intelligence is confirmed by industry sources"** — Don't pretend to be a threat intelligence provider; you create content ABOUT threat intelligence
    - **"We can attribute attacks to specific threat actors"** — Threat attribution requires deep technical investigation; don't claim this capability
    - **"We'll help you rank for zero-day vulnerabilities"** — Exploitative and potentially harmful
    
    ### How Benchmark Brands Handle Threat Disclosure Language:
    
    **Responsible disclosure is always noted:**
    - Wiz: "We immediately disclosed the issue to the Moltbook team, who secured it within hours with our assistance, and all data accessed during the research and fix verification has been deleted."
    - Unit 42: "Unit 42 also found that this threat activity is still ongoing and notified relevant entities."
    
    **Attribution is careful and evidence-based:**
    - CrowdStrike: "CrowdStrike Intelligence assesses that..." (not "we know for certain")
    - Unit 42: "likely motivated by intelligence-collection requirements aligned with the strategic interests of the People's Republic of China" (evidence-based language)
- They attribute to nation-state affiliations ("DPRK-nexus," "China-aligned") with hedging ("suspected," "likely")
- They cite specific observation dates ("first observed July 19," "active since Q2 2025")
    - Avoid: "Chinese hackers attacked..." (too definitive without supporting evidence)
    
    **Vulnerability severity is quantified, not sensationalized:**
    - SentinelOne: "A critical remote code execution (RCE) vulnerability, dubbed 'React2Shell', affecting React Server Components" — they use CVSS-aligned terminology (critical, high, medium)
    - Avoid: "The worst vulnerability we've ever seen" or "hackers can steal everything"
    
    **Measured language throughout:**
    - Use: "observed," "detected," "identified," "confirmed," "indicators suggest," "telemetry shows," "consistent with"
    - Acknowledge uncertainty: "potential risk," "observed in the wild," "recommend patching"
    - Acknowledge limitations: "Based on available telemetry," "In investigated incidents"
    - Never sensationalize: Present data, let readers conclude severity
    
    ### How to Discuss Breaches, Vulnerabilities, and Threat Actors Without Sensationalism:
    
    **DO:**
    - Use clinical language: "We identified a misconfigured database allowing unauthorized access to..."
    - Use specific metrics: "Breakout time decreased from 84 minutes to 48 minutes year-over-year"
    - Quantify impact: "1.5 million API tokens, 35,000 email addresses exposed"
    - Contextualize severity: "This affects organizations using X configuration in Y cloud environment"
    - Note remediation: "The vulnerability was patched within 24 hours of disclosure"
    - Explain the root cause: "The flaw stems from insecure deserialization in the RSC 'Flight' protocol"
    - Provide defensive guidance: "Security teams should prioritize patching CVE-2025-XXXXX"
    - Cite sources: "According to Unit 42's 2025 Incident Response Report"
    
    **DON'T:**
    - Catastrophize: "This vulnerability could destroy your business"
    - Create false urgency: "You have 24 hours to patch or face certain breach"
    - Personalize attackers: "Evil hackers want to steal your secrets"
    - Speculate without evidence: "We believe this is the work of [nation-state] but have no proof"
    - Overgeneralize: "All cloud databases are vulnerable"
    - Name breach victims without public disclosure: "Our client [company] was breached"
    - Overstate threat actor capabilities: "Unstoppable hacking group"
    - Use fear-based language: "Your data is at risk right now," "Hackers are targeting YOU"
    - Omit context: "A major breach occurred" (without explaining what was accessed, how, and whether it was contained)
    
    ---
    
    ## 5. Content Depth Calibration
    
    ### The "Insider Test"
    
    Content passes the insider test when a security professional reads it and thinks "this person understands the threat landscape." Here are the five signals that separate insider content from generic cybersecurity writing:
    
    #### Signal 1: You name adversaries and campaigns, not just "hackers"
    
    **Insider:** "LABYRINTH CHOLLIMA, a DPRK-nexus adversary tracked by CrowdStrike, has split into three distinct operational groups: GOLDEN CHOLLIMA and PRESSURE CHOLLIMA focus on cryptocurrency theft targeting exchanges and DeFi platforms, while core LABYRINTH CHOLLIMA continues espionage campaigns against defense contractors. All three share infrastructure and tooling, indicating centralized resource allocation despite operational independence."
    
    **Outsider:** "North Korean hackers are targeting companies with advanced malware."
    
    #### Signal 2: You reference frameworks and standards without explaining them
    
    **Insider:** "The attack mapped to MITRE ATT&CK techniques T1059.001 (PowerShell) and T1071.001 (web protocols for C2)."
    
    **Outsider:** "The attack used PowerShell, which is a command-line tool, to communicate with the hacker's server."
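When drafts do cite technique IDs, a format check catches typos like "T105" or "T1059.1" before publication. This is a minimal sketch (the regex and function name are our own); it validates the ID *format* only and does not confirm the ID actually exists in the ATT&CK knowledge base:

```python
import re

# ATT&CK technique ID format: "T" + 4 digits, optional ".NNN" sub-technique.
TECHNIQUE_ID = re.compile(r"^T\d{4}(\.\d{3})?$")

def is_valid_technique_format(ref: str) -> bool:
    """Check format only; does not verify the ID exists in ATT&CK."""
    return bool(TECHNIQUE_ID.match(ref))

for ref in ["T1059.001", "T1071.001", "T1059", "T105", "T1059.1"]:
    print(ref, is_valid_technique_format(ref))
```

Pairing this with a lookup against MITRE's published technique list would also catch well-formed but nonexistent IDs.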
    
    #### Signal 3: You explain *how* something works at the architectural level, not just *what* it does
    
    **Insider:** "The flaw stems from insecure deserialization in the RSC 'Flight' protocol, affecting react-server-dom-webpack. A single malicious HTTP request can inject arbitrary code into the Node.js process before any authentication occurs."
    
    **Outsider:** "The vulnerability lets hackers run code on your server."
    
    #### Signal 4: You use temporal and operational framing
    
    **Insider:** "Between June and December 2025, Unit 42 identified multiple intrusions targeting VMware vCenter environments. The adversary maintained persistent access with a median dwell time of 87 days before detection."
    
    **Outsider:** "Hackers recently attacked VMware systems."
    
    #### Signal 5: You cite real-world defensive implications, not just threat descriptions
    
    **Insider:** "Organizations should prioritize patching CVE-2025-55182 immediately. For those unable to patch, implement WAF rules to block requests containing serialized React Flight payloads, though this may break legitimate RSC functionality."
    
    **Outsider:** "Make sure to update your software to stay safe."
    
    #### Extended Insider Test Examples
    
    **Full paragraph that passes:**
    
    "EDR platforms face a detection challenge with modern ransomware: most operators now use living-off-the-land techniques rather than dropping malware files. CrowdStrike's 2025 Global Threat Report shows 79% of detections are malware-free, with threat actors using PowerShell, WMI, and legitimate RMM tools to move laterally and deploy payloads. Content strategies targeting security practitioners should focus on behavior-based detection capabilities — specifically, how vendors correlate process execution with credential access and lateral movement patterns. Ranking for 'ransomware protection' without addressing these technique-specific detections misses what SOC teams actually search for when evaluating solutions: MITRE ATT&CK coverage (T1059, T1021, T1047) with false positive rates in production environments."
    
    **Why it works:**
    - Names specific vendor and report (CrowdStrike 2025 Global Threat Report) with exact stat
    - References specific techniques (PowerShell, WMI, RMM tools, living-off-the-land)
    - Uses table-stakes vocabulary (EDR, malware-free, lateral movement, payloads, SOC)
    - Cites MITRE ATT&CK technique IDs (T1059, T1021, T1047)
    - Connects content strategy to actual search behavior
    - Acknowledges operational reality (false positive rates matter)
    
    **Full paragraph that fails:**
    
    "Cybersecurity threats are constantly evolving, and organizations must stay vigilant to protect their valuable data. Hackers are using increasingly sophisticated techniques to breach networks and steal sensitive information. Companies need robust security solutions and employee training programs to defend against these advanced attacks. By implementing best practices and investing in cutting-edge security technology, businesses can significantly reduce their risk of becoming a cyberattack victim."
    
    **Why it fails:**
    - "Constantly evolving threats" — cliche with no specifics
    - "Increasingly sophisticated" — vague claim without evidence
    - "Robust security solutions" — meaningless descriptor
    - "Cutting-edge security technology" — empty marketing language
    - "Best practices" — which practices?
    - No specific threats, tools, techniques, or frameworks referenced
    - Could apply to any industry, any year
    
    ### Depth Floor
    
    The minimum technical depth a cybersecurity content piece must hit to be credible, organized by content type:
    
    #### For Threat Research / Landscape Posts:
    - Name the vulnerability (CVE ID) or adversary (with attribution basis)
    - Describe the attack vector and affected component (not just "servers were hacked")
    - Explain the exploitation mechanism or TTP
    - Provide detection guidance (even if general: "monitor for unusual PowerShell execution in web server contexts")
    - Include timeline context (when was it discovered, when was it patched, is it being exploited in the wild?)
    
    #### For Thought Leadership Posts:
    - Reference specific industry frameworks (MITRE ATT&CK, NIST, Zero Trust)
    - Use operational metrics (false positive rates, MTTD, MTTR, breakout time)
    - Cite real-world examples (named breaches, vulnerabilities, or attack campaigns)
    - Address the architectural or strategic implication, not just the tactical fix
    - Acknowledge trade-offs (e.g., "agentic security improves speed but requires architectural rethinking")
    
    #### For Buyer Education Posts:
    - Explain the *why* behind security decisions (not just the what)
    - Use vendor-neutral language but cite real product categories (XDR, SOAR, CSPM)
    - Address operational constraints (budget, staffing, alert fatigue)
    - Include decision frameworks (e.g., "When evaluating SIEM vs. XDR, consider...")
    
    #### Depth Floor Example (Meets Minimum):
    
    "Security teams at mid-market companies typically struggle with alert fatigue when deploying SIEM platforms. The challenge isn't the SIEM itself — tools like Splunk, Sentinel, and Chronicle can ingest logs from endpoints, cloud, and network. The problem is that out-of-the-box detection rules generate thousands of false positives daily, overwhelming small SOC teams. According to Unit 42's 2025 Incident Response Report, 36% of incidents start with social engineering, yet many SIEM deployments focus rule tuning on malware signatures rather than credential abuse detection. Effective SIEM strategies prioritize high-fidelity detections for the most common initial access vectors rather than trying to detect everything."
    
    **Why this meets the floor:**
    - Names specific tools (Splunk, Sentinel, Chronicle)
    - Identifies real problem (alert fatigue, false positives)
    - Cites authoritative source (Unit 42 2025 report with specific stat)
    - Uses table-stakes terms (SIEM, SOC, initial access vectors, credential abuse, social engineering)
    - Provides strategic guidance (prioritize high-fidelity detections)
    - Acknowledges tradeoff (can't detect everything)
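One rough way to sanity-check a draft against the depth floor is to count how many table-stakes terms it actually uses. The sketch below is a crude heuristic, not a substitute for editorial judgment (the term sample and scoring function are our own, and naive substring matching can over- or under-count):

```python
# Small sample of the table-stakes vocabulary; a real check would use the full list.
TABLE_STAKES_SAMPLE = {
    "siem", "soc", "edr", "xdr", "lateral movement", "initial access",
    "alert fatigue", "false positive", "threat intelligence", "attack surface",
}

def depth_score(draft: str) -> int:
    """Count distinct table-stakes terms appearing in the draft (case-insensitive)."""
    text = draft.lower()
    return sum(1 for term in TABLE_STAKES_SAMPLE if term in text)

generic = "Cyber threats are evolving and hackers are sophisticated."
calibrated = ("SIEM deployments overwhelm small SOC teams with false positives, "
              "so tune detections for the most common initial access vectors.")
print(depth_score(generic), depth_score(calibrated))
```

A score near zero flags generic writing; a high score does not prove credibility, since keyword stuffing would also pass.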
    
    ### Depth Ceiling
    
    Where your marketing content should stop. You're writing educational content to establish expertise and help buyers evaluate solutions. You're NOT writing:
    
    - **Threat research reports** — Don't publish original malware analysis, reverse engineering, or exploit development
    - **Detection rules or IOCs** — Don't publish IP addresses, file hashes, YARA rules, or Sigma rules unless citing authoritative sources
    - **Incident response playbooks** — Don't provide step-by-step IR procedures or forensic investigation guides
    - **Penetration testing guides** — Don't explain how to exploit vulnerabilities or conduct red team operations
- **Vulnerability disclosure** — Don't disclose new vulnerabilities or publish proof-of-concept exploits; cite already-published CVE details instead
    - **Configuration guides** — Don't write "how to configure SIEM rule 47 in Splunk"
    
    **The boundary test:** If what you're writing would require access to proprietary threat intelligence, incident response data, or vulnerability testing, you've crossed the line. Stick to synthesizing publicly available information and positioning strategy.
    
    **DO instead:**
    - Analyze *how* leading cybersecurity brands position their thought leadership
    - Synthesize trends from multiple threat reports (e.g., "CrowdStrike, Unit 42, and Mandiant all noted increased BEC attacks in Q4 2025")
    - Explain what security buyers should look for when evaluating vendors
    - Write about content strategy *for* cybersecurity companies (e.g., "How to rank for 'SIEM' without overpromising capabilities")
    - Create content that educates prospects about the security problem space (so they're informed buyers)
    
    #### Depth Ceiling Example (Too Far):
    
    "Analysis of the GOLDEN CHOLLIMA C2 infrastructure reveals three tiers of proxied connections: initial beaconing to Cloudflare Workers (SHA-256: a3f5e8...) for domain fronting, followed by redirection to VPS nodes (AS: 12345) in Southeast Asia, terminating at attacker-controlled servers in the DPRK IP space (175.45.176.0/24). We reverse-engineered the beacon protocol and identified a custom XOR obfuscation layer with a 16-byte rolling key. Our YARA rule (available at github.com/our-org/yara-rules) detects the beacon signature in memory. Deploy Sigma rule ID 0023-CHOLLIMA to trigger on network telemetry."
    
    **Why this exceeds the ceiling:**
    - Claims original reverse-engineering (your team didn't do this research)
- SHA-256 hashes, IP ranges, AS numbers — this is threat research, not marketing content
    - Publishes detailed IOCs and detection rules (that's for security vendors, not content teams)
    - Implies access to proprietary threat intelligence (you're synthesizing public info)
    - Goes beyond positioning and into operational guidance that requires validation
    - Appropriate for SentinelLABS or Unit 42, not a content team
    - Could constitute irresponsible disclosure if the details aren't already public
    
    #### Appropriate Depth (Strategic, Not Tactical):
    
    "Security vendors targeting threat intelligence-driven buyers should understand how nation-state adversary content ranks differently than product content. When CrowdStrike publishes its LABYRINTH CHOLLIMA analysis — including infrastructure mapping, C2 protocol analysis, and YARA rules — they're targeting security practitioners searching for campaign-specific IOCs and attribution data. This content ranks for long-tail technical queries ('[adversary name] IOC,' 'DPRK threat actor [technique]') and builds authority. Most cybersecurity vendors can't publish original adversary research at this depth — and that's fine. Instead, focus on threat landscape synthesis: 'What the DPRK Cryptocurrency Theft Campaign Means for Financial Services Security Teams' content that aggregates public research from CrowdStrike, Unit 42, and Mandiant and provides sector-specific defensive guidance. This approach captures related search traffic without requiring a threat research lab."
    
    **Why this is appropriate:**
    - References the adversary research (CrowdStrike's LABYRINTH CHOLLIMA) without claiming it
    - Connects it to content implications (what it ranks for, what queries it captures)
    - Positions you as understanding the domain without claiming capabilities you lack
    - Offers alternative approach for vendors without threat research teams
    - Focuses on content strategy, not malware analysis
    - Stays in its lane: content strategy, not cybersecurity research
    
    ---
    
    ## 6. Content Gap Opportunities
    
    Based on benchmark brand analysis, here are topics cybersecurity companies publish about — and content angles they're NOT covering that your content team should own:
    
    ### What Cybersecurity Brands Publish:
    
    1. **Threat intelligence reports** — Annual/quarterly threat landscape analysis, specific campaign breakdowns, threat actor profiles
    2. **Incident response insights** — Case studies from real engagements, common attack patterns, defensive recommendations
    3. **Product thought leadership** — How EDR/XDR/CNAPP works, detection methodology, platform capabilities
    4. **Technical deep-dives** — Malware analysis, vulnerability research, exploit technique breakdowns
    5. **Industry predictions** — "2026 Cybersecurity Trends," emerging threats, technology shifts
    
    ### Content Gaps Your Team Should Own:
    
    #### 1. "How Security Buyers Actually Search" Content
    
    **Opportunity:** Cybersecurity vendors create threat content but rarely analyze how CISOs and security teams search during vendor evaluation.
    
    **Content angles:**
    - "What CISOs Search When Evaluating EDR vs. XDR Platforms (Keyword Intent Analysis)"
    - "The Search Journey from 'What is MITRE ATT&CK' to '[Vendor] vs [Vendor]' Comparison"
    - "How Security Engineers Search Differently Than CISOs (Same Product, Different Questions)"
    - "Zero-Click to Final Decision: Mapping the CISO Content Consumption Path"
    
    **Why it works:** You have search data they don't; you understand buyer intent signals
    
    #### 2. "Content Strategy for Different Security Personas" Content
    
    **Opportunity:** Security vendors often write for "security professionals" generically, ignoring that CISOs, SOC analysts, and security engineers have different information needs.
    
    **Content angles:**
    - "Why Your EDR Platform Needs 3 Different Content Tracks (CISO vs. Security Engineer vs. Compliance)"
    - "The Keywords CISOs Search vs. What SOC Analysts Search (Same Threat, Different Priority)"
    - "How to Write Technical Security Content That Doesn't Alienate Business Buyers"
    - "Threat Intelligence Content: When to Cite MITRE ATT&CK vs. When to Explain Business Impact"
    
    **Why it works:** Solves a real problem cybersecurity marketers face (multi-persona buying committees)
    
    #### 3. "Cybersecurity Content ROI & Attribution" Content
    
    **Opportunity:** Security marketers struggle to prove content ROI when sales cycles are long and buying committees are large.
    
    **Content angles:**
    - "How to Measure Threat Intelligence Content Performance (Beyond Downloads)"
    - "Attribution Modeling for Enterprise Security Sales: What Keywords Actually Convert"
    - "The CISO Research Process: Why Ranking for 'EDR' Doesn't Drive Deals"
    - "Content Velocity vs. Content Depth: What Works in Cybersecurity SEO"
    
    **Why it works:** Addresses marketing leader pain point with data-driven approach
    
    #### 4. "Security vs. Compliance Content Strategy" Content
    
    **Opportunity:** Many security vendors conflate security and compliance, creating content that satisfies neither audience.
    
    **Content angles:**
    - "Why 'SOC 2 Compliance' Content Ranks Differently Than 'Threat Detection' Content"
    - "Security Practitioners vs. Compliance Officers: Different Searches, Different Intent"
    - "How to Write About Security Frameworks Without Boring Your Technical Audience"
    - "The Keyword Gap: What Auditors Search vs. What Security Teams Search"
    
    **Why it works:** Demonstrates understanding of audience segmentation
    
    #### 5. "Cybersecurity Content Benchmarking & Competitive Analysis" Content
    
    **Opportunity:** Security companies publish competitive feature matrices but rarely analyze competitor content strategies.
    
    **Content angles:**
    - "How CrowdStrike's Threat Intelligence Content Differs from Unit 42's (And What That Means for SEO)"
    - "The Content Depth Spectrum: From Vendor Whitepapers to Pure Threat Research"
    - "Why Some Security Vendors Rank for CVEs and Others Don't (Attribution Strategy Analysis)"
    - "Threat Actor Naming Taxonomies: How Vendors Differentiate Through Attribution"
    
    **Why it works:** Shows you understand the competitive landscape beyond just products
    
    #### 6. "Handling Sensitive Content: Breach Disclosure, Vulnerability Marketing, Threat Intel SEO"
    
    **Opportunity:** The benchmark brands have mature PR and legal teams. Startups don't. Your team can own the guidance around responsible disclosure, SEO for vulnerability names (e.g., "Heartbleed," "Log4Shell"), and how to market threat intelligence without fearmongering.
    
    **Content angles:**
    - "How Named Vulnerabilities (Heartbleed, Log4Shell, React2Shell) Become SEO Opportunities"
    - "Content Guardrails for Security Startups Without Legal Teams"
    - "The Line Between Threat Marketing and Fearmongering (And How to Stay on the Right Side)"
    
    **Why it works:** Fills a real operational gap for security companies without mature content programs
    
    #### 7. "Building Thought Leadership: The Adversary Naming Playbook"
    
    **Opportunity:** CrowdStrike has PANDAs, Unit 42 has Lotus Blossom, Mandiant has APT groups. Your team can deconstruct how these naming conventions work as SEO and brand strategy (they create searchable, ownable terms).
    
    **Content angles:**
    - "How CrowdStrike's Adversary Naming Taxonomy Became an SEO Moat"
    - "Creating Ownable Security Terminology: From Breakout Time to Agentic SOC"
    - "Why Branded Frameworks Win in Security Content (And How to Build Yours)"
    
    **Why it works:** Connects brand strategy to search visibility in a way only a content-focused team would analyze
    
    ---
    
    ## 7. Voice Calibration Examples
    
    ### Generic (Fails the Insider Test)
    
    "Cybersecurity threats are evolving rapidly, and hackers are using more sophisticated methods to breach organizations. To protect your business, it's essential to implement a comprehensive security solution that can detect and respond to cyber attacks in real time. From ransomware to phishing, the threat landscape is more dangerous than ever. Make sure your team is trained on best practices and your systems are up to date."
    
    **Why it fails:**
    - "Evolving rapidly," "more sophisticated" — vague threat language without evidence
    - "Comprehensive security solution" — buzzword-heavy, meaningless descriptor
    - No specificity (which threats? which defenses? which techniques?)
    - "Best practices" — which practices?
    - Patronizing tone ("make sure your team is trained")
    - Could apply to any year, any industry
    - No vocabulary from the table-stakes list
    
    ### Calibrated (Passes the Insider Test)
    
    "DPRK-nexus adversaries have restructured their operations over the past 18 months. CrowdStrike Intelligence now tracks LABYRINTH CHOLLIMA as three distinct groups: GOLDEN CHOLLIMA and PRESSURE CHOLLIMA focus on cryptocurrency theft targeting exchanges and DeFi platforms, while core LABYRINTH CHOLLIMA continues espionage campaigns against defense and logistics sectors. All three share infrastructure and tooling, indicating centralized resource allocation despite operational independence. For defenders, this means detection logic needs to account for overlapping IOCs but divergent targeting profiles. Content strategies for vendors in the DPRK threat space should address both the financial crime and espionage angles — SOC teams search for campaign-specific IOCs, while CISOs search for sector-specific risk assessments."
    
    **Why it works:**
    - Specific adversary naming and attribution (LABYRINTH, GOLDEN, PRESSURE CHOLLIMA)
    - Temporal framing (18 months of evolution)
    - Operational detail (targeting, tooling, infrastructure patterns)
    - Defensive implication (detection logic must adapt for overlapping IOCs)
    - Connects threat intelligence to content strategy and search behavior
    - Assumes reader understands terms like "DPRK-nexus," "IOCs," "DeFi platforms"
    - Uses table-stakes vocabulary naturally
    
    ### Over-Indexed (Too Deep — You're Writing Marketing Content, Not Threat Research)
    
    "Analysis of the GOLDEN CHOLLIMA C2 infrastructure reveals three tiers of proxied connections: initial beaconing to Cloudflare Workers (SHA-256: a3f5e8...) for domain fronting, followed by redirection to VPS nodes (AS: 12345) in Southeast Asia, terminating at attacker-controlled servers in the DPRK IP space (175.45.176.0/24). We reverse-engineered the beacon protocol and identified a custom XOR obfuscation layer with a 16-byte rolling key. Our YARA rule (available at github.com/our-org/yara-rules) detects the beacon signature in memory. Deploy Sigma rule ID 0023-CHOLLIMA to trigger on network telemetry."
    
    **Why this goes too far:**
    - Claims original reverse-engineering (your team didn't do this research)
    - SHA-256 hashes, AS numbers, C2 protocol internals — this is threat research, not content strategy
    - Publishes detailed IOCs and detection rules (that's for security vendors, not content teams)
    - Implies access to proprietary threat intelligence (you're synthesizing public info)
    - Goes beyond positioning and into operational guidance that requires validation
    - Appropriate for SentinelLABS or Unit 42, not a content team
    
    ### What You Should Write Instead
    
    "Security vendors targeting threat intelligence-driven buyers should understand how nation-state adversary content ranks differently than product content. When CrowdStrike publishes its LABYRINTH CHOLLIMA analysis — including infrastructure mapping, C2 protocol analysis, and YARA rules — they're targeting security practitioners searching for campaign-specific IOCs and attribution data. This content ranks for long-tail technical queries ('[adversary name] IOC,' 'DPRK threat actor [technique]') and builds authority. Most cybersecurity vendors can't publish original adversary research at this depth — and that's fine. Instead, focus on threat landscape synthesis: 'What the DPRK Cryptocurrency Theft Campaign Means for Financial Services Security Teams' content that aggregates public research from CrowdStrike, Unit 42, and Mandiant and provides sector-specific defensive guidance. This approach captures related search traffic without requiring a threat research lab."
    
    **Why this is better:**
    - References the adversary research (CrowdStrike's LABYRINTH CHOLLIMA) without claiming it
    - Connects it to content implications (what it ranks for, what queries it captures)
    - Positions you as understanding the domain without claiming capabilities you lack
    - Offers alternative approach for vendors without threat research teams
    - Focuses on content strategy, not malware analysis
    - Stays in its lane: content strategy, not cybersecurity research
    
    ---
    
    ## 8. Writing Checklist: Cybersecurity Content Quality Control
    
    Before publishing any cybersecurity content, verify it passes these checks:
    
    ### Vocabulary Audit
    - [ ] Uses 10-15+ terms from the "Table-Stakes" vocabulary list naturally
    - [ ] Avoids all terms from the "Terms to Avoid" list
    - [ ] When using precision terms, provides minimal context if needed (but doesn't over-explain)
    - [ ] References at least one industry framework (MITRE ATT&CK, NIST, CIS) if relevant
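The vocabulary audit above lends itself to automation. As a rough sketch — the term sets below are tiny placeholders, not the actual vocabulary lists from this agent, and matching is naive substring search — a draft can be pre-screened before human review:

```python
# Placeholder samples only — substitute the full "Table-Stakes" and
# "Terms to Avoid" lists from this agent's vocabulary section.
TABLE_STAKES = {"ioc", "c2", "lateral movement", "mitre att&ck", "edr",
                "zero trust", "ransomware", "phishing", "siem", "soc"}
TERMS_TO_AVOID = {"hacker-proof", "unhackable", "military-grade encryption"}

def vocabulary_audit(draft: str, min_table_stakes: int = 10) -> dict:
    """Count table-stakes terms used and flag forbidden terms in a draft.

    Note: plain substring matching will produce false positives
    ("soc" matches "society"); a production check would use word
    boundaries. This is a screening sketch, not a final gate.
    """
    text = draft.lower()
    used = {t for t in TABLE_STAKES if t in text}
    flagged = {t for t in TERMS_TO_AVOID if t in text}
    return {
        "table_stakes_used": sorted(used),
        "meets_floor": len(used) >= min_table_stakes,
        "flagged_terms": sorted(flagged),
        "passes": len(used) >= min_table_stakes and not flagged,
    }
```

A draft that leans on forbidden claims fails even if it clears the table-stakes floor, which mirrors how the checklist is meant to be applied: fluency does not excuse outsider language.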
    
    ### Depth Calibration
    - [ ] Meets depth floor for the content type (threat research, thought leadership, or buyer education)
    - [ ] Stays below depth ceiling: Doesn't provide exploit code, malware analysis, detection rules, or IR playbooks
    - [ ] Passes insider test: A security professional would recognize this as written by someone who understands the threat landscape
    - [ ] Passes the boundary test: Nothing in the content requires proprietary threat intelligence or vulnerability testing
    
    ### Threat Disclosure Responsibility
    - [ ] Uses measured language ("observed," "detected," "consistent with") not panic language
    - [ ] Cites sources for threat intelligence claims
    - [ ] Doesn't publish vulnerability details or exploit code that could be weaponized
    - [ ] Acknowledges uncertainty where appropriate ("suspected," "likely," "indicators suggest")
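The disclosure checks can be screened the same way. In this sketch the word lists are illustrative stand-ins (any real deployment would tune them), checking that a draft hedges its attribution language and avoids panic phrasing:

```python
# Illustrative word lists — tune to your editorial standards.
PANIC_LANGUAGE = {"devastating", "catastrophic", "unstoppable", "apocalyptic"}
MEASURED_LANGUAGE = {"observed", "detected", "consistent with",
                     "suspected", "likely", "indicators suggest"}

def disclosure_tone_check(draft: str) -> dict:
    """Flag panic phrasing and confirm hedged attribution language appears."""
    text = draft.lower()
    panic = sorted(p for p in PANIC_LANGUAGE if p in text)
    measured = sorted(m for m in MEASURED_LANGUAGE if m in text)
    return {"panic_terms": panic,
            "measured_terms": measured,
            "passes": not panic and bool(measured)}
```

Passing requires both conditions: no panic vocabulary and at least one measured-language marker, which matches the checklist's intent that uncertainty be acknowledged rather than merely not sensationalized.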
    
    ### Buyer Alignment
    - [ ] Addresses pain points the target persona actually experiences
    - [ ] Differentiates content for different personas (CISO vs. SOC analyst vs. developer)
    - [ ] Answers questions buyers ask during evaluation, not just feature descriptions
    - [ ] Acknowledges budget constraints and competing priorities
    
    ### Compliance Guardrails
    - [ ] Makes no unverifiable efficacy claims ("stops all ransomware")
    - [ ] Doesn't claim threat research capability you don't have
    - [ ] Doesn't promise audit outcomes or regulatory compliance
    
    ### Brand Voice
    - [ ] Formal yet accessible (professional authority, not academic)
    - [ ] Uses data/metrics to support claims (breakout time, percentage increases)
    - [ ] Measured urgency through evidence, not sensationalism
    - [ ] Acknowledges complexity and tradeoffs (not oversimplifying)
    
    ### Structural Standards
    - [ ] 3-5 sentence paragraphs
    - [ ] Mix of declarative and compound sentences
    - [ ] Subheadings every 150-250 words
    - [ ] At least one concrete example or case study per major section
    - [ ] Statistical evidence supports narrative claims
    
    ---
    
    ## 9. Quick Reference
    
    ### The Three Tests Every Piece Must Pass
    
    **1. Insider Test:** Could a CISO or SOC analyst read this and think "this person understands the threat landscape"? If the content could have been written by someone who spent 10 minutes Googling cybersecurity, it fails.
    
    **2. Depth Calibration Test:** Is the content deep enough to be credible but not so deep that you're pretending to be a threat researcher? Use table-stakes vocabulary from the vocabulary list naturally. Reference vertical-specific challenges (alert fatigue, MITRE ATT&CK coverage, false positive rates, buying committee structure, board reporting) with specificity, not generality. But don't write malware analysis, detection rules, or IR playbooks.
    
    **3. Anti-Template Test:** Does this page read differently from content about other verticals? Could you swap "cybersecurity" for "fintech" or "healthcare" and have the content still make sense? If yes, it fails. Every piece should have at least 3 insights, examples, or data points that are ONLY relevant to cybersecurity.
    
    ### The Insider Voice is Earned, Not Faked
    
    The benchmark brands write with authority because they *are* authorities — they run incident response, discover vulnerabilities, track adversaries, and build security platforms. You're not trying to fake that expertise. You're demonstrating that you *understand* how they write, what their buyers expect, and how to position content in this vertical.
    
    Your value proposition is: "We know how to write for cybersecurity buyers at the depth they expect, without oversimplifying or overpromising. We understand the vocabulary, the frameworks, and the buyer journey."
    
    That's a credible, defensible position. Own it.
    
    **Red flags that you're off-brand:**
    - Content could work for any tech vertical (not cybersecurity-specific)
    - Overuse of FUD and urgency without actionable guidance
    - No technical vocabulary from the table-stakes list
    - Sensationalizing threats without providing defensive recommendations
    - Claiming threat research or security testing capabilities
    - Treating all security buyers as identical (CISO is not the same as SOC analyst is not the same as developer)
    
    **Success signals:**
    - Security professionals share your content in industry Slack channels
    - Content ranks for specific threat technique or framework searches (MITRE ATT&CK IDs, CVEs)
    - Security buyers reference your content when comparing vendors
    - Sales team says "this explains exactly what our prospects worry about"
    - Threat intelligence analysts cite your content synthesis in briefings

    ## 10. Usage

    Once installed, open your project in Claude Code and ask:

    "Write a blog post about cloud security posture management for CISOs. Use the cybersecurity content intelligence rules."

    Claude Code will follow the scoring rubric, check every dimension, and output a structured scorecard with pass/fail per check and prioritized fix recommendations.
