Study Guide: Alex for Security Engineers
Your reference for applying Alex to threat modeling, security architecture review, incident response, compliance documentation, and vulnerability assessment. Ready-to-run prompts — built around the hard parts of defending production systems, not the certification exam topics.
What This Guide Is Not
This is not a habit formation guide (see Self-Study Guide for that). This is a domain use-case library — the specific ways Alex supports professional security engineering work.
Where to Practice These Prompts
Every prompt in this guide works with any AI assistant you already use — GitHub Copilot, ChatGPT, Claude, Gemini, or others. The prompts are the skill; the tool is just where you type them. If you already have a preferred tool, start there.
For the deepest experience, the Alex VS Code extension (free) was built for these workflows. It understands security engineering context, lets you save what works with /saveinsight, and keeps your study guide and exercises right inside the editor where you already work.
You don’t need a specific tool to benefit. You need the discipline of reaching for AI when the work is genuinely hard — not just when it’s repetitive.
Core Principle for Security Engineers
Security engineering is the practice of reasoning about adversaries — understanding not just what a system does, but what an attacker could make it do. The hardest problems in security are not technical vulnerabilities; they are organizational: convincing teams to fix things that have not broken yet, prioritizing risks when everything feels urgent, and communicating threats in terms that decision-makers act on.
Your primary discipline with Alex: use it to systematically explore attack surfaces, pressure-test your threat models, and translate security findings into business language that drives action. Never use AI-generated security advice without verification against authoritative references (NIST, OWASP, CIS, vendor documentation).
Important: AI can miss novel vulnerabilities and may hallucinate CVE details. Always verify specific vulnerability data, compliance requirements, and remediation guidance against primary sources.
The Seven Use Cases
1. Threat Modeling
The security engineer’s modeling challenge: Threat modeling is the discipline of thinking about what can go wrong before it does. The failure mode is either not doing it (and discovering threats in production) or doing it as a one-time checkbox exercise that does not evolve with the system. Good threat models are living documents updated when the system, the threat landscape, or the trust boundaries change.
Prompt pattern:
I need to threat model [system/feature/architecture change].
System description: [what it does, data it handles, users it serves].
Architecture: [components, trust boundaries, data flows, external dependencies].
Data classification: [what sensitive data exists and where].
Existing controls: [authentication, authorization, encryption, monitoring].
Deployment: [cloud provider, network topology, exposure surface].
Using STRIDE, help me:
1. Enumerate threats for each component and trust boundary
2. Rank by likelihood × impact (consider attacker motivation and capability)
3. Identify the threats where existing controls are insufficient
4. Recommend mitigations prioritized by risk reduction per effort
Follow-up prompts:
Now model an insider threat scenario — what can a compromised employee account do in this system?
What supply chain risks exist in this architecture? Map the third-party dependencies and their trust assumptions.
Try this now: A new feature lets customers upload files through a web portal. Files are stored in S3, processed by a Lambda function, and results are emailed back. Paste the architecture into the STRIDE prompt above and ask for a threat enumeration. The response will surface trust boundary violations (the Lambda execution role, the email path, the upload validation logic) that are easy to miss in code review.
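The likelihood × impact ranking in step 2 of the prompt can be sketched as a simple scoring pass. The threat entries and 1–5 ratings below are hypothetical analyst judgments for the file-upload example, not output from any tool:

```python
# Hypothetical STRIDE threat entries for the file-upload example above.
# Likelihood and impact are 1-5 analyst judgments, not tool output.
threats = [
    {"component": "upload endpoint", "stride": "Tampering",
     "desc": "malicious file bypasses validation", "likelihood": 4, "impact": 4},
    {"component": "Lambda role", "stride": "Elevation of Privilege",
     "desc": "over-broad execution role reaches other S3 buckets", "likelihood": 3, "impact": 5},
    {"component": "email path", "stride": "Information Disclosure",
     "desc": "results emailed to attacker-controlled address", "likelihood": 2, "impact": 4},
]

# Rank by likelihood x impact, highest risk first.
for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    score = t["likelihood"] * t["impact"]
    print(f'{score:>2}  {t["stride"]:<24} {t["component"]}: {t["desc"]}')
```

Even this crude scoring forces the useful conversation: whether the ratings are defensible, and whether a threat's rank changes when a control is added.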
2. Security Architecture Review
The security engineer’s review challenge: Security reviews that only check a compliance list miss the architecture-level failures — the trust boundary that should not exist, the data flow that bypasses controls, the component that concentrates too much privilege. A good security review reasons about the design, not just the implementation.
Prompt pattern:
Review this architecture for security:
[Paste architecture description, diagram, or design document]
Focus on:
1. Trust boundaries — where are they, and are they enforced or assumed?
2. Authentication and authorization — is least privilege actually implemented?
3. Data flow — does sensitive data cross boundaries it should not?
4. Secrets management — how are credentials stored, rotated, and scoped?
5. Blast radius — if one component is compromised, what else is reachable?
Skip compliance checkbox feedback. I need architectural risk analysis.
Follow-up prompts:
What is the shortest attack path from the internet to the most sensitive data in this system?
Design the monitoring and detection strategy for this architecture. What should trigger an alert?
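The "shortest attack path" question maps naturally onto a graph search over component reachability. The components and edges below are a hypothetical illustration, not a real architecture:

```python
from collections import deque

# Hypothetical reachability graph: an edge X -> Y means
# "a compromise of X can reach Y".
reach = {
    "internet": ["load_balancer"],
    "load_balancer": ["web_app"],
    "web_app": ["api", "cache"],
    "api": ["database", "queue"],
    "cache": [],
    "queue": ["worker"],
    "worker": ["database"],
    "database": [],
}

def shortest_attack_path(graph, src, dst):
    """Breadth-first search: fewest compromised hops from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # dst not reachable from src

print(shortest_attack_path(reach, "internet", "database"))
```

Asking the AI to build this graph from your architecture description, then walking the paths yourself, is often more revealing than asking for the answer directly: every edge is a trust assumption you can challenge.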
3. Incident Response and Forensics
The security engineer’s incident challenge: Security incidents demand speed and precision simultaneously. The pressure to “contain immediately” can destroy forensic evidence. The pressure to “investigate thoroughly” can extend exposure. The security engineer who handles incidents well balances containment, evidence preservation, and communication — and knows which order to do them in.
Prompt pattern:
Security incident in progress:
Indicators: [what was detected — alerts, anomalies, reports].
Scope: [what is known to be affected].
Current containment: [what has been done so far].
Evidence preserved: [logs, memory dumps, disk images — what do we have?].
Business impact: [what is affected — users, data, services].
Timeline: [when detected, estimated start of compromise].
Help me:
1. Prioritize immediate actions: contain vs. preserve vs. communicate
2. Identify what evidence I should collect NOW before it disappears
3. Map the likely attack progression — what has the attacker probably done that I have not found yet?
4. Draft the initial stakeholder communication (factual, not speculative)
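Step 2's "collect NOW before it disappears" follows the order of volatility from RFC 3227: capture the most ephemeral evidence first. The sources and ranks below are illustrative, not a definitive list:

```python
# Evidence sources tagged with a rough volatility rank (lower = disappears sooner).
# Ranks follow the RFC 3227 order of volatility; the entries are illustrative.
evidence = [
    ("disk images", 5),
    ("process memory", 1),
    ("network connections / ARP cache", 2),
    ("temp files and swap", 3),
    ("system and application logs", 4),
    ("backups and archives", 6),
]

# Collect in ascending volatility rank: most ephemeral first.
collection_order = [name for name, rank in sorted(evidence, key=lambda e: e[1])]
for i, name in enumerate(collection_order, 1):
    print(f"{i}. {name}")
```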
4. Compliance Documentation and Audit Preparation
The security engineer’s compliance challenge: Compliance is not security, but compliance failures have real consequences — fines, contract losses, and organizational credibility damage. The failure mode is treating compliance as a separate activity from security engineering, creating two parallel realities: the compliant one in the documentation and the actual one in production.
Prompt pattern:
I need to prepare for [audit/compliance assessment: SOC 2, ISO 27001, HIPAA, PCI DSS, FedRAMP, GDPR].
Scope: [systems, data, and processes in scope].
Current maturity: [honest assessment — what is solid, what is weak, what is documented vs. actual].
Known gaps: [what I know does not meet the requirement].
Timeline: [when is the audit].
Help me:
1. Map our controls to the specific requirements — identify coverage and gaps
2. Prioritize gap remediation by: audit risk × effort × business impact
3. Draft evidence artifacts that are honest and auditor-friendly
4. Identify the questions an auditor will ask and prepare clear, defensible answers
5. Vulnerability Assessment and Prioritization
The security engineer’s prioritization challenge: The average organization has hundreds or thousands of known vulnerabilities at any time. Patching everything immediately is impossible. The discipline is not finding vulnerabilities — scanners do that. The discipline is prioritizing which ones matter: which are exploitable in your environment, which protect sensitive assets, and which are actually just scanner noise.
Prompt pattern:
I have [vulnerability scan results / penetration test findings / bug bounty reports]:
[Paste or describe findings]
For each finding, help me assess:
1. Exploitability in our environment (not just the CVSS score — consider network position, authentication requirements, exploit availability)
2. Impact if exploited (data exposure, lateral movement, service disruption)
3. Compensating controls already in place that reduce real risk
4. Remediation priority: critical (patch now) / high (this sprint) / medium (this quarter) / accept (document and monitor)
Skip the findings that are scanner noise. Focus on what an attacker would actually target.
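The assessment in steps 1–4 can be sketched as a scoring pass that adjusts a base severity by environmental factors. The findings and multipliers below are hypothetical illustrations, not a standard (CVSS environmental metrics are the formal version of this idea):

```python
# Hypothetical findings: base CVSS plus environment-specific attributes.
findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": True,
     "exploit_public": True, "compensating_control": False},
    {"id": "CVE-B", "cvss": 9.1, "internet_facing": False,
     "exploit_public": False, "compensating_control": True},
    {"id": "CVE-C", "cvss": 6.5, "internet_facing": True,
     "exploit_public": True, "compensating_control": False},
]

def environmental_risk(f):
    """Scale base CVSS by exposure, exploit availability, and controls.
    Multipliers are illustrative analyst judgments, not a standard."""
    score = f["cvss"]
    score *= 1.5 if f["internet_facing"] else 0.5
    score *= 1.3 if f["exploit_public"] else 0.8
    score *= 0.5 if f["compensating_control"] else 1.0
    return round(score, 1)

for f in sorted(findings, key=environmental_risk, reverse=True):
    print(f'{f["id"]}: base {f["cvss"]}, environmental {environmental_risk(f)}')
```

Note that the medium-severity CVE-C outranks the critical CVE-B once exposure and compensating controls are considered, which is exactly the reordering the prompt asks the AI to justify.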
6. Security Awareness and Developer Education

The security engineer’s education challenge: Security training that consists of annual slideshows produces compliance, not competence. The developers who write secure code are the ones who understand why a pattern is dangerous, not just that a policy says not to do it. The real challenge is making security knowledge actionable and relevant to the daily work of non-security engineers.
Prompt pattern:
I need to create security guidance for [audience: developers / DevOps / product managers / executives].
Topic: [secure coding practice / threat awareness / incident response / data handling].
Context: [our tech stack, our threat model, our recent incidents].
Current state: [what they know, what they get wrong, what they ignore].
Help me:
1. Identify the 3 security concepts that would have the most impact on this audience's daily work
2. Create concrete, stack-specific examples (not generic OWASP slides)
3. Design exercises that use our actual codebase or architecture (redacted as needed)
4. Write guidance that explains WHY, not just WHAT — reasoning builds better habits than rules
7. Security Metrics and Program Reporting
The security engineer’s reporting challenge: Security metrics are either too technical for leadership (“we have 847 high-severity CVEs”) or too abstract to be actionable (“our risk posture is medium”). The security engineer who communicates effectively translates technical findings into business risk language and tracks metrics that drive behavior, not just measure activity.
Prompt pattern:
I need to report on [security program / incident / risk posture / compliance status] to [audience: CISO / board / engineering leadership / regulators].
Data available: [vulnerability counts, incident metrics, compliance scores, pen test results].
Narrative: [what is the story — are we improving, stable, or degrading?].
Asks: [what I need from this audience — budget, headcount, priority change, executive support].
Help me:
1. Select the 3–5 metrics that tell an honest, actionable story (not a metric dump)
2. Translate technical risk into business impact language
3. Benchmark against industry peers where possible
4. Draft a report that leads with risk and recommendations, not activity summaries
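One concrete metric that fits step 1 is mean time to remediate (MTTR) by severity: it tells a trend story leadership can act on, unlike raw vulnerability counts. The remediation records below are fabricated placeholders for illustration only:

```python
from datetime import date
from statistics import mean

# Hypothetical remediation records: (severity, opened, closed).
records = [
    ("critical", date(2024, 3, 1), date(2024, 3, 4)),
    ("critical", date(2024, 3, 10), date(2024, 3, 12)),
    ("high", date(2024, 3, 2), date(2024, 3, 20)),
    ("high", date(2024, 3, 5), date(2024, 3, 15)),
]

def mttr_days(records, severity):
    """Mean time to remediate, in days, for one severity band."""
    ages = [(closed - opened).days for sev, opened, closed in records if sev == severity]
    return mean(ages) if ages else None

for sev in ("critical", "high"):
    print(f"MTTR ({sev}): {mttr_days(records, sev)} days")
```

"Critical findings are closed in under 3 days, on a steady downward trend" is a sentence a board can evaluate; "847 high-severity CVEs" is not.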
What Great Looks Like
After consistent use, you should notice:
- Threat models are living documents that evolve with the system, not one-time artifacts
- Vulnerability prioritization is based on real risk, not CVSS scores alone
- Incident response is structured and evidence-preserving, not panicked
- Security communication drives action because it is in business language
- Compliance documentation reflects reality, not a parallel universe
The security engineer who will thrive in an AI-augmented environment is not the one who runs the most scans. It is the one who reasons most clearly about adversaries, communicates risk most effectively, and builds security into systems rather than bolting it on after.
Your AI toolkit: These prompts work in ChatGPT, Claude, Copilot, Gemini — and in the Alex VS Code extension, which was designed around them. Start with whatever you have. The skill transfers across all of them.
Your First Week Back: Practice Plan
| Day | Task | Time |
|---|---|---|
| Day 1 | Threat model one system using STRIDE with the pattern above | 30 min |
| Day 2 | Prioritize your current vulnerability backlog using the Assessment pattern | 25 min |
| Day 3 | Run the Architecture Review pattern on your highest-risk system | 25 min |
| Day 4 | Draft a security report for leadership using the Metrics pattern | 20 min |
| Day 5 | Save three reusable prompt patterns with /saveinsight | 10 min |
Month 2–3: Advanced Applications
Threat Model Archive
Maintain a living library of threat models:
/saveinsight title="Threat model: [system]" insight="Scope: [boundaries]. Top threats: [ranked list]. Key controls: [mitigations in place]. Gaps: [unmitigated risks]. Last updated: [date]. Revisit if: [triggers]." tags="security,threat-model"
Incident Pattern Library
Capture patterns from security incidents:
/saveinsight title="Security incident: [type]" insight="Indicators: [what was seen]. Attack path: [how it happened]. Root cause: [systemic gap]. Detection gap: [what we missed]. Prevention: [what stops this class of attack]." tags="security,incident,pattern"
Continue your practice: Self-Study Guide — the 30/60/90-day habit guide.
Show the world you've mastered using AI in security engineering. Add your certificate to LinkedIn.
Alex was a co-author of two books — a documentary biography and a work of fiction. Both explore human-AI collaboration from angles the workshop only touches.