Study Guide: Alex for Software Developers

Your personal reference for applying Alex to software development work. Ready-to-run prompts, core use cases, and a practice progression for developers.


What This Guide Is Not

This is not a habit formation guide (see Self-Study Guide for that). This is a domain use-case library — the specific things Alex can do in your development work, and how to do them well.


Core Principle for Developers

Developers are often already the most effective users of AI tools — but they tend to use them for code generation while leaving higher-leverage uses unexplored. Alex’s highest value in development work is in architecture decisions, documentation, code review, and debugging reasoning — the hard work that isn’t just writing code.


The Five Use Cases

1. Architecture Decision Records (ADRs)

When to use: Before committing to a significant technical decision. ADRs make implicit decisions explicit, create institutional memory, and force clearer thinking.

Prompt pattern:

I'm writing an ADR for a decision in [project/system].
The decision: [describe it].
Context: [what problem led to this decision, system constraints, team constraints].

Structure this ADR with:
- Status (Proposed)
- Context
- Decision
- Options Considered (at least 3)
- Rationale
- Consequences (positive and negative)
- Reversal Plan (if applicable)

Follow-up prompts:

What are the non-obvious long-term consequences of this choice I haven't listed?
Steelman the rejected option [X]. What's the strongest case for choosing it instead?
What should we monitor post-implementation to know if this decision was right or wrong?

2. Code Review and Technical Critique

When to use: Reviewing your own code before submitting for review, or thinking through someone else’s approach.

Prompt pattern:

Review this code from the perspective of [correctness / security / performance / maintainability].
Focus on: [specific concern].

[paste code or describe the pattern]

What are the three most significant issues?

Follow-up prompts:

What edge case am I most likely not handling?
What would a security-focused reviewer flag here?
Refactor this for [readability / testability / reduced complexity] without changing behavior.
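To make the last follow-up concrete, here is a minimal before/after sketch in Python. The function names and discount rule are invented for illustration; the point is the pattern Alex will typically suggest: extract the pure logic from the I/O so behavior stays identical but becomes unit-testable.

```python
# Before (sketch): calculation tangled with I/O -- hard to unit test.
# def report_discount(order_path):
#     with open(order_path) as f:
#         total = float(f.read())
#     if total > 100:
#         print(total * 0.9)
#     else:
#         print(total)

# After: the pure rule is extracted; I/O stays at the edges.
def apply_discount(total: float) -> float:
    """Return the discounted total: 10% off orders over 100."""
    return total * 0.9 if total > 100 else total

def report_discount(order_path: str) -> None:
    with open(order_path) as f:
        total = float(f.read())
    print(apply_discount(total))
```

Same behavior, but `apply_discount` can now be tested directly with plain values, no files or captured stdout needed.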

3. Debugging Reasoning

When to use: Stuck on a bug, especially one involving distributed systems, async behavior, or non-obvious state.

Prompt pattern:

I'm debugging [system / component / behavior].
What I observe: [describe the symptom precisely].
What I've already ruled out: [list what you've checked].
Relevant context: [stack, language, relevant dependencies, recent changes].

Walk me through the most likely root causes in order of probability.

Follow-up prompts:

What would you check first to distinguish between hypothesis A and hypothesis B?
What could cause this intermittently but not consistently?

4. Technical Documentation

When to use: Writing READMEs, API documentation, runbooks, onboarding guides, or system documentation.

Prompt pattern:

I'm writing a [README / runbook / API doc / onboarding guide] for [system or project].
Audience: [developers new to this codebase / ops team / external API consumers].
Key things they need to understand: [list 3-5].

Structure a documentation template with the right sections and a sentence describing what goes in each.

Follow-up prompts:

What does this documentation assume the reader knows? Is that assumption safe?
Write section [X] — here's the content: [describe or paste raw notes].

5. System Design Exploration

When to use: Early in a project, exploring design options before committing; or when a current design has problems you’re trying to solve.

Prompt pattern:

I'm designing [system or feature].
Requirements: [functional / non-functional — list them].
Constraints: [latency / cost / team size / existing stack / compliance].

Propose three different architectural approaches with their tradeoffs.
Evaluate each against my requirements and constraints.

Follow-up prompts:

Where does approach 2 break down at scale?
What would a team of 2 vs a team of 10 need differently from this architecture?
Where are the observability and debugging pain points in each approach?
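One lightweight way to keep the "evaluate each against my requirements" step honest is a weighted scoring matrix. A minimal Python sketch follows; the approaches, criteria, weights, and scores are all invented placeholders you would replace with your own.

```python
# Weighted tradeoff matrix: score each approach (0-5) per criterion,
# weight criteria by importance, and rank by weighted total.
weights = {"latency": 3, "cost": 2, "operability": 2}

scores = {
    "monolith":      {"latency": 4, "cost": 5, "operability": 4},
    "microservices": {"latency": 3, "cost": 2, "operability": 2},
    "serverless":    {"latency": 2, "cost": 4, "operability": 3},
}

def weighted_total(approach_scores):
    """Sum of (criterion weight x approach score) across criteria."""
    return sum(weights[c] * s for c, s in approach_scores.items())

ranked = sorted(scores, key=lambda a: weighted_total(scores[a]), reverse=True)
```

The exact numbers matter less than the conversation they force: agreeing on the weights is usually where the real design disagreement surfaces.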

Your First Week Back: Practice Plan

Day 1: Write an ADR for a decision you made recently that was never formally documented (25 min)
Day 2: Run a Code Review prompt on code you’re about to submit (15 min)
Day 3: Use System Design Exploration on a feature you’re planning (20 min)
Day 4: Use the Debugging Reasoning pattern on something you’re currently stuck on (20 min)
Day 5: Save three prompts that worked well using /saveinsight (10 min)

Month 2–3: Advanced Applications

Incident Post-Mortem Documentation

After any significant incident, use Alex to structure the post-mortem before the retrospective:

I'm writing a post-mortem for [incident description].
Timeline: [describe key events].
Help me structure the post-mortem covering: timeline, root cause, contributing factors,
impact, what we did right, and action items. Blameless framing throughout.

Technical Debt Tracking

When you notice debt accumulating, document it with Alex for future visibility:

/saveinsight title="Tech debt: [area]" insight="[Describe the debt, why it exists, the cost of leaving it, and a rough remediation approach]" tags="tech-debt,[system],[language]"

Onboarding New Team Members

When someone joins the team:

I'm onboarding a new developer to [system].
They have background in [their stack].
What are the 5 most important things they need to understand about this system that aren't obvious from reading the code?

Continue your practice: Self-Study Guide — the 30/60/90-day habit guide.