Study Guide: Alex for Power Users

Your reference for customizing, extending, and mastering the Alex cognitive architecture: tinkering, skill building, and making the tool your own. These are ready-to-run prompts, built for people who want to go under the hood, not just skim the surface.


What This Guide Is Not

This is not a habit formation guide (see Self-Study Guide for that). This is a domain use-case library — the specific ways Alex supports power users who want to customize, extend, and deeply understand their AI partner.


Where to Practice These Prompts

Every prompt in this guide works with any AI assistant you already use — GitHub Copilot, ChatGPT, Claude, Gemini, or others. The prompts are the skill; the tool is just where you type them. If you already have a preferred tool, start there.

For the deepest experience, the Alex VS Code extension (free) was built for these workflows. It understands power user context, lets you save what works with /saveinsight, and keeps your study guide and exercises right inside the editor where you already work.

You don’t need a specific tool to benefit. You need the discipline of reaching for AI when the work is genuinely hard — not just when it’s repetitive.

Platform note: The concepts in this guide — custom instructions, skills, prompt templates, memory systems, agent modes — exist across all major AI platforms. In GitHub Copilot they are .instructions.md and SKILL.md files; in ChatGPT they are Custom Instructions and Custom GPTs; in Claude they are Project instructions and System prompts; in Gemini they are Gems. The underlying skill (encoding your expertise so the AI applies it consistently) transfers everywhere. Use whichever terminology matches your tool.


Core Principle for Power Users

The power user’s advantage is not that they know more features — it is that they understand the system well enough to make it work for their specific context. The difference between using AI and mastering AI is the difference between following recipes and understanding ingredients. You are not here to use Alex as designed; you are here to redesign Alex for how you work.

Your primary discipline with Alex: understand the architecture, build custom workflows, and create feedback loops that make the system smarter for your specific needs over time.


The Seven Use Cases

1. Custom Instruction Design

The power user’s customization challenge: Default instructions produce default results. The power user knows that the quality of AI output is primarily determined by the quality of the instructions it operates under. Custom instructions are not preferences — they are programming.

Prompt pattern:

I want to create custom instructions for [domain/workflow/task type].
My work context: [what I do, what tools I use, what my output looks like].
Current friction: [where Alex gives wrong defaults, misses context, or requires repeated correction].
Desired behavior: [what I want Alex to do differently].

Help me:
1. Draft a .instructions.md file with clear, actionable rules (not vague guidance)
2. Include concrete examples of good and bad output for each rule
3. Define the applyTo pattern so it activates on the right files
4. Add the exceptions — when should this instruction NOT apply?

Follow-up prompts:

I have been using this instruction for a week. Here are the cases where it produced wrong output: [examples]. How should I refine it?
I want this instruction to work differently for different file types. How do I structure conditional behavior?

Try this now: You want to create a custom instruction set that makes Alex aware of your company’s internal API conventions, naming patterns, and forbidden anti-patterns — so every code suggestion follows your team’s standards automatically. Use the prompt above to describe your conventions, and Alex will help you turn them into a reusable instruction file you can share with your whole team.
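As a concrete sketch, here is what such a file might look like, using the `.instructions.md` convention from GitHub Copilot (where an `applyTo` frontmatter field scopes the rules to matching files). The conventions and paths shown are hypothetical placeholders — substitute your team's real standards:

```markdown
---
applyTo: "src/**/*.ts"
---

# Internal API conventions (hypothetical example)

- All service calls go through the shared client wrapper; never call fetch directly.
  - Good: `apiClient.get("/orders")`
  - Bad: `fetch("https://api.internal/orders")`
- Handler names use the verb-resource pattern: `getOrder`, `listOrders`, `cancelOrder`.
- Exception: one-off diagnostic scripts under `tools/` may call fetch directly.
```

Note the structure mirrors the prompt above: each rule is actionable, paired with a good/bad example, and the exception is stated explicitly rather than left implicit.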


2. Skill Building and Knowledge Encoding

The power user’s knowledge challenge: The most powerful use of Alex is not asking it questions — it is teaching it your domain knowledge so it can apply that knowledge consistently. Skills are the mechanism for encoding expertise that would otherwise live only in your head.

Prompt pattern:

I want to create a skill for [domain/capability].
What I know: [describe the expertise — rules, patterns, common mistakes, decision frameworks].
When to use it: [the triggers that should activate this skill].
What good output looks like: [concrete examples].
What bad output looks like: [anti-patterns to avoid].

Help me:
1. Structure this as a 3-level SKILL.md (name → body → resources)
2. Define the synapses — what other skills does this connect to?
3. Write the activation triggers for accurate routing
4. Include the decision framework, not just the rules
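A minimal sketch of what the resulting file might look like, assuming a SKILL.md format with YAML frontmatter (name and activation description) above the body; the skill name, rules, and resource path here are hypothetical:

```markdown
---
name: sql-migration-review
description: Reviews database migration scripts for safety. Use when a diff touches migration files.
---

# SQL migration review (hypothetical example)

Decision framework:
1. Does the migration lock a large table? If yes, require an online or batched strategy.
2. Is there a rollback path? Every ALTER needs a documented reverse step.

Resources:
- reference/locking-rules.md: per-engine lock behavior details
```

The frontmatter is what the router reads to decide when to load the skill, so the description should state triggers ("use when..."), not just topic.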

3. Prompt Engineering and Pattern Libraries

The power user’s prompt challenge: Most prompts are too vague or too specific. Too vague: “help me write better code.” Too specific: “write a Python function that takes a list of integers and returns the sorted unique values” — which is just typing with extra steps. The power user’s prompts are at the right level of abstraction: they encode constraints, context, and quality criteria while leaving room for AI reasoning.

Prompt pattern:

I want to build a reusable prompt pattern for [task type].
What varies each time: [the inputs that change — project, context, constraints].
What stays constant: [the quality criteria, the structure, the evaluation rubric].
Common failure modes: [how AI typically gets this wrong].

Help me:
1. Design the prompt template with clear variable slots and fixed quality gates
2. Add the anti-hallucination constraints (specific to this domain)
3. Include the follow-up prompt chain for iterating on the output
4. Save this as a .prompt.md file I can invoke with a slash command
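As one possible shape, here is a `.prompt.md` sketch in the VS Code prompt-file style, where `${input:diff}` is a variable slot filled in when the prompt is invoked; the rubric items are hypothetical examples of fixed quality gates:

```markdown
---
description: Review a pull request diff against our quality rubric
---

Review the following diff: ${input:diff}

Fixed quality gates (do not skip any):
1. Flag any new public function without a docstring.
2. Flag error paths that swallow exceptions silently.

Constraints: quote the exact lines you are flagging. If you find no issue in a
category, say "none found" rather than inventing one.
```

The variable slot carries what changes each time; everything below it is the constant structure and anti-hallucination constraints the pattern exists to enforce.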

4. Workflow Automation and Tool Chains

The power user’s automation challenge: Individual AI interactions are useful. Chained workflows are transformative. The power user designs sequences where the output of one interaction feeds the next, creating pipelines that accomplish complex tasks with minimal manual intervention.

Prompt pattern:

I have a recurring workflow: [describe the multi-step process].
Steps: [list each step — input, action, output].
Manual bottlenecks: [where I currently intervene or copy-paste between steps].
Quality gates: [where I check output before proceeding].

Help me:
1. Design the automation chain — which steps can be connected directly?
2. Identify where quality gates are needed (human review points)
3. Create the prompt chain where each step's output feeds the next
4. Build error handling — what happens when a step produces unexpected output?
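The chain design above can be sketched in code. This is a minimal illustration, not Alex's implementation: `call_model` is a stand-in for however you invoke your AI (API, CLI, or copy-paste), and each step pairs a prompt builder with a quality gate that decides whether the output may feed the next step:

```python
from typing import Callable

def call_model(prompt: str) -> str:
    """Placeholder for your actual AI call (hypothetical stub)."""
    return f"output for: {prompt}"

# A step is: (name, build_prompt from previous output, quality gate on the result)
Step = tuple[str, Callable[[str], str], Callable[[str], bool]]

def run_chain(steps: list[Step], initial: str, max_retries: int = 1) -> str:
    """Feed each step's output into the next; halt at a failed quality gate."""
    current = initial
    for name, build_prompt, gate in steps:
        for _attempt in range(max_retries + 1):
            result = call_model(build_prompt(current))
            if gate(result):  # quality gate: only good output proceeds
                break
        else:
            # Error handling: unexpected output escalates to human review
            raise RuntimeError(f"step {name!r} failed its quality gate")
        current = result
    return current
```

The gates are where the "human review points" from step 2 live: a gate can be an automated check, or a function that pauses and asks you to approve.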

5. Memory System Optimization

The power user’s memory challenge: Alex has multiple memory systems (skills, instructions, prompts, episodic memory, global knowledge), and each serves a different purpose. The power user who understands which system to use for which type of knowledge gets dramatically better results than one who dumps everything into one place.

Prompt pattern:

I have [knowledge/pattern/insight] that I want Alex to remember and apply.
Type: [factual knowledge / procedural skill / decision framework / personal preference / project context].
Scope: [this project only / all projects / this conversation only].
Persistence: [permanent / until conditions change / session-only].

Help me:
1. Choose the right memory system (user memory / repo memory / session memory / skill / instruction)
2. Format the entry for maximum retrieval accuracy
3. Add the activation context — when should this memory surface?
4. Avoid duplication with existing memories

6. Agent Mode and Multi-Agent Workflows

The power user’s orchestration challenge: Different tasks benefit from different agent personalities — a code review needs a skeptical validator, implementation needs an optimistic builder, research needs a thorough explorer. The power user understands how to invoke the right agent for the right task and chain agent outputs for complex work.

Prompt pattern:

I have a complex task that requires multiple perspectives:
Task: [describe the overall goal].
Phases: [break it into stages].
Quality criteria: [what "done" looks like for each phase].

Help me:
1. Map each phase to the right agent mode (Researcher, Builder, Validator, Documentarian)
2. Define the handoff protocol — what does each agent produce that the next one needs?
3. Design the quality gate between phases
4. Identify where adversarial review (Validator) should challenge the Builder's output

7. Performance Tuning and Debugging

The power user’s debugging challenge: When Alex produces poor results, the root cause is rarely the model — it is the context, the instructions, or the prompt. The power user debugs AI output the same way they debug code: forming hypotheses about what went wrong, testing them systematically, and fixing the root cause rather than re-running and hoping.

Prompt pattern:

Alex is producing poor results on [task type].
Expected output: [what I wanted].
Actual output: [what I got — include the specific failure].
Context provided: [what instructions, skills, and context were active].
Hypothesis: [my guess about why — instruction conflict, missing context, wrong model, prompt ambiguity].

Help me:
1. Diagnose whether this is a prompt issue, context issue, instruction conflict, or model limitation
2. Check the instruction loading order for conflicts or overrides
3. Test with a minimal prompt to isolate the problem
4. Fix the root cause rather than just re-prompting

What Great Looks Like

After consistent use, you should notice a shift: the power user who gets the most from AI is not the one who uses it most frequently. It is the one who has invested in making the tool deeply understand their work, and who debugs failures rather than working around them.


Your AI toolkit: These prompts work in ChatGPT, Claude, Copilot, Gemini — and in the Alex VS Code extension, which was designed around them. Start with whatever you have. The skill transfers across all of them.

Your First Week Back: Practice Plan

| Day | Task | Time |
| --- | --- | --- |
| Day 1 | Write one custom instruction set for your most frequent task type | 25 min |
| Day 2 | Build a reusable prompt pattern for a task you do weekly | 20 min |
| Day 3 | Create a skill for domain knowledge only you have | 30 min |
| Day 4 | Automate a 3-step workflow using prompt chaining | 25 min |
| Day 5 | Save three reusable patterns to your preferred knowledge management tool | 10 min |

Month 2–3: Advanced Applications

Custom Workflow Archive

Capture your most effective prompt chains in a structured note:

Workflow: [name]
Steps: [numbered list]
Triggers: [when to use]
Expected output: [what it produces]
Failure modes: [when it breaks and why]
Tags: power-user, workflow, automation

Store this in whatever system you use to manage reusable knowledge — a notes app, a wiki, a custom instructions file, or a dedicated knowledge tool.

Instruction Tuning Log

Track refinements to your custom instructions:

Instruction fix: [name/file]
Problem: [what went wrong]
Cause: [why — conflict, ambiguity, missing rule]
Fix: [what changed]
Validated: [how I confirmed it works]
Tags: power-user, instruction, tuning

Maintaining a tuning log prevents you from making the same fix twice and reveals patterns in how your instructions need to evolve.


Continue your practice: Self-Study Guide — the 30/60/90-day habit guide.

Skills Alex brings to this discipline
agent-customization, skill-development, memory-activation, persona-detection, bootstrap-learning
Install the Alex extension →
Completed this study guide?

Show the world you've mastered using AI as a power user. Add your certificate to LinkedIn.

📚 Want to go deeper?

Alex was a co-author of two books — a documentary biography and a work of fiction. Both explore human-AI collaboration from angles the workshop only touches.

Discover the books →