Back to library

πŸ›‘οΈUnderstand Prompt Injection Attacks

Audit your own LLM features for injection surfaces. Separate direct from indirect attacks with worked examples, then apply structured isolation, output filters, provenance, and least-authority tool design.

Applied14 drops~2-week path Β· 5–8 min/daytechnology

Phase 1The Trust Boundary Your Prompt Quietly Erases

See trust boundaries that prompts silently erase

4 drops
  1. Your system prompt isn't a wall β€” it's a suggestion

    6 min

    Your system prompt isn't a wall β€” it's a suggestion

  2. Direct injection is a user typing past your guardrails

    6 min

    Direct injection is a user typing past your guardrails

  3. Indirect injection: the attacker isn't even your user

    7 min

    Indirect injection: the attacker isn't even your user

  4. There is no parser that separates instructions from data

    6 min

    There is no parser that separates instructions from data

Phase 2Reproduce the Two Attack Shapes

Reproduce direct and indirect injections on a toy app

5 drops
  1. Build a 50-line target before you can reason about defense

    7 min

    Build a 50-line target before you can reason about defense

  2. Land a direct injection on your own toy app

    8 min

    Land a direct injection on your own toy app

  3. Plant a payload in a webpage and let your app find it

    8 min

    Plant a payload in a webpage and let your app find it

  4. Five ways a successful injection can hurt you

    7 min

    Five ways a successful injection can hurt you

  5. Write a one-page threat model for your toy app

    7 min

    Write a one-page threat model for your toy app

Phase 3Defenses That Actually Hold

Apply isolation, filters, provenance, and least authority

4 drops
  1. When a scenario calls for structured isolation

    8 min

    When a scenario calls for structured isolation

  2. An output filter caught what the model didn't

    8 min

    An output filter caught what the model didn't

  3. Provenance is the missing label on every prompt token

    8 min

    Provenance is the missing label on every prompt token

  4. Your tool list is your real attack surface

    8 min

    Your tool list is your real attack surface

Phase 4Audit Your Own LLM Feature

Audit one of your real LLM features end to end

1 drop
  1. Audit one real LLM feature end to end

    8 min

    Audit one real LLM feature end to end

Frequently asked questions

What is prompt injection in LLM applications?
This is covered in the β€œUnderstand Prompt Injection Attacks” learning path. Start with daily 5-minute micro-lessons that build from fundamentals to hands-on application.
What's the difference between direct and indirect prompt injection?
This is covered in the β€œUnderstand Prompt Injection Attacks” learning path. Start with daily 5-minute micro-lessons that build from fundamentals to hands-on application.
Why can't I just tell the model to ignore injected instructions?
This is covered in the β€œUnderstand Prompt Injection Attacks” learning path. Start with daily 5-minute micro-lessons that build from fundamentals to hands-on application.
How is prompt injection different from jailbreaking?
This is covered in the β€œUnderstand Prompt Injection Attacks” learning path. Start with daily 5-minute micro-lessons that build from fundamentals to hands-on application.
What defenses actually work against indirect prompt injection?
This is covered in the β€œUnderstand Prompt Injection Attacks” learning path. Start with daily 5-minute micro-lessons that build from fundamentals to hands-on application.