Threat Modeling for a 3-Person Team (That You’ll Actually Run)
Threat modeling has a reputation problem.
It sounds like:
- Whiteboards covered in arrows
- 4-hour meetings
- Consultants asking “but where is the data really coming from?”
If you’re a 3-person team trying to ship a web app, that’s a non-starter.
This guide is the opposite:
Threat modeling as a series of small, repeatable conversations that fit into your normal work.
No giant diagrams. No new meetings. Just a few lightweight habits.
1. What “threat modeling” means for a tiny team
For a small team, threat modeling is:
“Spending 5–15 minutes thinking like an attacker before you ship something that matters.”
It answers four basic questions:
- What are we building?
- What can go wrong?
- What are we going to do about it?
- Did we actually do those things?
If you can answer those in a quick, structured way for your important changes, you’re already doing more threat modeling than most teams.
2. The three levels of “just enough” threat modeling
You don’t need the same ceremony for every change.
Think in three levels:
Feature-level (Threat Sketch)
- Per risky ticket
- 5–15 minutes
- Lives in the ticket / PR description
System-level (Big Picture)
- Once per quarter or big architectural change
- 45–60 minutes
- One simple diagram + top risks
After-incident (Learning loop)
- When something breaks or feels sketchy
- 20–30 minutes
- Captures what you missed and how to avoid it next time
You don’t need to be perfect at all three.
If you consistently do Level 1 + sometimes Level 2 + occasionally Level 3, you’re in a very good place.
3. Level 1: 10-minute “Threat Sketch” per risky story
This is the core habit.
Whenever you tag a ticket as sec:medium or sec:high, you add a Threat Sketch section.
3.1 Threat Sketch template
Add this block to your ticket template (or PR template):
Threat Sketch
What are we changing?
(1–3 bullets, plain language)
What can go wrong?
(List 3–5 realistic threats)
- …
What protects us?
(Existing or new controls)
- …
What’s new or scary here?
(New data, new external calls, new trust boundary, etc.)
- …
3.2 Example: new login endpoint
Ticket: “Implement /api/login with email + password”
Risk tag: sec:high (auth, sessions)
Threat Sketch
What are we changing?
- Adding a POST /api/login endpoint for email + password.
- On success, returns a session cookie used by the frontend.
What can go wrong?
- Attacker brute-forces passwords.
- Attacker bypasses login flow (e.g., uses an internal-only endpoint).
- Session cookie can be stolen via XSS or insecure flags.
- Error messages leak whether an email exists (user enumeration).
What protects us?
- Use existing password hashing + user lookup helpers (no custom crypto).
- Add rate limits per IP + per email.
- Set cookies with Secure, HttpOnly, and SameSite=Lax.
- Return generic error messages: “invalid credentials” without revealing which field.
- Log failed logins (with care to avoid logging passwords).
What’s new or scary here?
- New public endpoint (no auth to call it).
- Session cookie becomes a primary auth token.
- Increased attack surface from bots hitting /api/login.
You don’t need more than this for most stories. If you regularly write Threat Sketches like this, you’ve already done more threat modeling than many larger orgs.
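To make the controls above concrete, here is a minimal sketch of such a login handler in a Node/Express-style backend. It only illustrates the generic error message and the cookie flags; findUserByEmail, verifyPassword, and createSession are hypothetical stand-ins for your existing auth helpers, and rate limiting would sit in front of this handler as middleware.

```typescript
import express from "express";

// Hypothetical helpers standing in for your existing hashing / user-lookup code.
import { findUserByEmail, verifyPassword, createSession } from "./auth";

const app = express();
app.use(express.json());

app.post("/api/login", async (req, res) => {
  const { email, password } = req.body ?? {};

  const user = email ? await findUserByEmail(email) : null;
  const ok = user != null && (await verifyPassword(user, password ?? ""));

  if (!ok) {
    // Generic message: don't reveal whether the email exists or the password was wrong.
    return res.status(401).json({ error: "invalid credentials" });
  }

  const sessionId = await createSession(user);
  res.cookie("session", sessionId, {
    httpOnly: true,  // not readable from JS, limits the impact of XSS
    secure: true,    // sent over HTTPS only
    sameSite: "lax", // basic CSRF protection for cross-site requests
  });
  return res.json({ ok: true });
});
```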
4. Level 2: System-level threat model (once in a while)
Every quarter or major architecture change, do a 45–60 minute “big picture” pass.
Goal: one page you can point at and say:
“This is how our system hangs together and where we’re most worried.”
4.1 Draw a tiny architecture sketch
Keep it simple. For a typical SaaS app:
```
[ Browser / Mobile ]
          |
          v
[ API / Backend ] --- [ Third-Party APIs ]
        |
        v
[ Database ]
      |
      v
[ Background Jobs ]
```
You can add:
- Auth provider (OIDC)
- Admin panel
- Storage buckets, etc.
Don’t overdo it. Enough to talk, not enough to impress an auditor.
4.2 Identify assets & trust boundaries
On the whiteboard / doc:
Assets (things you care about)
- User data (profiles, invoices, documents)
- Money or credits
- Admin capabilities
- Secrets (tokens, API keys, encryption keys)
- Reputation / uptime
Trust boundaries
- Internet → your API
- Authenticated user → admin
- Your app → third-party APIs
- Your app → database
4.3 Walk 3–4 common flows
Pick flows that really matter:
- Login / signup
- Payment or checkout
- “Invite user” / “share link” features
- Admin actions (refunds, deleting accounts)
For each flow, answer three questions:
- What can go wrong? (Spoofing, data leak, abuse, tampering)
- What protects us today?
- What’s the next small improvement?
Example: “Admin deletes user account”
What can go wrong?
- Non-admin calls the endpoint.
- Admin’s session hijacked, attacker deletes accounts.
- No audit trail (can’t see who did what).
What protects us today?
- Endpoint requires role = admin.
- Uses secure session cookie, short session lifetime.
Next small improvement?
- Add audit log: who, when, which user.
- Consider requiring re-auth or 2FA for destructive admin actions.
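If you pick up the audit-log improvement, it can be as small as a helper called from the admin endpoint. A minimal sketch, assuming an Express-style backend where a hypothetical requireRole middleware verifies the session and attaches the acting user to res.locals.user, and auditLog and deleteUserById are hypothetical data-layer helpers:

```typescript
import express from "express";

// Hypothetical middleware: rejects the request unless the session user has the given
// role, and attaches the acting user to res.locals.user.
import { requireRole } from "./auth";
// Hypothetical helpers: append-only audit store and user data access.
import { auditLog } from "./audit";
import { deleteUserById } from "./users";

const app = express();

app.delete("/admin/users/:id", requireRole("admin"), async (req, res) => {
  const targetUserId = req.params.id;

  await deleteUserById(targetUserId);

  // Record who did what, to whom, and when.
  await auditLog({
    actor: res.locals.user.id,
    action: "delete_user",
    target: targetUserId,
    at: new Date().toISOString(),
  });

  res.status(204).end();
});
```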
4.4 Capture the output
End with one page:
- Simple diagram
- List of top 5 risks
- One proposed improvement for each
Example:
Top 5 Risks (Q2)
- Password reset links have long lifetimes.
- Admin panel uses same domain as main app (higher XSS blast radius).
- Webhook receiver has no rate limiting or signature checks.
- Background jobs can call external APIs without egress restrictions.
- No consistent audit logging around account deletion / privilege changes.
Next steps:
- Shorten password reset tokens to 30 minutes.
- Move admin under separate subdomain; tighten CSP there.
- Add HMAC signature verification + rate limit to webhook endpoint.
- Plan basic egress control for job workers.
- Implement minimal audit logging for critical admin actions.
That’s your system-level threat model. Not a 50-page doc—just a map of where you’re most worried.
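For the webhook item in the next steps above, the fix is mostly a few lines of verification. A minimal sketch, assuming the provider signs the raw request body with a shared secret using HMAC-SHA256 and sends the hex digest in an x-signature header (the header name and signing scheme vary by provider, so check their docs):

```typescript
import express from "express";
import crypto from "crypto";

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET ?? "";

const app = express();

// Keep the raw body: the HMAC is computed over the exact bytes the provider sent.
app.post("/webhooks/payments", express.raw({ type: "application/json" }), (req, res) => {
  const received = String(req.header("x-signature") ?? "");
  const expected = crypto
    .createHmac("sha256", WEBHOOK_SECRET)
    .update(req.body) // Buffer, thanks to express.raw()
    .digest("hex");

  // Constant-time comparison; lengths must match or timingSafeEqual throws.
  const valid =
    received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));

  if (!valid) {
    return res.status(401).end();
  }

  const event = JSON.parse(req.body.toString("utf8"));
  // ... handle the event ...
  res.status(200).end();
});
```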
5. Level 3: After-incident learning (post-mortem threat modeling)
Whenever you have a security-flavored incident (or even a near-miss):
- Weird login behavior
- Token accidentally checked into Git
- Misconfigured bucket almost public
Run a 20–30 minute mini-retro focused on design assumptions.
5.1 After-incident template
Security Incident Learning Log
What happened?
- (Short narrative, 3–6 sentences.)
What did we assume was true?
- (Design assumptions that turned out wrong.)
Where would this live in our threat model?
- (Which flow / area in our system-level map?)
What changes now?
- In code/config:
- …
- In our Threat Sketch questions:
- Add/remove/adjust question(s).
- In runbooks/process:
- …
This keeps your threat model alive:
- You’re not just fixing bugs
- You’re updating how you think about future changes
6. Roles & rituals for a 3-person team
You don’t need new job titles, but you do need clear expectations.
6.1 Lightweight roles
Risk Caller
Usually: tech lead or whoever owns the backlog.
- Tags tickets with sec:low, sec:medium, sec:high.
- Decides which stories need a Threat Sketch.
Threat Sketch Driver
Usually: the person implementing the ticket.
- Fills out the Threat Sketch in the ticket.
- Brings questions to kickoff.
Skeptic (rotating)
Each sprint, one person plays “friendly attacker.”
- In kickoff: asks “how would I break this?”
- In PR review: looks specifically for auth/data/abuse issues.
6.2 Rituals (where this fits into your week)
Backlog grooming (30–60 min)
- Tag upcoming tickets with sec:low / sec:medium / sec:high.
- For sec:medium and sec:high, add a note: “Needs Threat Sketch before dev.”
Story kickoff (10–15 min)
- Threat Sketch Driver drafts the Threat Sketch.
- Skeptic pokes at it.
- You align on what “secure enough” means for this story.
Sprint wrap or monthly review (30 min)
- Revisit system-level map & top 5 risks.
- Add anything new you learned from incidents or close calls.
That’s it. You’re just editing existing rituals—not creating a brand-new ceremony.
7. Tiny library of abuse questions (cheat sheet)
Sometimes “think like an attacker” is too vague.
Here’s a cheat sheet to drop into your wiki or SECURITY.md.
Threat Modeling Cheat Sheet – Questions to Ask
Auth & identity
- What if someone skips this step and calls the “next” endpoint directly?
- What happens if a token is stolen or reused?
- Can a regular user reach admin-only functionality in any way?
Authorization & data access
- Who should be able to see/change this data? Who must not?
- Can user A trick the system into seeing data for user B?
- Are we relying on “hidden” fields or client-side checks?
Input & output
- What inputs can an attacker fully control here (headers, body, query)?
- Do we handle unexpected or malicious data (HTML, SQL-ish strings, huge payloads)?
- Are we echoing back any user input into HTML/JS/SQL?
External services
- What if the third-party API is slow, down, or returns garbage?
- What if an attacker pretends to be that third party?
- Are we validating webhooks / callbacks with a signature or shared secret?
Abuse & scale
- What happens if this endpoint is called 10k times in 5 minutes?
- Can this feature be abused to spam emails, SMS, or notifications?
- Are there cheap operations an attacker could make us do at scale?
Secrets & config
- What if this config or API key leaks—what could someone do?
- Do we log anything that looks like secrets or PII?
- Do we have a way to rotate this key without breaking everything?
You don’t need to ask all of these every time. Pick the 3–5 that fit the story you’re working on.
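For the abuse & scale questions, even a crude per-IP limiter buys you time while you pick a proper solution at the proxy or gateway. A minimal in-memory sketch (single process only; the window and limit numbers are made up, and you would swap in a shared store like Redis once you run multiple instances):

```typescript
import express from "express";

const WINDOW_MS = 5 * 60 * 1000; // 5-minute window
const MAX_REQUESTS = 100;        // per IP, per window (made-up number, tune per endpoint)

// In-memory counters: reset on restart, not shared across instances.
const hits = new Map<string, { count: number; windowStart: number }>();

function rateLimit(req: express.Request, res: express.Response, next: express.NextFunction) {
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip);

  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: "too many requests" });
  }
  next();
}

const app = express();
app.post("/api/login", rateLimit, (req, res) => {
  // ... normal login handling ...
  res.json({ ok: true });
});
```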
8. Example walkthrough: “Share invoice link” feature
Let’s run one feature end-to-end so you can see how it all connects.
8.1 The feature
“Allow users to generate a shareable link to an invoice so customers can view and pay without logging in.”
Risk tag: sec:high
Why: unauthenticated access to sensitive financial data + payments.
8.2 Threat Sketch
Threat Sketch – Shareable Invoice Links
What are we changing?
- Backend endpoint to create a shareable URL for a specific invoice.
- Public GET endpoint where customers can view the invoice and pay without logging in.
What can go wrong?
- Links are guessable → attacker can enumerate invoices.
- Links never expire → old invoices stay exposed forever.
- Link holder can see more than intended (other invoices, customer info).
- Attacker reuses an old link after it should be invalid.
- Link is brute-forced to steal payment flows.
What protects us?
- Use long, random, unguessable tokens (at least 128 bits of entropy).
- Token only grants read access to a single invoice (no listing).
- Tokens have expiry (e.g., 7 days) and can be revoked by the owner.
- Rate limit requests to the public invoice endpoint.
- No PII beyond what’s necessary for the invoice; mask partial details where possible.
What’s new or scary here?
- Public endpoint that does not require auth.
- Linking payment actions to a bearer token in the URL.
- Threat of link sharing/forwarding beyond intended recipient.
8.3 System-level impact
On your system diagram, you might mark:
- New public endpoint: /invoices/:token
- New trust boundary: public → “invoice view by token”
- Data flow: browser → API → DB → payment processor
You might add one item to your top 5 risks:
“Bearer-token URLs for invoices/payment flows – ensure strong randomness, expiry, revocation, and scope.”
8.4 Design decisions that fall out
From the Threat Sketch + system view, you might decide:
Token format:
- Use cryptographically secure random IDs (no short IDs or predictable sequences).
Scope:
Token grants:
- Read-only access to that invoice,
- Ability to start payment,
- Nothing else (no profile access, no other invoices).
Expiry & revocation:
- Default expiry of 7 days.
- Allow the account owner to revoke a link at any time.
Logging & monitoring:
- Log token usage (without logging the full token).
- Alert if too many token-based accesses from same IP in short window.
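A minimal sketch of those token decisions, assuming Node’s crypto module; saveShare and findShareByTokenHash are hypothetical data-access helpers, and hashing the token before storing it is an extra choice so a database leak doesn’t expose live links:

```typescript
import crypto from "crypto";

// Hypothetical data-access helpers for a "shared invoice link" table.
import { saveShare, findShareByTokenHash } from "./invoiceShares";

const SHARE_TTL_MS = 7 * 24 * 60 * 60 * 1000; // default expiry: 7 days

// Store only a hash of the token so a database leak doesn't expose live links.
function hashToken(token: string): string {
  return crypto.createHash("sha256").update(token).digest("hex");
}

export async function createShareLink(invoiceId: string): Promise<string> {
  // 32 random bytes = 256 bits of entropy, comfortably above the 128-bit minimum.
  const token = crypto.randomBytes(32).toString("hex");
  await saveShare({
    invoiceId,
    tokenHash: hashToken(token),
    expiresAt: new Date(Date.now() + SHARE_TTL_MS),
    revoked: false,
  });
  return `/invoices/${token}`;
}

export async function resolveShareToken(token: string) {
  const share = await findShareByTokenHash(hashToken(token));
  if (!share || share.revoked || share.expiresAt.getTime() < Date.now()) {
    return null; // unknown, revoked, and expired tokens all look the same to the caller
  }
  // Scope: read + "start payment" for exactly this invoice, nothing else.
  return { invoiceId: share.invoiceId };
}
```

Revocation then just means flipping the revoked flag on the stored row; the public endpoint never needs to change.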
This is threat modeling doing what it should do: changing design decisions before you write all the code.
9. Minimum viable threat modeling checklist (for a 3-person team)
Drop this into your wiki or SECURITY.md and treat it as the bar you’re aiming for:
Minimum Viable Threat Modeling – 3-Person Team
- We tag tickets with sec:low, sec:medium, or sec:high.
- We add a 5–15 minute Threat Sketch to all sec:medium and sec:high tickets.
- We maintain a simple system diagram and a list of top 5 risks, updated at least once per quarter or big change.
- We run a 30-minute security mini-retro at least monthly:
  - Update our top risks.
  - Adjust our Threat Sketch questions as we learn.
- After any security-flavored incident or “close call”:
  - We write a short learning log (what happened, what we assumed, what changes).
- At least one person plays Skeptic each sprint:
  - Asks “how would I break this?” in kickoff and PR reviews.
- Our threat modeling artifacts live close to the work:
  - In tickets, PR templates, and a short doc in the repo.
If you can honestly check most of these boxes, you’re doing practical threat modeling at a level many larger organizations never reach.
And you’re doing it in minutes, not marathons.