Chapter 3. Vulnerability Management
Approaches to Handling Scanner Results
Teams typically choose one of three models. Pick the one that matches your risk tolerance and delivery needs.
| Approach | Description | Pros | Cons | When to use |
|---|---|---|---|---|
| A. Block All | Fail the build on any scanner finding. | Maximal caution. Nothing slips by. | High noise → developer fatigue. False positives block work. | Small codebases, demo-only, or very high-risk environments. |
| B. Tuned Blocking | Fail only on specific high-confidence, high-impact findings. Log the rest. | Catches the worst issues early. Preserves velocity. | Requires policy + tuning per tool. | Most teams. Good balance of safety + flow. |
| C. Post-Pipeline Tracking | Never fail the build; route all findings into a tracker and work them off by SLA. | Zero “scanner broke my PR” moments. Full context triage. | Risks being ignored unless you enforce SLAs & reviews. | When velocity is critical and you have disciplined triage/SLAs. |
Recommendation: Start with B (Tuned Blocking). Block only on issues you’re confident about; track everything else. Revisit thresholds as the team’s signal/noise improves.
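A Tuned Blocking policy is easiest to keep honest when it is encoded as data rather than scattered if-statements. The sketch below is illustrative only: the tool names and severity labels are placeholders for whatever your scanners actually emit, not a fixed schema.

```python
# Minimal sketch of a tuned-blocking policy table (approach B).
# Tool names and severity labels are illustrative assumptions.
BLOCKING_POLICY = {
    "semgrep": {"ERROR"},           # block only high-confidence SAST rules
    "trufflehog": {"VERIFIED"},     # block only verified live credentials
    "trivy": {"CRITICAL", "HIGH"},  # block fixable Critical/High vulns
    "checkov": {"CRITICAL"},        # block severe IaC misconfigs only
}

def should_block(tool: str, severity: str) -> bool:
    """True if a finding from `tool` at `severity` should fail the build.
    Unknown tools never block; their findings are logged and tracked."""
    return severity in BLOCKING_POLICY.get(tool, set())
```

Keeping the policy in one table makes threshold changes a one-line diff, which is exactly what "revisit thresholds as signal/noise improves" needs.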
When (and How) to Block — By Tool
Use these practical policies to decide what should actually fail the pipeline.
Secrets (TruffleHog)
- Block if: A finding is clearly a real credential (e.g., verified by the tool or matches a high-confidence detector you trust).
- Don’t block if: Ambiguous/entropy-only strings. Log and review.
- Tuning tips:
  - Restrict scope (we scan `app/` only).
  - Add allowlists for known fake/test keys.
- Make target: `make secrets` → `artifacts/trufflehog.json`.
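TruffleHog v3 emits one JSON object per line, and findings carry a boolean `Verified` field when a detector could check the credential against its provider (field name assumed from v3's output; confirm against your version). A gate that blocks only on verified hits might look like:

```python
import json

def verified_findings(jsonl_text: str) -> list[dict]:
    """Keep only findings TruffleHog itself verified.
    Entropy-only hits stay unverified and are logged, not blocked."""
    hits = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        finding = json.loads(line)
        if finding.get("Verified"):
            hits.append(finding)
    return hits
```

In CI you would feed this the contents of `artifacts/trufflehog.json` and exit nonzero only when the returned list is non-empty.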
SAST (Semgrep)
- Block if: Rules tagged High/Critical or OWASP Top 10 patterns that are obviously exploitable (e.g., reflected XSS sinks, SQLi).
- Don’t block if: Style-only or low-confidence rules.
- Tuning tips:
  - Keep a small curated ruleset in `tools/semgrep/`.
  - Use `// nosemgrep` sparingly, with a comment explaining why.
- Make target: `make sast` → `artifacts/semgrep.json`.
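Assuming Semgrep's `--json` report shape (a top-level `results` list whose entries carry `extra.severity`), a tuned gate can fail only on `ERROR`-severity rules and let everything else flow to the tracker:

```python
def semgrep_blocking(report: dict) -> list[dict]:
    """Return only findings at Semgrep's highest severity, ERROR.
    WARNING and INFO findings are logged, not blocked."""
    return [
        r for r in report.get("results", [])
        if r.get("extra", {}).get("severity") == "ERROR"
    ]
```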
IaC (Checkov)
- Block if: Severe misconfigs (privileged pods, `runAsRoot`, missing `readOnlyRootFilesystem`, overly broad RBAC, public exposure).
- Don’t block if: Cosmetic tags/labels, or org-specific exceptions with justification.
- Tuning tips:
  - Inline suppressions with a reason (e.g., `# checkov:skip=CKV_K8S_...: reason`).
  - Maintain a small policy file of allowed skips.
- Make target: `make iac` → `artifacts/checkov.json`.
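A Checkov gate can block on a curated set of check IDs instead of every entry in the report's `failed_checks` list (structure assumed from Checkov's JSON output). The IDs below are examples only; verify the ID-to-policy mapping against your Checkov version before relying on it:

```python
# Example check IDs; confirm what each maps to in your Checkov version.
BLOCKING_CHECKS = {
    "CKV_K8S_16",  # e.g., privileged containers
    "CKV_K8S_22",  # e.g., read-only root filesystem
}

def checkov_blocking(report: dict) -> list[dict]:
    """Return only failed checks on the curated blocking list."""
    failed = report.get("results", {}).get("failed_checks", [])
    return [c for c in failed if c.get("check_id") in BLOCKING_CHECKS]
```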
SCA / Container Vulns (Trivy)
- Block if: Critical (and optionally High) vulns with fixes present—especially in the base image or critical runtime libs.
- Don’t block if: “Unfixed” items (track them), dev-only deps, or non-runtime paths.
- Tuning tips:
  - Use flags like `--ignore-unfixed` and a `.trivyignore` file for accepted CVEs.
  - Pin and regularly update base images.
- Make target: `make sca` → `artifacts/trivy-image.json`.
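Assuming Trivy's image-report JSON shape (`Results[].Vulnerabilities[]` with `Severity` and `FixedVersion` fields), a gate that mirrors the policy above, blocking fixable Critical/High and tracking the rest, might be:

```python
def trivy_blocking(report: dict, severities=("CRITICAL", "HIGH")) -> list[dict]:
    """Fixable vulns at a blocking severity; unfixed ones are tracked, not blocked."""
    hits = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in severities and vuln.get("FixedVersion"):
                hits.append(vuln)
    return hits
```

This is the in-code equivalent of running Trivy with `--severity HIGH,CRITICAL --ignore-unfixed`; doing it at gate time keeps the full report in `artifacts/` for triage.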
DAST (ZAP)
- Block if: You gate a specific environment (staging/prod) and a High finding is confirmed exploitable (rare for a baseline scan).
- Default here: Don’t block; treat as actionable feedback for hardening (headers, input handling).
- Make target: `make zap` → `artifacts/zap/`.
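Even without blocking, it helps to roll the baseline report up for triage. Assuming ZAP's JSON report shape (`site[].alerts[]` entries with a `riskdesc` string), a quick summary by risk level:

```python
from collections import Counter

def zap_risk_summary(report: dict) -> Counter:
    """Count alerts by risk description: hardening feedback, not a gate."""
    counts = Counter()
    for site in report.get("site", []):
        for alert in site.get("alerts", []):
            counts[alert.get("riskdesc", "Unknown")] += 1
    return counts
```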
False Positives & Tuning (Quick Guide)
| Tool | Typical noise | How to tune |
|---|---|---|
| TruffleHog | Random high-entropy strings, test secrets | Limit to `app/`; prefer verified detectors for blocking; add explicit allowlists. |
| Semgrep | Low-severity style checks; matches in tests/generated code | Keep a curated ruleset; add `// nosemgrep` with justification; `.semgrepignore` for paths. |
| Checkov | Context-missing flags (e.g., enforcement elsewhere) | Inline `# checkov:skip=...: reason`; central list of accepted skips; target only high-risk rules for blocking. |
| Trivy | Large lists of Medium/Low or no-fix vulns | `--severity HIGH,CRITICAL`, `--ignore-unfixed`, `.trivyignore`; focus on runtime-reachable packages. |
| ZAP | Info-level headers, duplicates | Treat as “advise”; confirm manually; add headers (CSP, HSTS, XFO) and re-run. |
Tip: After your first run, invest 30–60 minutes tuning; you’ll earn that time back every day.
Vulnerability Management Lifecycle
Triage
Classify new findings: severity, exploitability, exposure (internet-facing?), and environment (prod vs. dev). Decide: Critical/High (now), Medium/Low (backlog), False Positive, or Info.
Assign / Ownership
Create a ticket (or use DefectDojo assignment) and give each finding an owner. Add a target due date based on severity (e.g., Critical ≤ 7 days, High ≤ 30 days).
Validate
Reproduce or verify the issue. Mark it False Positive if it’s not real; otherwise confirm scope. Add repro steps or links for the assignee.
Remediate / Mitigate / Accept
- Fix (preferred): Code change, dependency upgrade, or config hardening.
- Mitigate: Temporary control (WAF rule, feature flag).
- Risk accept: Document rationale + an expiry/review date.
Retest
Re-run the relevant scan and confirm the finding disappears. Keep evidence (a screenshot, report snippet, or commit SHA).
Close & Document
Close the item; note the fix and any preventative measures (e.g., new rule, pre-commit hook, pinned base image). Review accepted risks periodically.
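The assign step's severity-based due dates can be computed mechanically. A minimal sketch, using illustrative SLA windows that match the Critical ≤ 7 days / High ≤ 30 days examples above:

```python
from datetime import date, timedelta

# Illustrative SLA windows; tune Medium/Low to your own policy.
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def due_date(severity: str, opened: date) -> date:
    """Target remediation date for a finding opened on `opened`."""
    return opened + timedelta(days=SLA_DAYS[severity])
```

The same table drives SLA-compliance reporting: a finding is overdue when today is past `due_date(severity, opened)` and it is still open.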
What “Good” Looks Like (Checklist)
- Scans automated in CI (`sast`, `secrets`, `iac`, `sbom`, `sca`, `zap` as needed).
- Clear blocking policy documented and implemented (only high-signal issues fail the build).
- Exception flow exists (inline suppressions with reasons; reviewable allows).
- Single source of truth for findings (DefectDojo, issues, or a shared tracker).
- Regular triage cadence; every finding has an owner and due date.
- SLAs by severity (e.g., Critical ≤ 7d, High ≤ 30d) and reported on.
- Retest required before closure; evidence stored in `./artifacts` (or the tracker).
- False positives managed (marked, suppressed, tuned at the tool).
- Risk acceptances documented with expiry/review.
- Metrics (open by severity, age, SLA compliance) reviewed; continuous tuning.
Practical Next Steps in This Repo
Start with Tuned Blocking:
- Secrets: block only high-confidence.
- SAST/IaC: block critical rules; log others.
- SCA: block Critical (and optionally High) with fixes.
Add a Findings Triage readme page for your team (who triages, when, SLAs).
(Optional) Bring up DefectDojo via Docker Compose and import today’s `artifacts/*.json`.
Track and close 2–3 issues end-to-end to practice the lifecycle.
You now have a clear, practical path from “scans produced findings” to “we handled them, and can prove it.”