Chapter 3. Vulnerability Management

Approaches to Handling Scanner Results

Teams typically choose one of three models. Pick the one that matches your risk tolerance and delivery needs.

| Approach | Description | Pros | Cons | When to use |
| --- | --- | --- | --- | --- |
| A. Block All | Fail the build on any scanner finding. | Maximal caution. Nothing slips by. | High noise → developer fatigue. False positives block work. | Small codebases, demo-only, or very high-risk environments. |
| B. Tuned Blocking | Fail only on specific high-confidence, high-impact findings. Log the rest. | Catches the worst issues early. Preserves velocity. | Requires policy + tuning per tool. | Most teams. Good balance of safety + flow. |
| C. Post-Pipeline Tracking | Never fail the build; route all findings into a tracker and work them off by SLA. | Zero “scanner broke my PR” moments. Full-context triage. | Risks being ignored unless you enforce SLAs & reviews. | When velocity is critical and you have disciplined triage/SLAs. |

Recommendation: Start with B (Tuned Blocking). Block only on issues you’re confident about; track everything else. Revisit thresholds as the team’s signal/noise improves.
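
To make the split concrete, here is a minimal sketch of what Tuned Blocking can look like as a single CI step, using this chapter’s make targets; which flags gate or relax each tool lives inside those targets (per-tool examples follow in the next section).

```bash
# Sketch of approach B in a CI job: blocking scans gate the build via their
# exit codes; tracked-only scans never do. Target names follow this chapter.
set -euo pipefail

make secrets sast iac sca    # blocking: each target exits non-zero on gated findings
make zap || true             # tracked only: findings land in artifacts/, build continues
```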


When (and How) to Block — By Tool

Use these practical policies to decide what should actually fail the pipeline.

Secrets (TruffleHog)

  • Block if: A finding is clearly a real credential (e.g., verified by the tool or matches a high-confidence detector you trust).
  • Don’t block if: Ambiguous/entropy-only strings. Log and review.
  • Tuning tips:
    • Restrict scope (we scan app/ only).
    • Add allowlists for known fake/test keys.
  • Make target: make secrets → artifacts/trufflehog.json.
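
As an illustration, the secrets target might wrap a command like the one below; --only-verified and --fail are TruffleHog v3 flags, so confirm them against your installed version before relying on them.

```bash
# Sketch: gate only on verified (high-confidence) secrets in app/.
# --only-verified drops entropy-only noise from the gate; --fail returns a
# non-zero exit code when findings exist so CI can block. Verify both flags
# against your TruffleHog version.
trufflehog filesystem app/ --only-verified --fail --json \
  > artifacts/trufflehog.json
```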

SAST (Semgrep)

  • Block if: Rules tagged High/Critical or OWASP Top 10 patterns that are obviously exploitable (e.g., reflected XSS sinks, SQLi).
  • Don’t block if: Style-only or low-confidence rules.
  • Tuning tips:
    • Keep a small curated ruleset in tools/semgrep/.
    • Use // nosemgrep sparingly with a comment explaining why.
  • Make target: make sast → artifacts/semgrep.json.
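
A sketch of how the sast target could enforce this policy, assuming the curated rules live in tools/semgrep/; severities and paths are illustrative, and a second, non-gating pass can log lower-severity findings.

```bash
# Sketch: run only the curated ruleset, fail the build on ERROR-severity
# findings, and keep the full JSON report for triage.
semgrep scan --config tools/semgrep/ --severity ERROR --error \
  --json --output artifacts/semgrep.json
```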

IaC (Checkov)

  • Block if: Severe misconfigs (privileged pods, runAsRoot, missing readOnlyRootFilesystem, overly broad RBAC, public exposure).
  • Don’t block if: Cosmetic tags/labels or org-specific exceptions with justification.
  • Tuning tips:
    • Inline suppressions with reason (e.g., # checkov:skip=CKV_K8S_...: reason).
    • Maintain a small policy file of allowed skips.
  • Make target: make iac → artifacts/checkov.json.
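
One way the iac target could gate on a short list of high-risk checks while logging everything else; the directory and check IDs below are examples only, so map them to your own policy.

```bash
# Sketch: block only on selected high-risk Kubernetes checks (example IDs);
# Checkov exits non-zero when any of them fail. The redirect keeps the JSON report.
checkov -d deploy/ --framework kubernetes \
  --check CKV_K8S_16,CKV_K8S_20,CKV_K8S_22 \
  -o json > artifacts/checkov.json
```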

SCA / Container Vulns (Trivy)

  • Block if: Critical (and optionally High) vulns with fixes present—especially in the base image or critical runtime libs.
  • Don’t block if: “Unfixed” items (track them), dev-only deps, or non-runtime paths.
  • Tuning tips:
    • Use flags like --ignore-unfixed and a .trivyignore for accepted CVEs.
    • Pin and regularly update base images.
  • Make target: make sca → artifacts/trivy-image.json.
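
For example, the sca target might run something like the following; the image name is a placeholder, and the flags are standard Trivy options worth confirming against your version.

```bash
# Sketch: fail only on fixable High/Critical vulnerabilities in the built
# image; accepted CVEs live in .trivyignore.
trivy image --severity HIGH,CRITICAL --ignore-unfixed --exit-code 1 \
  --format json --output artifacts/trivy-image.json myapp:latest
```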

DAST (ZAP)

  • Block if: You are gating a protected environment (staging/prod) and the finding is a confirmed, exploitable High (rare for the baseline scan).
  • Default here: Don’t block; treat as actionable feedback for hardening (headers, input handling).
  • Make target: make zap → artifacts/zap/.
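
A non-blocking baseline run could look like the sketch below; the target URL and image tag are placeholders, and the baseline script’s flags are worth checking against the ZAP docs.

```bash
# Sketch: run the ZAP baseline scan without gating the build; reports are
# written into artifacts/zap/ for review.
mkdir -p artifacts/zap
docker run --rm -v "$(pwd)/artifacts/zap:/zap/wrk" \
  ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
  -t https://staging.example.internal -J zap.json -r zap.html || true
```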

False Positives & Tuning (Quick Guide)

| Tool | Typical noise | How to tune |
| --- | --- | --- |
| TruffleHog | Random high-entropy strings, test secrets | Limit to app/; prefer verified detectors for blocking; add explicit allowlists. |
| Semgrep | Low-severity style checks; matches in tests/generated code | Keep a curated ruleset; add // nosemgrep with justification; .semgrepignore for paths. |
| Checkov | Findings that lack context (e.g., the control is enforced elsewhere) | Inline # checkov:skip=...: reason; central list of accepted skips; target only high-risk rules for blocking. |
| Trivy | Large lists of Medium/Low or no-fix vulns | --severity HIGH,CRITICAL, --ignore-unfixed, .trivyignore; focus on runtime-reachable packages. |
| ZAP | Info-level headers, duplicates | Treat as advisory; confirm manually; add headers (CSP, HSTS, XFO) and re-run. |

Tip: After your first run, invest 30–60 minutes tuning; you’ll earn that time back every day.
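
Much of that tuning time goes into small suppression files kept in the repo. The sketch below shows one possible shape for two of them; the CVE ID, paths, and dates are placeholders.

```bash
# Sketch: example suppression files checked into the repo (contents are placeholders).

# Trivy: accepted CVEs, one per line, with a reason and review date in a comment.
cat > .trivyignore <<'EOF'
# CVE-2023-XXXXX accepted 2024-01-15: no upstream fix, not reachable at runtime; review 2024-04-15
CVE-2023-XXXXX
EOF

# Semgrep: keep generated code and test fixtures out of scan scope.
cat > .semgrepignore <<'EOF'
tests/fixtures/
src/generated/
EOF
```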


Vulnerability Management Lifecycle

  1. Triage
    Classify new findings: severity, exploitability, exposure (internet-facing?), and environment (prod vs. dev). Decide: Critical/High (now), Medium/Low (backlog), False Positive, Info.

  2. Assign / Ownership
    Create a ticket (or use DefectDojo assignment) and give each finding an owner. Add a target due date based on severity (e.g., Critical ≤ 7 days, High ≤ 30 days); a small due-date sketch follows this list.

  3. Validate
    Reproduce or verify the issue. Mark False Positive if it’s not real; otherwise confirm scope. Add repro steps or links for the assignee.

  4. Remediate / Mitigate / Accept

    • Fix (preferred): Code change, dependency upgrade, or config hardening.
    • Mitigate: Temporary control (WAF rule, feature flag).
    • Risk accept: Document rationale + an expiry/review date.
  5. Retest
    Re-run the relevant scan; confirm the finding disappears. Keep evidence (screenshot/report snippet or commit SHA).

  6. Close & Document
    Close the item; note the fix and any preventative measures (e.g., new rule, pre-commit hook, pinned base image). Review accepted risks periodically.
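
Step 2’s due dates are easy to script at triage time. A minimal sketch, assuming the example SLAs above (the Medium and default values are placeholders to adjust to your policy):

```bash
# Sketch: derive a target due date from severity at triage time.
severity="High"                       # e.g., taken from the finding being triaged
case "$severity" in
  Critical) days=7  ;;                # SLA examples from this chapter
  High)     days=30 ;;
  Medium)   days=90 ;;                # placeholder: set to your policy
  *)        days=180 ;;               # placeholder: Low/Info
esac
date -d "+${days} days" +%F           # GNU date; on macOS: date -v +${days}d +%F
```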


What “Good” Looks Like (Checklist)

  • Scans automated in CI (sast, secrets, iac, sbom, sca, zap as needed).
  • Clear blocking policy documented and implemented (only high-signal issues fail the build).
  • Exception flow exists (inline suppressions with reasons; reviewable allowlist entries).
  • Single source of truth for findings (DefectDojo, issues, or a shared tracker).
  • Regular triage cadence; every finding has an owner and due date.
  • SLAs by severity (e.g., Critical ≤ 7d, High ≤ 30d) and reported on.
  • Retest required before closure; evidence stored in ./artifacts (or tracker).
  • False positives managed (marked, suppressed, tuned at the tool).
  • Risk acceptances documented with expiry/review.
  • Metrics (open by severity, age, SLA compliance) reviewed; continuous tuning.

Practical Next Steps in This Repo

  1. Start with Tuned Blocking:

    • Secrets: block only high-confidence.
    • SAST/IaC: block critical rules; log others.
    • SCA: block Critical (and optionally High) with fixes.
  2. Add a Findings Triage readme page for your team (who triages, when, SLAs).

  3. (Optional) Bring up DefectDojo via Docker Compose and import today’s artifacts/*.json (see the import sketch after this list).

  4. Track & close 2–3 issues end-to-end to practice the lifecycle.
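
For step 3, a minimal import sketch against DefectDojo’s v2 API, assuming a local instance on port 8080, an API token in DEFECTDOJO_API_KEY, and an existing engagement with id 1; the scan_type string must match a parser name your DefectDojo version exposes.

```bash
# Sketch: import one scanner report into DefectDojo (all values are placeholders).
curl -s -X POST "http://localhost:8080/api/v2/import-scan/" \
  -H "Authorization: Token ${DEFECTDOJO_API_KEY}" \
  -F "engagement=1" \
  -F "scan_type=Trivy Scan" \
  -F "file=@artifacts/trivy-image.json" \
  -F "minimum_severity=Low" \
  -F "active=true" -F "verified=false"
```

Repeat for the other reports in artifacts/, swapping the file and the matching scan_type.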

You now have a clear, practical path from “scans produced findings” to “we handled them, and can prove it.”