Chapter 3. Reviewing Results
SAST — Static Application Security Testing
What it is
SAST analyzes source code without executing it to find risky patterns (e.g., unsanitized inputs, unsafe APIs). In this project we use Semgrep (via the sast target) to scan the Go app and write results to artifacts/semgrep.json.
Why we need it
Catches code-level flaws early, while they’re cheap to fix, and shifts security left. Typical hits: injection, XSS, insecure crypto, unsafe deserialization, etc.
What it shows (how to read it)
Each finding includes a rule/check ID, severity, a message, and a location (file + line). Prioritize by severity and by “tainted data flow” (user input reaching a dangerous sink).
Use:
jq -r '.results[]? | "\(.extra.severity)\t\(.check_id)\t\(.path):\(.start.line)\t\(.extra.message)"' artifacts/semgrep.json | column -t | head
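Semgrep reports severity as ERROR, WARNING, or INFO; to surface only the most severe findings, narrow the same query:
jq -r '.results[]? | select(.extra.severity == "ERROR") | "\(.check_id)\t\(.path):\(.start.line)\t\(.extra.message)"' artifacts/semgrep.json | column -t | head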
Signal vs noise
- Signal: “High/Critical” issues; user-controlled data in sinks (e.g., HTML output, DB queries).
- Noise: Style or low-confidence patterns that aren’t reachable or relevant. Triaging is normal.
Limits
May miss runtime-only bugs and can raise false positives. Tune rules over time.
Makefile link
make sast (and optionally a gate like sast_block if you enable it).
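If you want to see or tune what that target runs, it usually wraps a single Semgrep call; a minimal sketch (the auto ruleset and the app/ path are assumptions, not necessarily what this starter pins):
semgrep --config auto --json --output artifacts/semgrep.json app/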
Quick checklist
- Triage High/Critical first.
- Validate the code path (is the finding reachable?).
- Fix or suppress with rationale (minimize ignores).
- Re-run and confirm the artifact is clean(er).
SiC — Secrets in Code
What it is
Searches for credentials in code (API keys, tokens, passwords). We use TruffleHog via the secrets target and write results to artifacts/trufflehog.json.
In this book we favor offline, permissive scanning for training.
Why we need it
Leaked secrets are a fast path to compromise (cloud keys, DB creds). Catching them before commit/merge is critical.
What it shows (how to read it)
Look for the detector (e.g., AWS, GitHub PAT), the file/line, and whether it’s verified (if verification is enabled) or redacted. Example summaries:
jq -r 'select(.DetectorName != null) | [.DetectorName, .SourceMetadata.Data.Filesystem.file, (.Verified|tostring)] | @tsv' artifacts/trufflehog.json | head
Signal vs noise
- Signal: Real-looking keys/tokens, especially with known formats.
- Noise: Dummy/test strings, IDs that look random but aren’t secrets.
Limits
Pattern/entropy-based: may miss weak-looking secrets or raise false positives. History scans can be noisy (we keep it local and focused for speed).
Makefile link
make secrets (scans app/ only in this starter).
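The underlying scan is a single TruffleHog filesystem run; a sketch in line with the offline, permissive setup mentioned earlier (flags and paths may differ from the actual Makefile):
trufflehog filesystem app/ --json --no-verification > artifacts/trufflehog.json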
Quick checklist
- If real: remove from code and rotate the secret immediately.
- If false positive: document and suppress carefully.
- Add guardrails (env vars, .gitignore, pre-commit hooks); a minimal hook is sketched after this list.
- Keep the scanner in CI.
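One of those guardrails, a pre-commit hook, can be as small as a TruffleHog run that blocks the commit on any finding; a sketch assuming TruffleHog is installed locally and relying on its --fail exit code (it scans app/ rather than only staged files, so adapt as needed):
#!/bin/sh
# .git/hooks/pre-commit (sketch): abort the commit if TruffleHog reports findings in app/
trufflehog filesystem app/ --fail --json > /dev/null 2>&1 || {
  echo "Possible secret detected by TruffleHog; commit aborted." >&2
  exit 1
}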
IaC — Infrastructure as Code (Kubernetes, Terraform, etc.)
What it is
Static analysis of infra config for misconfigurations. We use Checkov to scan Kubernetes manifests and output artifacts/checkov.json.
Why we need it
A secure app can be undone by an insecure deployment (e.g., privileged pods, public buckets, no encryption).
What it shows (how to read it)
Look for FAILED checks, their resource (e.g., Deployment), file/line, and recommended fix/guideline. Start with high-severity misconfigs: privileged containers, missing runAsNonRoot, lack of readOnlyRootFilesystem, wide network policies, etc.
jq -r '.results.failed_checks[]? | [.check_id, .check_name, .file_path, .resource] | @tsv' artifacts/checkov.json | column -t
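For a quick pass/fail overview before digging into individual checks (assuming the same single-framework JSON layout used above, where everything sits under one top-level object):
jq '.summary' artifacts/checkov.json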
Signal vs noise
- Signal: Privilege escalation, root containers, public exposure, missing auth/enc.
- Noise: Pure best-practices with low risk in your context (document if skipping).
Limits
Template-only view (lacks runtime context); you may need exceptions for valid deviations.
Makefile link
make iac
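The target is essentially one Checkov run over the manifest directory; a sketch (the k8s/ path is an assumption; point it at wherever your manifests live):
checkov -d k8s/ -o json > artifacts/checkov.json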
Quick checklist
- Fix high-risk misconfigs first.
- Apply secure defaults in base templates.
- Document justified exceptions (with reasons).
- Re-scan to confirm.
SBOM — Software Bill of Materials
What it is
An inventory of components (OS packages, libraries, modules). We use Syft to generate artifacts/sbom.json.
Why we need it
Visibility: answer “What’s in our software?” It’s essential for rapid CVE impact checks, license review, and supply-chain transparency.
What it shows (how to read it)
Base image OS and version, language/runtime versions, third-party packages. Use it to answer “Do we include X?” and to feed SCA.
jq -r '.artifacts[]? | [.name, .version, .type] | @tsv' artifacts/sbom.json | column -t | head
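To answer a specific “Do we include X?” question, filter by name (openssl here is just an example):
jq -r '.artifacts[]? | select(.name | test("openssl")) | [.name, .version, .type] | @tsv' artifacts/sbom.json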
Signal vs noise
- Signal: Presence and versions of major components (OS, language runtime, key libs).
- Noise: Long tail of transitive packages (query as needed; don’t read linearly).
Limits
Snapshot in time; may miss unusual binaries; large but queryable.
Makefile link
make sbom
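The target wraps a single Syft call against the built image; a sketch (the image name is a placeholder for your built image):
syft myapp:latest -o json > artifacts/sbom.json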
Quick checklist
- Verify key components (OS, runtime) match expectations.
- Store SBOM as a release artifact.
- Use SBOM to drive SCA and incident response.
- Consider SBOM diffs per release.
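For that last item, a lightweight diff is to normalize both SBOMs to name@version lines and compare them (the two file names are placeholders for SBOMs from consecutive releases):
diff <(jq -r '.artifacts[]? | "\(.name)@\(.version)"' sbom-v1.json | sort) <(jq -r '.artifacts[]? | "\(.name)@\(.version)"' sbom-v2.json | sort)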
SCA — Software Composition Analysis (Vulnerability Scanning)
What it is
Scans dependencies and OS packages for known CVEs. We use Trivy to scan the built image and write results to artifacts/trivy-image.json.
Why we need it
Most risk comes from third-party code. SCA flags known-bad versions and points to fixed versions.
What it shows (how to read it)
For each vulnerable package: CVE ID, severity, installed vs fixed version, and references. Prioritize Critical/High. Example:
jq -r '.Results[]?.Vulnerabilities[]? | "\(.Severity)\t\(.PkgName)@\(.InstalledVersion)\tfix:\(.FixedVersion // "none")\t\(.VulnerabilityID)"' artifacts/trivy-image.json | column -t | head
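To feed the action plan below, narrow the same query to Critical findings that already have a fix available:
jq -r '.Results[]?.Vulnerabilities[]? | select(.Severity=="CRITICAL" and (.FixedVersion // "") != "") | "\(.PkgName)@\(.InstalledVersion) -> \(.FixedVersion)\t\(.VulnerabilityID)"' artifacts/trivy-image.json | column -t | head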
Signal vs noise
- Signal: Critical/High vulns in reachable components.
- Noise: Low/Medium in unused tools; duplicates; “no fix” items (track & mitigate).
Limits
Depends on vuln DB freshness; zero-days won’t appear; remediation can require upgrades.
Makefile link
make sca
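Under the hood this is one Trivy image scan; a sketch (the image name is a placeholder for your built image):
trivy image --format json --output artifacts/trivy-image.json myapp:latest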
Action plan
- Triage Critical/High immediately; note fixes.
- Upgrade base image / libs; re-scan.
- Mitigate or accept risk temporarily with rationale if no fix.
- Automate regular updates (e.g., Dependabot) and nightly scans.
DAST — Dynamic Application Security Testing
What it is
Probes the running app the way an external attacker would. We use the OWASP ZAP baseline scan via the zap target and save reports under artifacts/zap/.
Why we need it
Finds runtime issues: reflected XSS, missing headers, auth/logic exposure, and misconfigurations visible in real responses.
What it shows (how to read it)
Alerts grouped by type with risk level, description, solution, and example URLs. Expect common findings like missing security headers (CSP, X-Frame-Options). Treat High (e.g., exploitable injection) as urgent; fix Mediums for hardening.
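If you keep a JSON copy of the report, you can summarize alerts by risk at the command line; a sketch assuming ZAP’s traditional JSON report saved as artifacts/zap/zap.json (your report name may differ):
jq -r '.site[]?.alerts[]? | [.riskdesc, .alert] | @tsv' artifacts/zap/zap.json | sort | uniq -c | sort -rn | head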
Signal vs noise
- Signal: Confirmed vulns (XSS/SQLi), exposed admin endpoints, auth bypass.
- Noise: Info-level headers, duplicate URLs, occasional false positives—verify manually.
Limits
Coverage depends on crawl/auth; can be slow or noisy; logic flaws may need manual testing.
Makefile link
make zap (runs after deploy in make cd)
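Behind the target there is typically a single ZAP baseline run against the deployed app; a sketch using ZAP’s Docker image (image tag, target URL, and report names are placeholders; adapt to your setup):
docker run --rm -v "$(pwd)/artifacts/zap:/zap/wrk" ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t http://host.docker.internal:8080 -J zap.json -r zap.html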
Quick checklist
- Prioritize High/Medium alerts; reproduce locally.
- Add/adjust headers (CSP, X-Frame-Options, CORP, HSTS) or sanitize inputs.
- Re-scan after fixes.
- Schedule deeper scans periodically; script auth for protected areas if needed.
Evidence & Index (Optional but Recommended)
Bundle current artifacts and generate an index for quick review and sharing.
Make targets (optional additions):
make evidence → artifacts/evidence-YYYY-MM-DD.zip
make artifacts-index → artifacts/ARTIFACTS.md
These help you hand a reviewer/manager a portable proof-pack.
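If you do add them, both targets can stay tiny; one possible implementation (file names and layout are assumptions):
cd artifacts && zip -r "evidence-$(date +%F).zip" . -x "evidence-*.zip"
{ echo "# Artifacts"; ls -1 artifacts | sed 's/^/- /'; } > artifacts/ARTIFACTS.md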
Putting It All Together
Each artifact covers a different angle:
- SAST → your code
- SiC → your secrets
- IaC → your infra configs
- SBOM → your ingredients
- SCA → known CVEs in those ingredients
- DAST → your live surface
Run them, read them, fix the high-signal items, and iterate. Over time you’ll tune rules, reduce noise, and ship with a clean, auditable security posture—locally, on your laptop.