Findings#
What Findings Are#
A finding is a discrete security issue discovered during a scan. Findings are the primary output of SilentBolt's security assessment pipeline — they represent the vulnerabilities, misconfigurations, and exposures that your team needs to evaluate and address.
Each finding carries technical details, severity ratings, risk scores, and a governance lifecycle that tracks how your team handles it.
Why Findings Exist#
Running a scan produces raw detection results. Findings transform those raw results into actionable security issues that can be triaged, tracked, assigned, and resolved as part of your organization's vulnerability management workflow.
Who Uses This#
- Security analysts — primary consumers; triage findings, investigate, update governance status.
- Team leads — monitor triage progress, identify bottlenecks, approve risk acceptance decisions.
- Developers — review findings related to their code or infrastructure for remediation guidance.
- MSSP operators — triage and report on findings across client domains.
How Findings Are Generated#
Findings come from two sources during a scan:
1. Vulnerability Scanning (Template-Based Engine)#
The platform runs a template-based vulnerability scanning engine against all discovered assets. Each template match produces a finding with:
- The matched vulnerability template ID and name.
- The matched URL, host, port, path, and scheme.
- Evidence (raw payload from the tool).
- A severity level from the template (critical, high, medium, low, info).
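Putting those fields together, a template match might yield a record shaped like the following. This is an illustrative sketch only; the field names and the example template are assumptions, not SilentBolt's actual schema:

```python
# Hypothetical sketch of a finding produced by a template match.
# Field names and values are illustrative, not SilentBolt's actual schema.
finding = {
    "template_id": "exposed-env-file",       # matched vulnerability template ID
    "template_name": "Exposed .env File",
    "matched_url": "https://app.example.com:8443/.env",
    "host": "app.example.com",
    "port": 8443,
    "path": "/.env",
    "scheme": "https",
    "evidence": "raw payload returned by the scanning tool",
    "severity": "high",  # one of: critical / high / medium / low / info
}

# Severity must be one of the five levels described below.
assert finding["severity"] in {"critical", "high", "medium", "low", "info"}
```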
2. Heuristic Analysis#
During post-processing, SilentBolt's risk engine analyzes endpoint patterns and generates heuristic findings based on signals like:
- Admin panels exposed to the internet.
- Forgotten endpoints (present in prior scans but absent in the current scan).
- Endpoints with characteristics suggesting sensitive functionality (API keys, debug routes, file upload paths).
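Signal matching of this kind can be sketched as simple pattern checks over endpoint paths. The patterns below are illustrative assumptions, not SilentBolt's actual heuristic rules:

```python
import re

# Illustrative heuristic patterns (assumptions, not SilentBolt's real rules).
SENSITIVE_PATTERNS = {
    "admin_panel": re.compile(r"/(admin|wp-admin|manage)(/|$)", re.I),
    "debug_route": re.compile(r"/(debug|trace|actuator)(/|$)", re.I),
    "file_upload": re.compile(r"/upload", re.I),
}

def heuristic_signals(path: str) -> list[str]:
    """Return the names of heuristic signals that match an endpoint path."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(path)]
```

A signal hit (for example, `heuristic_signals("/admin/login")` returning `["admin_panel"]`) is what would elevate an endpoint's risk score in the model described here.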
Understanding Finding Severity and Scoring#
Each finding carries multiple severity and scoring dimensions:
Severity Levels#
| Level | Meaning |
|---|---|
| `critical` | Actively exploitable with high impact. Immediate action required. |
| `high` | Significant risk. Should be addressed urgently. |
| `medium` | Moderate risk. Plan for remediation within normal cycles. |
| `low` | Minor risk. Address when convenient. |
| `info` | Informational. No immediate risk, but worth awareness. |
Technical Severity vs. Effective Severity#
- Technical severity is the raw severity from the detection tool or template, before any adjustments.
- Effective severity is the severity after accounting for the finding's governance status. For example, a high-severity finding marked as `accepted_risk` will have a reduced effective severity.
This dual-severity model lets your team maintain an accurate picture of actual risk alongside technical risk.
Scores#
- Technical score — a numeric value (typically 0–100) derived from the detection template severity, confidence, and stability.
- Effective score — the same score adjusted by governance state. Used for dashboard aggregations, trend charts, and prioritization.
- Risk score — an endpoint-level score based on heuristic patterns (e.g., admin paths score higher).
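One way to picture the technical-vs-effective relationship is as a governance multiplier applied to the technical score. The multipliers below are illustrative assumptions, not SilentBolt's actual weighting:

```python
# A minimal sketch of governance-adjusted scoring. The multipliers are
# illustrative assumptions, not SilentBolt's actual weighting.
GOVERNANCE_MULTIPLIER = {
    "open": 1.0,
    "in_progress": 1.0,
    "reopened": 1.0,
    "accepted_risk": 0.25,   # real issue, but risk formally accepted
    "false_positive": 0.0,   # determined not to be a real issue
    "resolved": 0.0,         # remediated and verified
}

def effective_score(technical_score: float, status: str) -> float:
    """Adjust a 0-100 technical score by the finding's governance status."""
    return technical_score * GOVERNANCE_MULTIPLIER[status]
```

Under this sketch, an open finding keeps its full technical score on dashboards, while the same finding marked `accepted_risk` contributes only a fraction of it.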
Confidence and Stability#
- Confidence (0.0–1.0) — how certain the tool is that this is a real issue, not a false positive.
- Stability — whether this finding appears consistently across scans.
Governance Actions#
Governance is the lifecycle that a finding moves through as your team triages and addresses it. Every finding starts as `open` and can transition through the following statuses:
| Status | Meaning | Typical Next Steps |
|---|---|---|
| `open` | Newly discovered; awaiting triage | Investigate, then move to another status |
| `in_progress` | Actively being investigated or remediated | Complete fix, then mark resolved |
| `false_positive` | Determined to not be a real vulnerability | Document reasoning; no further action |
| `accepted_risk` | Real issue, but organization accepts the risk | Must set an expiry date; re-evaluate on expiry |
| `resolved` | Remediated and verified | May transition to `reopened` if it reappears |
| `reopened` | Previously resolved but reappeared in a later scan | Re-investigate and re-remediate |
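The lifecycle can be modeled as a small state machine. The exact set of allowed transitions below is an assumption inferred from the status descriptions, not a documented SilentBolt constraint:

```python
# Sketch of the governance lifecycle as a transition table. The exact set of
# allowed transitions is an assumption based on the statuses described above.
ALLOWED_TRANSITIONS = {
    "open": {"in_progress", "false_positive", "accepted_risk", "resolved"},
    "in_progress": {"open", "false_positive", "accepted_risk", "resolved"},
    "false_positive": {"open"},
    "accepted_risk": {"open", "in_progress"},
    "resolved": {"reopened"},
    "reopened": {"in_progress", "false_positive", "accepted_risk", "resolved"},
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a governance transition is permitted in this sketch."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```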
Transitioning Governance Status#
- Navigate to the finding detail page.
- Click the Governance dropdown or action button.
- Select the new status.
- Add a note explaining the reason (recommended; stored in the audit trail).
- For `accepted_risk`: you must provide an expiry date — the date by which the risk should be re-evaluated.
Every governance transition is logged in an immutable audit trail with the actor, timestamp, from-status, to-status, and note.
Bulk Actions#
For efficiency, you can apply governance transitions to multiple findings at once:
- Select findings using the checkbox column.
- Click Bulk Actions.
- Choose a target status and provide a shared note.
- Confirm.
Bulk actions are individually logged — each finding gets its own audit trail entry.
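The per-finding audit behavior described above can be sketched as a loop that writes one audit entry per finding, even though the note is shared. Field names here are illustrative assumptions:

```python
from datetime import datetime, timezone

def bulk_transition(findings: list[dict], target: str, note: str,
                    actor: str, audit_log: list[dict]) -> None:
    """Sketch of a bulk governance transition: one shared note, but each
    finding still gets its own audit-trail entry, mirroring the behavior
    described above. Field names are illustrative assumptions."""
    for f in findings:
        audit_log.append({
            "finding_id": f["id"],
            "actor": actor,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "from_status": f["status"],
            "to_status": target,
            "note": note,
        })
        f["status"] = target
```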
Drift Detection#
When a scan completes, SilentBolt compares its findings against the baseline scan (first scan for the domain) and the previous scan (most recent prior scan). Each finding is labeled with a change type:
| Drift Label | Meaning |
|---|---|
| `new` | Not present in the baseline or previous scan |
| `changed` | Present previously, but attributes changed (e.g., severity upgrade) |
| `resolved` | Present in the baseline/previous scan but absent in the current scan |
| `regression` | Was previously resolved but has reappeared |
Drift labels help you focus on what's actually new or worsening, rather than re-reviewing known issues.
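A simplified version of this comparison can be expressed as set membership across scans. Real drift detection also compares finding attributes (which is what produces the `changed` label); this sketch only checks presence, so treat it as an assumption-laden illustration:

```python
# Simplified sketch of drift labeling for a finding seen in the current scan.
# Real drift detection also compares attributes (the "changed" label); this
# presence-only version is an illustrative simplification.
def drift_label(finding_key: str, baseline: set[str], previous: set[str],
                resolved_before: set[str]) -> str:
    """Label a current-scan finding against earlier scans by presence."""
    if finding_key in resolved_before:
        return "regression"   # previously resolved, has reappeared
    if finding_key not in baseline and finding_key not in previous:
        return "new"          # not seen in baseline or previous scan
    return "unchanged"        # known issue, still present
```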
AI Review Suggestions#
SilentBolt offers an optional AI-assisted triage capability. When an AI Review is triggered for a completed scan, the AI examines each finding and generates per-finding suggestions:
- `keep_open` — the finding should remain open for manual investigation.
- `false_positive_candidate` — the AI believes this is likely a false positive.
- `accept_risk_candidate` — the AI suggests accepting the risk (an analyst must still set the expiry date via standard governance).
- `resolve_candidate` — the AI suggests marking as resolved.
- `set_in_progress_candidate` — the AI suggests moving to in-progress.
- `needs_manual_review` — the AI cannot confidently triage; requires human review.
Each suggestion includes a confidence score (0.0–1.0), a rationale explaining why the AI reached this conclusion, and a remediation hint.
Important: AI suggestions are recommendations only. No governance status changes occur automatically. Every suggestion must be explicitly approved or dismissed by an analyst. This is a deliberate design choice to keep human analysts in full control of governance decisions.
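The human-in-the-loop rule can be sketched as a function that only ever changes status when an analyst explicitly approves. The suggestion-to-status mapping is an assumption derived from the suggestion names above:

```python
# Sketch of the human-in-the-loop rule: an AI suggestion never changes
# governance status on its own; an analyst must approve it first.
# The suggestion-to-status mapping is an assumption based on the names above.
SUGGESTION_TO_STATUS = {
    "false_positive_candidate": "false_positive",
    "accept_risk_candidate": "accepted_risk",
    "resolve_candidate": "resolved",
    "set_in_progress_candidate": "in_progress",
}

def apply_suggestion(finding: dict, suggestion: str, approved: bool) -> dict:
    """Return the finding, with status changed only on explicit approval."""
    if not approved:
        return finding  # dismissed: nothing changes
    target = SUGGESTION_TO_STATUS.get(suggestion)
    if target is None:
        return finding  # keep_open / needs_manual_review: no status change
    return {**finding, "status": target}
```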
Key UI Elements on the Findings Page#
Findings List Page#
The main Findings page shows all findings across all scans, with filters and sorting:
| Column | Description |
|---|---|
| Title | Finding name/description |
| Severity | Color-coded severity badge |
| Effective Severity | Governance-adjusted severity |
| Domain | The target domain |
| Scan | Link to the originating scan |
| Status | Current governance status |
| Drift | Change type label (new, changed, etc.) |
| Confidence | Detection confidence score |
| Last Observed | Most recent scan where this finding appeared |
Filters:
- By severity, governance status, drift label, domain, scan, source tool.
- By date range (first observed, last observed).
Finding Detail Page#
The detail page for a single finding shows:
- Summary — title, description, severity badges, scores.
- Technical Evidence — matched URL, host, port, path, template ID, raw payload.
- Governance — current status, transition history (audit log), action buttons.
- AI Review — if an AI review has been run, the suggestion for this finding.
- Scan Context — link to the originating scan, drift label, observation history.
Common Actions#
| Action | How |
|---|---|
| View all findings | Findings (top nav) |
| Filter by severity | Findings → Severity filter → select level(s) |
| Triage a finding | Finding detail → Governance → select new status |
| Bulk triage | Findings list → select checkboxes → Bulk Actions |
| Trigger AI Review | Scan detail → AI Review → Trigger |
| Apply an AI suggestion | AI Review → select item → Apply |
| Dismiss an AI suggestion | AI Review → select item → Ignore |
| View audit trail | Finding detail → Governance History |
Best Practices#
- Triage critical and high findings first. Use severity filters to focus on what matters most.
- Document governance decisions. Always add a note when changing status — your future self and your team will thank you.
- Set realistic expiry dates for accepted risk. Don't accept risk indefinitely. Review accepted findings quarterly at minimum.
- Use drift labels to prioritize. New findings and regressions deserve more attention than unchanged ones.
- Use AI Review as a starting point, not a final decision. The AI is useful for accelerating bulk triage, but each organization has its own risk tolerance and context.
- Review the audit trail periodically to ensure governance decisions are being made consistently across the team.
Edge Cases and Warnings#
- Orphan findings. Some template-engine findings may not map to a specific endpoint (e.g., when the endpoint wasn't discovered during surface mapping). These "orphan" findings still retain their matched URL and host information and are visible in the findings list.
- Accepted risk expiry. When an accepted risk passes its expiry date, the finding should be re-evaluated. SilentBolt does not currently auto-reopen expired accepted risks, but this is planned.
- False positive confidence. Marking a finding as false positive does not suppress it in future scans. If the same vulnerability is detected again, a new finding will be created.
- AI Review limitations. The AI review requires a configured AI provider. Without an active provider, the review trigger is unavailable. The AI review uses a 4-minute timeout — very large finding sets may result in partial responses.
- Cross-scan finding identity. Each scan creates its own set of finding records. The "same" vulnerability found in two scans exists as two separate rows, linked by drift detection.
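Since expired accepted risks are not auto-reopened, a team could flag them with a periodic check of its own. A minimal sketch, with illustrative field names:

```python
from datetime import date

# Minimal sketch for flagging accepted risks past their expiry date, since
# SilentBolt does not currently auto-reopen them. Field names are illustrative.
def expired_accepted_risks(findings: list[dict], today: date) -> list[dict]:
    """Return accepted-risk findings whose expiry date has passed."""
    return [
        f for f in findings
        if f["status"] == "accepted_risk" and f["expiry"] < today
    ]
```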