Attack Orchestration#

What Attack Orchestration Is#

Attack Orchestration is SilentBolt's AI-driven penetration testing capability. It extends the results of a completed scan into deeper, targeted security testing by chaining multiple security tools — each configured by AI based on your specific scan context.

Unlike the automated scan pipeline (which runs a fixed set of tools against every domain), orchestration sessions are on-demand, analyst-initiated, and AI-customized. The AI analyzes your scan results — discovered hosts, endpoints, technologies, and existing findings — and suggests which penetration test types are most relevant. You select, approve, and execute.

Why It Exists#

Automated scanning catches known vulnerabilities using templates. But real penetration testing requires:

  • Contextual decision-making — which tools to run depends on what was discovered.
  • Tool chaining — one tool's output informs another tool's configuration.
  • Specialized testing — web app pentest, API pentest, cloud security, and Kubernetes testing each require different tools and parameters.

Attack Orchestration bridges the gap between automated scanning and full manual penetration testing by using AI to plan and configure the test, while keeping the analyst in control of execution.

Who Uses This#

  • Security analysts — launch orchestration sessions to perform deeper testing on high-risk domains.
  • Pentest teams — use SilentBolt to automate the discovery and configuration phases of their engagements.
  • MSSP operators — extend scan deliverables with orchestrated pentest results for clients.

Relationship Between Scans and Orchestration#

Orchestration sessions are always linked to a completed scan. The scan provides the context the AI needs:

  • Number of hosts and endpoints.
  • Detected authentication mechanisms.
  • Technology stack (frameworks, servers, languages).
  • Existing findings (titles, severities).
  • Surface signals (forgotten endpoints, admin paths, API endpoints).

You cannot create an orchestration session without a completed scan.


Orchestration Session Lifecycle#

An orchestration session progresses through these statuses:

| Status | Description |
| --- | --- |
| draft | Session created, no test types selected yet |
| preparing | AI is generating tool configurations and building the workflow |
| ready | Workflow is prepared and ready for execution |
| running | Tools are executing sequentially |
| completed | All steps finished successfully |
| failed | One or more steps failed during execution |
| canceled | User manually canceled the session |
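The lifecycle above can be sketched as a small state machine. Note that the transition rules below are inferred from the step-by-step flow, not a documented API, so treat them as an assumption:

```python
from enum import Enum

class SessionStatus(Enum):
    """Orchestration session statuses from the lifecycle table."""
    DRAFT = "draft"
    PREPARING = "preparing"
    READY = "ready"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Plausible forward transitions implied by the documented flow
# (assumption: terminal states have no outgoing transitions).
ALLOWED_TRANSITIONS = {
    SessionStatus.DRAFT: {SessionStatus.PREPARING, SessionStatus.CANCELED},
    SessionStatus.PREPARING: {SessionStatus.READY, SessionStatus.FAILED,
                              SessionStatus.CANCELED},
    SessionStatus.READY: {SessionStatus.RUNNING, SessionStatus.CANCELED},
    SessionStatus.RUNNING: {SessionStatus.COMPLETED, SessionStatus.FAILED,
                            SessionStatus.CANCELED},
}

def can_transition(current: SessionStatus, target: SessionStatus) -> bool:
    """Return True if the session may move from current to target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

For example, `can_transition(SessionStatus.DRAFT, SessionStatus.PREPARING)` holds, while a completed session has no outgoing transitions.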

Step-by-Step Flow#

1. Create a Session#

From a completed scan's detail page, click Launch Orchestration (or navigate to Attack Orchestration → New Session and select a scan).

The session is created in draft status.

2. Request AI Suggestions#

Click Get AI Suggestions. SilentBolt sends a summary of your scan context to the configured AI provider. The AI returns a ranked list of recommended test types, each with:

  • Test type — e.g., web_app_pentest, api_pentest, kubernetes.
  • Rationale — why this test type is relevant to your scan results.
  • Confidence score — how confident the AI is in this recommendation (only scores ≥ 0.6 are returned).
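The suggestion handling described above can be sketched as a simple filter-and-rank step. The dictionary field names here are illustrative, not SilentBolt's actual response schema:

```python
def filter_suggestions(suggestions, threshold=0.6):
    """Keep only test-type suggestions at or above the confidence
    threshold, ranked best-first (mirrors the documented >= 0.6 cutoff)."""
    kept = [s for s in suggestions if s["confidence"] >= threshold]
    return sorted(kept, key=lambda s: s["confidence"], reverse=True)

# Hypothetical AI response for a scan with many web and API endpoints.
raw = [
    {"test_type": "web_app_pentest", "rationale": "Many web endpoints", "confidence": 0.9},
    {"test_type": "kubernetes", "rationale": "No cluster signals", "confidence": 0.4},
    {"test_type": "api_pentest", "rationale": "API endpoints detected", "confidence": 0.75},
]
ranked = filter_suggestions(raw)
# kubernetes (0.4) falls below the cutoff and is dropped
```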

3. Select Test Types#

Review the AI suggestions and select which test types you want to execute. You can accept, reject, or modify the AI's recommendations.

Available test types:

| Test Type | Tooling Category | Description |
| --- | --- | --- |
| External Pentest | Recon + enumeration + template detection engines | Full external attack surface testing |
| Web App Pentest | Web security testing engines | Web application vulnerability testing |
| API Pentest | API security testing engines | API security testing |
| Kubernetes | Container and cluster assessment engines | Kubernetes cluster security |
| AWS | Cloud posture assessment engine | AWS infrastructure security |
| Azure | Cloud posture assessment engines | Azure cloud security |
| GCP | Cloud posture assessment engine | Google Cloud security |
| Mobile | Mobile assessment engine | Mobile application analysis |
| Phishing | Simulation engine | Phishing simulation |
| AD Password Audit | Directory assessment engine | Active Directory password testing |
| Okta Audit | Custom tool | Okta SSO security audit |
| OAuth Pentest | Custom tool | OAuth implementation testing |
| MFA Bypass Test | Custom tool | Multi-factor authentication testing |

4. Prepare the Workflow#

Click Prepare. The session moves to preparing status.

Behind the scenes:

  1. SilentBolt calls the AI to generate tool-specific parameters for each tool in your selected test types. The AI configures arguments (scan targets, output formats, intensity settings) based on your scan context.
  2. Safety checks are applied — AI-generated arguments cannot override base safe-default arguments defined in the tool registry.
  3. Orchestration steps are created for each tool in sequence.
  4. A visual workflow graph is built, showing the tool execution order and dependencies.

Once preparation completes, the session moves to ready.
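The step and graph construction described above can be sketched as follows. The tool names and registry shape are hypothetical stand-ins, not the built-in registry's actual contents:

```python
# Hypothetical registry: each test type maps to an ordered tool chain.
TEST_TYPE_TOOLS = {
    "web_app_pentest": ["recon_engine", "web_scanner"],
    "api_pentest": ["api_discovery", "api_fuzzer"],
}

def build_workflow(selected_test_types):
    """Create sequential orchestration steps for every tool in the
    selected test types, plus the edges of a linear workflow graph
    (each step depends on the previous one)."""
    steps = []
    for test_type in selected_test_types:
        for tool in TEST_TYPE_TOOLS[test_type]:
            steps.append({
                "index": len(steps),
                "test_type": test_type,
                "tool": tool,
                "status": "pending",
            })
    edges = [(steps[i]["index"], steps[i + 1]["index"])
             for i in range(len(steps) - 1)]
    return steps, edges

steps, edges = build_workflow(["web_app_pentest", "api_pentest"])
```

A graph built this way is strictly linear, which matches the sequential execution model described under Limitations.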

5. Execute#

Click Start. The session moves to running status.

Each step executes sequentially:

  1. A container is launched for the tool.
  2. The tool runs with the combined base and AI-generated arguments.
  3. Output is streamed live to the UI — updated every 2 seconds.
  4. Each step transitions to completed or failed.

After all steps in a test type complete, a report is generated for that test type.
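The execution loop above can be sketched roughly as below. The `run_tool` callable is a stand-in for launching the tool's container; real execution also streams output to the UI every 2 seconds, which is omitted here:

```python
import time

def run_steps(steps, run_tool, timeout=600):
    """Execute steps one at a time, marking each completed or failed.
    The default 600-second timeout mirrors the documented per-step
    default; here it is checked after the fact rather than enforced
    by a container runtime."""
    for step in steps:
        step["status"] = "running"
        started = time.monotonic()
        try:
            step["output"] = run_tool(step)  # stand-in for the container run
            if time.monotonic() - started > timeout:
                raise TimeoutError(f"step exceeded {timeout}s")
            step["status"] = "completed"
        except Exception as exc:
            step["status"] = "failed"
            step["error"] = str(exc)
    return all(s["status"] == "completed" for s in steps)

demo_steps = [{"tool": "recon_engine"}, {"tool": "web_scanner"}]
ok = run_steps(demo_steps, run_tool=lambda s: f"{s['tool']} output")
```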

6. Review Results#

Once the session reaches completed:

  • Review each step's output on the session detail page.
  • Download per-test-type reports (JSON format).
  • Use findings to inform further investigation or remediation.
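Since per-test-type reports are plain JSON, they are easy to post-process. The report schema below is illustrative only; the actual field names are not documented here:

```python
import json

# Hypothetical per-test-type report payload.
report_json = """
{
  "test_type": "web_app_pentest",
  "findings": [
    {"title": "Reflected XSS", "severity": "high"},
    {"title": "Missing security headers", "severity": "low"}
  ]
}
"""

report = json.loads(report_json)

# Summarize findings by severity to prioritize remediation.
by_severity = {}
for finding in report["findings"]:
    by_severity[finding["severity"]] = by_severity.get(finding["severity"], 0) + 1
```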

AI Integration Details#

The AI is invoked at two points during each session:

1. Test Type Suggestions#

The AI receives a scan context summary containing:

  • Domain, host count, endpoint count.
  • Detected authentication types and technology stack.
  • Finding summaries and severity distribution.
  • Boolean signals: has forgotten endpoints, has admin paths, has API endpoints.

It returns a ranked list of test types with rationale and confidence. Only test types with confidence ≥ 0.6 are included.
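The context summary described above might be assembled like this. The field names are assumptions chosen to match the documented contents, not SilentBolt's actual payload:

```python
def build_scan_context(scan):
    """Assemble the scan-context summary sent to the AI provider."""
    return {
        "domain": scan["domain"],
        "host_count": len(scan["hosts"]),
        "endpoint_count": len(scan["endpoints"]),
        "auth_types": scan["auth_types"],
        "tech_stack": scan["technologies"],
        "findings": [{"title": f["title"], "severity": f["severity"]}
                     for f in scan["findings"]],
        # Boolean surface signals from the scan results.
        "has_forgotten_endpoints": scan["has_forgotten_endpoints"],
        "has_admin_paths": scan["has_admin_paths"],
        "has_api_endpoints": scan["has_api_endpoints"],
    }

# Hypothetical completed-scan record.
scan = {
    "domain": "example.com",
    "hosts": ["a.example.com", "b.example.com"],
    "endpoints": ["/login", "/api/v1/users", "/admin"],
    "auth_types": ["jwt"],
    "technologies": ["nginx", "django"],
    "findings": [{"title": "Open redirect", "severity": "medium"}],
    "has_forgotten_endpoints": False,
    "has_admin_paths": True,
    "has_api_endpoints": True,
}
context = build_scan_context(scan)
```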

2. Tool Configuration#

For each tool in each selected test type, the AI generates additional CLI arguments. These are merged with the base arguments defined in the tool registry, following strict conflict-prevention rules:

  • AI cannot override base arguments.
  • AI arguments must be valid for the specific tool.
  • Dangerous or destructive arguments are blocked.
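The conflict-prevention rules above can be sketched as a merge function. The flag names and blocklist here are illustrative, not the tool registry's actual contents:

```python
# Illustrative blocklist of destructive flags (assumption).
DANGEROUS_FLAGS = {"--delete", "--exploit", "--brute-force"}

def merge_arguments(base_args, ai_args):
    """Merge AI-generated CLI arguments onto a tool's base arguments.
    Base flags are immutable: an AI argument that redefines one is
    dropped, and anything on the dangerous-flag blocklist is rejected."""
    base_flags = {a.split("=", 1)[0] for a in base_args}
    merged = list(base_args)
    for arg in ai_args:
        flag = arg.split("=", 1)[0]
        if flag in DANGEROUS_FLAGS:
            continue  # blocked outright
        if flag in base_flags:
            continue  # cannot override a safe default
        merged.append(arg)
    return merged

merged = merge_arguments(
    ["--rate=10", "--output=json"],
    ["--rate=1000", "--depth=3", "--exploit"],
)
# --rate=1000 (override attempt) and --exploit (blocklisted) are dropped
```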

Supported AI Providers#

  • OpenAI
  • Google Gemini
  • Anthropic Claude

Users can configure their preferred provider and API key in Settings, or use the system default.


Safety and Analyst Control#

Attack Orchestration is designed with safety as a priority:

  • Analyst-initiated only. No orchestration happens automatically. Every session is explicitly created and started by a user.
  • AI suggestions are recommendations. The analyst chooses which test types to execute.
  • Base arguments are immutable. The AI customizes parameters but cannot override safe defaults.
  • Isolation. Every tool runs in an isolated container — tools cannot access the host system or each other.
  • Step-level visibility. Live output streaming means the analyst can monitor exactly what's happening.
  • Per-step timeouts. Each step has a configurable timeout (default: 600 seconds; extended browser-based testing: 900 seconds) to prevent runaway execution.
  • Cancellation. Users can cancel a running session at any time.

Common Actions#

| Action | How |
| --- | --- |
| Create a session | Scan detail → Launch Orchestration (or Attack Orchestration → New Session) |
| Get AI suggestions | Session detail → Get Suggestions |
| Select test types | Session detail → check/uncheck suggested types |
| Prepare workflow | Session detail → Prepare |
| Start execution | Session detail → Start |
| Monitor execution | Session detail → live output stream |
| Cancel a session | Session detail → Cancel |
| Download reports | Session detail → Reports → Download |

Best Practices#

  • Use orchestration on high-value targets. Orchestration is most useful for domains with complex attack surfaces — many endpoints, diverse technology stacks, and existing high-severity findings.
  • Review AI suggestions critically. The AI makes informed recommendations, but you know your environment best. Deselect irrelevant tests.
  • Start with external pentest. For your first orchestration session, the external_pentest type covers the broadest scope and helps you understand the workflow.
  • Monitor live output. Don't just fire and forget. Watch the output to catch issues early and understand what the tools are finding.
  • Compare orchestration results with scan findings. Orchestration may discover issues that the automated scan pipeline missed, or confirm existing findings with additional evidence.

Limitations#

  • Sequential execution. Steps run one at a time, not in parallel. This means long sessions for test types with many tools.
  • No custom tool addition. You cannot currently add your own tools to the orchestration registry. Only tools defined in the built-in registry are available.
  • AI quality depends on provider. Different AI providers may produce different quality suggestions and configurations.
  • Requires completed scan. You must have at least one completed scan before launching orchestration.
