Kubernetes Deployment
The following files were used as context for generating this wiki page:
- .github/workflows/ghcr-publish.yml
- README.md
- RELEASE_3.0.0.md
- RELEASE_SUMMARY.md
- docs/DOCKER.md
- docs/V3_COMPLETE_GUIDE.md
- plugins/README.md
- wshawk/plugin_system.py
Purpose: This page documents deployment patterns for running WSHawk in Kubernetes environments. It covers Job-based scanning, CronJob scheduling, persistent web dashboard deployment, configuration management, and centralized vulnerability management integration. WSHawk's container-native design and REST API make it well-suited for orchestrated, multi-target scanning at scale.
Scope: For single-instance Docker usage, see Docker Usage Guide. For CI/CD pipeline integration (GitHub Actions, GitLab CI), see CI/CD Integration. For REST API details, see REST API Reference.
Kubernetes Deployment Overview
WSHawk supports three primary Kubernetes deployment patterns:
- Job-based single scans - One-time security assessments triggered programmatically
- CronJob scheduled scans - Recurring vulnerability assessments (daily, weekly)
- Persistent web dashboard - Long-running web management interface with SQLite persistence
All patterns leverage the official Docker images from Docker Hub (rothackers/wshawk) or GitHub Container Registry (ghcr.io/regaan/wshawk), which are built for both amd64 and arm64 architectures via GitHub Actions workflows.
Sources: README.md:64-78, docs/DOCKER.md:1-40, docs/V3_COMPLETE_GUIDE.md:352-361
Job-Based Single Scans
Kubernetes Jobs provide the simplest pattern for executing WSHawk scans as ephemeral tasks. Each Job runs a single scan against a target WebSocket endpoint and terminates upon completion.
Basic Job Manifest
apiVersion: batch/v1
kind: Job
metadata:
  name: wshawk-scan-target1
  namespace: security
spec:
  ttlSecondsAfterFinished: 3600  # Clean up 1 hour after completion
  backoffLimit: 2
  template:
    metadata:
      labels:
        app: wshawk
        scan-target: target1
    spec:
      restartPolicy: OnFailure
      containers:
      - name: wshawk
        image: rothackers/wshawk:3.0.0
        imagePullPolicy: IfNotPresent
        command: ["wshawk"]
        args:
        - "ws://target-service.production.svc.cluster.local:8080/ws"
        - "--rate"
        - "10"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        volumeMounts:
        - name: reports
          mountPath: /app/reports
      volumes:
      - name: reports
        emptyDir: {}
Key Configuration Points:
- TTL Controller: `ttlSecondsAfterFinished` automatically deletes completed Jobs to prevent cluster clutter
- Backoff Limit: `backoffLimit: 2` allows retry on transient failures (network issues, target unavailability)
- Resource Limits: The memory limit prevents OOM kills during large scans with 22,000+ payloads (wshawk/scanner_v2.py)
- EmptyDir Volume: Suitable for ephemeral reports; use a PersistentVolumeClaim for retention
Advanced Scan with Playwright and OAST
apiVersion: batch/v1
kind: Job
metadata:
  name: wshawk-advanced-scan
spec:
  template:
    spec:
      restartPolicy: OnFailure
      initContainers:
      - name: install-playwright
        image: rothackers/wshawk:3.0.0
        command: ["playwright", "install", "chromium"]
        volumeMounts:
        - name: playwright-cache
          mountPath: /root/.cache/ms-playwright
      containers:
      - name: wshawk
        image: rothackers/wshawk:3.0.0
        command: ["wshawk-advanced"]
        args:
        - "wss://secure-target.example.com/api/ws"
        - "--smart-payloads"
        - "--playwright"
        - "--full"
        env:
        - name: PYTHONUNBUFFERED
          value: "1"
        volumeMounts:
        - name: playwright-cache
          mountPath: /root/.cache/ms-playwright
        - name: reports
          mountPath: /app/reports
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      volumes:
      - name: playwright-cache
        emptyDir: {}
      - name: reports
        persistentVolumeClaim:
          claimName: wshawk-reports
Advanced Features:
- InitContainer: Pre-installs the Playwright Chromium browser for XSS verification (wshawk/headless_browser.py)
- Smart Payloads: Enables the genetic mutation engine (`PayloadEvolver`, `FeedbackLoop`) (wshawk/smart_payload_evolution.py)
- OAST Integration: Blind vulnerability detection via `interact.sh` (wshawk/oast_provider.py)
Sources: README.md:96-103, docs/DOCKER.md:124-131, docs/V3_COMPLETE_GUIDE.md:352-361
CronJob Scheduled Scanning
CronJobs enable continuous security monitoring by executing WSHawk scans on a recurring schedule. This pattern is ideal for regression testing and drift detection in production environments.
Daily Defensive Validation
apiVersion: batch/v1
kind: CronJob
metadata:
  name: wshawk-defensive-daily
  namespace: security
spec:
  schedule: "0 2 * * *"  # 2 AM daily
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: wshawk
            image: rothackers/wshawk:3.0.0
            command: ["wshawk-defensive"]
            args:
            - "wss://production-api.example.com/ws"
            envFrom:
            - configMapRef:
                name: wshawk-config
            - secretRef:
                name: wshawk-secrets
            volumeMounts:
            - name: reports
              mountPath: /app/reports
            - name: scan-history
              mountPath: /root/.wshawk
          volumes:
          - name: reports
            persistentVolumeClaim:
              claimName: wshawk-reports-pvc
          - name: scan-history
            persistentVolumeClaim:
              claimName: wshawk-history-pvc
Defensive Validation Tests (executed by wshawk-defensive):
- DNS Exfiltration Prevention: Validates egress filtering (wshawk/defensive_validation.py)
- Bot Detection Effectiveness: Tests anti-bot measures against headless browsers
- CSWSH Protection: 216+ malicious Origin header tests (wshawk/cswsh_test.py)
- WSS Protocol Security: TLS version, cipher suite, and certificate validation
CronJob Configuration:
- Concurrency Policy: `Forbid` prevents overlapping scans if a previous job runs longer than expected
- History Limits: Retains the last 3 successful and failed jobs for an audit trail
- Persistent Volumes: Maintains scan history in a SQLite database (`~/.wshawk/scans.db`) (wshawk/database.py)
Sources: README.md:199-238, docs/DOCKER.md:49-53, RELEASE_SUMMARY.md:15-19
Persistent Web Dashboard Deployment
The Web Management Dashboard (wshawk/web_dashboard.py) provides a persistent, team-accessible interface for scan orchestration, history visualization, and report management. It is deployed as a long-running Deployment with a Service for network access.
Dashboard Deployment Architecture
graph TB
subgraph "Kubernetes Cluster"
subgraph "security namespace"
Deploy["Deployment<br/>wshawk-dashboard<br/>replicas: 1"]
Svc["Service<br/>wshawk-dashboard-svc<br/>type: ClusterIP<br/>port: 5000"]
PVC1["PVC<br/>wshawk-db-pvc<br/>10Gi<br/>scans.db"]
PVC2["PVC<br/>wshawk-reports-pvc<br/>50Gi<br/>HTML reports"]
CM["ConfigMap<br/>wshawk-config<br/>wshawk.yaml"]
Secret["Secret<br/>wshawk-secrets<br/>WSHAWK_WEB_PASSWORD<br/>API keys"]
end
Ingress["Ingress<br/>wshawk.example.com<br/>TLS: enabled"]
end
Users["Security Team<br/>Browser Access"]
Jobs["Scan Jobs<br/>REST API Clients"]
DefectDojo["DefectDojo<br/>Vulnerability Management"]
Users -->|"HTTPS"| Ingress
Ingress --> Svc
Svc --> Deploy
Deploy -->|"reads"| CM
Deploy -->|"reads"| Secret
Deploy -->|"SQLite WAL"| PVC1
Deploy -->|"writes"| PVC2
Jobs -->|"POST /api/scans"| Svc
Deploy -->|"push findings"| DefectDojo
Diagram: Web Dashboard Deployment Architecture
Sources: README.md:112-136, docs/V3_COMPLETE_GUIDE.md:289-306, RELEASE_SUMMARY.md:15-19
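The diagram's `POST /api/scans` edge can be driven from any in-cluster client. The sketch below only builds the request; the JSON body shape and the `X-API-Key` header are assumptions, so consult the REST API Reference for the actual contract:

```python
import json
import urllib.request

# Hypothetical in-cluster DNS name of the dashboard Service defined below.
DASHBOARD = "http://wshawk-dashboard-svc.security.svc.cluster.local:5000"

def build_scan_request(target: str, api_key: str) -> urllib.request.Request:
    """Build a POST /api/scans request; body/header names are assumptions."""
    body = json.dumps({"target": target}).encode()
    return urllib.request.Request(
        f"{DASHBOARD}/api/scans",
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

req = build_scan_request(
    "ws://target-service.production.svc.cluster.local:8080/ws", "secret"
)
print(req.get_method(), req.full_url)
# Inside the cluster, the request would be sent with
# urllib.request.urlopen(req).
```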
Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wshawk-dashboard
  namespace: security
spec:
  replicas: 1  # Single instance due to SQLite file locking
  strategy:
    type: Recreate  # Required for SQLite persistence
  selector:
    matchLabels:
      app: wshawk-dashboard
  template:
    metadata:
      labels:
        app: wshawk-dashboard
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: dashboard
        image: rothackers/wshawk:3.0.0
        command: ["wshawk"]
        args:
        - "--web"
        - "--host"
        - "0.0.0.0"
        - "--port"
        - "5000"
        env:
        - name: WSHAWK_WEB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wshawk-secrets
              key: web-password
        - name: WSHAWK_API_KEY
          valueFrom:
            secretKeyRef:
              name: wshawk-secrets
              key: api-key
        ports:
        - containerPort: 5000
          name: http
        livenessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /health
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        volumeMounts:
        - name: db-storage
          mountPath: /root/.wshawk
        - name: reports
          mountPath: /app/reports
        - name: config
          mountPath: /app/wshawk.yaml
          subPath: wshawk.yaml
      volumes:
      - name: db-storage
        persistentVolumeClaim:
          claimName: wshawk-db-pvc
      - name: reports
        persistentVolumeClaim:
          claimName: wshawk-reports-pvc
      - name: config
        configMap:
          name: wshawk-config
Service and Ingress
---
apiVersion: v1
kind: Service
metadata:
  name: wshawk-dashboard-svc
  namespace: security
spec:
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
    name: http
  selector:
    app: wshawk-dashboard
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wshawk-ingress
  namespace: security
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - wshawk.example.com
    secretName: wshawk-tls
  rules:
  - host: wshawk.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wshawk-dashboard-svc
            port:
              number: 5000
Key Design Decisions:
- Single Replica: SQLite uses file-based locking; multiple replicas would cause contention (wshawk/database.py:1-50)
- Recreate Strategy: Ensures graceful shutdown and database consistency during updates
- Security Context: Runs as a non-root user (`wshawk:1000`) per container best practices (Dockerfile:40-45)
- Health Probes: Validate Flask server responsiveness
- TLS Termination: The Ingress handles HTTPS; internal communication uses HTTP
Sources: README.md:112-136, docs/DOCKER.md:195-226, RELEASE_SUMMARY.md:15-19
Storage and Persistence Strategies
WSHawk requires persistent storage for two primary use cases: the SQLite scan-history database (`scans.db`) and the generated HTML reports.
PersistentVolumeClaim Specifications
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wshawk-db-pvc
  namespace: security
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wshawk-reports-pvc
  namespace: security
spec:
  accessModes:
  - ReadWriteMany  # Allow multiple Jobs to write concurrently
  storageClassName: nfs-client
  resources:
    requests:
      storage: 50Gi
Storage Architecture
graph LR
subgraph "SQLite Database Storage (RWO)"
DB["scans.db<br/>SQLite with WAL mode<br/>~/.wshawk/scans.db"]
end
subgraph "Report Storage (RWX)"
Reports["HTML Reports<br/>wshawk_report_*.html<br/>Screenshots<br/>Traffic Logs"]
end
Dashboard["Dashboard Pod"] -->|"writes"| DB
Dashboard -->|"reads/writes"| Reports
Job1["Scan Job 1"] -->|"writes"| Reports
Job2["Scan Job 2"] -->|"writes"| Reports
Job3["Scan Job 3"] -->|"writes"| Reports
CronJob["CronJob"] -->|"reads"| DB
CronJob -->|"writes"| DB
CronJob -->|"writes"| Reports
Diagram: WSHawk Storage Architecture in Kubernetes
Storage Considerations:
| Component | Access Mode | Storage Class | Rationale |
|-----------|-------------|---------------|-----------|
| scans.db | ReadWriteOnce | Block storage (AWS EBS, GCE PD) | SQLite file locking requires exclusive access |
| Reports directory | ReadWriteMany | NFS, CephFS, EFS | Multiple Jobs write reports concurrently |
SQLite WAL Mode: WSHawk uses Write-Ahead Logging for crash recovery and concurrent reads (wshawk/database.py:25-40). The scans.db-wal and scans.db-shm files must reside on the same volume.
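The WAL behavior is easy to reproduce with stdlib Python. This is an illustrative sketch; WSHawk's actual schema and connection handling live in wshawk/database.py, and the path below is a stand-in for `~/.wshawk/scans.db`:

```python
import os
import sqlite3
import tempfile

# Stand-in path for ~/.wshawk/scans.db on the RWO volume.
db_path = os.path.join(tempfile.mkdtemp(), "scans.db")
conn = sqlite3.connect(db_path)

# PRAGMA journal_mode=WAL switches the journal and returns the mode
# now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# Illustrative table; the real schema is defined in wshawk/database.py.
conn.execute("CREATE TABLE IF NOT EXISTS scans (id INTEGER PRIMARY KEY, target TEXT)")
conn.commit()

# While the connection is open, SQLite keeps scans.db-wal and
# scans.db-shm next to the database file -- which is why all three
# must reside on the same volume.
conn.close()
print(mode)  # wal
```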
Sources: RELEASE_SUMMARY.md:15-19, docs/V3_COMPLETE_GUIDE.md:122-131, README.md:132-133
Configuration Management
WSHawk uses hierarchical configuration via wshawk.yaml (wshawk/config.py) with environment variable and secret resolution. Kubernetes ConfigMaps and Secrets provide the integration points.
ConfigMap for wshawk.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: wshawk-config
  namespace: security
data:
  wshawk.yaml: |
    # WSHawk Configuration
    scan:
      rate_limit: 10
      max_retries: 3
      timeout: 30
      use_smart_payloads: true
      use_oast: true
      use_playwright: false
    resilience:
      enable_circuit_breaker: true
      backoff_multiplier: 2
      max_backoff: 60
      circuit_breaker_threshold: 5
    integrations:
      defectdojo:
        enabled: true
        url: "env:DEFECTDOJO_URL"
        api_key: "env:DEFECTDOJO_API_KEY"
        engagement_name: "WebSocket Security Assessment"
        auto_create_findings: true
      jira:
        enabled: false
        url: "env:JIRA_URL"
        api_token: "env:JIRA_API_TOKEN"
        project: "SEC"
        auto_create_issue: true
        severity_threshold: "HIGH"
    webhooks:
      slack:
        enabled: true
        webhook_url: "env:SLACK_WEBHOOK_URL"
        channel: "#security-alerts"
    reporting:
      formats:
      - html
      - json
      - sarif
      include_screenshots: true
      include_traffic_logs: true
      cvss_threshold: 4.0
Secret for Sensitive Credentials
apiVersion: v1
kind: Secret
metadata:
  name: wshawk-secrets
  namespace: security
type: Opaque
stringData:
  # Web Dashboard Authentication
  web-password: "your-strong-password-here"
  api-key: "your-api-key-here"
  # DefectDojo Integration
  DEFECTDOJO_URL: "https://defectdojo.example.com"
  DEFECTDOJO_API_KEY: "dd_api_key_xxxxxxxxx"
  # Jira Integration
  JIRA_URL: "https://company.atlassian.net"
  JIRA_API_TOKEN: "jira_token_xxxxxxxxx"
  # Slack Webhook
  SLACK_WEBHOOK_URL: "https://hooks.slack.com/services/T00/B00/xxxx"
Environment Variable Resolution Flow
graph LR
ConfigMap["ConfigMap<br/>wshawk-config"] -->|"mounts"| Pod["WSHawk Pod"]
Secret["Secret<br/>wshawk-secrets"] -->|"env vars"| Pod
Pod -->|"reads"| Config["wshawk.yaml<br/>(config.py)"]
Config -->|"parses"| Parser["WSHawkConfig<br/>resolve_secrets()"]
Parser -->|"env: prefix"| EnvVar["os.getenv()"]
Parser -->|"file: prefix"| FileRead["read from file"]
EnvVar --> Value["Resolved Values"]
FileRead --> Value
Value --> Scanner["WSHawkV2<br/>scanner_v2.py"]
Diagram: Configuration and Secret Resolution Pipeline
Secret Resolution Syntax:
- `env:VAR_NAME` - Resolves to the environment variable `VAR_NAME`
- `file:/path/to/secret` - Reads the secret from a file (useful for Kubernetes Secret volumes)
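The resolution rules can be sketched in a few lines. The `resolve_secret` helper below is hypothetical; WSHawk's real logic lives in wshawk/config.py and may differ in naming and error handling:

```python
import os

def resolve_secret(value: str) -> str:
    """Resolve env:/file: prefixed config values (illustrative sketch)."""
    if value.startswith("env:"):
        # env:VAR_NAME -> look up the environment variable
        return os.environ.get(value[len("env:"):], "")
    if value.startswith("file:"):
        # file:/path -> read the secret from a mounted file
        with open(value[len("file:"):], "r", encoding="utf-8") as fh:
            return fh.read().strip()
    return value  # plain literal, no resolution needed

os.environ["DEFECTDOJO_API_KEY"] = "dd_api_key_example"
print(resolve_secret("env:DEFECTDOJO_API_KEY"))  # dd_api_key_example
print(resolve_secret("plain-value"))             # plain-value
```

The `file:` form pairs naturally with Secrets mounted as volumes, since each key becomes a file under the mount path.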
Sources: README.md:137-150, wshawk/config.py, docs/V3_COMPLETE_GUIDE.md:352-408
Multi-Service Scanning Architecture
Kubernetes enables parallel scanning of multiple WebSocket services across a cluster. This pattern uses a Job per target with centralized result aggregation.
Parallel Multi-Target Scanning
graph TB
subgraph "Orchestration Layer"
Controller["Scan Controller<br/>(K8s CronJob or Operator)"]
end
subgraph "Scanning Jobs"
Job1["Job: scan-service-a<br/>wshawk ws://service-a:8080/ws"]
Job2["Job: scan-service-b<br/>wshawk ws://service-b:8080/ws"]
Job3["Job: scan-service-c<br/>wshawk ws://service-c:8080/ws"]
Job4["Job: scan-service-d<br/>wshawk ws://service-d:8080/ws"]
end
subgraph "Target Services"
SvcA["service-a<br/>Namespace: prod"]
SvcB["service-b<br/>Namespace: prod"]
SvcC["service-c<br/>Namespace: staging"]
SvcD["service-d<br/>Namespace: dev"]
end
subgraph "Centralized Reporting"
PVC["Shared PVC<br/>wshawk-reports"]
DefectDojo["DefectDojo<br/>Vulnerability DB"]
Dashboard["WSHawk Dashboard<br/>Scan History"]
end
Controller --> Job1
Controller --> Job2
Controller --> Job3
Controller --> Job4
Job1 -->|"scans"| SvcA
Job2 -->|"scans"| SvcB
Job3 -->|"scans"| SvcC
Job4 -->|"scans"| SvcD
Job1 -->|"write report"| PVC
Job2 -->|"write report"| PVC
Job3 -->|"write report"| PVC
Job4 -->|"write report"| PVC
Job1 -->|"POST findings"| DefectDojo
Job2 -->|"POST findings"| DefectDojo
Job3 -->|"POST findings"| DefectDojo
Job4 -->|"POST findings"| DefectDojo
PVC --> Dashboard
DefectDojo --> Dashboard
Diagram: Multi-Service Parallel Scanning Architecture
Sources: docs/V3_COMPLETE_GUIDE.md:352-376
Service Discovery and Target List Generation
apiVersion: batch/v1
kind: Job
metadata:
  name: wshawk-discovery
  namespace: security
spec:
  template:
    spec:
      serviceAccountName: wshawk-scanner
      restartPolicy: OnFailure
      containers:
      - name: discover-and-scan
        image: rothackers/wshawk:3.0.0
        command: ["/bin/bash", "-c"]
        args:
        - |
          # Discover all services with the WebSocket label
          kubectl get services --all-namespaces \
            -l websocket=enabled \
            -o json | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"' > targets.txt
          # Generate a scan Job per target
          while read target; do
            namespace=$(echo $target | cut -d'/' -f1)
            service=$(echo $target | cut -d'/' -f2)
            cat <<EOF | kubectl apply -f -
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: wshawk-scan-${namespace}-${service}
            namespace: security
          spec:
            template:
              spec:
                restartPolicy: OnFailure
                containers:
                - name: wshawk
                  image: rothackers/wshawk:3.0.0
                  args:
                  - "ws://${service}.${namespace}.svc.cluster.local:8080/ws"
          EOF
          done < targets.txt
Discovery Pattern:
- Query the Kubernetes API for Services labeled `websocket=enabled`
- Generate a Job manifest dynamically for each discovered Service
- Each Job executes independently and writes results to shared storage
Note that `kubectl` inside the pod authenticates automatically via the mounted `wshawk-scanner` service account credentials; no KUBECONFIG is needed. Listing Services across all namespaces additionally requires a cluster-scoped grant beyond the namespaced Role shown under Security Considerations.
Sources: docs/V3_COMPLETE_GUIDE.md:204-221
DefectDojo Integration
DefectDojo integration enables centralized vulnerability management across all scans. WSHawk automatically pushes findings to DefectDojo via its REST API (wshawk/integrations/defectdojo.py).
DefectDojo Integration Flow
sequenceDiagram
participant Job as "WSHawk Job"
participant Scanner as "WSHawkV2"
participant Integrations as "DefectDojoIntegration"
participant DD as "DefectDojo API"
participant DB as "DefectDojo Database"
Job->>Scanner: Execute scan
Scanner->>Scanner: Detect vulnerabilities
Scanner->>Scanner: Calculate CVSS scores
Scanner->>Integrations: push_findings(vulnerabilities)
Integrations->>DD: GET /api/v2/products
DD-->>Integrations: Product ID
Integrations->>DD: GET /api/v2/engagements?product=X
DD-->>Integrations: Engagement ID or 404
alt Engagement not found
Integrations->>DD: POST /api/v2/engagements
DD-->>Integrations: New Engagement ID
end
loop For each vulnerability
Integrations->>DD: POST /api/v2/findings
DD->>DB: Store finding
DD-->>Integrations: 201 Created
end
Integrations-->>Scanner: Push complete
Scanner-->>Job: Scan complete
Diagram: DefectDojo Integration Sequence
DefectDojo Configuration in Kubernetes
# In ConfigMap wshawk.yaml
integrations:
  defectdojo:
    enabled: true
    url: "env:DEFECTDOJO_URL"
    api_key: "env:DEFECTDOJO_API_KEY"
    product_name: "WebSocket Services"
    engagement_name: "Kubernetes Scan - {{DATE}}"
    scan_type: "WebSocket Security Assessment"
    auto_create_engagement: true
    deduplication: true  # Avoid duplicate findings
Integration Features:
- Automatic Engagement Creation: Creates new engagement per scan date (wshawk/integrations/defectdojo.py:100-150)
- Product Mapping: Maps scans to DefectDojo products by service name
- CVSS Integration: Findings include calculated CVSS v3.1 scores (wshawk/cvss_calculator.py)
- Deduplication: Uses finding hash to prevent duplicate entries
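The per-finding POST step in the sequence above can be sketched as a payload builder. The `vuln` dict shape and the `test_id` value are illustrative, and WSHawk's real client lives in wshawk/integrations/defectdojo.py; field names follow DefectDojo's `/api/v2/findings/` endpoint, where findings attach to a Test id:

```python
import hashlib

def build_finding_payload(vuln: dict, test_id: int) -> dict:
    """Map one vulnerability onto a DefectDojo finding body (sketch)."""
    # A stable hash of title + endpoint supports deduplication.
    unique = hashlib.sha256(
        f"{vuln['title']}|{vuln['endpoint']}".encode()
    ).hexdigest()
    return {
        "test": test_id,
        "title": vuln["title"],
        "severity": vuln["severity"],
        "description": vuln["description"],
        "unique_id_from_tool": unique,
        "active": True,
        "verified": False,
    }

payload = build_finding_payload(
    {
        "title": "CSWSH: Origin header not validated",
        "endpoint": "wss://production-api.example.com/ws",
        "severity": "High",
        "description": "Handshake accepted a cross-origin request.",
    },
    test_id=42,
)
# The push itself is an authenticated POST, e.g.:
# requests.post(f"{url}/api/v2/findings/", json=payload,
#               headers={"Authorization": f"Token {api_key}"})
```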
Sources: README.md:31, RELEASE_SUMMARY.md:30-34, docs/V3_COMPLETE_GUIDE.md:342-343
Resource Management and Limits
Proper resource allocation prevents cluster instability during intensive scanning operations.
Resource Allocation Guidelines
| Scan Type | Memory Request | Memory Limit | CPU Request | CPU Limit | Rationale |
|-----------|----------------|--------------|-------------|-----------|-----------|
| Quick Scan | 128Mi | 256Mi | 100m | 250m | Basic payload injection |
| Advanced (no Playwright) | 256Mi | 512Mi | 200m | 500m | Smart payload evolution |
| Advanced (with Playwright) | 512Mi | 1Gi | 500m | 1000m | Chromium browser overhead |
| Defensive Validation | 128Mi | 256Mi | 100m | 250m | Protocol tests only |
| Web Dashboard | 256Mi | 512Mi | 200m | 500m | Flask + SQLite operations |
ResourceQuota for Security Namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: wshawk-quota
  namespace: security
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    persistentvolumeclaims: "5"
    pods: "50"
LimitRange for Job Defaults
apiVersion: v1
kind: LimitRange
metadata:
  name: wshawk-limits
  namespace: security
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 200m
      memory: 256Mi
    max:
      cpu: "2"
      memory: 2Gi
    min:
      cpu: 100m
      memory: 128Mi
Resource Considerations:
- Smart Payload Evolution: `PayloadEvolver` genetic algorithms consume ~100MB of additional memory (wshawk/smart_payload_evolution.py:200-300)
- Playwright Browser: Chromium requires a ~300-500MB baseline (docs/DOCKER.md:124-131)
- Circuit Breakers: `ResilientSession` prevents resource exhaustion during target failures (wshawk/resilient_session.py:50-100)
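As a sanity check on the quota above, the number of concurrent scans can be estimated by dividing the namespace requests by the per-Job requests from the allocation table. This is a back-of-envelope sketch that ignores the dashboard's own footprint and system overhead:

```python
# Namespace-wide ResourceQuota from the manifest above.
QUOTA_MEM_MI = 20 * 1024   # requests.memory: 20Gi
QUOTA_CPU_M = 10 * 1000    # requests.cpu: "10"

# Per-Job requests for an advanced scan with Playwright
# (from the allocation table): 512Mi memory, 500m CPU.
JOB_MEM_MI, JOB_CPU_M = 512, 500

max_by_mem = QUOTA_MEM_MI // JOB_MEM_MI   # 40 Jobs fit by memory
max_by_cpu = QUOTA_CPU_M // JOB_CPU_M     # 20 Jobs fit by CPU
print(min(max_by_mem, max_by_cpu))        # 20 -> CPU is the binding limit
```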
Sources: docs/DOCKER.md:222-225, RELEASE_SUMMARY.md:9-13
Security Considerations
RBAC for Scanner Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: wshawk-scanner
  namespace: security
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: wshawk-scanner-role
  namespace: security
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: wshawk-scanner-binding
  namespace: security
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: wshawk-scanner-role
subjects:
- kind: ServiceAccount
  name: wshawk-scanner
  namespace: security
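One caveat: a namespaced Role cannot authorize the cross-namespace Service listing that the discovery Job performs. A minimal sketch of the additional cluster-scoped grant (resource names here are illustrative, not from the WSHawk sources):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: wshawk-service-reader
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: wshawk-service-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: wshawk-service-reader
subjects:
- kind: ServiceAccount
  name: wshawk-scanner
  namespace: security
```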
Security Hardening Checklist
| Control | Implementation | File Reference |
|---------|----------------|----------------|
| Non-root execution | runAsUser: 1000, runAsNonRoot: true | Dockerfile:40-45 |
| Read-only root filesystem | readOnlyRootFilesystem: true | docs/DOCKER.md:217-220 |
| Secret management | Kubernetes Secrets with stringData | - |
| Network policies | Restrict egress to known endpoints | - |
| Pod security standards | restricted PSS profile | - |
| Image verification | Signed images from GHCR | .github/workflows/ghcr-publish.yml:1-50 |
Network Policy Example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wshawk-scanner-policy
  namespace: security
spec:
  podSelector:
    matchLabels:
      app: wshawk
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
  # Allow scanning internal services
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 443
  # Allow OAST callbacks (interact.sh is an external service,
  # so egress must reach outside the cluster on 443)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
Security Principles:
- Principle of Least Privilege: Scanner service account has minimal required permissions
- Defense in Depth: Multiple layers (RBAC, network policy, pod security, resource limits)
- Immutable Infrastructure: Pods recreate rather than update in place (Deployment strategy: Recreate)
- Audit Logging: All scans persist to SQLite for forensic review (wshawk/database.py)
Sources: README.md:269-296, docs/DOCKER.md:205-226, Dockerfile:40-45
Complete Example: Production Deployment
This section provides a complete, production-ready deployment configuration combining all patterns.
Kustomization Structure
k8s/
├── base/
│ ├── kustomization.yaml
│ ├── namespace.yaml
│ ├── configmap.yaml
│ ├── secret.yaml
│ ├── pvc.yaml
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ └── cronjob.yaml
└── overlays/
├── production/
│ └── kustomization.yaml
└── staging/
└── kustomization.yaml
Base Kustomization
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: security
resources:
- namespace.yaml
- configmap.yaml
- secret.yaml
- pvc.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
- cronjob.yaml
images:
- name: rothackers/wshawk
  newTag: 3.0.0
commonLabels:
  app.kubernetes.io/name: wshawk
  app.kubernetes.io/version: "3.0.0"
  app.kubernetes.io/managed-by: kustomize
Production Overlay
# k8s/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namespace: security-prod
patchesStrategicMerge:
- |-
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: wshawk-dashboard
  spec:
    template:
      spec:
        containers:
        - name: dashboard
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
configMapGenerator:
- name: wshawk-config
  files:
  - wshawk.yaml=production-config.yaml
secretGenerator:
- name: wshawk-secrets
  envs:
  - production-secrets.env
Deployment Command
# Apply production configuration
kubectl apply -k k8s/overlays/production
# Verify deployment
kubectl get all -n security-prod
# Create one-time scan job
kubectl create job --from=cronjob/wshawk-defensive-daily \
wshawk-onetime-scan -n security-prod
# View logs
kubectl logs -f -n security-prod \
-l app.kubernetes.io/name=wshawk
# Access dashboard
kubectl port-forward -n security-prod \
svc/wshawk-dashboard-svc 5000:5000
Sources: docs/V3_COMPLETE_GUIDE.md:352-376, README.md:112-136
Summary
WSHawk's Kubernetes deployment patterns provide:
- Scalability: Parallel multi-service scanning via Jobs
- Automation: CronJob scheduling for continuous monitoring
- Persistence: Web dashboard with SQLite history and report retention
- Integration: Centralized DefectDojo vulnerability management
- Security: Non-root execution, RBAC, network policies, and secret management
Key Takeaways:
- Use Jobs for one-time scans and CronJobs for recurring assessments
- Deploy the Web Dashboard as a single-replica Deployment with a `ReadWriteOnce` PVC for SQLite
- Configure a shared `ReadWriteMany` PVC for concurrent report writing from multiple Jobs
- Leverage ConfigMaps and Secrets for hierarchical configuration with `env:` and `file:` resolution
- Integrate with DefectDojo for centralized vulnerability tracking across all cluster services
- Apply resource limits, RBAC, and network policies for production security hardening
For Docker-only usage, see Docker Usage Guide. For CI/CD integration, see CI/CD Integration. For REST API automation, see REST API Reference.
Sources: README.md:1-310, docs/V3_COMPLETE_GUIDE.md:352-376, docs/DOCKER.md:1-240, RELEASE_SUMMARY.md:1-60