Security Scanning
Comprehensive guide to SysManage's security scanning infrastructure, covering vulnerability assessment, compliance scanning, automated security testing, and reporting.
Security Scanning Overview
SysManage's security scanning infrastructure performs automated vulnerability assessments, compliance checks, and security testing throughout the development and deployment lifecycle. The system integrates multiple specialized tools so that coverage spans code, dependencies, containers, infrastructure, and running applications.
Security Scanning Architecture
┌─────────────────────────────────────────────────────────────────┐
│                   Security Scanning Pipeline                    │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐  ┌──────────────┐  ┌─────────────────────────┐  │
│ │ Static Code │  │  Dependency  │  │     Infrastructure      │  │
│ │  Analysis   │  │   Scanning   │  │        Scanning         │  │
│ └─────────────┘  └──────────────┘  └─────────────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐  ┌──────────────┐  ┌─────────────────────────┐  │
│ │   Secrets   │  │  Container   │  │       Compliance        │  │
│ │  Detection  │  │   Scanning   │  │        Scanning         │  │
│ └─────────────┘  └──────────────┘  └─────────────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐  ┌──────────────┐  ┌─────────────────────────┐  │
│ │   Dynamic   │  │   Network    │  │     Web Application     │  │
│ │  Analysis   │  │   Scanning   │  │        Scanning         │  │
│ └─────────────┘  └──────────────┘  └─────────────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │                  Reporting & Integration                    │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │  SARIF  │ │ GitHub  │ │  Jira   │ │  Slack  │ │  Email  │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
                                │
                        Scan Orchestration
                                │
┌─────────────────────────────────────────────────────────────────┐
│                         Scanning Tools                          │
├─────────────────────────────────────────────────────────────────┤
│ Bandit │ Semgrep │ Safety │ Snyk │ TruffleHog │ Trivy │ OWASP   │
│ CodeQL │ SonarQube │ Nessus │ OpenVAS │ Nuclei │ Nikto │ etc.   │
└─────────────────────────────────────────────────────────────────┘
Security Scanning Tools
Integrated Scanning Tools
SysManage leverages multiple specialized security scanning tools, each designed for specific security assessment areas.
🔍 Static Code Analysis
Bandit
Python security static analysis
- Common security issue detection
- CWE mapping and severity scoring
- Custom rule configuration
- CI/CD pipeline integration
Semgrep
Multi-language static analysis
- Cross-language security rules
- Custom pattern detection
- OWASP Top 10 coverage
- Real-time code scanning
CodeQL
Semantic code analysis
- Deep code understanding
- Complex vulnerability detection
- GitHub integration
- Custom query development
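A common pattern is to run these tools with machine-readable output and post-process the report in CI. As a sketch, the helper below tallies Bandit findings by severity from a `bandit -f json` report (the field names follow Bandit's documented JSON format; the trimmed sample report is illustrative):

```python
import json
from collections import Counter


def summarize_bandit_report(report_json: str) -> dict:
    """Count Bandit findings by severity from a `bandit -f json` report."""
    report = json.loads(report_json)
    counts = Counter(r["issue_severity"] for r in report.get("results", []))
    return {sev: counts.get(sev, 0) for sev in ("HIGH", "MEDIUM", "LOW")}


# Trimmed, illustrative report: real output also carries metrics and errors.
sample = ('{"results": [{"issue_severity": "HIGH", "test_id": "B602"},'
          ' {"issue_severity": "LOW", "test_id": "B101"}]}')
print(summarize_bandit_report(sample))
```

A CI gate can then fail the build when the `HIGH` count is non-zero, while merely reporting lower severities.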
📦 Dependency Scanning
Safety
Python dependency vulnerability scanner
- PyPI vulnerability database
- License compliance checking
- Automated remediation suggestions
- Policy enforcement
Snyk
Multi-ecosystem vulnerability scanner
- npm, PyPI, Maven support
- Real-time vulnerability monitoring
- Fix recommendations
- Container image scanning
OWASP Dependency Check
Open source dependency scanner
- CVE database integration
- Multiple language support
- Build tool integration
- Extensive reporting formats
🔐 Secrets Detection
TruffleHog
High-entropy secrets scanner
- Git history scanning
- Real-time secret detection
- Custom entropy thresholds
- Multiple output formats
GitLeaks
Git repository secret scanner
- SAST secrets detection
- Pre-commit hook integration
- Custom rule configuration
- Baseline establishment
Detect Secrets
Enterprise secrets detection
- Inline detection and prevention
- Plugin architecture
- Allowlist management
- Team collaboration features
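High-entropy detection, as used by TruffleHog, reduces to measuring Shannon entropy over candidate strings: random keys score near the maximum for their alphabet, while natural-language words score low. A minimal sketch (the 4.5-bit threshold and 20-character minimum are illustrative defaults, not any tool's exact settings):

```python
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random base64-like secrets score high."""
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())


def looks_like_secret(token: str, threshold: float = 4.5) -> bool:
    """Flag long, high-entropy tokens as candidate secrets."""
    return len(token) >= 20 and shannon_entropy(token) > threshold
```

Production scanners combine this signal with structural patterns (key prefixes, PEM headers) and, in TruffleHog's case, live credential verification to cut false positives.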
🌐 Infrastructure Scanning
Nessus
Comprehensive vulnerability scanner
- Network vulnerability assessment
- Configuration auditing
- Compliance reporting
- Credentialed scanning
OpenVAS
Open source vulnerability scanner
- Network security scanning
- Authenticated testing
- Custom vulnerability tests
- Detailed reporting
Nuclei
Fast vulnerability scanner
- Template-based scanning
- Community-driven templates
- High-speed scanning
- Custom template creation
CI/CD Pipeline Integration
Automated Security Scanning
Security scanning is integrated into the CI/CD pipeline to ensure continuous security assessment throughout the development lifecycle.
Pipeline Security Gates
1. 📝 Pre-commit
- Secrets detection (TruffleHog)
- Code formatting and linting
- Basic security pattern checks
- Commit message validation
2. 🔄 Pull Request
- Static code analysis (Bandit, Semgrep)
- Dependency vulnerability scanning
- Security policy compliance checks
- Code review integration
3. 🏗️ Build
- Container image scanning (Trivy)
- License compliance verification
- Supply chain security checks
- SBOM generation
4. 🧪 Test
- Dynamic application security testing
- API security testing
- Infrastructure security validation
- Performance security testing
5. 🚀 Deploy
- Configuration security validation
- Runtime security monitoring setup
- Security baseline establishment
- Deployment security verification
6. 🔍 Production
- Continuous vulnerability monitoring
- Runtime security analysis
- Compliance reporting
- Incident response integration
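The pre-commit gate can be sketched as a small scanner over the staged diff: only added lines are checked, and any match blocks the commit. Real setups delegate to TruffleHog or GitLeaks; the patterns below are illustrative, not exhaustive:

```python
import re

# Illustrative patterns; production hooks use TruffleHog/GitLeaks rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]", re.I),
]


def scan_diff(diff_text: str) -> list:
    """Return added diff lines that match a secret pattern."""
    hits = []
    for line in diff_text.splitlines():
        # '+' marks an added line; '+++' is the file header, not content.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line)
    return hits


# As a pre-commit hook: feed the output of `git diff --cached` into
# scan_diff and exit non-zero when it returns any findings.
```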
GitHub Actions Security Workflow
# .github/workflows/security.yml
name: Security Scanning

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
  schedule:
    # Run weekly security scans on Sundays at 2 AM UTC
    - cron: '0 2 * * 0'

jobs:
  secrets-scan:
    name: Secrets Detection
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for comprehensive scanning

      - name: Run TruffleHog
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          base: main
          head: HEAD
          extra_args: --debug --only-verified

      - name: Upload TruffleHog results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: trufflehog-results.sarif

  static-analysis:
    name: Static Code Analysis
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install "bandit[toml,sarif]" safety semgrep

      - name: Run Bandit
        run: |
          bandit -r backend/ -f json -o bandit-results.json || true
          bandit -r backend/ -f sarif -o bandit-results.sarif || true

      - name: Run Safety
        run: |
          safety check --json > safety-results.json || true

      - name: Run Semgrep
        run: |
          semgrep --config=auto --sarif --output=semgrep-results.sarif backend/ frontend/ || true

      - name: Upload Bandit results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: bandit-results.sarif

      - name: Upload Semgrep results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: semgrep-results.sarif

  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Snyk
        uses: snyk/actions/python@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=medium --sarif-file-output=snyk-results.sarif

      - name: Upload Snyk results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: snyk-results.sarif

      - name: OWASP Dependency Check
        uses: dependency-check/Dependency-Check_Action@main
        with:
          project: 'SysManage'
          path: '.'
          format: 'SARIF'
          out: 'dependency-check-results'

      - name: Upload Dependency Check results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: dependency-check-results/dependency-check-report.sarif

  container-scan:
    name: Container Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build Docker image
        run: |
          docker build -t sysmanage:${{ github.sha }} .

      - name: Run Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'sysmanage:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: trivy-results.sarif

  codeql-analysis:
    name: CodeQL Analysis
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: [ 'python', 'javascript' ]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}

      - name: Autobuild
        uses: github/codeql-action/autobuild@v2

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2

  security-policy-check:
    name: Security Policy Compliance
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check SECURITY.md
        run: |
          if [ ! -f SECURITY.md ]; then
            echo "❌ SECURITY.md file is missing"
            exit 1
          fi

      - name: Validate GitHub Security Policy
        run: |
          # Check for required security configurations
          if [ ! -f .github/dependabot.yml ]; then
            echo "❌ Dependabot configuration missing"
            exit 1
          fi

      - name: Check for hardcoded secrets patterns
        run: |
          # Custom patterns for common secrets
          if grep -r -E "(password|secret|key|token)\s*=\s*['\"][^'\"]{8,}['\"]" --include="*.py" --include="*.js" .; then
            echo "❌ Potential hardcoded secrets detected"
            exit 1
          fi

  notification:
    name: Security Scan Notification
    runs-on: ubuntu-latest
    needs: [secrets-scan, static-analysis, dependency-scan, container-scan, codeql-analysis]
    if: always()
    steps:
      - name: Notify Security Team
        if: failure()
        uses: 8398a7/action-slack@v3
        with:
          status: failure
          channel: '#security-alerts'
          webhook_url: ${{ secrets.SLACK_WEBHOOK }}
          fields: repo,message,commit,author,action,eventName,ref,workflow
Vulnerability Management
Vulnerability Assessment Process
SysManage implements a comprehensive vulnerability management process that includes identification, assessment, prioritization, and remediation.
Vulnerability Management Workflow
1. 🔍 Discovery
- Automated scanning across all assets
- Threat intelligence integration
- Manual security testing
- Bug bounty program integration
2. 📊 Assessment
- CVSS scoring and risk assessment
- Asset criticality evaluation
- Exploitability analysis
- Business impact assessment
3. 📋 Prioritization
- Risk-based prioritization matrix
- SLA-based response times
- Resource allocation planning
- Stakeholder communication
4. 🔧 Remediation
- Patch management and deployment
- Configuration changes
- Compensating controls
- Acceptance of residual risk
5. ✅ Verification
- Remediation effectiveness testing
- Regression testing
- Security posture validation
- Documentation updates
6. 📈 Reporting
- Management dashboard updates
- Compliance reporting
- Trend analysis
- Lessons learned documentation
Risk-Based Prioritization Matrix
| Likelihood \ Impact | Low    | Medium | High     | Critical |
|---------------------|--------|--------|----------|----------|
| Very High           | Medium | High   | Critical | Critical |
| High                | Low    | Medium | High     | Critical |
| Medium              | Low    | Low    | Medium   | High     |
| Low                 | Low    | Low    | Low      | Medium   |
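The matrix translates directly into a lookup table, so triage tooling can compute the priority label mechanically (the key normalization is illustrative):

```python
# Rows: likelihood; columns: impact. Values mirror the matrix above.
RISK_MATRIX = {
    "very_high": {"low": "Medium", "medium": "High",   "high": "Critical", "critical": "Critical"},
    "high":      {"low": "Low",    "medium": "Medium", "high": "High",     "critical": "Critical"},
    "medium":    {"low": "Low",    "medium": "Low",    "high": "Medium",   "critical": "High"},
    "low":       {"low": "Low",    "medium": "Low",    "high": "Low",      "critical": "Medium"},
}


def prioritize(likelihood: str, impact: str) -> str:
    """Look up the remediation priority for a likelihood/impact pair."""
    key = lambda s: s.strip().lower().replace(" ", "_")
    return RISK_MATRIX[key(likelihood)][key(impact)]
```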
Vulnerability Management Implementation
# Vulnerability management system implementation
import logging
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

import requests

logger = logging.getLogger(__name__)


class Severity(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
    INFO = "info"


class Status(Enum):
    NEW = "new"
    TRIAGED = "triaged"
    IN_PROGRESS = "in_progress"
    RESOLVED = "resolved"
    ACCEPTED = "accepted"
    FALSE_POSITIVE = "false_positive"


@dataclass
class Vulnerability:
    id: str
    title: str
    description: str
    severity: Severity
    cvss_score: float
    cve_id: Optional[str] = None
    affected_asset: Optional[str] = None
    discovery_date: Optional[datetime] = None
    status: Status = Status.NEW
    assignee: Optional[str] = None
    due_date: Optional[datetime] = None
    remediation_effort: Optional[str] = None
    business_impact: Optional[str] = None


class VulnerabilityManager:
    def __init__(self):
        self.sla_hours = {
            Severity.CRITICAL: 4,
            Severity.HIGH: 24,
            Severity.MEDIUM: 72,
            Severity.LOW: 168,
        }

    def calculate_priority(self, vuln: Vulnerability) -> int:
        """Calculate vulnerability priority based on CVSS and business factors"""
        base_score = vuln.cvss_score
        # Asset criticality multiplier
        asset_multiplier = self.get_asset_criticality(vuln.affected_asset)
        # Exploitability factor
        exploitability = self.check_exploit_availability(vuln.cve_id)
        # Business impact factor
        business_factor = self.assess_business_impact(vuln.affected_asset)
        priority_score = (base_score * asset_multiplier *
                          exploitability * business_factor)
        return min(int(priority_score), 100)

    def assign_sla(self, vuln: Vulnerability) -> datetime:
        """Assign a remediation deadline based on severity"""
        sla_hours = self.sla_hours.get(vuln.severity, 168)
        return vuln.discovery_date + timedelta(hours=sla_hours)

    def create_remediation_ticket(self, vuln: Vulnerability):
        """Create a ticket in the issue tracking system (Jira/GitHub Issues)"""
        ticket_data = {
            "title": f"[SECURITY] {vuln.title}",
            "description": self.format_ticket_description(vuln),
            "priority": self.calculate_priority(vuln),
            "labels": ["security", "vulnerability", vuln.severity.value],
            "assignee": self.get_security_lead(),
            "due_date": self.assign_sla(vuln).isoformat(),
        }
        # ISSUE_TRACKER_API and API_TOKEN come from deployment configuration
        response = requests.post(
            f"{ISSUE_TRACKER_API}/issues",
            json=ticket_data,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
        )
        return response.json()

    def send_notifications(self, vuln: Vulnerability):
        """Send notifications based on severity"""
        if vuln.severity in (Severity.CRITICAL, Severity.HIGH):
            # Immediate notification for critical/high findings
            self.send_slack_alert(vuln)
            self.send_email_alert(vuln)
        else:
            # Weekly digest for medium/low vulnerabilities
            self.add_to_weekly_digest(vuln)

    def generate_metrics_report(self) -> dict:
        """Generate vulnerability metrics for the dashboard"""
        return {
            "total_vulnerabilities": self.count_by_status(),
            "severity_distribution": self.count_by_severity(),
            "sla_compliance": self.calculate_sla_compliance(),
            "remediation_trends": self.get_remediation_trends(),
            "top_vulnerable_assets": self.get_top_vulnerable_assets(),
        }


# Automated vulnerability scanning orchestrator
class ScanOrchestrator:
    def __init__(self):
        # Scanner adapters are implemented elsewhere in the codebase
        self.scanners = {
            "nessus": NessusScanner(),
            "openvas": OpenVASScanner(),
            "nuclei": NucleiScanner(),
            "nmap": NmapScanner(),
        }

    async def run_comprehensive_scan(self, target_scope: list):
        """Run a comprehensive vulnerability scan across all scanners"""
        results = {}
        for scanner_name, scanner in self.scanners.items():
            try:
                scan_results = await scanner.scan(target_scope)
                results[scanner_name] = scan_results
                # Process and normalize results
                vulnerabilities = await self.normalize_results(
                    scan_results, scanner_name
                )
                # Import into the vulnerability management system
                await self.import_vulnerabilities(vulnerabilities)
            except Exception as e:
                logger.error(f"Scanner {scanner_name} failed: {e}")
                continue
        return results

    async def schedule_recurring_scans(self):
        """Schedule regular vulnerability scans"""
        scan_schedules = [
            {"type": "network", "frequency": "daily", "scope": "external"},
            {"type": "web_app", "frequency": "weekly", "scope": "all_apps"},
            {"type": "infrastructure", "frequency": "monthly", "scope": "internal"},
            {"type": "compliance", "frequency": "quarterly", "scope": "all_systems"},
        ]
        for schedule in scan_schedules:
            await self.schedule_scan(schedule)


# Integration with threat intelligence
class ThreatIntelligence:
    def __init__(self):
        self.feeds = [
            "https://cve.mitre.org/data/downloads/allitems.csv",
            "https://api.first.org/data/v1/epss",
            "https://api.vulndb.cyberriskanalytics.com",
        ]

    async def enrich_vulnerability(self, vuln: Vulnerability):
        """Enrich a vulnerability with threat intelligence"""
        if vuln.cve_id:
            # Get EPSS score (Exploit Prediction Scoring System)
            epss_score = await self.get_epss_score(vuln.cve_id)
            # Check for known exploits
            known_exploits = await self.check_exploit_db(vuln.cve_id)
            # Get threat actor attribution
            threat_actors = await self.get_threat_actors(vuln.cve_id)
            # Update the vulnerability with intelligence
            vuln.epss_score = epss_score
            vuln.known_exploits = known_exploits
            vuln.threat_actors = threat_actors

    async def get_trending_threats(self) -> list:
        """Get trending threats and vulnerabilities"""
        trending = []
        # Analyze recent CVEs with high EPSS scores
        recent_cves = await self.get_recent_cves(days=7)
        for cve in recent_cves:
            epss_score = await self.get_epss_score(cve.id)
            if epss_score > 0.7:  # High exploitation probability
                trending.append({
                    "cve_id": cve.id,
                    "epss_score": epss_score,
                    "description": cve.description,
                    "affected_products": cve.affected_products,
                })
        return trending
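The `get_epss_score` helper can be backed by FIRST's public EPSS API, which returns per-CVE exploitation probabilities as JSON. A stdlib-only sketch (the endpoint and `data`/`cve`/`epss` response fields match FIRST's published API; the function names are illustrative):

```python
import json
import urllib.parse
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"


def parse_epss_response(payload: dict) -> dict:
    """Map CVE id -> EPSS exploitation probability from a FIRST EPSS response."""
    return {row["cve"]: float(row["epss"]) for row in payload.get("data", [])}


def get_epss_scores(cve_ids) -> dict:
    """Fetch EPSS scores for a batch of CVE ids (comma-separated query)."""
    query = urllib.parse.urlencode({"cve": ",".join(cve_ids)})
    with urllib.request.urlopen(f"{EPSS_API}?{query}", timeout=10) as resp:
        return parse_epss_response(json.load(resp))
```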
Compliance Scanning
Regulatory Compliance Assessment
SysManage includes comprehensive compliance scanning capabilities to ensure adherence to various regulatory frameworks and industry standards.
Supported Compliance Frameworks
📋 SOC 2
- Security control implementation
- Availability monitoring
- Processing integrity validation
- Confidentiality assessments
- Privacy control evaluation
🏛️ FedRAMP
- NIST 800-53 control mapping
- Continuous monitoring requirements
- Security assessment procedures
- Configuration management validation
- Incident response capabilities
🔐 ISO 27001
- Information security controls
- Risk management processes
- Asset management validation
- Access control implementation
- Business continuity planning
💳 PCI DSS
- Cardholder data protection
- Network security requirements
- Vulnerability management
- Access control measures
- Regular security testing
🏥 HIPAA
- Administrative safeguards
- Physical safeguards
- Technical safeguards
- PHI protection measures
- Breach notification procedures
🌍 GDPR
- Data protection by design
- Privacy impact assessments
- Data subject rights
- Breach notification compliance
- Consent management
OpenSCAP Compliance Scanning
# OpenSCAP compliance scanning configuration

# Install OpenSCAP tools
sudo apt-get install libopenscap8 openscap-utils scap-security-guide

# Download security content
sudo wget -O /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml \
  https://github.com/ComplianceAsCode/content/releases/latest/download/ssg-ubuntu2004-ds.xml

# Run CIS Ubuntu 20.04 LTS benchmark
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --results scan-results.xml \
  --report scan-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml

# Run NIST 800-53 controls assessment
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_moderate \
  --results nist-results.xml \
  --report nist-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml

# Generate OVAL definitions scan
sudo oscap oval eval \
  --results oval-results.xml \
  --report oval-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-oval.xml

# Custom compliance scanning script
#!/bin/bash
# /usr/local/bin/compliance-scan.sh

SCAN_DATE=$(date +%Y%m%d_%H%M%S)
RESULTS_DIR="/var/log/compliance-scans"
SCAP_CONTENT="/usr/share/xml/scap/ssg/content"

mkdir -p "$RESULTS_DIR/$SCAN_DATE"

# Function to run a compliance scan
run_compliance_scan() {
    local profile=$1
    local profile_name=$2
    local content_file=$3

    echo "Running $profile_name compliance scan..."
    oscap xccdf eval \
        --profile "$profile" \
        --results "$RESULTS_DIR/$SCAN_DATE/${profile_name}-results.xml" \
        --report "$RESULTS_DIR/$SCAN_DATE/${profile_name}-report.html" \
        --oval-results \
        "$content_file"

    # Generate ARF output for API integration
    oscap xccdf eval \
        --profile "$profile" \
        --results-arf "$RESULTS_DIR/$SCAN_DATE/${profile_name}-arf.xml" \
        "$content_file"
}

# Run multiple compliance frameworks
run_compliance_scan "xccdf_org.ssgproject.content_profile_cis" "CIS" \
    "$SCAP_CONTENT/ssg-ubuntu2004-ds.xml"
run_compliance_scan "xccdf_org.ssgproject.content_profile_moderate" "NIST-800-53" \
    "$SCAP_CONTENT/ssg-ubuntu2004-ds.xml"
run_compliance_scan "xccdf_org.ssgproject.content_profile_pci-dss" "PCI-DSS" \
    "$SCAP_CONTENT/ssg-ubuntu2004-ds.xml"

# Generate compliance dashboard data
python3 /usr/local/bin/generate-compliance-metrics.py "$RESULTS_DIR/$SCAN_DATE"

# Upload results to SysManage API
curl -X POST "https://sysmanage.company.com/api/compliance/results" \
  -H "Authorization: Bearer $SYSMANAGE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d @"$RESULTS_DIR/$SCAN_DATE/compliance-summary.json"

echo "Compliance scan completed. Results saved to $RESULTS_DIR/$SCAN_DATE"
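The metrics-generation step in the script above can be as simple as tallying `rule-result` outcomes from the XCCDF results file. A sketch assuming XCCDF 1.2 namespaced output (the namespace URI matches the SCAP 1.2 data-stream content used above; adjust for XCCDF 1.1 content):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# XCCDF 1.2 namespace, as emitted for SCAP 1.2 data-stream content.
XCCDF_NS = "{http://checklists.nist.gov/xccdf/1.2}"


def summarize_xccdf(xml_text: str) -> dict:
    """Tally rule-result outcomes (pass/fail/notapplicable/...) and score."""
    root = ET.fromstring(xml_text)
    counts = Counter(
        rr.findtext(f"{XCCDF_NS}result")
        for rr in root.iter(f"{XCCDF_NS}rule-result")
    )
    evaluated = counts.get("pass", 0) + counts.get("fail", 0)
    score = 100.0 * counts.get("pass", 0) / evaluated if evaluated else 0.0
    return {"counts": dict(counts), "compliance_score": round(score, 1)}
```

Results like `notapplicable` and `notselected` are counted but excluded from the score, mirroring how the scanner below treats skipped rules.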
Custom Compliance Rules
# Custom compliance rule implementation
import logging
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict, List

import yaml

logger = logging.getLogger(__name__)


@dataclass
class ComplianceRule:
    id: str
    title: str
    description: str
    framework: str
    control_id: str
    severity: str
    check_type: str
    check_command: str
    expected_result: str
    remediation: str


class ComplianceScanner:
    def __init__(self, rules_file: str):
        self.rules = self.load_rules(rules_file)
        self.results = []

    def load_rules(self, rules_file: str) -> List[ComplianceRule]:
        """Load compliance rules from YAML configuration"""
        with open(rules_file, 'r') as f:
            rules_data = yaml.safe_load(f)
        return [ComplianceRule(**rule_data) for rule_data in rules_data['rules']]

    async def run_scan(self) -> Dict[str, Any]:
        """Execute all compliance rules and generate a report"""
        scan_results = {
            "scan_id": self.generate_scan_id(),
            "timestamp": datetime.now().isoformat(),
            "total_rules": len(self.rules),
            "passed": 0,
            "failed": 0,
            "skipped": 0,
            "results": [],
        }
        for rule in self.rules:
            try:
                result = await self.execute_rule(rule)
                scan_results["results"].append(result)
                if result["status"] == "pass":
                    scan_results["passed"] += 1
                elif result["status"] == "fail":
                    scan_results["failed"] += 1
                else:
                    scan_results["skipped"] += 1
            except Exception as e:
                logger.error(f"Rule {rule.id} execution failed: {e}")
                scan_results["skipped"] += 1

        # Calculate compliance score over evaluated (non-skipped) rules
        total_evaluated = scan_results["passed"] + scan_results["failed"]
        if total_evaluated > 0:
            scan_results["compliance_score"] = (
                scan_results["passed"] / total_evaluated * 100
            )
        else:
            scan_results["compliance_score"] = 0
        return scan_results

    async def execute_rule(self, rule: ComplianceRule) -> Dict[str, Any]:
        """Execute an individual compliance rule"""
        result = {
            "rule_id": rule.id,
            "title": rule.title,
            "framework": rule.framework,
            "control_id": rule.control_id,
            "severity": rule.severity,
            "status": "unknown",
            "actual_result": None,
            "expected_result": rule.expected_result,
            "message": "",
            "remediation": rule.remediation,
        }
        try:
            if rule.check_type == "command":
                actual_result = await self.execute_command(rule.check_command)
            elif rule.check_type == "file_check":
                actual_result = await self.check_file(rule.check_command)
            elif rule.check_type == "service_check":
                actual_result = await self.check_service(rule.check_command)
            elif rule.check_type == "network_check":
                actual_result = await self.check_network(rule.check_command)
            else:
                raise ValueError(f"Unknown check type: {rule.check_type}")

            result["actual_result"] = actual_result
            # Compare actual vs expected result
            if self.compare_results(actual_result, rule.expected_result):
                result["status"] = "pass"
                result["message"] = "Compliance rule satisfied"
            else:
                result["status"] = "fail"
                result["message"] = f"Expected: {rule.expected_result}, Got: {actual_result}"
        except Exception as e:
            result["status"] = "error"
            result["message"] = f"Rule execution error: {str(e)}"
        return result

# Example compliance rules configuration (compliance-rules.yaml)
rules:
  - id: "SYS-001"
    title: "SSH Protocol Version"
    description: "Ensure SSH protocol version 2 is configured"
    framework: "CIS"
    control_id: "5.2.1"
    severity: "high"
    check_type: "command"
    check_command: "grep '^Protocol' /etc/ssh/sshd_config"
    expected_result: "Protocol 2"
    remediation: "Edit /etc/ssh/sshd_config and set Protocol 2"

  - id: "SYS-002"
    title: "Root Login Disabled"
    description: "Ensure SSH root login is disabled"
    framework: "CIS"
    control_id: "5.2.8"
    severity: "critical"
    check_type: "command"
    check_command: "grep '^PermitRootLogin' /etc/ssh/sshd_config"
    expected_result: "PermitRootLogin no"
    remediation: "Edit /etc/ssh/sshd_config and set PermitRootLogin no"

  - id: "NET-001"
    title: "IP Forwarding Disabled"
    description: "Ensure IP forwarding is disabled"
    framework: "CIS"
    control_id: "3.1.1"
    severity: "medium"
    check_type: "command"
    check_command: "sysctl net.ipv4.ip_forward"
    expected_result: "net.ipv4.ip_forward = 0"
    remediation: "Set net.ipv4.ip_forward = 0 in /etc/sysctl.conf"

  - id: "AUTH-001"
    title: "Password Minimum Length"
    description: "Ensure password minimum length is configured"
    framework: "CIS"
    control_id: "5.3.1"
    severity: "medium"
    check_type: "command"
    check_command: "grep '^minlen' /etc/security/pwquality.conf"
    expected_result: "minlen = 14"
    remediation: "Set minlen = 14 in /etc/security/pwquality.conf"

  - id: "LOG-001"
    title: "Audit Log Storage Size"
    description: "Ensure audit log storage size is configured"
    framework: "CIS"
    control_id: "4.1.1.1"
    severity: "low"
    check_type: "command"
    check_command: "grep '^max_log_file' /etc/audit/auditd.conf"
    expected_result: "max_log_file = 8"
    remediation: "Set max_log_file = 8 in /etc/audit/auditd.conf"
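The scanner's `compare_results` helper is referenced but not shown in the listing. Since command output often carries trailing newlines and variable whitespace (e.g. `sysctl` padding around `=`), a minimal whitespace-insensitive sketch:

```python
def compare_results(actual: str, expected: str) -> bool:
    """Whitespace-insensitive comparison of check output against the rule's
    expected_result (illustrative; per-rule regex matching is a common upgrade)."""
    normalize = lambda s: " ".join(s.strip().split())
    return normalize(actual) == normalize(expected)
```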
Security Reporting & Analytics
Comprehensive Security Dashboard
SysManage provides real-time security dashboards and detailed analytics to track security posture and trends.
Key Security Metrics
🔍 Vulnerability Metrics
- Total vulnerabilities by severity
- Vulnerability discovery trends
- Time to remediation (TTR)
- SLA compliance rates
- Asset vulnerability density
- Remediation effectiveness
📊 Compliance Metrics
- Overall compliance score
- Framework-specific compliance
- Control implementation status
- Compliance trend analysis
- Audit readiness score
- Remediation priority queue
🔐 Security Posture
- Security score trending
- Threat exposure metrics
- Attack surface analysis
- Security control effectiveness
- Risk reduction progress
- Security investment ROI
⚡ Operational Metrics
- Scan coverage percentage
- False positive rates
- Scanner performance metrics
- Alert response times
- Team productivity metrics
- Tool effectiveness analysis
Security Metrics API
# Security metrics and reporting API
from fastapi import APIRouter, Depends, Query
from datetime import datetime, timedelta
from typing import Optional, List
router = APIRouter(prefix="/api/security/metrics")
@router.get("/vulnerability-summary")
async def get_vulnerability_summary(
start_date: Optional[datetime] = Query(None),
end_date: Optional[datetime] = Query(None),
asset_filter: Optional[List[str]] = Query(None)
):
"""Get vulnerability summary metrics"""
if not start_date:
start_date = datetime.now() - timedelta(days=30)
if not end_date:
end_date = datetime.now()
summary = await calculate_vulnerability_metrics(
start_date, end_date, asset_filter
)
return {
"period": {
"start": start_date.isoformat(),
"end": end_date.isoformat()
},
"summary": {
"total_vulnerabilities": summary.total,
"by_severity": {
"critical": summary.critical,
"high": summary.high,
"medium": summary.medium,
"low": summary.low
},
"newly_discovered": summary.new_discoveries,
"remediated": summary.remediated,
"overdue": summary.overdue
},
"trends": {
"discovery_rate": summary.discovery_trend,
"remediation_rate": summary.remediation_trend,
"backlog_trend": summary.backlog_trend
},
"sla_metrics": {
"on_time_remediation": summary.sla_compliance,
"average_ttr_hours": summary.avg_time_to_resolve,
"critical_ttr_hours": summary.critical_ttr,
"high_ttr_hours": summary.high_ttr
}
}
@router.get("/compliance-dashboard")
async def get_compliance_dashboard(
frameworks: Optional[List[str]] = Query(None)
):
"""Get compliance dashboard data"""
if not frameworks:
frameworks = ["CIS", "NIST-800-53", "SOC2", "PCI-DSS"]
compliance_data = {}
for framework in frameworks:
framework_data = await get_framework_compliance(framework)
compliance_data[framework] = {
"overall_score": framework_data.score,
"total_controls": framework_data.total_controls,
"implemented": framework_data.implemented,
"partially_implemented": framework_data.partial,
"not_implemented": framework_data.not_implemented,
"last_assessment": framework_data.last_scan.isoformat(),
"trend": framework_data.trend,
"critical_gaps": framework_data.critical_gaps
}
return {
"frameworks": compliance_data,
"overall_compliance": calculate_overall_compliance(compliance_data),
"recommendations": generate_compliance_recommendations(compliance_data)
}
@router.get("/security-score")
async def get_security_score():
"""Calculate overall security score"""
# Gather data from multiple sources
vulnerability_score = await calculate_vulnerability_score()
compliance_score = await calculate_compliance_score()
configuration_score = await calculate_configuration_score()
patch_score = await calculate_patch_management_score()
# Weighted security score calculation
weights = {
"vulnerabilities": 0.30,
"compliance": 0.25,
"configuration": 0.25,
"patch_management": 0.20
}
overall_score = (
vulnerability_score * weights["vulnerabilities"] +
compliance_score * weights["compliance"] +
configuration_score * weights["configuration"] +
patch_score * weights["patch_management"]
)
return {
"overall_score": round(overall_score, 2),
"grade": calculate_security_grade(overall_score),
"components": {
"vulnerability_management": {
"score": vulnerability_score,
"weight": weights["vulnerabilities"]
},
"compliance": {
"score": compliance_score,
"weight": weights["compliance"]
},
"configuration_security": {
"score": configuration_score,
"weight": weights["configuration"]
},
"patch_management": {
"score": patch_score,
"weight": weights["patch_management"]
}
},
"recommendations": generate_security_recommendations(overall_score),
"historical_trend": await get_security_score_trend(30) # 30 days
}
@router.get("/threat-landscape")
async def get_threat_landscape():
"""Get threat landscape analysis"""
threat_data = await analyze_threat_landscape()
return {
"threat_summary": {
"active_threats": threat_data.active_count,
"high_risk_assets": threat_data.high_risk_assets,
"attack_vectors": threat_data.attack_vectors,
"trending_threats": threat_data.trending_threats
},
"risk_analysis": {
"overall_risk_score": threat_data.risk_score,
"risk_factors": threat_data.risk_factors,
"mitigation_effectiveness": threat_data.mitigation_score
},
"recommendations": {
"immediate_actions": threat_data.immediate_actions,
"strategic_improvements": threat_data.strategic_improvements,
"resource_requirements": threat_data.resource_needs
}
}
# Automated reporting system
from datetime import datetime, timedelta

import schedule


class SecurityReportGenerator:
    def __init__(self):
        self.templates = {
            "executive": "executive_summary.html",
            "technical": "technical_details.html",
            "compliance": "compliance_report.html",
            "trend": "trend_analysis.html"
        }

    async def generate_executive_report(self, period_days: int = 30):
        """Generate executive security summary report."""
        end_date = datetime.now()
        start_date = end_date - timedelta(days=period_days)

        # Gather executive-level metrics
        vulnerability_summary = await self.get_vulnerability_summary(start_date, end_date)
        compliance_status = await self.get_compliance_status()
        security_score = await self.get_security_score_trend(period_days)
        key_achievements = await self.get_key_achievements(start_date, end_date)
        upcoming_initiatives = await self.get_upcoming_initiatives()

        report_data = {
            "period": f"{start_date.strftime('%B %d, %Y')} - {end_date.strftime('%B %d, %Y')}",
            "executive_summary": {
                "overall_security_score": security_score.current,
                "score_change": security_score.change,
                "critical_vulnerabilities": vulnerability_summary.critical,
                "compliance_score": compliance_status.overall_score,
                "key_metrics": {
                    "vulnerabilities_remediated": vulnerability_summary.remediated,
                    "sla_compliance": vulnerability_summary.sla_compliance,
                    "new_controls_implemented": compliance_status.new_controls
                }
            },
            "risk_assessment": {
                "current_risk_level": self.calculate_risk_level(security_score.current),
                "trend": security_score.trend,
                "top_risks": await self.get_top_risks(5)
            },
            "achievements": key_achievements,
            "upcoming_initiatives": upcoming_initiatives,
            "recommendations": await self.generate_executive_recommendations()
        }

        return await self.render_report("executive", report_data)

    async def schedule_automated_reports(self):
        """Schedule automated report generation."""
        # Daily operational reports
        schedule.every().day.at("08:00").do(
            self.generate_and_send_daily_report
        )

        # Weekly management reports
        schedule.every().monday.at("09:00").do(
            self.generate_and_send_weekly_report
        )

        # NOTE: the PyPI `schedule` package does not support monthly or
        # quarterly intervals; the two calls below assume an extended
        # scheduler that adds `.month`/`.months` units.
        # Monthly executive reports
        schedule.every().month.do(
            self.generate_and_send_monthly_report
        )

        # Quarterly compliance reports
        schedule.every(3).months.do(
            self.generate_and_send_compliance_report
        )
Integration & Automation
Tool Integration Ecosystem
SysManage integrates with various security tools and platforms to create a comprehensive security ecosystem.
Supported Integrations
🔧 Development Tools
- GitHub/GitLab/Bitbucket
- Jenkins/GitHub Actions/GitLab CI
- SonarQube/CodeClimate
- Docker Registry/Harbor
- Artifactory/Nexus
🎫 Issue Tracking
- Jira/Linear/Azure DevOps
- ServiceNow/Remedy
- PagerDuty/Opsgenie
- Slack/Microsoft Teams
- Email/SMS notifications
🔒 Security Tools
- SIEM systems (Splunk, ELK)
- Vulnerability scanners
- Threat intelligence platforms
- Security orchestration (SOAR)
- Compliance management tools
☁️ Cloud Platforms
- AWS Security Hub/GuardDuty
- Azure Security Center
- Google Cloud Security Command Center
- Kubernetes security tools
- Infrastructure as Code scanners
Webhook Integration Example
# Webhook integration for security events
import logging
from datetime import datetime

import httpx
from fastapi import APIRouter, BackgroundTasks

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/security/webhooks")


class WebhookManager:
    def __init__(self):
        self.subscribers = {
            "vulnerability_discovered": [],
            "compliance_failure": [],
            "security_incident": [],
            "remediation_completed": []
        }

    async def notify_subscribers(self, event_type: str, event_data: dict):
        """Notify all subscribers of security events."""
        if event_type not in self.subscribers:
            return

        async with httpx.AsyncClient() as client:
            for webhook_url in self.subscribers[event_type]:
                try:
                    await client.post(
                        webhook_url,
                        json={
                            "event_type": event_type,
                            "timestamp": datetime.now().isoformat(),
                            "data": event_data
                        },
                        timeout=10.0
                    )
                except Exception as e:
                    logger.error(f"Webhook notification failed: {e}")


webhook_manager = WebhookManager()


@router.post("/notify/vulnerability")
async def notify_vulnerability_discovered(
    vulnerability_data: dict,
    background_tasks: BackgroundTasks
):
    """Handle vulnerability discovery notifications."""
    # Process high/critical vulnerabilities immediately
    if vulnerability_data.get("severity") in ["high", "critical"]:
        # Create Jira ticket
        background_tasks.add_task(create_jira_ticket, vulnerability_data)

        # Send Slack alert
        background_tasks.add_task(send_slack_alert, vulnerability_data)

        # Trigger SOAR playbook
        background_tasks.add_task(
            trigger_soar_playbook,
            "high_severity_vulnerability",
            vulnerability_data
        )

    # Send to all registered webhooks
    background_tasks.add_task(
        webhook_manager.notify_subscribers,
        "vulnerability_discovered",
        vulnerability_data
    )

    return {"status": "notifications_queued"}
# SOAR integration
class SOARIntegration:
    def __init__(self, soar_api_url: str, api_key: str):
        self.api_url = soar_api_url
        self.api_key = api_key

    async def trigger_playbook(self, playbook_name: str, event_data: dict):
        """Trigger SOAR playbook for automated response."""
        playbook_data = {
            "playbook": playbook_name,
            "inputs": event_data,
            "triggered_by": "sysmanage_security_scanner",
            "priority": self.calculate_priority(event_data)
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.api_url}/playbooks/trigger",
                json=playbook_data,
                headers={"Authorization": f"Bearer {self.api_key}"}
            )
            return response.json()
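`trigger_playbook` calls `self.calculate_priority`, which is not shown above. One plausible implementation maps the event's severity label and CVSS score to a SOAR priority tier; the tier names and thresholds below are illustrative assumptions, not SysManage's documented mapping:

```python
def calculate_priority(event_data: dict) -> str:
    """Map a scan event's severity/CVSS to a SOAR priority tier.

    Tier names (P1-P4) and thresholds are illustrative; align them with
    your SOAR platform's own priority scheme.
    """
    severity = str(event_data.get("severity", "")).lower()
    cvss = float(event_data.get("cvss_score") or 0.0)

    if severity == "critical" or cvss >= 9.0:
        return "P1"
    if severity == "high" or cvss >= 7.0:
        return "P2"
    if severity == "medium" or cvss >= 4.0:
        return "P3"
    return "P4"
```

Taking the maximum of the label-based and score-based tiers (as the `or` conditions do) guards against events that carry only one of the two fields.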
# Jira integration for ticket creation
class JiraIntegration:
    def __init__(self, jira_url: str, username: str, api_token: str):
        self.jira_url = jira_url
        self.auth = (username, api_token)

    async def create_security_ticket(self, vulnerability_data: dict):
        """Create Jira ticket for security vulnerability."""
        ticket_data = {
            "fields": {
                "project": {"key": "SEC"},
                "summary": f"[SECURITY] {vulnerability_data['title']}",
                # Jira Cloud REST API v3 expects the description in
                # Atlassian Document Format (ADF), not plain text.
                "description": self.format_description(vulnerability_data),
                "issuetype": {"name": "Security Vulnerability"},
                "priority": {"name": self.map_priority(vulnerability_data['severity'])},
                "labels": ["security", "vulnerability", vulnerability_data['severity']],
                "customfield_10001": vulnerability_data.get('cve_id'),  # CVE ID field
                "customfield_10002": vulnerability_data.get('cvss_score'),  # CVSS field
                "assignee": {"name": self.get_security_assignee(vulnerability_data)},
                "duedate": self.calculate_due_date(vulnerability_data['severity'])
            }
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.jira_url}/rest/api/3/issue",
                json=ticket_data,
                auth=self.auth
            )
            return response.json()
# Slack integration for real-time alerts
class SlackIntegration:
    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    async def send_security_alert(self, event_data: dict):
        """Send security alert to Slack."""
        severity_colors = {
            "critical": "#FF0000",
            "high": "#FF6600",
            "medium": "#FFCC00",
            "low": "#00CC00"
        }

        message = {
            "text": f"🚨 Security Alert: {event_data['title']}",
            "attachments": [
                {
                    "color": severity_colors.get(event_data['severity'], "#808080"),
                    "fields": [
                        {
                            "title": "Severity",
                            "value": event_data['severity'].upper(),
                            "short": True
                        },
                        {
                            "title": "Asset",
                            "value": event_data.get('affected_asset', 'Unknown'),
                            "short": True
                        },
                        {
                            "title": "Description",
                            "value": event_data['description'],
                            "short": False
                        },
                        {
                            "title": "CVE ID",
                            "value": event_data.get('cve_id', 'N/A'),
                            "short": True
                        },
                        {
                            "title": "CVSS Score",
                            "value": str(event_data.get('cvss_score', 'N/A')),
                            "short": True
                        }
                    ],
                    "actions": [
                        {
                            "type": "button",
                            "text": "View Details",
                            "url": f"https://sysmanage.company.com/vulnerabilities/{event_data['id']}"
                        },
                        {
                            "type": "button",
                            "text": "Assign to Me",
                            "name": "assign",
                            "value": event_data['id']
                        }
                    ]
                }
            ]
        }

        async with httpx.AsyncClient() as client:
            await client.post(self.webhook_url, json=message)
Scanning Troubleshooting
Common Scanning Issues
🚫 False Positives
- Symptoms: Legitimate code flagged as vulnerable
- Causes: Overly aggressive rules, context misunderstanding
- Solutions: Tune rules, add suppressions, review manually
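In practice, suppressions are usually applied inline so the waiver lives next to the code it covers and survives rescans: Bandit honors `# nosec` (optionally with a rule ID) and Semgrep honors `# nosemgrep` comments. A sketch of the pattern, using a Bandit B324 (weak hash) finding as the example; always pair the marker with a justification comment:

```python
import hashlib


def cache_key(url: str) -> str:
    """Derive a stable cache key for a URL.

    MD5 is flagged by Bandit (B324) as a weak hash, but here it is only
    used as a cache key, not a security control, so the finding is a
    false positive and is suppressed inline.
    """
    return hashlib.md5(url.encode(), usedforsecurity=False).hexdigest()  # nosec B324
```

Scanners still record suppressed findings in verbose output, so periodic review of accumulated `nosec`/`nosemgrep` markers should be part of the tuning process.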
⏱️ Slow Scan Performance
- Symptoms: Long scan execution times
- Causes: Large codebases, resource constraints
- Solutions: Optimize scope, parallel scanning, resource scaling
🔧 Tool Integration Failures
- Symptoms: Scanner tools failing to execute
- Causes: Version incompatibilities, missing dependencies
- Solutions: Update tools, fix dependencies, check configurations
📊 Incomplete Coverage
- Symptoms: Missing vulnerabilities in results
- Causes: Limited scanner capabilities, misconfigurations
- Solutions: Multiple tools, manual reviews, expand rules
Scanner Diagnostic Commands
# Security scanner troubleshooting commands
# Check Bandit installation and version
bandit --version
python -m bandit --help
# Test Bandit with verbose output
bandit -r backend/ -v -f json
# Verify Semgrep installation
semgrep --version
semgrep --config=auto --dry-run backend/
# Test Safety dependency scanning
safety check --json --full-report
# Check Snyk authentication
snyk auth
snyk test --json
# Verify TruffleHog installation
trufflehog --version
trufflehog --debug git file://. --only-verified
# Test Docker security scanning with Trivy
trivy image --format json python:3.11-slim
# Debug OpenSCAP scanning
oscap info /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml
oscap xccdf eval --dry-run --profile cis /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml
# Check network scanner connectivity
nmap -sV -p 443 sysmanage.company.com
nuclei -t cves/ -u https://sysmanage.company.com
# Verify compliance scanner permissions
sudo oscap oval eval --verbose /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-oval.xml