Security Assessments

Security assessments identify weaknesses in systems and processes. They provide evidence so teams can patch, mitigate, or remove risk. Clear scope and objectives keep the assessment useful and safe. Reports should be actionable for both technical and business teams.

Security Assessment Overview

Most assessments look for vulnerabilities, but depth varies by type. Some are checklist driven while others simulate real attacks. The choice depends on regulations, risk tolerance, and resources. Understanding the differences prevents mismatched expectations.

Vulnerability Assessment

A vulnerability assessment checks systems against known issues. It validates exposure without exploiting weaknesses for impact. This is ideal for routine hygiene and compliance baselines. It produces a prioritized list of findings to remediate.

Penetration Test

A penetration test attempts to exploit weaknesses in practice. It demonstrates how vulnerabilities can lead to real impact. This yields stronger evidence for urgency and investment. Pentests are deeper but typically less frequent.

Penetration Test Types

Black box tests start with no internal knowledge or access. Grey box tests use partial knowledge to focus effort and time. White box tests use full access for maximum coverage and depth. Each type trades realism for efficiency and transparency.

Penetration Test Areas

Application tests focus on web apps, APIs, and mobile systems. Network tests assess routers, firewalls, servers, and services. Physical tests evaluate access controls and on-site security. Social engineering tests evaluate how humans handle security pressure.

Vulnerability Assessments vs Penetration Tests

Assessments find known issues without proving exploit impact. Pentests validate exploitability and show real risk scenarios. Assessments are frequent while pentests are periodic deep dives. Both are necessary to maintain a mature security program.

Other Security Assessment Types

Organizations also use audits and team exercises to validate posture. These activities test compliance, detection, and response readiness. They often have external requirements or strategic objectives. Selecting the right type depends on goals and risk.

Security Audits

Audits verify compliance against external standards or laws. They focus on evidence, policies, and control implementation. Audits are usually mandated by regulators or industry bodies. They are less technical but still critical for governance.

Bug Bounty

Bug bounty programs invite external researchers to report issues. They expand coverage but require strict scope and safe-harbor rules. Automation is often restricted to prevent service disruption. Well-managed programs surface issues internal teams miss.

Red Team Assessment

Red teams simulate adversary campaigns over longer periods. They test detection, response, and organizational resilience. This approach highlights gaps across people, process, and tech. It is ideal for mature security programs with budget.

Purple Team Assessment

Purple teams mix red and blue expertise for shared learning. Findings are used immediately to improve detection and response. This creates faster feedback loops than isolated testing. It is excellent for training and capability building.

Vulnerability Assessments

A vulnerability assessment identifies and categorizes risk. It focuses on visible weaknesses rather than full exploitation. The goal is to prioritize remediation and reduce exposure. This is often the first phase in security improvement.

Methodology

A typical methodology starts with asset discovery and scoping. Scans run next, followed by validation and noise reduction. Results are prioritized and reported with clear remediation steps. Each step should be repeatable and evidence based.
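
The prioritization step above can be sketched as a simple sort over validated findings; the hosts, issue names, and scores below are hypothetical examples, not real scan output.

```python
# Hypothetical findings left after validation and noise reduction.
findings = [
    {"host": "10.0.0.5", "issue": "Outdated OpenSSL", "cvss": 7.5},
    {"host": "10.0.0.9", "issue": "Self-signed certificate", "cvss": 4.0},
    {"host": "10.0.0.2", "issue": "SMBv1 enabled", "cvss": 8.1},
]

# Remediate the most severe issues first.
prioritized = sorted(findings, key=lambda f: f["cvss"], reverse=True)

for f in prioritized:
    print(f"{f['cvss']:>4}  {f['host']}  {f['issue']}")
```

In practice the sort key would also factor in asset criticality and exploit availability, not CVSS alone.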

Key Terms

Clear terminology avoids confusion in reports and meetings. Each term maps to a distinct part of the risk picture. Consistent language improves decision making and ownership. Use these definitions across teams and stakeholders.

Vulnerability

A vulnerability is a weakness in software or configuration. It can exist in applications, networks, or infrastructure. Many are tracked in CVE databases and scored with CVSS. Severity helps prioritize remediation across environments.

Threat

A threat is anything that can exploit a vulnerability. It includes attackers, malware, or internal misuse scenarios. Threat likelihood varies by exposure and attacker motivation. Not all vulnerabilities face the same threat level.

Exploit

An exploit is code or technique that triggers a weakness. It turns theoretical exposure into practical impact. Exploit availability raises urgency for remediation. Public exploits often accelerate real world attacks.

Risk

Risk is the chance that a threat causes harm to assets. It blends likelihood with potential impact on the business. Organizations use risk to prioritize and allocate resources. Clear risk statements improve reporting quality.
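
One common simplification of this blend is a likelihood-times-impact score. The 1-to-5 scales and level thresholds below are illustrative conventions, not a standard.

```python
# A common simplification: risk = likelihood x impact.
# The 1-5 scales and thresholds are illustrative, not a standard.
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

def risk_level(score: int) -> str:
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a likely threat against a business-critical asset.
score = risk_score(likelihood=4, impact=5)
print(score, risk_level(score))  # 20 high
```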

Asset Inventory

Asset inventory is critical for vulnerability management. You cannot protect what you cannot identify and classify. Inventories should cover IT, OT, cloud, and physical assets. Data classification guides appropriate controls and access.

Application and System Inventory

Inventories should include all data and systems in scope. That includes on-prem data, cloud storage, and SaaS apps. Examples include AWS, GCP, Azure, and collaboration tools. Completeness prevents blind spots and hidden exposure.
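
A minimal inventory record might look like the sketch below; the asset names, locations, and classification labels are hypothetical.

```python
# Minimal inventory records; fields and values are illustrative.
inventory = [
    {"asset": "payroll-db", "type": "database",
     "location": "on-prem", "classification": "confidential"},
    {"asset": "marketing-site", "type": "web app",
     "location": "AWS", "classification": "public"},
    {"asset": "shared-drive", "type": "SaaS",
     "location": "Google Workspace", "classification": "internal"},
]

# Classification guides which controls apply and what scan depth is justified.
confidential = [a["asset"] for a in inventory
                if a["classification"] == "confidential"]
print(confidential)  # ['payroll-db']
```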

Assessment Standards

Standards define what good security looks like in practice. They provide a common language for compliance and testing. Using standards improves repeatability and audit readiness. Choose standards that fit industry and regulatory needs.

Compliance Standards

Compliance standards define mandatory security controls. They are often tied to legal or contractual obligations. Noncompliance can lead to penalties and loss of trust. These standards shape assessment scope and reporting.

PCI DSS

PCI DSS applies to organizations handling payment cards. It defines controls for storing, processing, and transmitting data. Assessments verify segmentation, encryption, and monitoring. Regular scans and audits are required by the standard.

HIPAA

HIPAA protects healthcare data and patient privacy. It requires safeguards for access control and transmission. Assessments verify logging, encryption, and policy compliance. Violations can trigger legal and financial penalties.

FISMA

FISMA applies to US federal agencies and contractors. It defines risk management and security control baselines. Assessments map controls to NIST guidance and metrics. Documentation and continuous monitoring are emphasized.

ISO 27001

ISO 27001 defines an information security management system. It focuses on governance, risk management, and improvement cycles. Assessments validate policy, process, and control effectiveness. Certification requires ongoing evidence and audits.

Penetration Testing Standards

Pentest standards define consistent testing methodology. They outline phases, evidence, and reporting expectations. Standards increase trust in results across stakeholders. Choose a standard that fits engagement goals.

PTES

PTES provides a practical penetration testing framework. It covers pre-engagement, intelligence gathering, threat modeling, and testing. PTES emphasizes clear scope and structured reporting. It is widely used in consulting engagements.

OSSTMM

OSSTMM focuses on operational testing metrics and rigor. It emphasizes repeatability and measurable outcomes. The method can be documentation heavy but consistent. It is useful for structured assessments.

NIST

NIST provides guidance for security testing and controls. It aligns with federal standards and risk management practices. NIST frameworks emphasize documentation and consistency. They are common in regulated environments.

OWASP

OWASP standards focus on web application security testing. They provide checklists and methodologies for web risk. OWASP is especially useful for app and API testing. It is a common baseline for web security programs.


Vulnerability Scoring and Reporting

Scoring systems help prioritize vulnerabilities consistently. They turn technical findings into comparable severity ratings. This makes remediation planning more objective and efficient. Scores should be paired with business context.

Common Vulnerability Scoring System (CVSS)

CVSS is the most common industry scoring system. It uses base, temporal, and environmental metrics. Scores help compare severity across different systems. CVSS supports, but does not replace, risk analysis.
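
As a quick reference, CVSS v3.x maps numeric scores to qualitative ratings (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0). A minimal mapping:

```python
# CVSS v3.x qualitative severity rating scale.
def cvss_rating(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # Critical
print(cvss_rating(5.3))  # Medium
```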

Severity Score

The severity score begins with base metrics. It reflects exploitability and potential impact on assets. This score is the foundation for the final rating. Base metrics stay stable unless context changes.

Exploitability Metrics

Exploitability metrics measure how hard exploitation is. They include attack vector, complexity, and privileges required. User interaction also influences feasibility and risk. Lower effort generally means higher exploitability.

Impact Metrics

Impact metrics assess confidentiality, integrity, and availability. They measure how much damage exploitation can cause. High impact means critical data or services are affected. Impact drives business urgency and prioritization.

Temporal Metrics Group

Temporal metrics reflect current exploit conditions. They model exploit maturity, available fixes, and confidence. These values change over time as tools and patches emerge. They help keep scores current and relevant.

Exploit Code Maturity

This metric reflects exploit availability in the wild. Proofs of concept carry less risk than weaponized code. Mature exploits raise the urgency of remediation. Track this metric for active exploitation trends.

Remediation Level

Remediation level reflects available fixes and mitigations. Official patches reduce risk faster than workarounds. Temporary fixes should be noted as partial remediation. This metric supports patch planning.

Report Confidence

Report confidence reflects the quality of evidence. Vendor confirmed issues have higher confidence. Unverified reports should be treated with caution. Confidence affects prioritization decisions.

Environmental Metrics Group

Environmental metrics adapt CVSS to specific environments. They account for asset value and control context. This ensures scores reflect real business impact. It is useful for enterprise prioritization.

Modified Base Metrics

Modified base metrics adjust exploitability and impact. They reflect local conditions such as network exposure. This helps tailor generic scores to real environments. Use them to avoid misleading severity ratings.

Calculating CVSS Severity

Final CVSS scores combine base, temporal, and environmental metrics. The base score is calculated first and then adjusted over time. Tools like the NVD calculator help standardize this process. Always document assumptions used for scoring.
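
The base-score arithmetic can be sketched in Python for the Scope: Unchanged case, using the metric weights from the CVSS v3.1 specification; temporal and environmental adjustments (and the Scope: Changed formulas) are omitted to keep the sketch short.

```python
import math

# CVSS v3.1 base score, Scope: Unchanged only.
# Metric weights from the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (S:U)
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact weights

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest value to 1 decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

A vector like AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N (remote information disclosure) scores 7.5 with the same formula, which matches the published rating for many such CVEs.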


Tool: CVSS Calculator

The NVD CVSS calculator allows standardized scoring. It guides you through each metric and produces a final score. Use it to validate ratings in reports and tickets. Keep the calculation link (the CVSS vector string) for audit trails.

Common Vulnerabilities and Exposures (CVE)

CVE is a public catalog of known vulnerabilities. It provides unique IDs used across tools and vendors. CVE data improves tracking and correlation of issues. Understanding the lifecycle helps with disclosure.
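
CVE IDs follow the pattern CVE-YYYY-NNNN, where the sequence number has four or more digits. A small validation helper, useful when correlating tool output:

```python
import re

# CVE ID syntax: CVE-<4-digit year>-<4 or more digit sequence number>,
# e.g. CVE-2021-44228 (Log4Shell).
CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(value: str) -> bool:
    return bool(CVE_RE.match(value))

print(is_cve_id("CVE-2021-44228"))  # True
print(is_cve_id("CVE-21-1"))        # False
```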

Open Vulnerability Assessment Language (OVAL)

OVAL is a language for describing system security states. It provides standard definitions for configuration checks. OVAL supports automation and consistent assessment logic. It is used in many enterprise scanners.

OVAL Process

The OVAL process defines how checks are created and used. It includes definition, interpretation, and evaluation steps. This process helps tools reach consistent conclusions. Accurate definitions reduce false positives.

OVAL Definitions

OVAL definitions describe specific system conditions. They specify required software versions and settings. These definitions drive automated vulnerability checks. Updating definitions keeps scans accurate.

CVE Lifecycle

The CVE lifecycle describes how IDs are assigned and published. It ensures consistent naming and responsible disclosure. Each stage coordinates researchers, vendors, and CNAs. Following the process prevents confusion in public records.

Stage 1: Determine if a CVE Is Needed and Relevant

Confirm the issue is a real vulnerability with clear impact. Check if a CVE already exists for the same issue. Define scope so the request is precise and justified. This avoids duplicate or unnecessary CVE requests.

Stage 2: Contact the Affected Vendor

Notify the vendor with reproducible technical evidence. Provide enough detail to validate the issue quickly. Early contact supports coordinated disclosure timelines. This also helps vendors prepare fixes.

Stage 3: Decide on a Vendor CNA or Third-Party CNA

Some vendors act as CNAs and can issue CVEs directly. If not, a third-party CNA may be required. Choosing the right CNA speeds up the process. Document this decision for transparency.

Stage 4: Request a CVE ID via the CVE Web Form

Submit the request with affected versions and impact details. Accurate submissions reduce back-and-forth communication. Include references or proof of concept when appropriate. Keep submission records for later reference.

Stage 5: CVE Form Confirmation

The CNA confirms the request and checks completeness. You may be asked for clarification or extra evidence. Respond quickly to keep timelines on track. This step ensures data quality.

Stage 6: CVE ID Received

Once approved, you receive a unique CVE ID. Use this ID in reports, advisories, and tracking tools. Do not publish details before coordinated disclosure. The ID becomes the official reference point.

Stage 7: Public Disclosure of the CVE ID

The CVE ID is made public as disclosure proceeds. This enables vendors and users to track fixes. Public disclosure should align with patch availability. Coordination reduces risk to users.

Stage 8: CVE Announcement

The vulnerability is announced through advisories. This includes severity, impact, and mitigation guidance. Clear communication helps defenders respond quickly. Announcements should be accurate and timely.

Stage 9: Provide Information to the CVE Team

Final details are submitted to CVE databases. This includes description, references, and affected versions. Accurate data supports long term tracking and analytics. This completes the CVE record.

Responsible Disclosure

Responsible disclosure balances transparency and safety. It gives vendors time to fix issues before public release. This reduces risk of widespread exploitation. Clear timelines and communication are essential.


Nessus

Nessus is a widely used vulnerability scanner. It provides fast scanning, strong plugin coverage, and reports. Use it for routine assessments and compliance checks. Pair results with validation to reduce false positives.

Vulnerability Scanning Overview

Scanning tools automate detection but require context. They work best with asset inventory and scope control. Results should be validated before remediation planning. This prevents wasted effort and misprioritization.

Nessus Overview

Nessus uses plugins written in NASL to detect issues. It provides templates for different scan types and depth. The Essentials version is free with limited assets. Commercial versions add advanced features and scale.

OpenVAS Overview

OpenVAS is part of the Greenbone Vulnerability Management stack. It is open source and widely used for internal scanning. It provides a web UI and community maintained checks. OpenVAS can complement Nessus in large environments.

Running a Scan

A basic Nessus scan starts by creating a new scan profile. Choose a template such as Basic or Advanced Network Scan. Define targets, discovery settings, and assessment options. Credentialed scans increase depth when allowed.

Practical steps that work well for quick scans:

  1. Create a new scan and select a template.
  2. Add target IPs or ranges and name the scan.
  3. Configure discovery, port range, and service detection.
  4. Enable web tests if the scope includes applications.
  5. Add credentials, or leave the scan unauthenticated if required.
  6. Review advanced options and run the scan.

Advanced Configuration

Advanced configuration controls performance and scan depth. It lets you tune safe checks and concurrency settings. Policies can be reused across teams and engagements. This improves consistency and reduces mistakes.

Scan Policies

Scan policies define default behavior for multiple scans. They capture plugin sets, credentials, and performance options. Policies save time and enforce standard coverage. Use them to keep assessments consistent.

Creating a Scan Policy

Create a policy from a template and customize it. Save it under User Defined for reuse across projects. Document why each option is enabled or disabled. This helps justify choices in reports.

Nessus Plugins

Plugins drive the checks used by Nessus scans. They are grouped by severity and attack type. Understanding plugins helps interpret findings. It also supports tuning for false positives.

Excluding False Positives

False positives can be hidden with plugin rules. Specify the host and plugin ID to suppress results. Keep a record of exclusions and justify them. This prevents missing real issues later.
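
The same suppression logic can be applied when post-processing exported results; the hosts and plugin IDs below are illustrative, and in Nessus itself this is configured through plugin rules rather than code.

```python
# Suppress known false positives by (host, plugin ID) pairs,
# mirroring Nessus plugin rules. Hosts and IDs are illustrative.
exclusions = {("10.0.0.5", 57582)}  # accepted self-signed cert on an internal box

findings = [
    {"host": "10.0.0.5", "plugin_id": 57582, "name": "SSL Self-Signed Certificate"},
    {"host": "10.0.0.7", "plugin_id": 19506, "name": "Nessus Scan Information"},
]

kept = [f for f in findings if (f["host"], f["plugin_id"]) not in exclusions]
print([f["plugin_id"] for f in kept])  # [19506]
```

Keeping the exclusion set in version control gives the audit trail the text recommends.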

Credentialed Scans

Credentialed scans access systems with valid accounts. They return deeper data on patches and configuration. Supported targets include Linux, Windows, and databases. Credentials must be handled securely and rotated.

Working with Nessus Output

Nessus output should be reviewed for accuracy and impact. Report summaries help stakeholders understand risk quickly. Detailed findings support remediation and validation. Use consistent formats across engagements.

Nessus Reports

Nessus reports can be exported as PDF, HTML, or CSV. PDF and HTML include executive summaries and plugin links. CSV exports support custom analysis and dashboards. Choose formats based on audience needs.

Exporting Nessus Scans

Scans can be exported as .nessus or .db formats. The .nessus file is XML with results and configuration. The .db file includes knowledge base and audit trail data. Exports enable sharing and offline analysis.
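
Because the .nessus file is XML, it can be processed with standard tooling. The snippet below parses a trimmed-down document that mimics the shape of the real format (NessusClientData_v2 with ReportHost and ReportItem elements); real exports carry many more attributes.

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for a .nessus export (real files are larger).
nessus_xml = """
<NessusClientData_v2>
  <Report name="example">
    <ReportHost name="10.0.0.5">
      <ReportItem port="443" severity="2" pluginID="57582"
                  pluginName="SSL Self-Signed Certificate"/>
      <ReportItem port="22" severity="0" pluginID="10267"
                  pluginName="SSH Server Type and Version"/>
    </ReportHost>
  </Report>
</NessusClientData_v2>
"""

root = ET.fromstring(nessus_xml)

# Collect findings rated medium (severity 2) or higher.
medium_plus = []
for host in root.iter("ReportHost"):
    for item in host.iter("ReportItem"):
        if int(item.get("severity")) >= 2:
            medium_plus.append((host.get("name"),
                                item.get("pluginID"),
                                item.get("pluginName")))
print(medium_plus)
```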


Tool: nessus-report-downloader

The nessus-report-downloader script pulls reports via API. It simplifies batch exports in multiple formats. You provide credentials and select output types. Use it for automation and consistent reporting.

./nessus_downloader.rb

Scan Issues

Scan issues are common in complex networks. Firewalls can skew results or block probes. Sensitive systems may crash if scanning is too aggressive. Plan mitigation before scanning critical assets.

Issue Mitigation

If all ports look open or closed, adjust discovery settings. Disable host ping or use Advanced Scan templates when needed. Exclude fragile hosts or reduce plugin coverage to lower risk. Document any scope changes for transparency.

Network Impact

High scan rates can saturate links and services. Reduce concurrent checks or throttle performance options. Safe checks reduce impact but may miss some issues. Monitor network usage during large scans.

Tool: vnstat

vnstat can monitor bandwidth usage during scans. It helps quantify impact and detect congestion early. Use it on the interface that carries scan traffic. Comparisons before and after scans are most useful.

sudo vnstat -l -i eth0

OpenVAS

OpenVAS provides open source vulnerability scanning. It is managed through Greenbone Security Assistant. OpenVAS is useful for internal assessments and labs. Keep feeds updated for accurate results.

Installation

Install GVM packages and run the setup process. The setup downloads feeds and configures services. After setup, start the GVM services before scanning. Record credentials created during setup.

sudo apt-get update && sudo apt-get -y full-upgrade
sudo apt-get install gvm
sudo gvm-setup
sudo gvm-start

Scanning

OpenVAS scans are created and run from the web interface. The Scans section lists existing jobs and their status. Use targets and scan configs to control scope and depth. Authenticated scans provide the deepest results.

Configuration

Configuration starts with defining targets and credentials. You can tune ports, alive tests, and auth options. This ensures scans are accurate and do not overreach. Targets should be grouped by criticality.

Target Hosts

Create targets under the Configuration menu. You can add a single host or a list of IPs. Set port lists and credentials if required. These settings define scan reachability.

Scan Execution

Create a scan from the Scans menu and select a target. Choose a scan config that matches the desired depth. Run the scan and monitor progress in the interface. Large scans can take 30 to 60 minutes or more.

Exporting Results

OpenVAS reports can be exported for analysis and sharing. Reports include OS data, open ports, and services. Exporting results supports audit and remediation tracking. Choose formats that align with your workflow.

Export Formats

Common formats include XML, CSV, PDF, ITG, and TXT. XML is best for automation and structured processing. PDF works well for executive reporting and sharing. CSV supports custom analysis and dashboards.
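
A CSV export feeds custom analysis directly; the rows below are a hypothetical, heavily trimmed stand-in for an OpenVAS CSV report, used here to build a severity summary.

```python
import csv
import io
from collections import Counter

# Hypothetical rows in the rough shape of an OpenVAS CSV export.
csv_text = """IP,Hostname,Severity
10.0.0.5,web01,High
10.0.0.5,web01,Medium
10.0.0.7,db01,High
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
counts = Counter(r["Severity"] for r in rows)
print(dict(counts))  # {'High': 2, 'Medium': 1}
```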


Tool: openvasreporting

openvasreporting converts OpenVAS XML to other formats. It helps teams generate spreadsheets for analysis. Use it when you need structured output quickly. Provide the XML report and desired format.

python3 -m openvasreporting -i report.xml -f xlsx

Reports

Reports turn technical findings into actionable decisions. They should be clear, concise, and prioritized by risk. Include evidence, impact, and recommended fixes. Consistent structure improves stakeholder trust.

Executive Summary

The executive summary gives leadership the key risks. It should describe impact, exposure, and overall posture. Avoid technical detail and focus on business outcomes. This section drives budget and remediation decisions.

Assessment Overview

The overview explains objectives, methods, and tools used. It also states assumptions and any test constraints. Readers should understand what was tested and why. This provides context for the findings.

Scope and Duration

Scope defines what systems were in or out of testing. Duration describes when testing occurred and for how long. Clear scope prevents misinterpretation of results. It also helps compare results across assessments.

Vulnerabilities and Recommendations

Each finding should include evidence and impact detail. Recommendations must be specific, feasible, and prioritized. Include short term mitigation and long term remediation. This section is the core of the report.


Reference

This article is based on my personal study notes from the Information Security Foundations track.

Full repository: https://github.com/lameiro0x/pentesting-path-htb