SSL/TLS encryption capability is provided by:
certificates.
protocols.
passwords.
controls.
SSL and its successor TLS are cryptographic protocols designed to provide secure communications over untrusted networks. The encryption capability comes from the TLS protocol suite, which defines how two endpoints negotiate security settings, authenticate, exchange keys, and protect data as it travels between them. During the TLS handshake, the endpoints agree on a cipher suite, establish shared session keys using secure key exchange methods, and then use symmetric encryption and integrity checks to protect application data against eavesdropping and tampering. Because TLS specifies these mechanisms and the sequence of steps, it is accurate to say that the encryption capability is provided by protocols.
Certificates are important, but they are not the encryption mechanism itself. Digital certificates primarily support authentication and trust by binding a public key to an identity and enabling verification through a trusted certificate authority chain. Certificates help prevent impersonation and man-in-the-middle attacks by allowing clients to validate the server’s identity, and in mutual TLS they can validate both parties. However, certificates alone do not define how encryption is negotiated or applied; TLS does.
Passwords are unrelated to transport encryption; they are an authentication secret and do not provide session encryption for network traffic. “Controls” is too general: SSL/TLS is indeed a security control, but the question asks specifically what provides the encryption capability. That capability is implemented and standardized by the SSL/TLS protocols, which orchestrate key establishment and encrypted communication.
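The division of labor described above can be seen directly in Python's standard `ssl` module: the protocol machinery (handshake, key exchange, symmetric encryption) is handled by the TLS implementation, while certificate validation is a separate authentication setting on the context. This is a minimal illustrative sketch, not a complete client.

```python
import ssl

# The ssl module implements the TLS protocol: wrapping a socket with this
# context would run the handshake (cipher negotiation, key exchange,
# session-key establishment) before any application data is exchanged.
context = ssl.create_default_context()

# Certificates play the *authentication* role, enforced separately:
print(context.verify_mode == ssl.CERT_REQUIRED)  # peer certificate is required
print(context.check_hostname)                    # hostname checked against cert

# Encrypting traffic would then look like:
#   with context.wrap_socket(sock, server_hostname=host) as tls: ...
```

Note that disabling certificate checks (`verify_mode = ssl.CERT_NONE`) would still encrypt the traffic; it would only remove the identity guarantee, which is exactly the certificates-versus-protocol distinction the question tests.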
What is the purpose of Digital Rights Management (DRM)?
To ensure that all attempts to access information are tracked, logged, and auditable
To control the use, modification, and distribution of copyrighted works
To ensure that corporate files and data cannot be accessed by unauthorized personnel
To ensure that intellectual property remains under the full control of the originating enterprise
Digital Rights Management is a set of technical mechanisms used to enforce the permitted uses of digital content after it has been delivered to a user or device. Its primary purpose is to control how copyrighted works are accessed and used, including restricting copying, printing, screen capture, forwarding, offline use, device limits, and redistribution. DRM systems commonly apply encryption to content and then rely on a licensing and policy enforcement component that checks whether a user or device has the right to open the content and under what conditions. These conditions can include time-based access (expiry), geographic limitations, subscription status, concurrent use limits, or restrictions on modification and export.
This aligns precisely with option B because DRM is fundamentally about usage control of copyrighted digital works, such as music, movies, e-books, software, and protected media streams. In cybersecurity documentation, DRM is often discussed alongside content protection, anti-piracy measures, and license compliance. It differs from general access control and audit logging: access control determines who may enter a system or open a resource, while auditing records actions for accountability. DRM extends beyond simple access by enforcing what a legitimate user can do with the content once accessed.
Option A describes audit logging, option C describes general authorization and data access control, and option D is closer to broad information rights management goals but is less precise than the standard definition focused on controlling use and distribution of copyrighted works.
What things must be identified to define an attack vector?
The platform, application, and data
The attacker and the vulnerability
The system, transport protocol, and target
The source, processor, and content
An attack vector is the route or method used to compromise an environment, and it is typically described as the way a threat actor exploits a vulnerability to gain unauthorized access, execute code, steal data, or disrupt services. To define an attack vector correctly, cybersecurity documents emphasize that you must identify both parts of that relationship: who or what is attacking and what weakness is being exploited. The “attacker” component represents the threat source or threat actor, including their capability and intent (for example, cybercriminals using phishing, insiders abusing access, or automated botnets scanning the internet). The “vulnerability” component is the specific weakness or exposure that enables success, such as a missing patch, weak authentication, misconfiguration, excessive permissions, an insecure coding flaw, or lack of user awareness.
Without identifying the attacker, you cannot properly characterize the likely techniques, scale, and motivation driving the vector. Without identifying the vulnerability, you cannot define the practical entry point and control gaps that make the vector feasible. Together, attacker plus vulnerability allows defenders to map realistic scenarios, prioritize controls, and select mitigations that reduce likelihood and impact. Those mitigations may include patching, configuration hardening, strong authentication, least privilege, network segmentation, user training, and monitoring. The other options list technology elements that can be involved in an incident, but they do not capture the essential definition of an attack vector as an exploitation path driven by a threat actor leveraging a weakness.
Other than the Requirements Analysis document, in what project deliverable should Vendor Security Requirements be included?
Training Plan
Business Continuity Plan
Project Charter
Request For Proposals
Vendor Security Requirements must be included in the Request For Proposals because the RFP is the formal mechanism used to communicate mandatory expectations to suppliers and to evaluate them consistently during selection. Cybersecurity and third-party risk management practices require that security expectations be established before a vendor is chosen, so the organization can assess whether a supplier can meet confidentiality, integrity, availability, privacy, and compliance obligations. Embedding requirements in the RFP makes them contractual in nature once incorporated into the final agreement and ensures vendors price and design their solution with security controls in scope rather than treating them as optional add-ons later.
Security requirements in an RFP typically cover topics such as secure development practices, vulnerability management, patching and support timelines, encryption for data at rest and in transit, identity and access controls, audit logging, incident notification timelines, subcontractor controls, data residency and retention, penetration testing evidence, compliance attestations, and right-to-audit provisions. The RFP also enables objective scoring by requesting documented evidence such as security certifications, control descriptions, and responses to standardized security questionnaires.
A training plan and business continuity plan are operational deliverables and do not drive vendor selection criteria. A project charter sets scope and governance at a high level, but it is not the primary procurement artifact for binding vendor security obligations. Therefore, the correct answer is Request For Proposals.
Separation of duties, as a security principle, is intended to:
optimize security application performance.
ensure that all security systems are integrated.
balance user workload.
prevent fraud and error.
Separation of duties is a foundational access-control and governance principle designed to reduce the likelihood of misuse, fraud, and significant mistakes by ensuring that no single individual can complete a critical process end-to-end without independent oversight. Cybersecurity and audit frameworks describe this as splitting high-risk activities into distinct roles so that one person’s actions are checked or complemented by another person’s authority. This limits both intentional abuse, such as unauthorized payments or data manipulation, and unintentional errors, such as misconfigurations or accidental deletion of important records.
In practice, separation of duties is implemented by defining roles and permissions so that incompatible functions are not assigned to the same account. Common examples include separating the ability to create a vendor from the ability to approve payments, separating software development from production deployment, and separating system administration from security monitoring or audit log management. This is reinforced through role-based access control, approval workflows, privileged access management, and periodic access reviews that detect conflicting entitlements and privilege creep.
The value of separation of duties is risk reduction through accountability and control. When actions require multiple parties or independent review, it becomes harder for a single compromised account or malicious insider to cause large harm without detection. It also improves reliability by introducing checkpoints that catch mistakes earlier. Therefore, the correct purpose is to prevent fraud and error.
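The entitlement reviews described above often boil down to checking role assignments against a table of incompatible duties. This is a toy sketch of such a check, not a real IAM tool; the role names and conflict pairs are invented for illustration.

```python
# Pairs of duties that should never be held by the same account
# (illustrative examples, not an authoritative conflict matrix).
INCOMPATIBLE = {
    ("create_vendor", "approve_payment"),
    ("develop_code", "deploy_to_production"),
}

def sod_violations(assignments: dict[str, set[str]]) -> list[tuple[str, str, str]]:
    """Return (user, duty_a, duty_b) for every conflicting pair one user holds."""
    hits = []
    for user, roles in assignments.items():
        for a, b in INCOMPATIBLE:
            if a in roles and b in roles:
                hits.append((user, a, b))
    return hits

users = {
    "alice": {"create_vendor", "approve_payment"},  # conflicting duties
    "bob": {"develop_code"},                        # no conflict
}
print(sod_violations(users))  # → [('alice', 'create_vendor', 'approve_payment')]
```

In practice this kind of check runs inside access-review tooling against the live entitlement store, flagging conflicts for a human reviewer rather than revoking access automatically.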
An internet-based organization whose address is not known has attempted to acquire personal identification details such as usernames and passwords by creating a fake website. This is an example of what?
Breach
Phishing
Threat
Ransomware
Creating a fake website to trick individuals into entering usernames and passwords is a classic example of phishing. Phishing is a social engineering technique where an attacker impersonates a trusted entity to deceive a victim into disclosing sensitive information (credentials, personal data, payment details) or taking an action that benefits the attacker (downloading malware, approving an MFA prompt, wiring funds). A counterfeit login page is commonly used in credential-harvesting campaigns: the victim believes they are authenticating to a legitimate service, but the credentials are captured by the attacker and later used for account takeover.

This is not necessarily a breach yet, because the question describes an attempt to acquire credentials; a breach would be confirmed unauthorized access or disclosure. While phishing is a kind of threat, “threat” is too broad compared to the specific described behavior. It is also not ransomware, which focuses on encrypting or locking data and demanding payment.

Cybersecurity documentation emphasizes layered defenses against phishing: user awareness training, email and web filtering, domain and certificate validation, anti-spoofing controls, strong authentication (especially MFA resistant to prompt fatigue), password managers that reduce credential entry on lookalike domains, and monitoring for suspicious logins. Because the attack relies on deception through a fake website to steal credentials, the best match is phishing.
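The lookalike-domain pattern behind fake login sites can be illustrated with a rough string-similarity heuristic. This is only a sketch of the idea using the standard library's `difflib`; real anti-phishing controls rely on reputation feeds, homoglyph tables, and certificate data, and the threshold here is an arbitrary assumption.

```python
from difflib import SequenceMatcher

def looks_like(candidate: str, legitimate: str, threshold: float = 0.85) -> bool:
    """Flag a domain that is suspiciously similar to, but not identical to,
    a known-legitimate domain (illustrative threshold, not a standard)."""
    if candidate == legitimate:
        return False  # an exact match is the real site, not a lookalike
    return SequenceMatcher(None, candidate, legitimate).ratio() >= threshold

print(looks_like("examp1e.com", "example.com"))          # '1' for 'l' → True
print(looks_like("example.com", "example.com"))          # the real domain → False
print(looks_like("totally-different.org", "example.com"))  # unrelated → False
```

A password manager implicitly performs a stricter version of this check: it autofills credentials only on the exact stored domain, so a near-miss domain yields no autofill and a visible warning sign to the user.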
What operational practice would risk managers employ to demonstrate the effectiveness of security controls?
Metrics Reporting
Change Management
Security Awareness Training
Penetration Testing
Risk managers demonstrate the effectiveness of security controls by using metrics reporting because metrics provide objective, repeatable evidence that controls are operating as intended and are producing measurable outcomes. In cybersecurity governance, “control effectiveness” is shown through performance indicators and trend data, not just by stating that a control exists. Metrics translate technical activity into risk-relevant results that leadership can understand and act on.
Common control-effectiveness metrics include patch compliance rates and time-to-remediate critical vulnerabilities, percentage of systems meeting secure configuration baselines, multifactor authentication coverage, privileged access review completion rates, mean time to detect and respond, incident volume and severity trends, phishing simulation outcomes, and the percentage of logs successfully collected and retained for monitoring. Risk managers also use key risk indicators to track whether residual risk is increasing or decreasing, and they compare results against defined thresholds and risk appetite.
While penetration testing can validate exposure and reveal weaknesses, it is periodic and scenario-based; it does not continuously demonstrate ongoing control performance across the environment. Change management is essential for stability and risk reduction, but it is a process control rather than a reporting practice used to demonstrate effectiveness. Security awareness training improves user behavior, yet effectiveness still needs measurement through metrics such as completion rates and simulated phishing results. Therefore, metrics reporting is the operational practice most directly used to demonstrate control effectiveness.
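One of the metrics listed above, patch compliance rate, is a simple computation over an asset inventory. This is a minimal sketch; the asset records and the 14-day SLA are invented for illustration.

```python
# Illustrative inventory: days since a critical patch became available
# for each asset (figures are made up for the example).
assets = [
    {"name": "web-01", "critical_patch_age_days": 3},
    {"name": "db-01", "critical_patch_age_days": 45},
    {"name": "app-02", "critical_patch_age_days": 10},
]

SLA_DAYS = 14  # assumed remediation SLA for critical patches

def patch_compliance_rate(inventory, sla_days=SLA_DAYS):
    """Percentage of assets whose critical patches are within the SLA."""
    compliant = sum(1 for a in inventory if a["critical_patch_age_days"] <= sla_days)
    return round(100 * compliant / len(inventory), 1)

print(f"Patch compliance: {patch_compliance_rate(assets)}%")  # → 66.7%
```

Reported over time, this single figure shows whether the patching control is actually operating (trend toward 100%) or degrading, which is exactly the evidence a risk manager needs.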
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
What is defined as an internal computerized table of access rules regarding the levels of computer access permitted to login IDs and computer terminals?
Access Control List
Access Control Entry
Relational Access Database
Directory Management System
An Access Control List (ACL) is a structured, system-maintained list of authorization rules that specifies who or what is allowed to access a resource and what actions are permitted. In many operating systems, network devices, and applications, an ACL functions as an internal table that maps identities such as user IDs, group IDs, service accounts, or even device/terminal identifiers to permissions like read, write, execute, modify, delete, or administer. When a subject attempts to access an object, the system consults the ACL to determine whether the requested operation should be allowed or denied, enforcing the organization’s security policy at runtime.
The description in the question matches the classic definition of an ACL as a computerized table of access rules tied to login IDs and sometimes the originating endpoint or terminal context. ACLs are central to implementing discretionary access control and are also widely used in networking (for example, permitting or denying traffic flows based on source/destination and ports) and file systems (controlling access to folders and files).
An Access Control Entry (ACE) is only a single line item within an ACL (one rule for one subject). A “Relational Access Database” is not a standard security control term for authorization tables. A “Directory Management System” manages identities and groups, but it is not the same as the enforcement list attached to a specific resource. Therefore, the correct answer is Access Control List.
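The "internal table" nature of an ACL can be sketched in a few lines: each entry (an ACE) maps a subject to the operations it may perform on a resource, and the system consults the table on every access attempt. This is a toy model, not a real enforcement mechanism; the resource and subject names are invented.

```python
# A toy ACL: resource → {subject → permitted actions}.
# Each inner mapping entry corresponds to one ACE.
acl = {
    "payroll.xlsx": {
        "alice": {"read", "write"},
        "bob": {"read"},
        "auditors": {"read"},
    },
}

def is_allowed(resource: str, subject: str, action: str) -> bool:
    """Consult the ACL; anything not explicitly granted is denied."""
    return action in acl.get(resource, {}).get(subject, set())

print(is_allowed("payroll.xlsx", "bob", "read"))   # → True
print(is_allowed("payroll.xlsx", "bob", "write"))  # → False (implicit deny)
```

The default-deny behavior in `is_allowed` mirrors how real ACL evaluation works: absence of a matching entry means the request is refused, which keeps the failure mode safe.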
Which capability would a solution option need to demonstrate in order to satisfy Logging Requirements?
Facilitates Single Sign-On
Records information about user access and actions in the system
Integrates with Risk Logging software
Offers both on-premise and as-a-service delivery options
Logging requirements in cybersecurity focus on ensuring the system can produce reliable, actionable records that support detection, investigation, compliance, and accountability. The most fundamental capability is the ability to record information about user access and actions within the system. This includes authentication events such as logon success or failure, logoff, session creation, and privilege elevation; authorization decisions such as access granted or denied; and security-relevant actions such as viewing, creating, modifying, deleting, exporting, or transmitting sensitive data. Good security logging also captures context like timestamp synchronization, user or service identity, source device or IP, target resource, action performed, and outcome.
This capability supports multiple operational needs. Security monitoring teams rely on logs to identify anomalies like repeated failed logins, unusual access times, access from unexpected locations, or high-risk administrative changes. Incident responders need logs to reconstruct timelines, confirm scope, and preserve evidence. Auditors and compliance teams require logs to demonstrate control effectiveness, segregation of duties, and traceability of changes.
The other options are not sufficient to satisfy logging requirements. Single sign-on can simplify authentication but does not guarantee application-level activity logging. Integration with specialized tools may be useful, but the solution must first generate the required events. Deployment model options do not address whether the system can create detailed audit trails. Therefore, the required capability is recording user access and actions in the system.
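The who/what/where/when/outcome fields described above map naturally onto a structured log record. This is a minimal sketch of such an audit event using only the standard library; the field names are illustrative, not a formal logging schema.

```python
import json
import logging
from datetime import datetime, timezone

def audit_event(user, action, resource, outcome, source_ip):
    """Build one structured audit record as a JSON line (illustrative fields)."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # synchronized UTC time
        "user": user,            # who
        "action": action,        # what was attempted
        "resource": resource,    # target of the action
        "outcome": outcome,      # success / failure
        "source_ip": source_ip,  # where the request came from
    })

logging.basicConfig(level=logging.INFO, format="%(message)s")
logging.info(audit_event("alice", "login", "hr-portal", "failure", "203.0.113.7"))
```

Emitting events as JSON lines is a common design choice because a SIEM can parse and index the fields directly, turning the raw record into the searchable evidence that monitoring, incident response, and audit all depend on.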
What should organizations do with Key Risk Indicator (KRI) and Key Performance Indicator (KPI) data to facilitate decision making and improve performance and accountability?
Achieve, reset, and evaluate
Collect, analyze, and report
Prioritize, falsify, and report
Challenge, compare, and revise
KRIs and KPIs are only useful when they are handled as part of a disciplined measurement lifecycle. Cybersecurity governance guidance emphasizes three essential activities: collect, analyze, and report. Organizations must first collect KRI and KPI data consistently from reliable sources such as vulnerability scanners, SIEM logs, IAM systems, ticketing platforms, and asset inventories. Collection requires defined metric owners, clear definitions, standardized time windows, and data quality checks so results are comparable across periods and business units.
Next, organizations analyze the data to understand what it means for risk and performance. Analysis includes trending over time, comparing results to targets and thresholds, correlating indicators to business outcomes, identifying outliers, and determining root causes. For KRIs, analysis highlights rising exposure or control breakdowns such as increasing critical vulnerabilities beyond SLA. For KPIs, analysis evaluates operational effectiveness such as mean time to detect and mean time to remediate.
Finally, organizations report results to the right audiences with the right level of detail. Reporting supports accountability by assigning actions, tracking remediation progress, and escalating when thresholds are exceeded. It also supports decision making by showing where investment, staffing, or control changes will have the greatest risk-reduction and performance impact. The other options are not standard, auditable metric management activities and do not reflect the established lifecycle used in cybersecurity measurement programs.
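The collect → analyze → report lifecycle can be sketched end-to-end for a single KRI. The monthly figures and the risk-appetite threshold below are invented for illustration.

```python
from statistics import mean

# Collect: one value per month for the KRI
# "critical vulnerabilities past remediation SLA" (figures invented).
collected = [12, 15, 19, 24]
threshold = 20  # assumed risk-appetite threshold

# Analyze: trend direction and threshold comparison.
avg = mean(collected)
rising = collected[-1] > collected[0]
breached = collected[-1] > threshold

# Report: condense the analysis into a line leadership can act on.
report = (f"KRI avg={avg:.1f}, trend={'rising' if rising else 'stable'}, "
          f"threshold {'BREACHED' if breached else 'ok'}")
print(report)  # → KRI avg=17.5, trend=rising, threshold BREACHED
```

The point of the sketch is the separation of stages: collection produces comparable raw values, analysis turns them into judgments (rising, breached), and reporting carries only the judgment plus enough context to trigger escalation.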
Which of the following activities are part of the business analyst’s role in ensuring compliance with security policies?
Auditing enterprise security policies to ensure that they comply with regulations
Ensuring that security policies are reflected in the solution requirements
Testing applications to identify potential security holes
Checking to ensure that business users follow the security requirements
Business analysts support cybersecurity compliance primarily by ensuring that security and privacy expectations are translated into clear, testable requirements that are built into the solution. This includes eliciting applicable organizational security policies, standards, and control objectives, then mapping them into functional and non-functional requirements such as authentication methods, role-based access, logging and audit trail needs, encryption requirements, session controls, data retention, and segregation of duties. When security policies are reflected in the solution requirements, they become part of the delivery lifecycle: they can be designed, implemented, validated in testing, and verified during acceptance. This creates traceability from policy to requirement to control implementation, which is essential for audits and for demonstrating due diligence.
Option A is typically the responsibility of governance, risk, and compliance functions or internal audit, not the BA. Option C is usually performed by security testing specialists, QA teams, or application security engineers using techniques like SAST, DAST, and penetration testing. Option D is largely an operational management and compliance enforcement function, supported by training, monitoring, and disciplinary processes. The BA’s distinct contribution is ensuring policy-driven security controls are captured in requirements and embedded into the solution design and delivery artifacts.
There are three states in which data can exist:
at dead, in action, in use.
at dormant, in mobile, in use.
at sleep, in awake, in use.
at rest, in transit, in use.
Data is commonly categorized into three states because the threats and protections change depending on where the data is and what is happening to it. Data at rest is stored on a device or system, such as databases, file shares, endpoints, backups, and cloud storage. The main risks are unauthorized access, theft of storage media, misconfigured permissions, and improper disposal. Controls typically include strong access control, encryption at rest with sound key management, secure configuration and hardening, segmentation, and resilient backup protections including restricted access and immutability.
Data in transit is data moving between systems, such as client-to-server traffic, service-to-service connections, API calls, and email routing. The primary risks are interception, alteration, and impersonation through man-in-the-middle techniques. Standard controls include transport encryption (such as TLS), strong authentication and certificate validation, secure network architecture, and monitoring for anomalous connections or data flows.
Data in use is actively processed in memory by applications and users, for example when a document is opened, a record is processed by an application, or data is displayed to a user. This state is challenging because data may be decrypted for processing. Controls include least privilege, strong authentication and session management, endpoint protection, application security controls, and secure development practices, with hardware-backed isolation when required.
Public & Private key pairs are an example of what technology?
Virtual Private Network
IoT
Encryption
Network Segregation
Public and private key pairs are the foundation of asymmetric encryption, also called public key cryptography. In this model, each entity has two mathematically related keys: a public key that can be shared widely and a private key that must be kept secret. The keys are designed so that what one key does, only the other key can undo. This enables two core security functions used throughout cybersecurity architectures.
First, confidentiality: data encrypted with a recipient’s public key can only be decrypted with the recipient’s private key. This allows secure communication without having to share a secret key in advance, which is especially important on untrusted networks like the internet. Second, digital signatures: a sender can sign data with their private key, and anyone can verify the signature using the sender’s public key. This provides authenticity (proof the sender possessed the private key), integrity (the data was not altered), and supports non-repudiation when combined with proper key custody and audit practices.
These mechanisms underpin widely used security controls such as TLS for secure web connections, secure email standards, code signing, and certificate-based authentication. A VPN may use public key cryptography during key exchange, but the key pair itself is specifically an encryption technology. IoT and network segregation are unrelated categories.
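The "what one key does, only the other can undo" property can be demonstrated with a deliberately tiny textbook-RSA example. This is insecure by design (trivially small primes, no padding) and exists only to show the key-pair relationship; real systems use large keys, proper padding, and vetted cryptographic libraries.

```python
# Textbook RSA with toy numbers — for illustration only, never for real use.
p, q = 61, 53
n = p * q                   # 3233: the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

message = 65

# Confidentiality: encrypt with the public key, decrypt with the private key.
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(recovered == message)             # → True

# Signatures: "sign" with the private key, verify with the public key.
signature = pow(message, d, n)
print(pow(signature, e, n) == message)  # → True
```

Both directions use the same pair of exponents, which is why one mechanism supports two distinct controls: encrypting toward the key owner (confidentiality) and verifying data from the key owner (authenticity).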
What terms are often used to describe the relationship between a sub-directory and the directory in which it is cataloged?
Primary and Secondary
Multi-factor Tokens
Parent and Child
Embedded Layers
Directories are commonly organized in a hierarchical structure, where each directory can contain sub-directories and files. In this hierarchy, the directory that contains another directory is referred to as the parent, and the contained sub-directory is referred to as the child. This parent–child relationship is foundational to how file systems and many directory services represent and manage objects, including how paths are constructed and how inheritance can apply.
From a cybersecurity perspective, understanding parent and child relationships matters because access control and administration often follow the hierarchy. For example, permissions applied at a parent folder may be inherited by child folders unless inheritance is explicitly broken or overridden. This can simplify administration by allowing consistent access patterns, but it also introduces risk: overly permissive settings at a parent level can unintentionally grant broad access to many child locations, increasing the chance of unauthorized data exposure. Security documents therefore emphasize careful design of directory structures, least privilege at higher levels of the hierarchy, and regular permission reviews to detect privilege creep and misconfigurations.
The other options do not describe this standard hierarchy terminology. “Primary and Secondary” is more commonly used for redundancy or replication roles, not directory relationships. “Multi-factor Tokens” relates to authentication factors. “Embedded Layers” is not a standard term for describing directory hierarchy.
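The parent/child terminology shows up directly in path-handling APIs such as Python's `pathlib`, where every path component below the top is a child of the directory that contains it. The directory names below are invented for illustration.

```python
from pathlib import PurePosixPath

child = PurePosixPath("/srv/shares/finance/reports")

print(child.parent)         # → /srv/shares/finance — the containing (parent) directory
print(child.name)           # → reports — this child's own name
print(child.parent.parent)  # → /srv/shares — the hierarchy continues upward
```

Walking `.parent` repeatedly traces exactly the inheritance chain discussed above: a permission set at `/srv/shares` can flow down to `finance` and then to `reports` unless inheritance is broken at some level.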
What is an external audit?
A review of security-related measures in place intended to identify possible vulnerabilities
A process that the cybersecurity follows to ensure that they have implemented the proper controls
A review of security expenditures by an independent party
A review of security-related activities by an independent party to ensure compliance
An external audit is an independent evaluation performed by a party outside the organization to determine whether security-related activities, controls, and evidence meet defined requirements. Those requirements are typically drawn from laws and regulations, contractual obligations, and recognized standards or control frameworks. The defining characteristics are independence and attestation: the auditor is not part of the operational team being assessed and provides an objective conclusion about compliance or control effectiveness.
Unlike a vulnerability-focused review (often called a security assessment or technical audit) that primarily seeks weaknesses to remediate, an external audit emphasizes whether controls are designed appropriately, implemented consistently, and operating effectively over time. External auditors usually test governance processes, risk management practices, policies, access control procedures, change management, logging and monitoring, incident response readiness, and evidence of periodic reviews. They also validate documentation and sampling records to confirm that what is written is actually performed.
Option B describes an internal assurance activity, such as self-assessment or internal audit preparation, where the security team checks its own implementation. Option C is closer to a financial or procurement review and is not the typical definition of an external security audit. Therefore, the best answer is the one that clearly captures an independent party reviewing security activities to ensure compliance with established criteria.
If a Business Analyst is asked to document the current state of the organization's web-based business environment, and recommend where cost savings could be realized, what risk factor must be included in the analysis?
Organizational Risk Tolerance
Impact Severity
Application Vulnerabilities
Threat Likelihood
When analyzing a web-based business environment for potential cost savings, the Business Analyst must account for application vulnerabilities because they directly affect the organization’s exposure to cyber attack and the true cost of operating a system. Vulnerabilities are weaknesses in application code, configuration, components, or dependencies that can be exploited to compromise confidentiality, integrity, or availability. In web environments, common examples include insecure authentication, injection flaws, broken access control, misconfigurations, outdated libraries, and weak session management.
Cost-saving recommendations frequently involve consolidating platforms, reducing tooling, lowering support effort, retiring controls, delaying upgrades, or moving to shared services. Without including known or likely vulnerabilities, the analysis can unintentionally recommend changes that reduce preventive and detective capability, increase attack surface, or extend the time vulnerabilities remain unpatched. Cybersecurity governance guidance emphasizes that technology rationalization must consider security posture: vulnerable applications often require additional controls (patching cadence, WAF rules, monitoring, code fixes, penetration testing, secure SDLC work) that carry ongoing cost. These costs are part of the system’s “total cost of ownership” and should be weighed against proposed savings.
While impact severity and threat likelihood are important for overall risk scoring, the question asks what risk factor must be included when documenting the current state of a web-based environment. The most essential factor that ties directly to the environment’s condition and drives remediation cost and exposure is application vulnerabilities.
TESTED 24 Feb 2026
Copyright © 2014-2026 DumpsTool. All Rights Reserved