Which of the following describes the purpose of a snapshot?
A. To create a dynamic data replication
B. To create a synonym
C. To create a
D. To create an image of a database
The purpose of a snapshot is to create an image of a database. A snapshot is a copy of the state and content of a database at a specific point in time. A snapshot can be used for various purposes, such as backup and recovery, testing and development, reporting and analysis, etc. A snapshot can be created using various techniques, such as full copy, incremental copy, differential copy, etc. A snapshot can also be created using various tools or commands provided by the database system or software. The other options are either incorrect or irrelevant for this question. For example, dynamic data replication is a process that copies and synchronizes data from one database server (the source) to one or more database servers (the target) in real time; a synonym is an alias or an alternative name for an object in a database; C is an incomplete option. References: CompTIA DataSys+ Course Outline, Domain 5.0 Business Continuity, Objective 5.2 Given a scenario, implement backup and restoration of database management systems.
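For illustration, a minimal sketch in SQL Server syntax (the database name, logical file name, and snapshot file path are hypothetical):
CREATE DATABASE SalesDB_Snapshot_20240101
ON ( NAME = SalesDB_Data,                        -- logical name of the source data file
     FILENAME = 'D:\Snapshots\SalesDB_Data.ss' ) -- sparse file that stores the pre-change pages
AS SNAPSHOT OF SalesDB;
Reading from the snapshot returns the data exactly as it existed when the snapshot was created, which is useful for reporting or for reverting accidental changes.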
Which of the following resources is the best way to lock rows in SQL Server?
TID
SID
RID
PID
The resource that is best suited to lock rows in SQL Server is the RID. A RID, or Row IDentifier, uniquely identifies each row in a heap table in SQL Server. A heap table is a table without a clustered index, which means its rows are not stored in any particular order. A RID consists of the file number, page number, and slot number of the row in the database. A RID can be used to lock rows in SQL Server to prevent concurrent access or modification by other transactions or users. A RID lock is a type of lock that locks a single row in a heap using its RID; row-level locking can be requested with table hints such as ROWLOCK, optionally combined with XLOCK or HOLDLOCK, in a SELECT statement. The other options are either not related or not effective for this purpose. For example, TID, or Transaction IDentifier, uniquely identifies each transaction in a database; SID, or Security IDentifier, uniquely identifies each user or group in a Windows system; PID, or Process IDentifier, uniquely identifies each process in an operating system. References: CompTIA DataSys+ Course Outline, Domain 3.0 Database Management and Maintenance, Objective 3.3 Given a scenario, implement database concurrency methods.
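As an illustrative sketch (SQL Server syntax; the table and value are hypothetical), row-level locking can be requested with table hints so that the row is held under a RID lock on a heap:
BEGIN TRANSACTION;
SELECT emp_name
FROM dbo.employee WITH (ROWLOCK, XLOCK)  -- request an exclusive row-level lock
WHERE EmpId = 90030;
-- the locked row cannot be modified by other sessions until this transaction ends
COMMIT TRANSACTION;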
Which of the following is a reason to create a stored procedure?
To minimize storage space
To improve performance
To bypass case sensitivity requirements
To give control of the query logic to the user
A reason to create a stored procedure is to improve performance. A stored procedure is a set of SQL statements or commands that are stored and compiled in the database server, and can be executed by name or by a trigger. A stored procedure can improve performance by reducing the network traffic between the client and the server, as only the name or the parameters of the stored procedure need to be sent, rather than the entire SQL code. A stored procedure can also improve performance by reusing the same execution plan, as the stored procedure is compiled only once and cached in the server memory. The other options are either not true or not relevant for this purpose. For example, a stored procedure does not necessarily minimize storage space, as it still occupies space in the database server; a stored procedure does not bypass case sensitivity requirements, as it still follows the rules of the database system; a stored procedure does not give control of the query logic to the user, as it is defined and maintained by the database administrator or developer. References: CompTIA DataSys+ Course Outline, Domain 2.0 Database Deployment, Objective 2.2 Given a scenario, create database objects using scripting and programming languages.
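A minimal sketch of a stored procedure and its call (SQL Server syntax; the procedure, table, and column names are hypothetical):
CREATE PROCEDURE GetEmployeesByDept
    @DeptId INT
AS
BEGIN
    SELECT EmpId, EmpName, EmpSalary
    FROM employee
    WHERE DeptId = @DeptId;
END;
-- Clients send only the procedure name and parameter value, not the full SQL text:
EXEC GetEmployeesByDept @DeptId = 34;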
Which of the following is a typical instruction that is found on a Linux command-line script and represents a system shell?
/bin/bash
#/bin/shell
>/bin/sh
#!/bin/bash
The instruction found in a Linux command-line script that represents a system shell is #!/bin/bash. This instruction is called a shebang (or hashbang), and it indicates the interpreter that should be used to execute the script. In this case, the interpreter is /bin/bash, the path to the bash shell, a common system shell on Linux. A system shell is a program that provides an interface for users to interact with the operating system, either through commands or scripts. A system shell can also perform various tasks, such as file management, process control, variable assignment, etc. The other options are either incorrect or not typical for this purpose. For example, /bin/bash on its own is just the path to the bash shell and, without the leading #!, is not a shebang line that selects the interpreter; #/bin/shell is not a valid shebang or a path to a system shell; >/bin/sh is a redirection operator followed by a path, not an interpreter directive. References: CompTIA DataSys+ Course Outline, Domain 2.0 Database Deployment, Objective 2.2 Given a scenario, create database objects using scripting and programming languages.
Which of the following is the purpose of including a COLLATE clause in a column definition?
A. To create a computed column that is included in an index
B. To specify how data is sorted in a database
C. To support functional dependency
D. To ensure all values entered into a column fall within a data range
The correct answer is B. To specify how data is sorted in a database. CompTIA DataSys+ explains that the COLLATE clause defines the collation rules applied to character-based data in a column. Collation determines how string data is compared, sorted, and evaluated, including rules for alphabetical order, case sensitivity, accent sensitivity, and character encoding. These rules directly affect query results, indexing behavior, and comparison operations involving textual data.
When a COLLATE clause is included in a column definition, it overrides the database or server default collation for that specific column. This is especially important in environments that support multiple languages or regional settings, where sorting and comparison rules may differ. For example, case-insensitive collations treat uppercase and lowercase letters as equivalent, while case-sensitive collations do not. DataSys+ highlights that incorrect collation settings can lead to unexpected query results, inefficient indexing, or inconsistent application behavior.
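A minimal sketch of column-level collation (SQL Server syntax; the table and collation names are illustrative):
CREATE TABLE Customer (
    CustomerId INT PRIMARY KEY,
    LastName   NVARCHAR(50) COLLATE Latin1_General_CS_AS,  -- case- and accent-sensitive comparisons
    City       NVARCHAR(50) COLLATE Latin1_General_CI_AI   -- case- and accent-insensitive comparisons
);
With these settings, WHERE LastName = 'smith' would not match 'Smith', while WHERE City = 'zurich' would match 'Zürich'.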
Option A is incorrect because computed columns and indexing behavior are defined through expressions and index definitions, not collation rules. Option C, functional dependency, relates to normalization theory and the relationship between attributes in a relational schema; collation has no impact on dependency enforcement. Option D describes data validation, which is handled through constraints such as CHECK, NOT NULL, or data types—not collation.
CompTIA DataSys+ emphasizes that DBAs must understand how collation affects sorting, grouping, and comparison operations, especially for reporting and internationalized applications. Proper use of the COLLATE clause ensures consistent behavior across queries and prevents subtle bugs related to string comparisons.
Therefore, the primary purpose of including a COLLATE clause in a column definition is to control how textual data is sorted and compared, making option B the correct and verified answer.
Which of the following types of RAID, if configured with the same number and type of disks, would provide the best write performance?
RAID 3
RAID 5
RAID 6
RAID 10
The type of RAID that would provide the best write performance, if configured with the same number and type of disks, is RAID 10. RAID 10, or RAID 1+0, combines mirroring and striping to provide both redundancy and performance. Mirroring duplicates data across two or more disks for fault tolerance and data protection. Striping splits data into blocks and distributes them across two or more disks for faster access and throughput. RAID 10 requires at least four disks and can tolerate the failure of up to half of the disks without losing data, provided no mirrored pair loses both of its members. RAID 10 provides the best write performance among these RAID levels because it writes data in parallel to multiple disks without parity calculations or parity overhead. The other options are parity-based RAID levels, and every write on them incurs a parity-calculation penalty. For example, RAID 3 uses striping with a dedicated parity disk; RAID 5 uses striping with distributed parity; RAID 6 uses striping with double distributed parity for extra redundancy at the cost of additional write overhead. References: CompTIA DataSys+ Course Outline, Domain 3.0 Database Management and Maintenance, Objective 3.1 Given a scenario, perform common database maintenance tasks.
A database administrator has configured Resource Governor with three resource pools. The first resource pool is assigned a minimum CPU and memory value of 25%, and the second resource pool is assigned a minimum CPU and memory value of 35%. The database administrator wants to assign as much CPU and memory as possible to the third resource pool. Which of the following is the maximum the database administrator should assign to the third resource pool?
A. 20%
B. 40%
C. 60%
D. 100%
The correct answer is B. 40%. CompTIA DataSys+ explains that Resource Governor is used to manage and allocate CPU and memory resources among multiple workloads or resource pools within a database system. When configuring resource pools, administrators must ensure that the total allocation of minimum resources does not exceed 100%, as all resource pools share the same finite system resources.
In this scenario, the first resource pool has a minimum CPU and memory allocation of 25%, and the second resource pool has a minimum allocation of 35%. Together, these minimum guarantees account for 60% of total available resources. Resource Governor enforces these minimums to ensure that critical workloads always receive their reserved share of system resources, even under heavy load.
The remaining resources are calculated by subtracting the total minimum allocation from 100%.
100% − 60% = 40%
This remaining 40% represents the maximum CPU and memory that can be safely assigned to the third resource pool without violating Resource Governor rules. Assigning a higher value would exceed the total available system resources and would not be permitted under proper configuration guidelines emphasized in DataSys+.
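A minimal T-SQL sketch of the allocations described above (the pool names are hypothetical):
CREATE RESOURCE POOL PoolA WITH (MIN_CPU_PERCENT = 25, MIN_MEMORY_PERCENT = 25);
CREATE RESOURCE POOL PoolB WITH (MIN_CPU_PERCENT = 35, MIN_MEMORY_PERCENT = 35);
-- Only 100% - (25% + 35%) = 40% remains available to reserve for the third pool.
CREATE RESOURCE POOL PoolC WITH (MIN_CPU_PERCENT = 40, MIN_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;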
Option A (20%) underutilizes available resources and is unnecessarily restrictive. Options C (60%) and D (100%) are invalid because they would cause the combined allocations to exceed total system capacity, leading to misconfiguration or enforcement failures. DataSys+ stresses careful capacity planning when using Resource Governor to balance performance, fairness, and system stability.
Resource Governor is especially useful in multi-tenant or mixed-workload environments, where critical applications must be protected from resource starvation. Proper calculation of minimums and maximums ensures predictable performance and prevents one workload from monopolizing system resources.
Therefore, the correct maximum allocation for the third resource pool is 40%, making option B the correct and fully verified answer aligned with CompTIA DataSys+ guidance.
A new retail store employee needs to be able to authenticate to a database. Which of the following commands should a database administrator use for this task?
INSERT USER
ALLOW USER
CREATE USER
ALTER USER
The command that the database administrator should use for this task is CREATE USER. The CREATE USER command is a SQL statement that creates a new user account in a database and assigns it a username and a password. The CREATE USER command also allows the database administrator to specify other options or attributes for the user account, such as default tablespace, quota, profile, role, etc. Running CREATE USER is the first step in enabling a user to authenticate to a database. The other options are either invalid or not suitable for this task. For example, INSERT USER and ALLOW USER are not valid SQL commands; ALTER USER is a SQL command that modifies an existing user account but does not create a new one. References: CompTIA DataSys+ Course Outline, Domain 4.0 Data and Database Security, Objective 4.2 Given a scenario, implement security controls for databases.
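Exact syntax varies by database system; a minimal sketch (the user name and password are hypothetical):
-- MySQL-style syntax
CREATE USER 'store_clerk'@'%' IDENTIFIED BY 'Str0ng!Passw0rd';
-- SQL Server syntax: a server login is created first, then a database user mapped to it
CREATE LOGIN store_clerk WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER store_clerk FOR LOGIN store_clerk;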
Over the weekend, a company’s transaction database was moved to an upgraded server. All validations performed after the migration indicated that the database was functioning as expected. However, on Monday morning, multiple users reported that the corporate reporting application was not working.
Which of the following are the most likely causes? (Choose two.)
The access permissions for the service account used by the reporting application were not changed.
The new database server has its own reporting system, so the old one is not needed.
The reporting jobs that could not process during the database migration have locked the application.
The reporting application's mapping to the database location was not updated.
The database server is not permitted to fulfill requests from a reporting application.
The reporting application cannot keep up with the new, faster response from the database.
The most likely causes of the reporting application not working are that the access permissions for the service account used by the reporting application were not changed, and that the reporting application’s mapping to the database location was not updated. These two factors could prevent the reporting application from accessing the new database server. The other options are either irrelevant or unlikely to cause the problem. References: CompTIA DataSys+ Course Outline, Domain 3.0 Database Management and Maintenance, Objective 3.2 Given a scenario, troubleshoot common database issues.
A database administrator needs to ensure continuous availability of a database in case the server fails. Which of the following should the administrator implement to ensure high availability of the database?
ETL
Replication
Database dumping
Backup and restore
The option that the administrator should implement to ensure high availability of the database is replication. Replication is a process that copies and synchronizes data from one database server (the primary or source) to one or more database servers (the secondary or target). Replication helps ensure high availability of the database by providing redundancy, fault tolerance, and load balancing. If the primary server fails, the secondary server can take over and continue to serve the data without interruption or data loss. The other options are either not related or not suitable for this purpose. For example, ETL is a process that extracts, transforms, and loads data from one source to another for analysis or reporting purposes; database dumping is a process that exports the entire content of a database to a file for backup or migration purposes; backup and restore is a process that copies and recovers data from a backup device or media in case of a disaster or corruption. References: CompTIA DataSys+ Course Outline, Domain 5.0 Business Continuity, Objective 5.3 Given a scenario, implement replication of database management systems.
Which of the following commands is part of DDL?
UPDATE
GRANT
CREATE
INSERT
The command that is part of DDL is CREATE. CREATE is a SQL command that belongs to the category of DDL, or Data Definition Language. DDL is a subset of SQL commands that are used to define or modify the structure or schema of a database, such as tables, columns, constraints, indexes, views, etc. CREATE is a DDL command that is used to create a new object in a database, such as a table, column, constraint, index, view, etc. For example, the following statement uses the CREATE command to create a new table called employee with four columns:
CREATE TABLE employee (
emp_id INT PRIMARY KEY,
emp_name VARCHAR(50) NOT NULL,
emp_dept VARCHAR(20),
emp_salary DECIMAL(10,2)
);
The other options belong to different categories of SQL commands. UPDATE and INSERT are DML (Data Manipulation Language) commands; DML is the subset of SQL used to manipulate the data or content of a database, such as inserting, updating, deleting, or selecting rows. GRANT is a DCL (Data Control Language) command; DCL is the subset of SQL used to control access and permissions for users or roles on a database, such as granting or revoking privileges. References: CompTIA DataSys+ Course Outline, Domain 1.0 Database Fundamentals, Objective 1.2 Given a scenario, execute database tasks using scripting and programming languages.
An on-premises application server connects to a database in the cloud. Which of the following must be considered to ensure data integrity during transmission?
Bandwidth
Encryption
Redundancy
Masking
The factor that must be considered to ensure data integrity during transmission is encryption. Encryption is a process that transforms data into an unreadable or scrambled form using an algorithm and a key. Encryption helps protect data integrity during transmission by preventing unauthorized access or modification of data by third parties, such as hackers, eavesdroppers, or interceptors. Encryption also helps verify the identity and authenticity of the source and destination of the data using digital signatures or certificates. The other options are either not related or not sufficient for this purpose. For example, bandwidth is the amount of data that can be transmitted over a network in a given time; redundancy is the duplication of data or components to provide backup or alternative sources in case of failure; masking is a technique that replaces sensitive data with fictitious but realistic data to protect its confidentiality or compliance. References: CompTIA DataSys+ Course Outline, Domain 4.0 Data and Database Security, Objective 4.2 Given a scenario, implement security controls for databases.
Which of the following statements contains an error?
A. Select EmpId from employee where EmpId=90030
B. Select EmpId where EmpId=90030 and DeptId=34
C. Select* from employee where EmpId=90030
D. Select EmpId from employee
The statement that contains an error is option B. This statement is missing the FROM clause, which specifies the table or tables from which to retrieve data. The FROM clause is required whenever a SELECT statement retrieves data from a table, so a query that filters on table columns cannot omit it. The correct syntax for option B would be:
SELECT EmpId FROM employee WHERE EmpId=90030 AND DeptId=34
The other options are either correct or valid SQL statements. For example, option A selects the employee ID from the employee table where the employee ID is equal to 90030; option C selects all columns from the employee table where the employee ID is equal to 90030; option D selects the employee ID from the employee table without any filter condition. References: CompTIA DataSys+ Course Outline, Domain 1.0 Database Fundamentals, Objective 1.2 Given a scenario, execute database tasks using scripting and programming languages.
A database's daily backup failed. Previous backups were completed successfully. Which of the following should the database administrator examine first to troubleshoot the issue?
CPU usage
Disk space
Event log
OS performance
The first thing that the database administrator should examine to troubleshoot the issue is the event log. The event log is a file that records the events and activities that occur on a system, such as database backups, errors, warnings, or failures. By examining the event log, the administrator can identify the cause and time of the backup failure, and also check for any other issues or anomalies that may affect the backup process or the backup quality. The other options are either not relevant or not the first priority for this task. For example, CPU usage, disk space, and OS performance may affect the performance or availability of the system, but not necessarily cause the backup failure; moreover, these factors can be checked after reviewing the event log for more information. References: CompTIA DataSys+ Course Outline, Domain 5.0 Business Continuity, Objective 5.2 Given a scenario, implement backup and restoration of database management systems.
Before installing a new database instance for an organization, a DBA needs to verify the amount of space, the hardware, and the network resources. Which of the following best describes this process?
A. Performing patch management
B. Upgrading the database instance
C. Checking for database prerequisites
D. Provisioning the configuration
The correct answer is C. Checking for database prerequisites. According to CompTIA DataSys+ objectives, verifying prerequisites is a critical preparatory step before installing a new database instance. Database prerequisites refer to the minimum and recommended requirements that must be met to ensure a successful installation and stable operation of the database system.
This process typically includes validating disk space availability, CPU capacity, memory (RAM), and network resources, as well as confirming operating system compatibility, required libraries, kernel parameters, and supporting services. DataSys+ emphasizes that failure to meet prerequisites can lead to installation errors, poor performance, instability, or security vulnerabilities after deployment. Therefore, DBAs are expected to thoroughly assess the environment before proceeding with installation.
Option A, performing patch management, refers to applying updates and fixes to existing systems to address bugs or security issues. This occurs after software is installed and operational, not before a new database instance is deployed. Option B, upgrading the database instance, involves moving from one version of a database to another and assumes an existing installation is already in place. Option D, provisioning the configuration, focuses on allocating and configuring resources (such as creating instances, users, or storage structures) after prerequisites have been validated.
CompTIA DataSys+ clearly separates environment validation from deployment and configuration activities. Checking prerequisites is a risk-reduction step that ensures the infrastructure can support the database workload and performance expectations from the start. It also supports capacity planning, compliance, and long-term maintainability.
Therefore, the process of verifying space, hardware, and network resources before installing a database instance is best described as checking for database prerequisites, making option C the correct and fully verified answer.
Which of the following is used to write SQL queries in various programming languages?
Indexing
Object-relational mapping
Excel
Normalization
The option that is used to write SQL queries in various programming languages is object-relational mapping. Object-relational mapping (ORM) is a technique that maps objects in an object-oriented programming language (such as Java, Python, C#, etc.) to tables in a relational database (such as Oracle, MySQL, SQL Server, etc.). ORM allows users to write SQL queries in their preferred programming language without having to deal with the differences or complexities between the two paradigms. ORM also provides users with various benefits such as code reuse, abstraction, validation, etc. The other options are either not related or not effective for this purpose. For example, indexing is a technique that creates data structures that store the values of one or more columns of a table in a sorted order to speed up queries; Excel is a software application that allows users to organize and manipulate data in rows and columns; normalization is a process that organizes data into tables and columns to reduce redundancy and improve consistency. References: CompTIA DataSys+ Course Outline, Domain 1.0 Database Fundamentals, Objective 1.2 Given a scenario, execute database tasks using scripting and programming languages.
Analysts are writing complex queries across live tables for a database administrator. Which of the following is the best solution for the analysts to implement in order to improve user performance?
A. Creating views to support repeat queries
B. Removing data redundancy
C. Modifying the data in the table
D. Deleting records from the table
The correct answer is A. Creating views to support repeat queries. CompTIA DataSys+ emphasizes that views are an effective way to improve performance, usability, and consistency when users frequently run complex queries against live production tables. A view is a virtual table based on a predefined SQL query that presents data in a simplified and reusable format without physically storing the data.
When analysts run complex queries directly against live tables, it can increase CPU usage, I/O operations, and locking contention, potentially impacting other users and critical workloads. By creating views, the database administrator can encapsulate complex joins, filters, and aggregations into a single logical object. Analysts can then query the view instead of repeatedly executing expensive SQL statements. This reduces query complexity, improves readability, and can lead to more predictable performance. DataSys+ highlights views as a best practice for supporting reporting and analytical workloads while minimizing disruption to transactional systems.
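A minimal sketch of this approach (the table, column, and view names are hypothetical):
CREATE VIEW vw_monthly_sales AS
SELECT c.region,
       o.order_month,
       SUM(o.order_total) AS total_sales
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
GROUP BY c.region, o.order_month;
-- Analysts query the view instead of repeating the join and aggregation logic:
SELECT * FROM vw_monthly_sales WHERE region = 'West';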
Option B, removing data redundancy, is related to normalization and data integrity, not query performance optimization for analysts. Option C, modifying the data in the table, risks data integrity issues and does not address performance concerns. Option D, deleting records from the table, may reduce table size but is not an appropriate or safe method for improving user query performance and can result in data loss.
CompTIA DataSys+ also notes that views can enhance security by limiting analysts’ access to only the required columns or rows, further protecting live data. In some implementations, indexed or materialized views can offer additional performance benefits for repeated queries.
Therefore, the best solution to improve analyst performance when running complex, repeat queries on live tables is creating views to support repeat queries, making option A the correct and fully verified answer.
A financial institution is running a database that contains PII. Database administrators need to provide table access to junior developers. Which of the following is the best option?
A. Role assignment
B. Data masking
C. View creation
D. Policy enforcement
The correct answer is C. View creation. CompTIA DataSys+ places strong emphasis on protecting personally identifiable information (PII) while still enabling developers and analysts to perform their required tasks. In environments such as financial institutions, direct access to tables containing sensitive data presents a significant security and compliance risk, especially for junior developers who do not require full visibility of production data.
Creating views is a best-practice solution that allows database administrators to expose only the necessary columns and rows while hiding or excluding sensitive fields such as Social Security numbers, account numbers, or personal contact details. Views act as a controlled abstraction layer over the underlying tables, enabling developers to work with realistic data structures without direct access to raw PII. DataSys+ highlights views as an effective method for enforcing the principle of least privilege while maintaining usability and performance.
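A minimal sketch of this approach (the table, view, and role names are hypothetical); the view exposes only non-sensitive columns, and developers are granted access to the view rather than the base table:
CREATE VIEW vw_customer_dev AS
SELECT customer_id, account_status, account_open_date
FROM customer_accounts;   -- SSN, account number, and contact details are deliberately excluded
GRANT SELECT ON vw_customer_dev TO junior_developers;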
Option A, role assignment, is important for access control but is too broad on its own. Assigning a role may still allow access to sensitive columns unless combined with views or column-level permissions. Option B, data masking, is often used for non-production environments or dynamic data protection, but it may not be the best solution when developers need structured access to table data for development or testing purposes. Option D, policy enforcement, refers to governance and administrative controls but does not directly provide a technical mechanism for safe table access.
CompTIA DataSys+ stresses that database security should balance protection with operational efficiency. Views allow DBAs to centrally manage what data is exposed, simplify auditing, and reduce the risk of accidental data leakage. They are especially valuable in regulated industries where compliance and data minimization are critical.
Therefore, the best option for providing junior developers with access while protecting PII is view creation, making option C the correct and fully verified answer.
Which of the following indexes stores records in a tabular format?
Columnstore
Non-clustered
Unique
Secondary
The index that stores records in a tabular format is columnstore. A columnstore index is a type of index that stores and compresses data by columns rather than by rows. A columnstore index can improve the performance and efficiency of queries that perform aggregations, calculations, or analysis on large amounts of data, such as data warehouse or business intelligence applications. A columnstore index can also reduce the storage space required for data by applying compression techniques such as dictionary encoding, run-length encoding, and bit packing. The other options are traditional row-based index types. For example, a non-clustered index stores the values of one or more columns in sorted order along with pointers to the corresponding rows in the table; a unique index enforces uniqueness on one or more columns in a table; a secondary index is an alternative term for a non-clustered index. References: CompTIA DataSys+ Course Outline, Domain 3.0 Database Management and Maintenance, Objective 3.1 Given a scenario, perform common database maintenance tasks.
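A minimal sketch in SQL Server syntax (the table and index names are hypothetical):
-- Convert an entire fact table to columnar storage:
CREATE CLUSTERED COLUMNSTORE INDEX cci_fact_sales ON dbo.fact_sales;
-- Or add a nonclustered columnstore index covering only the analyzed columns:
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_fact_sales_amounts
    ON dbo.fact_sales (sale_date, product_id, sale_amount);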
Which of the following are ORM tools? (Choose two.)
A. PL/SQL
B. XML
C. Entity Framework
D. T-SQL
E. Hibernate
F. PHP
The correct answers are C. Entity Framework and E. Hibernate. CompTIA DataSys+ defines Object-Relational Mapping (ORM) tools as software frameworks that allow developers to interact with relational databases using object-oriented programming concepts instead of writing raw SQL queries. ORM tools map database tables to application objects, rows to object instances, and columns to object attributes, simplifying database access and improving developer productivity.
Entity Framework is a widely used ORM tool in the Microsoft ecosystem, commonly paired with .NET applications. It enables developers to define data models in code and allows the framework to automatically generate SQL queries, manage relationships, and handle CRUD (Create, Read, Update, Delete) operations. DataSys+ highlights ORM tools like Entity Framework as a way to reduce repetitive SQL coding while maintaining consistency and abstraction between application logic and the database layer.
Hibernate is a popular ORM framework used primarily in Java-based applications. Similar to Entity Framework, Hibernate manages database interactions by mapping Java objects to relational database structures. It handles SQL generation, transaction management, and caching, making it a core example of an ORM solution referenced in DataSys+ learning objectives.
Option A, PL/SQL, and option D, T-SQL, are procedural SQL languages, not ORM tools. They are used to write stored procedures, functions, and scripts that execute directly within the database. Option B, XML, is a data format used for data representation and exchange, not for object-relational mapping. Option F, PHP, is a general-purpose programming language that can use ORM tools but is not itself an ORM framework.
CompTIA DataSys+ emphasizes understanding the distinction between database languages, programming languages, and abstraction frameworks like ORMs. Entity Framework and Hibernate clearly fit the definition of ORM tools by bridging object-oriented applications and relational databases.
Therefore, the correct and fully verified answers are C and E.
Which of the following is part of logical database infrastructure security?
Surveillance
Biometric access
Perimeter network
Cooling system
The option that is part of logical database infrastructure security is a perimeter network. A perimeter network, also known as a DMZ (demilitarized zone), is a network segment that lies between an internal network and an external network, such as the internet. A perimeter network provides an additional layer of security for the internal network by isolating and protecting the servers or services that are exposed to the external network, such as web servers, email servers, and database servers. A perimeter network also helps prevent unauthorized access or attacks from the external network by using firewalls, routers, proxies, etc. The other options relate to physical or environmental protection of the database infrastructure rather than logical security. For example, surveillance is the monitoring and recording of physical activities or events at a location; biometric access uses biological characteristics to control access to a physical location or resource; a cooling system regulates the temperature and humidity of a facility. References: CompTIA DataSys+ Course Outline, Domain 4.0 Data and Database Security, Objective 4.1 Given a scenario, implement database infrastructure security.
Which of the following firewall types allows an administrator to control traffic and make decisions based on factors such as connection information and data flow communications?
Circuit-level
Stateful
Proxy
Packet
The firewall type that allows an administrator to control traffic and make decisions based on factors such as connection information and data flow communications is stateful. A stateful firewall tracks the state of each connection and packet that passes through it and applies rules or policies based on the context and content of the traffic. A stateful firewall can control traffic and make decisions based on factors such as source and destination IP addresses, ports, protocols, session status, and application-layer data. The other options are different types of firewalls with less connection context. For example, a circuit-level firewall monitors and validates the establishment of TCP or UDP connections; a proxy firewall acts as an intermediary between the source and destination of the traffic; a packet-filtering firewall filters packets based only on their header information. References: CompTIA DataSys+ Course Outline, Domain 4.0 Data and Database Security, Objective 4.2 Given a scenario, implement security controls for databases.
Which of the following normal forms (NFs) is considered the most preferable for relational database design?
A. 1NF
B. 2NF
C. 3NF
D. 4NF
The correct answer is C. 3NF (Third Normal Form). According to CompTIA DataSys+, Third Normal Form is widely regarded as the most preferable and practical level of normalization for relational database design in real-world environments. It strikes an optimal balance between reducing data redundancy, maintaining data integrity, and preserving query performance.
To understand why 3NF is preferred, it is important to consider the progression of normalization. First Normal Form (1NF) ensures that tables contain atomic values and no repeating groups. Second Normal Form (2NF) builds on this by eliminating partial dependencies, ensuring that non-key attributes depend on the entire primary key. While these forms improve structure, they do not fully address all redundancy issues.
Third Normal Form (3NF) goes further by removing transitive dependencies, meaning that non-key attributes depend only on the primary key and not on other non-key attributes. This significantly reduces duplication of data and prevents common anomalies during INSERT, UPDATE, and DELETE operations. CompTIA DataSys+ emphasizes that eliminating transitive dependencies is critical for maintaining long-term data consistency and integrity.
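As an illustrative sketch with a hypothetical schema: if an orders table stored customer_city alongside customer_id, the city would depend on the customer rather than on the order key (a transitive dependency). 3NF removes it by splitting the tables:
CREATE TABLE customers (
    customer_id   INT PRIMARY KEY,
    customer_city VARCHAR(50)
);
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT REFERENCES customers (customer_id),
    order_date  DATE
);
-- customer_city is now stored once per customer instead of once per order.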
Although Fourth Normal Form (4NF) and higher normal forms exist, DataSys+ notes that they are typically applied only in specialized cases involving complex multi-valued dependencies. Over-normalization beyond 3NF can increase schema complexity, require excessive joins, and negatively impact performance without providing proportional benefits in most transactional systems.
As a result, 3NF is considered the industry-standard target for relational database design. It provides a clean, maintainable schema that supports scalability, reduces redundancy, and aligns well with performance expectations. CompTIA DataSys+ highlights 3NF as the most commonly implemented and recommended normalization level for operational databases.
Therefore, the most preferable normal form for relational database design is 3NF, making option C the correct and fully verified answer.
Which of the following best describes the function of a wildcard in the WHERE clause?
A. An exact match is not possible in a CREATE statement.
B. An exact match is necessary in a SELECT statement.
C. An exact match is not possible in a SELECT statement.
D. An exact match is necessary in a CREATE statement.
The correct answer is C. An exact match is not possible in a SELECT statement. CompTIA DataSys+ documentation explains that wildcards are used in SQL primarily within the WHERE clause of a SELECT statement to enable pattern matching rather than exact value matching. Wildcards such as % and _ are commonly used with the LIKE operator to search for partial strings or variable patterns in character-based data.
In practical database usage, wildcards allow analysts and administrators to retrieve records when the full or exact value is unknown or unnecessary. For example, searching for all patient records with last names starting with “Mac%” or all email addresses ending in “@example.com” requires pattern-based matching. In these cases, an exact match is explicitly not required, which is the core purpose of wildcards in SQL queries.
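Minimal query sketches for these patterns (the table and column names are hypothetical):
SELECT * FROM patients WHERE last_name LIKE 'Mac%';        -- % matches any sequence of characters
SELECT * FROM contacts WHERE email LIKE '%@example.com';   -- match a suffix
SELECT * FROM products WHERE sku LIKE 'A_1%';              -- _ matches exactly one character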
Option A and D incorrectly reference the CREATE statement. Wildcards are not relevant to CREATE statements, which are used for defining database objects such as tables, indexes, or views. These statements require explicit definitions and do not support wildcard-based matching logic. Option B is also incorrect because a SELECT statement does not always require an exact match; this is precisely why wildcards exist and are heavily used in querying operations.
CompTIA DataSys+ emphasizes that understanding query flexibility is essential for data retrieval and reporting. Wildcards enhance query usability and efficiency by allowing broader result sets without complex logic or multiple conditions. They are particularly valuable in analytical, reporting, and troubleshooting scenarios where partial data exploration is required.
Therefore, the best description of the function of a wildcard in the WHERE clause is that it allows queries where an exact match is not required, making option C the correct and fully aligned answer according to CompTIA DataSys+ principles.
Which of the following should a company develop to ensure preparedness for a fire in a data center?
Deployment plan
Backup plan
Data retention policy
Disaster recovery plan
The document that a company should develop to ensure preparedness for a fire in a data center is a disaster recovery plan. A disaster recovery plan is a document that outlines how an organization will continue its operations in the event of a disaster or disruption, such as a fire, flood, earthquake, or cyberattack. A disaster recovery plan typically includes the following elements:
- The objectives and scope of the plan
- The roles and responsibilities of the staff involved
- The identification and assessment of the risks and impacts
- The strategies and procedures for restoring the critical functions and data
- The resources and tools required for the recovery process
- The testing and maintenance schedule for the plan
A disaster recovery plan helps an organization minimize the damage and downtime caused by a disaster and resume normal operations as soon as possible. The other options are either different types of documents or not specific to fire preparedness. For example, a deployment plan describes how a system or software will be installed or launched; a backup plan specifies how data will be copied and stored for backup purposes; a data retention policy defines how long data should be kept and when it should be deleted or archived. References: CompTIA DataSys+ Course Outline, Domain 5.0 Business Continuity, Objective 5.4 Given a scenario, implement disaster recovery methods.
Which of the following constraints is used to enforce referential integrity?
Surrogate key
Foreign key
Unique key
Primary key
The constraint that is used to enforce referential integrity is the foreign key. A foreign key is a column or a set of columns in a table that references the primary key of another table. A primary key is a column or a set of columns that uniquely identifies each row in its table. Referential integrity is a rule that ensures that the values in the foreign key column match values in the primary key column of the referenced table. Referential integrity helps maintain the consistency and accuracy of data across related tables. The other options do not enforce referential integrity on their own. For example, a surrogate key is an artificially generated column that serves as a primary key, such as an auto-increment number or a GUID (Globally Unique Identifier); a unique key uniquely identifies each row but, unlike a primary key, can allow null values; a primary key enforces entity integrity within its own table but does not by itself enforce relationships between tables. References: CompTIA DataSys+ Course Outline, Domain 1.0 Database Fundamentals, Objective 1.2 Given a scenario, execute database tasks using scripting and programming languages.
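A minimal sketch with a hypothetical schema; the foreign key prevents an employee row from referencing a department that does not exist:
CREATE TABLE department (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(50) NOT NULL
);
CREATE TABLE employee (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(50) NOT NULL,
    dept_id  INT,
    CONSTRAINT fk_employee_department
        FOREIGN KEY (dept_id) REFERENCES department (dept_id)
);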
A database manager would like to reduce the overhead caused by log-based recoveries. Which of the following should the manager use to best mitigate the situation?
A. Deadlocks
B. Checkpoints
C. Locks
D. Indexes
The correct answer is B. Checkpoints. CompTIA DataSys+ explains that checkpoints play a critical role in database recovery mechanisms by limiting the amount of transaction log data that must be processed during recovery. Log-based recovery relies on transaction logs to restore the database to a consistent state after a failure. If checkpoints are infrequent, the database engine must replay a large portion of the transaction log, which increases recovery time and system overhead.
A checkpoint is a process where the database writes all modified (dirty) pages from memory to disk and records a synchronization point in the transaction log. After a checkpoint completes, the database knows that all changes up to that point are safely stored on disk. During recovery, the system only needs to process log entries that occurred after the most recent checkpoint, significantly reducing recovery workload and time. DataSys+ highlights checkpoints as a key performance and availability feature, especially in systems with high transaction volumes.
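A minimal sketch in SQL Server syntax (the database name is hypothetical):
CHECKPOINT;       -- issue a manual checkpoint in the current database
CHECKPOINT 10;    -- request that the checkpoint complete within roughly 10 seconds
-- Indirect checkpoints can also be tuned per database through a target recovery time:
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;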
Option A, deadlocks, are concurrency issues that occur when two or more transactions block each other indefinitely. While deadlock handling is important for transaction throughput, it does not reduce log-based recovery overhead. Option C, locks, control concurrent access to data and help maintain consistency, but they do not affect how much log data must be replayed during recovery. Option D, indexes, improve query performance but can actually increase logging activity because index changes are also logged.
CompTIA DataSys+ emphasizes that effective recovery planning includes optimizing logging behavior and checkpoint frequency. Properly configured checkpoints strike a balance between runtime performance and recovery efficiency by reducing excessive log replay without causing unnecessary disk I/O.
Therefore, to best mitigate the overhead caused by log-based recoveries, the database manager should use checkpoints, making option B the correct and fully verified answer.
A database administrator is concerned about transactions in case the system fails. Which of the following properties addresses this concern?
Durability
Isolation
Atomicity
Consistency
The property that addresses this concern is durability. Durability is one of the four properties (ACID) that ensure reliable transactions in a database system. Durability means that once a transaction has been committed, its effects are permanent and will not be lost in case of system failure, power outage, crash, etc. Durability can be achieved by using techniques such as write-ahead logging, checkpoints, backup and recovery, etc. The other options are either not related or not specific to this concern. For example, isolation means that concurrent transactions do not interfere with each other and produce consistent results; atomicity means that a transaction is either executed as a whole or not at all; consistency means that a transaction preserves the validity and integrity of the data. References: CompTIA DataSys+ Course Outline, Domain 1.0 Database Fundamentals, Objective 1.3 Given a scenario, identify common database issues.
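A minimal sketch with a hypothetical table (T-SQL-style syntax): once COMMIT returns, the transfer is durable and will survive a crash, because the changes have already been hardened to the transaction log:
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 99;
COMMIT TRANSACTION;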
A company wants to deploy a new application that will distribute the workload to five different database instances. The database administrator needs to ensure that, for each copy of the database, users are able to read and write data that will be synchronized across all of the instances.
Which of the following should the administrator use to achieve this objective?
Peer-to-peer replication
Failover clustering
Log shipping
Availability groups
The administrator should use peer-to-peer replication to achieve this objective. Peer-to-peer replication is a type of replication that allows data to be distributed across multiple database instances that are equal partners, or peers. Each peer can read and write data that will be synchronized across all peers. This provides high availability, scalability, and load balancing for the application. The other options are either not suitable for this scenario or do not support bidirectional data synchronization. For example, failover clustering provides high availability but does not distribute the workload across multiple instances; log shipping provides disaster recovery but does not allow writing data to secondary instances; availability groups provide high availability and read-only access to secondary replicas but do not support peer-to-peer replication. References: CompTIA DataSys+ Course Outline, Domain 5.0 Business Continuity, Objective 5.3 Given a scenario, implement replication of database management systems.
Which of the following types of scripting can be executed on a web browser?
A. Server-side
B. PowerShell
C. Client-side
D. Command-line
The correct answer is C. Client-side. CompTIA DataSys+ explains that client-side scripting refers to code that is executed directly within the user’s web browser, rather than on a backend server or operating system shell. The most common and widely recognized client-side scripting language is JavaScript, which runs natively in modern web browsers and is used to enhance interactivity, validate input, and dynamically update content on web pages.
Client-side scripts execute after a web page is delivered to the user’s browser. This allows for immediate feedback, such as form validation, dynamic content updates, and interactive user interfaces, without requiring a round trip to the server. DataSys+ highlights client-side scripting as an important concept when discussing web applications that interact with databases, particularly because improper client-side controls can introduce security risks if not reinforced by server-side validation.
Option A, server-side scripting, runs on the web server rather than in the browser. Examples include PHP, Python, Ruby, and server-side JavaScript (such as Node.js). These scripts handle tasks like database queries, authentication, and business logic and are never executed in the browser itself. Option B, PowerShell, is a scripting and automation language used primarily in Windows environments for system and database administration tasks and cannot run inside a web browser. Option D, command-line scripting, refers to scripts executed in a terminal or shell environment, not within a browser context.
CompTIA DataSys+ emphasizes the distinction between client-side and server-side execution models because of their impact on performance, security, and data handling. Understanding where code executes is critical for protecting databases from exposure and enforcing proper validation controls.
Therefore, the type of scripting that can be executed in a web browser is client-side scripting, making option C the correct and fully verified answer.