What is the proper procedure for stopping asynchronous replication and in-progress transfers?
Removing the volume member from a protection group
Disabling the replication schedule
Disallowing the protection group at the target
According to the official Pure Storage FlashArray Asynchronous Replication Configuration and Best Practices Guide, the proper and immediate method to halt an active, in-progress asynchronous replication transfer is by disallowing the protection group at the target.
When you navigate to the target FlashArray and disallow the specific Protection Group, Purity immediately breaks the replication authorization for that group. If there is an in-progress snapshot transfer occurring at that exact moment, the transfer is immediately stopped, and the partially transferred snapshot data is discarded on the target side.
Here is why the other options are incorrect:
Disabling the replication schedule (B): Toggling the replication schedule to "Disabled" only prevents future scheduled snapshots from being created and sent. It does not kill or interrupt a replication transfer that is already currently in progress.
Removing the volume member from a protection group (A): Modifying the members of a protection group updates the configuration for the next snapshot cycle. It does not actively abort the transmission of the current point-in-time snapshot that the array is already busy sending over the WAN.
What is the proper configuration method to connect a volume to multiple hosts?
Connect the volume to a host group.
Connect a volume group to the host.
Connect the volume to each individual host.
In Pure Storage Purity OS, the absolute best practice and proper configuration method for sharing a single volume across multiple hosts—such as a VMware ESXi cluster or a Microsoft Windows Server Failover Cluster (WSFC)—is to connect the volume to a Host Group.
When you create a Host Group, you add the individual Host objects (which contain the WWPNs, IQNs, or NQNs) into that group. When a volume is then connected to the Host Group, Purity automatically ensures that the volume is presented to every host in that group using the exact same LUN ID. Consistent LUN IDs across all nodes in a cluster are a strict requirement for clustered file systems like VMFS and Cluster Shared Volumes (CSV) to function correctly and prevent data corruption.
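The LUN ID behavior described above can be sketched with a small Python model. This is purely illustrative (the host names, volume names, and LUN-assignment logic are invented for the example; it is not Purity code):

```python
# Toy model of LUN assignment -- illustrative only, not Purity internals.

class Array:
    def __init__(self):
        self.connections = {}   # host -> {volume: lun}

    def _next_free_lun(self, host):
        used = set(self.connections.get(host, {}).values())
        lun = 1
        while lun in used:
            lun += 1
        return lun

    def connect_private(self, host, volume):
        """Per-host connection: each host gets its own next free LUN ID."""
        lun = self._next_free_lun(host)
        self.connections.setdefault(host, {})[volume] = lun
        return lun

    def connect_host_group(self, hosts, volume):
        """Host-group connection: one LUN ID, valid for every member."""
        lun = max((self._next_free_lun(h) for h in hosts), default=1)
        for h in hosts:
            self.connections.setdefault(h, {})[volume] = lun
        return lun

array = Array()
# esx1 already has a private boot LUN, so its next free LUN differs from esx2's.
array.connect_private("esx1", "boot1")
private_luns = {h: array.connect_private(h, "shared-bad") for h in ("esx1", "esx2")}
group_lun = array.connect_host_group(["esx1", "esx2"], "shared-good")
```

In this sketch the privately connected volume ends up with different LUN IDs on the two hosts, while the host-group connection hands both hosts the same ID, which is the property clustered file systems depend on.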
Here is why the other options are incorrect:
Connect the volume to each individual host (C): This is known as creating "private connections." If you manually connect a shared volume to multiple hosts individually, Purity might assign a different LUN ID to the volume for each host. Inconsistent LUN IDs will cause clustered operating systems to fail to recognize the disk as a shared resource. Private connections should only be used for boot LUNs or standalone servers.
Connect a volume group to the host (B): In Purity, a "Volume Group" is a logical container used for applying consistent snapshot policies, replication schedules, or ActiveCluster configurations to a set of related volumes (like a database and its log files). Volume groups are not used for host presentation or access control.
Which command provides the negotiated port speed of an ethernet port?
pureport list
purenetwork eth list --all
purehw list --all --type eth
On a Pure Storage FlashArray, Ethernet ports operate at both a physical hardware layer and a logical network configuration layer. If you need to verify the actual physical negotiated port speed of an Ethernet port (for example, verifying if a 25GbE port negotiated down to 10GbE due to switch configurations or cable limitations), you must query the hardware layer directly.
The command purehw list --all --type eth interacts directly with the physical NIC hardware components to report their true link status, health, and dynamically negotiated hardware link speed.
Here is why the other options are incorrect:
purenetwork eth list --all (B): The purenetwork command suite is primarily focused on the logical Layer 2/Layer 3 networking stack. It is used to configure and list IP addresses, subnet masks, MTU sizes (Jumbo Frames), and routing, rather than focusing on the physical hardware negotiation details of the NIC itself.
pureport list (A): The pureport command suite is specifically used for managing and viewing storage protocol target ports. An administrator would use this to list the array's Fibre Channel WWNs or iSCSI IQNs to configure host zoning or initiator connections, not to verify Ethernet link negotiation speeds.
A storage administrator is tasked with providing real-time data and alerts to the Network Operations Center (NOC) dashboard.
What source should the information come from to provide real-time data?
Pure Performance Monitoring
Pure1
FlashArray
To provide true real-time data and alerts directly to a Network Operations Center (NOC) dashboard, the information must be sourced directly from the FlashArray. The FlashArray's Purity operating environment natively supports real-time data streaming and alerting integrations via protocols like Syslog, SNMP traps, and the local REST API. Polling the array directly or configuring it to push alerts guarantees that the NOC receives instantaneous, up-to-the-second notifications regarding array health, hardware faults, and performance metrics.
Here is why the other options are incorrect:
Pure1 (B): While Pure1 is Pure Storage's powerful, cloud-based monitoring and predictive analytics platform, it relies on phone-home telemetry data. This telemetry is batched and transmitted from the array to the Pure1 cloud on a short polling interval (typically a few minutes). Because of this transmission and processing interval, Pure1 provides near-real-time (lagging by a few minutes) and historical data. It is excellent for global fleet management and predictive support, but not for instantaneous, zero-latency NOC alerting.
Pure Performance Monitoring (A): This is a distractor. There is no standalone product or specific protocol in the Pure Storage ecosystem officially named "Pure Performance Monitoring." Performance monitoring is simply a feature accessed via the FlashArray GUI/CLI or the Pure1 platform.
What does an asynchronous blackout window prevent?
In progress transfers that started before the blackout window.
New replication transfers that started before the blackout window.
New replication transfers from starting during the blackout window.
Definition of a Blackout Window: In Purity//FA, a Blackout Window is a scheduled period during which asynchronous replication is suspended. This is typically used by administrators to preserve WAN bandwidth during peak business hours or to prevent replication traffic from competing with high-priority local workloads (like a massive database batch job).
The "In-Progress" Rule: One of the most important characteristics of a blackout window is that it is non-disruptive to active transfers. If a replication job started at 7:55 AM and the blackout window begins at 8:00 AM, Purity will allow that specific transfer to continue until it finishes.
The Prevention Mechanism: Once the clock hits the start of the blackout window, the replication scheduler is effectively "paused." No new snapshots will be queued for transfer, and no new replication sessions will be initiated until the window expires.
Why Option A is incorrect: Purity does not kill active transfers. Abruptly stopping a transfer would waste the bandwidth already consumed and require the entire delta-set to be re-calculated or re-sent later.
Why Option B is incorrect: The phrasing is logically inconsistent; you cannot prevent something that "started before" the window from being "new" during the window.
Best Practice: When configuring blackout windows, ensure that the "clear" time (the time between windows) is long enough to allow the array to catch up on the snapshots that were queued during the blackout; otherwise, you risk triggering Alert 51 (Replication Delayed).
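The scheduler rule described above can be summarized in a short, hypothetical Python sketch (the hour-based clock and the non-wrapping window are simplifying assumptions for the example; this is not Purity's actual scheduler):

```python
# Toy scheduler rule for an async-replication blackout window (illustrative).

def transfer_may_run(start_hour, blackout_start, blackout_end, now_hour):
    """A transfer that began before the window keeps running; a transfer
    that has not started yet may not begin while the window is active.
    Hours are on a 24h clock; the window is assumed not to wrap midnight."""
    in_window = blackout_start <= now_hour < blackout_end
    if not in_window:
        return True                    # scheduler operates normally
    started_before = start_hour is not None and start_hour < blackout_start
    return started_before              # in-progress jobs finish; new ones wait
```

With an 08:00-17:00 window, a job that started at 07:55 keeps running at 09:00, while a job that has not yet started is held until the window expires.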
A customer with an X50R2 array connected to 110V power is due for an upgrade to the latest controller generation.
What is required to allow this upgrade to proceed?
Call support to schedule a power supply replacement.
Transition the array to DC power.
Switch the array to 220v power outlets.
As Pure Storage has iterated through FlashArray generations (moving from //X R2 to R3, R4, and beyond), the power density and performance capabilities of the controllers have increased significantly. Modern high-performance controllers, such as those found in the //X R4 or //XL series, have strict power requirements that often exceed what a standard 110V/120V (Low-Line) circuit can provide.
To support the higher wattage and current draw of modern CPUs and NVRAM modules, Pure Storage requires 200-240V (High-Line) power for its latest generation controllers. If an existing array is currently running on 110V power, it must be migrated to 220V power outlets before the upgrade can proceed. Attempting to run newer, high-spec controllers on 110V power could lead to power supply instability, insufficient cooling performance, or the controllers failing to boot entirely.
Here is why the other options are incorrect:
Call support to schedule a power supply replacement (A): The issue is not a faulty power supply; it is the external electrical infrastructure's inability to provide the necessary voltage/wattage for the new hardware. Replacing the power supply with the same model would not solve the voltage limitation.
Transition the array to DC power (B): While Pure Storage does offer DC power options for specific telco environments, this is not a standard requirement for a typical controller upgrade. Moving to standard high-line AC power (220V) is the standard prerequisite for data center environments.
An administrator set up replicated snapshots for a protection group last week. They left the local snapshot schedule disabled.
How many snapshots are stored locally on the source array?
0
All of the replicated snapshots are also stored locally.
1
Replication Fundamentals: On a Pure Storage FlashArray, replication is a snapshot-based process. To replicate a Protection Group (pgroup) to a target array, the system must first create a point-in-time snapshot of the volumes within that group on the source array.
The "Immutable" Rule: Even if the Local Snapshot Schedule is disabled, the act of replicating requires the existence of a local snapshot to serve as the "base" or "source" for the data transfer. Purity does not stream data directly from the active volume to the wire; it creates a snapshot and then replicates the unique blocks contained in that snapshot.
Accounting for Local Copies: When a Protection Group is configured for replication, every snapshot generated by the Replication Schedule is stored locally on the source array. These snapshots will remain on the source array until they are aged out according to the Local Retention policy (even if the local schedule itself is off, the retention policy still applies to those replicated snapshots).
Visibility: If you navigate to the Protection Group in the Purity GUI, you will see these snapshots listed under the "Snapshots" tab. They are functionally identical to local snapshots, meaning they can be used for local clones or restores without needing to pull data back from the target array.
Why Options A and C are incorrect: Option A: If 0 snapshots were stored, there would be nothing to replicate.
Option C: While Purity uses the most recent snapshot as a reference for delta-tracking, it keeps the entire history of snapshots defined by your retention policy, not just a single one.
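A toy Python sketch of this accounting (the snapshot names and the retention count are invented for the example; it only illustrates that the source keeps a history rather than zero or one snapshot):

```python
# Toy accounting of source-side snapshots created by a replication schedule
# (illustrative; names and policy values are made up, not Purity defaults).

def local_snapshots(replication_runs, local_retention):
    """Each replication run creates one local snapshot; the source keeps
    the newest `local_retention` of them even with the local schedule off."""
    snaps = [f"pgroup.{n}" for n in range(1, replication_runs + 1)]
    return snaps[-local_retention:]

kept = local_snapshots(replication_runs=10, local_retention=4)
```

After ten replication cycles with a retention of four, the source still holds the four newest replicated snapshots locally.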
A storage administrator is troubleshooting multipathing issues.
What is the CLI command that allows the administrator to sample the I/O balance information at a consistent interval?
purehost monitor --balance --interval 15 --repeat 5
purehost monitor --balance --resample 5
purehost monitor --balance --interval 15
Command Purpose: The purehost monitor command is the primary tool in the Pure Storage CLI for observing real-time performance and connectivity health from the perspective of the hosts connected to the FlashArray.
The --balance Flag: When the --balance flag is added, the output shifts from general performance (IOPS, bandwidth, latency) to showing how I/O is distributed across the available paths (controllers and ports). This is critical for identifying "unbalanced" loads, which usually point to misconfigured MPIO (Multi-Path I/O) on the host side (e.g., a host only using one controller's ports).
Interval vs. Repeat:
The --interval flag specifies the time in seconds between each sample. In option C, --interval 15 tells the array to refresh the data every 15 seconds.
The --repeat flag (seen in option A) is used to limit the total number of samples taken before the command exits. However, in standard troubleshooting, the administrator typically wants a consistent stream of data until manually stopped (Ctrl+C).
--resample (seen in option B) is not a valid flag for the purehost monitor command in Purity.
Best Practice: When troubleshooting multipathing, Pure Storage recommends monitoring the balance to ensure that the "Relative I/O" percentage is roughly equal across all active paths. Large discrepancies often indicate that the host's MPIO policy is set to "Failover Only" instead of the recommended "Round Robin" or "Least Queue Depth."
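The balance check can be illustrated with a small Python sketch that mimics the idea behind a "Relative I/O" column (the port names, IOPS figures, and 50% threshold are invented for the example, not the array's actual values):

```python
# Toy "relative I/O" balance check, mirroring the idea behind
# `purehost monitor --balance` output (illustrative only).

def relative_io(path_iops):
    """Express each path's IOPS as a fraction of the busiest path."""
    peak = max(path_iops.values())
    return {path: iops / peak for path, iops in path_iops.items()}

def unbalanced_paths(path_iops, threshold=0.5):
    """Flag paths doing less than `threshold` of the busiest path's I/O."""
    return [p for p, r in relative_io(path_iops).items() if r < threshold]

healthy  = {"CT0.ETH4": 5000, "CT1.ETH4": 4800}
failover = {"CT0.ETH4": 9800, "CT1.ETH4": 200}   # looks like failover-only MPIO
```

In the second example nearly all I/O lands on one controller's port, which is the signature of a "Failover Only" MPIO policy rather than Round Robin.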
A network admin is trying to set up Pure1 access and phone home logging on the array.
What port should the admin ensure is opened for the array?
22
8117
443
Pure1 and Phone Home: Pure1 is a cloud-based management and monitoring platform. For the FlashArray to communicate its health data, performance metrics, and alerts to Pure Storage, it must be able to "Phone Home."
The Power of HTTPS: This communication is encrypted and sent via the HTTPS protocol. Consequently, the standard port for HTTPS, TCP 443, must be opened for outbound traffic on the management network.
Firewall Configuration: The array controllers initiate an outbound connection to the Pure Storage cloud (specifically to destinations like cloud-connect.purestorage.com). Because it is an outbound-initiated connection, most stateful firewalls will allow the return traffic automatically once port 443 is permitted.
Why Option A is incorrect: Port 22 (SSH) is used for secure shell access to the array's CLI. While essential for local administration, it is not used for the automated Phone Home or Pure1 telemetry data stream.
Why Option B is incorrect: Port 8117 was historically used for the "Remote Assist" tunnel in older Purity versions. However, even for Remote Assist, modern Purity versions have transitioned to using port 443 to simplify firewall rules for customers. It is not the standard port for the general Phone Home/Pure1 logging service.
Verification: An administrator can verify connectivity by running the purearray test phonehome command in the CLI. If port 443 is blocked, the test will fail with a connection timeout.
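Independent of the array's built-in test, an admin can probe outbound reachability of port 443 from a management host with a generic TCP check such as the following Python sketch (this is a plain socket probe, not a Pure Storage tool):

```python
# Generic TCP reachability probe an admin might run from a jump host to
# confirm that outbound 443 is open (not a Pure Storage utility).
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_connect("cloud-connect.purestorage.com", 443)` run from the management network should return True once the firewall rule is in place.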
RFC2307 enables cross-protocol support for which two protocols?
NFS, S3
S3, SMB
NFS, SMB
Understanding RFC2307: RFC2307 is an extension to the LDAP (Lightweight Directory Access Protocol) schema that allows for the storage of Unix-style information (POSIX attributes) within a directory service, most commonly Microsoft Active Directory. These attributes include things like uidNumber (User ID), gidNumber (Group ID), and login shells.
The Cross-Protocol Challenge: In a Unified Storage environment where the same data needs to be accessed by both Windows clients (using the SMB protocol) and Linux/Unix clients (using the NFS protocol), the storage array must be able to map a Windows Security Identifier (SID) to a Unix UID/GID.
How Pure Uses It: When an administrator enables RFC2307 support in Purity//FA File Services, the FlashArray can query Active Directory to retrieve these POSIX attributes. This creates a 1:1 mapping between the Windows user and the Unix identity.
The Benefit: This mapping ensures that a user can create a file via SMB and another user (or the same user on a different system) can access or modify that same file via NFS while maintaining consistent permission enforcement and ownership records. Without this (or a similar mapping service like NIS or local files), cross-protocol access often results in permission "mapping" errors or files being owned by "nobody."
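A toy Python sketch of the SID-to-UID lookup that RFC2307 attributes enable (the SIDs and uidNumbers here are invented; a real array queries Active Directory rather than a local dictionary):

```python
# Toy RFC2307-style identity mapping (illustrative; SIDs/UIDs are made up).

AD_POSIX_ATTRS = {
    # Windows SID            -> uidNumber published in Active Directory
    "S-1-5-21-111-222-1001": 10001,   # alice
    "S-1-5-21-111-222-1002": 10002,   # bob
}

NOBODY_UID = 65534   # classic fallback when no mapping exists

def uid_for_sid(sid):
    """Map a Windows SID to a Unix UID, falling back to 'nobody'."""
    return AD_POSIX_ATTRS.get(sid, NOBODY_UID)
```

A SID with a published uidNumber resolves to a consistent Unix identity; an unmapped SID falls back to "nobody," which is exactly the cross-protocol ownership problem RFC2307 support avoids.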
A customer is managing a VMware environment and has recently copied a snapshot from a Pure Storage array to a new volume. When attempting to mount this volume on a VMware virtual machine, the operation fails.
How should this be resolved?
Resignature the volume.
Reformat the volume.
Adjust the VMware security settings.
VMFS UUID Conflict: When you create a volume from a snapshot on a FlashArray, the data is a block-for-block copy of the original. This includes the VMFS metadata and the unique identifier (UUID) of the datastore.
ESXi Protection Mechanism: When an ESXi host sees a volume that contains a VMFS signature identical to one already mounted (or one it has seen before but on a different device ID), it flags the volume as a "snapshot." To prevent data corruption or VM identity conflicts, VMware will not automatically mount this "duplicate" volume.
The Resignaturing Process: By choosing to Resignature the volume (via the "Mount Datastore" wizard in vCenter or the ESXi CLI), VMware assigns a new, unique UUID to the VMFS volume. This allows the host to treat it as a distinct, independent datastore.
Why Option B is incorrect: Reformatting the volume would indeed allow it to be mounted, but it would destroy all the data you just copied from the snapshot, defeating the purpose of the operation.
Why Option C is incorrect: While there are advanced settings in VMware (like LVM.enableResignature), they don't solve the issue of the operation "failing" during a standard mount attempt; they simply change how the host behaves. Resignaturing is the standard, safe, and recommended workflow for mounting snapshot copies in a VMware environment.
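The duplicate-UUID protection and the resignature fix can be modeled in a few lines of Python (a deliberately simplified sketch; ESXi's real VMFS handling is far more involved, and the names here are invented):

```python
# Toy model of ESXi's duplicate-VMFS-UUID protection (illustrative only).
import uuid

class Host:
    def __init__(self):
        self.mounted = {}   # datastore name -> VMFS UUID

    def mount(self, name, vmfs_uuid, resignature=False):
        if vmfs_uuid in self.mounted.values():
            if not resignature:
                raise ValueError("duplicate VMFS UUID: refusing to mount")
            vmfs_uuid = str(uuid.uuid4())   # assign a new, unique signature
        self.mounted[name] = vmfs_uuid
        return vmfs_uuid

esx = Host()
orig_sig = esx.mount("datastore1", "vmfs-uuid-original")
```

Mounting the snapshot copy with the same signature fails, while mounting it with `resignature=True` succeeds under a fresh UUID, mirroring the vCenter workflow.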
How would a FlashArray administrator view external latency for write requests for a specific volume?
Analysis; Performance; Volumes; Select the appropriate volume; Select "Write" and Deselect "Read" and "Mirrored Write"
Health; Network; Select the appropriate protocol; Select the appropriate port
Storage; Volumes; Select the appropriate volume; Details
The Analysis Tab: In the Pure Storage FlashArray GUI, the Analysis tab is the primary location for deep-dive performance troubleshooting and historical data visualization. While the Storage tab provides a real-time "at-a-glance" view of a volume, the Analysis tab allows for granular filtering of specific metrics.
Granular Metric Filtering: When troubleshooting latency, it is critical to distinguish between Read and Write operations, as they interact with the Purity operating environment differently (e.g., writes hitting NVRAM vs. reads hitting the Flash modules).
External vs. Internal Latency: Pure Storage differentiates between "Array Latency" (internal processing) and "External Latency" (the time seen by the host). By navigating to Analysis > Performance, an administrator can drill down into the Volumes sub-tab.
Selecting the Volume and Operations: Once a specific volume is selected, the chart typically defaults to a combined view. To isolate "external latency for write requests," the administrator must use the legend/filters to select "Write" while deselecting "Read" and "Mirrored Write" (which refers to synchronous replication traffic in ActiveCluster environments). This provides a clean graph of the round-trip write latency specifically for that volume's host I/O.
Why other options are incorrect: Option B refers to physical port health and hardware status, not volume-level performance. Option C provides basic volume metadata and real-time total latency, but lacks the granular historical filtering (selecting/deselecting specific I/O types) required for detailed performance analysis.
A FlashArray//XL is used for NVMe-RoCE services. The array has been lightly loaded and has performed as expected. A new workload has been added to the array, which is within the array's performance envelope. The change has resulted in extreme latency and service outages for all workloads utilizing NVMe-RoCE.
Which misconfiguration is this a symptom of?
NVMe-RoCE is not supported by the new workload.
Priority Flow Control (PFC) is not configured properly.
The ports are configured with NVMe-RoCE and Replication concurrently.
Requirement for Lossless Ethernet: NVMe over RoCE (RDMA over Converged Ethernet) requires a lossless fabric to function correctly. Unlike standard iSCSI, which uses TCP for error recovery, RoCE assumes the network will not drop packets. If the network is "lossy," performance degrades significantly.
The Role of PFC: Priority Flow Control (PFC) (IEEE 802.1Qbb) is the specific mechanism used in Data Center Bridging (DCB) to provide flow control on a per-priority basis. It allows the switch to send a "pause" frame to the sender when buffers are full, preventing packet drops.
Symptom Analysis: In the scenario provided, the array itself is not overloaded ("within the performance envelope"). However, the addition of a new workload increased traffic to the point where buffer congestion occurred. Because PFC was likely misconfigured (either on the FlashArray ports, the network switches, or the host NICs), the network dropped packets instead of pausing traffic. This leads to "go-back-N" retransmissions and massive latency spikes that affect all workloads sharing that fabric.
Pure Storage Best Practices: Pure Storage documentation for NVMe-RoCE emphasizes that PFC must be enabled and consistent across the entire path. If there is a mismatch in PFC configuration, the resulting packet loss will cause the symptoms described: extreme latency and potential service outages.
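The failure mode can be illustrated with a toy Python model of a switch buffer (the frame counts, buffer sizes, and arrival rates are invented; real RoCE uses go-back-N retransmission, which amplifies the damage of every drop well beyond this sketch):

```python
# Toy lossless-vs-lossy fabric model for the PFC failure mode above
# (illustrative only).

def run_fabric(frames, buf_cap, drain, pfc_enabled, arrive=2):
    """Offer `arrive` frames per tick into a switch buffer that drains
    `drain` frames per tick. With PFC the sender pauses when the buffer
    is full; without PFC the excess frames are simply dropped."""
    pending, buf, drops = frames, 0, 0
    while pending > 0 or buf > 0:
        for _ in range(arrive):
            if pending == 0:
                break
            if buf < buf_cap:
                buf += 1
                pending -= 1
            elif pfc_enabled:
                break            # pause frame: sender stops offering
            else:
                drops += 1       # lossy fabric: frame lost in flight
                pending -= 1
        buf = max(0, buf - drain)
    return drops
```

With identical offered load, the paused sender loses nothing while the lossy fabric drops frames as soon as the buffer congests, which is exactly when retransmission storms and latency spikes begin.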
What is a potential indicator of incorrect configuration on a Pure Storage FlashArray?
Blinking status LED on the controller
CRC errors on a single port
Unusual spikes in latency or IOPS
Understanding CRC Errors: CRC (Cyclic Redundancy Check) errors occur when the data received at a port does not match the checksum sent by the initiator or switch. In a Pure Storage environment, these are tracked per port and can be viewed via the CLI (purehw list) or the GUI.
Configuration vs. Hardware: While a failing SFP or a damaged fiber cable can cause CRC errors, they are a primary indicator of configuration mismatches in the SAN fabric. Common culprits include:
Port Speed Mismatches: Manually setting a port to 16Gbps when the switch is set to "Auto" or 8Gbps.
Duplex Mismatches: Though rare in modern Fibre Channel, it is a classic Ethernet/iSCSI configuration error.
MTU Mismatches: In iSCSI or NVMe-oF environments, if the FlashArray is configured for Jumbo Frames (MTU 9000) but the switch or host is at MTU 1500, packet fragmentation or CRC-like errors/drops will occur.
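The MTU check in particular is easy to automate; a minimal Python sketch (the hop names and MTU values are invented for the example):

```python
# Toy end-to-end MTU consistency check (illustrative; values are examples).

def mtu_mismatches(path_mtus):
    """Given the MTU per hop, return hops that disagree with the array."""
    array_mtu = path_mtus["array"]
    return [hop for hop, mtu in path_mtus.items() if mtu != array_mtu]

path = {"array": 9000, "switch": 1500, "host": 9000}
bad_hops = mtu_mismatches(path)
```

Here the array and host are set for Jumbo Frames but the switch is still at 1500, the classic mid-path mismatch that produces fragmentation and drops.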
Why Option A is incorrect: A blinking status LED on a controller is often part of normal operation (indicating heartbeat or activity). A solid amber LED would be an indicator of a hardware failure, not necessarily a misconfiguration.
Why Option C is incorrect: While latency spikes can be caused by misconfiguration (like incorrect MPIO settings), they are more commonly symptoms of workload changes, "noisy neighbors," or reaching the physical performance limits of the array. CRC errors are a much more specific diagnostic "smoking gun" for port and fabric configuration issues.
Best Practice: When CRC errors are detected, Pure Storage recommends first checking the physical layer (reseating SFPs/cables) and then verifying that the port speed and protocol settings on the FlashArray match the upstream switch configuration exactly.
A storage administrator has presented VMFS datastores from a FlashArray with 10TB of raw capacity.
Why would the administrator see system space when logging in to the FlashArray GUI?
Virtual machines have not yet issued an unmap command.
There is more than 2TB of reclaimable space on the FlashArray.
More than 2TB of volume snapshots were destroyed.
On a Pure Storage FlashArray, "System Space" is a specific GUI-reported metric. Purity has a predefined, hidden internal space budget—typically around 20% of the raw mapped capacity (which would be 2TB on a 10TB array)—reserved for internal array operations. This budget covers RAID/parity overhead, metadata, and reclaimable space (data from deleted volumes, snapshots, or overwritten blocks that are waiting for the backend garbage collection process to fully erase them from the flash chips).
Normally, this internal overhead stays below the 20% budget, and "System Space" displays as 0.00 in the GUI. However, if an administrator deletes a massive amount of data at once, causing the reclaimable space to exceed that 2TB budget, the overflow is prominently displayed in the GUI as "System Space."
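A simplified Python sketch of this accounting, using the 20% budget and the 10TB example from the text (an illustrative model only, not Purity's actual space math):

```python
# Toy version of the System Space accounting described above (the 20%
# figure and TB units follow the text; this is not Purity's real math).

def system_space_tb(raw_tb, reclaimable_tb, budget_fraction=0.20):
    """System Space shows only the overflow above the hidden budget."""
    budget = raw_tb * budget_fraction
    return max(0.0, reclaimable_tb - budget)
```

On the 10TB example, 1.5TB of reclaimable space stays inside the 2TB budget and the GUI shows 0.00, while 3TB of reclaimable space surfaces 1TB of System Space.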
Here is why the other options are incorrect:
Virtual machines have not yet issued an unmap command (A): If a VMware VM deletes a file but the OS hasn't issued an UNMAP/TRIM command, the FlashArray is completely unaware that the data was deleted. Therefore, the array continues to report that capacity as standard Volume Space, not System Space.
More than 2TB of volume snapshots were destroyed (C): While destroying snapshots leads to reclaimable space, "reclaimable space" (Option B) is the specific, correct Purity architectural term and metric that the system uses to calculate the internal budget threshold.
What is the best practice for configuring VMFS UNMAP for ESXi 6.7 or later?
Set it to Fixed at 500MB/s.
Set it to Auto with High Priority.
Set it to Auto with Low Priority.
What is UNMAP?: UNMAP (SCSI command 0x42) is the mechanism that allows a host (like ESXi) to inform the storage array that specific blocks of data are no longer in use (e.g., after a VM is deleted or moved). This is critical for Pure Storage because it allows the array to reclaim that space and maintain high data reduction ratios.
Evolution in ESXi: In versions prior to 6.5, UNMAP was a manual process executed via the CLI. Starting with ESXi 6.5 on VMFS-6, VMware introduced Automatic Space Reclamation, which runs in the background; ESXi 6.7 added configurable reclamation rates.
The Pure Storage Recommendation: Pure Storage recommends setting the reclamation priority to Auto with Low Priority.
Low Priority: This ensures that the UNMAP commands are sent to the FlashArray at a steady, manageable rate (roughly up to 25 MB/s to 100 MB/s depending on the Purity version). Because FlashArrays are built on a high-performance metadata engine, "Low Priority" is more than sufficient to keep up with even high-churn environments without causing any contention for active application I/O.
Why avoid High Priority (Option B)?: Setting it to high priority or using a fixed high-burst rate can lead to "bursty" SCSI traffic. While the FlashArray can handle the load, it is considered a best practice to keep background maintenance tasks like space reclamation at a lower priority to ensure the "Big Three" (latency, bandwidth, IOPS) for production workloads remain optimized.
Verification: You can verify that UNMAP is working by looking at the Data Reduction metrics in the Purity GUI or Pure1. If the "Thin Provisioning" or "Reclaimed" numbers are increasing after file deletions, the host is correctly communicating its freed space to the array.
What should an administrator configure when setting up device-level access control in an NVMe/TCP network?
VLANs
NQN
LACP
In any NVMe-based storage fabric (including NVMe/TCP, NVMe/FC, and NVMe/RoCE), the standard method for identifying endpoints and enforcing device-level access control is the NQN (NVMe Qualified Name).
The NQN serves the exact same purpose in the NVMe protocol as an IQN (iSCSI Qualified Name) does in an iSCSI environment, or a WWPN (World Wide Port Name) does in a Fibre Channel environment. It is a unique identifier assigned to both the host (initiator) and the storage array (target subsystem). When setting up access control on a Pure Storage FlashArray, the storage administrator must capture the Host NQN from the operating system and configure a Host object on the array with that specific NQN. This ensures that only the authorized host can discover, connect to, and access its provisioned NVMe namespaces (volumes).
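A toy Python sketch of NQN-based access control (the NQN strings, host names, and the simple format check are invented for the example; this is not how Purity implements it internally):

```python
# Toy NQN-based access check (illustrative; NQNs here are invented).
import re

# NQNs begin with "nqn.", a yyyy-mm date, and a reverse domain,
# e.g. nqn.2014-08.org.nvmexpress:uuid:...
NQN_RE = re.compile(r"^nqn\.\d{4}-\d{2}\.")

ALLOWED_NQNS = {
    "oracle-host1": "nqn.2014-08.com.example:host:oracle-host1",
}

def may_connect(host_name, presented_nqn):
    """Grant access only to a well-formed NQN registered for the host."""
    if not NQN_RE.match(presented_nqn):
        return False
    return ALLOWED_NQNS.get(host_name) == presented_nqn
```

Only the NQN captured from the host OS and registered on the array's Host object is allowed through; any other initiator, well-formed or not, is rejected.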
Here is why the other options are incorrect:
VLANs (A): Virtual LANs are used for network-level isolation and segmentation at Layer 2 of the OSI model. While you might use a VLAN to separate your storage traffic from your management traffic, it is a network security measure, not a device-level access control mechanism for the storage protocol itself.
LACP (C): Link Aggregation Control Protocol (LACP) is a network protocol used to bundle multiple physical network links into a single logical link for redundancy and increased bandwidth. It has nothing to do with storage access control or mapping volumes to hosts.
TESTED 05 Apr 2026
Copyright © 2014-2026 DumpsTool. All Rights Reserved