
Article Number: 000018989


Isilon, PowerScale, OneFS - How to safely shut down an Isilon cluster prior to a scheduled power outage

Summary: This article provides the procedure for properly shutting down your Dell Isilon cluster and includes information about the risks associated with an improper cluster shutdown. Some steps should be run 4-8 weeks before the scheduled outage. ...

Article Content


Instructions

Introduction

This article provides the procedure for properly shutting down your Dell Isilon cluster and includes information about the risks associated with an improper cluster shutdown. 
 

CAUTION: Improperly shutting down the cluster may lead to data availability and integrity issues.


Nodes that are improperly shut down in the cluster should not be without system power for longer than the life of the NVRAM battery, which is approximately 3 to 5 days, depending on the type of node. If data is still stored in a node journal and a node is without system power for longer than the NVRAM battery life, data is lost and the cluster must be rebuilt.

Contact Dell Isilon Technical Support for assistance if you have questions about the procedures or information in this article.
 

Procedure

The cluster shutdown procedure requires root credentials and serial console access to nodes in the cluster. The procedure is divided into six phases.
 


Read the entire procedure before beginning the shutdown process. This ensures that you understand the context and order for completing each step.
 

CAUTION: If you are running a version of OneFS that reached its end of service life (EOSL), upgrade to a supported version of OneFS.
 

 

Phase 1: Perform preventative maintenance.

These steps are performed approximately 4-8 weeks before the scheduled shutdown. The purpose of this phase is to identify unknown or latent hardware or firmware issues that can impede the shutdown procedure.
 

CAUTION: Dell strongly advises that you follow all the steps in Phase 1 before shutting down your Isilon cluster.

 


If circumstances require an immediate cluster-wide shutdown, you can shut down all nodes simultaneously using the OneFS command-line interface or the OneFS web administration interface.

If you must perform an emergency shutdown, Dell strongly recommends that you follow all the steps in Phase 3 afterward to preserve data integrity.
 

1. Upload logs for historical reference if needed.

           

 # isi_gather_info

2. Perform or request an Isilon Health Check.

*The Health Check is not intended to fix cluster issues, or assess the cluster's configuration, performance, or workflow.
 

3. Perform a "cold reboot" of each node by performing the following steps. A maintenance window should be scheduled for this activity:

1. Shut down each node in your cluster one at a time.

 

NOTE: This process allows you to identify any memory errors or drive failure modes that are only detected when the node is powered back on.

To shut down each node:

NOTE: This process is disruptive to all connections, except NFSv3. Contact Isilon support for assistance with instructions on a longer process that does not disrupt client activity while the nodes are being rebooted for this maintenance test.

A. Open an SSH connection to any node. Shut down each node by running the following command:

isi config
shutdown <node_lnn>

B. Verify that each node has powered off by confirming that the green power indicator LED on the back of the node is no longer illuminated.

C. Press the power button to power the node back on.

D. Verify that the node has rejoined the cluster and is healthy by running the isi status -q command and looking for -OK- in the Health DASR column of the output.

E. If a node encounters issues indicated in the Health DASR column, or fails to rejoin the cluster, resolve these issues before shutting down the next node.

An example of an issue is shown below. Node 1 has rejoined the cluster successfully, but the Health DASR column indicates that it needs attention.

mycluster-1# isi status -q

Cluster Name: mycluster
Cluster Health:     [ ATTN]
Cluster Storage:  HDD                 SSD           
Size:             11G (23G Raw)       0 (0 Raw)     
VHS Size:         11G                
Used:             7.9G (69%)          0 (n/a)       
Avail:            3.5G (31%)          0 (n/a)       
                   Health  Throughput (bps)  HDD Storage      SSD Storage
ID |IP Address     |DASR |  In   Out  Total| Used / Size     |Used / Size
-------------------+-----+-----+-----+-----+-----------------+-----------------
  1|10.1.16.141    |-A-- |    0| 150K| 150K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  2|10.1.16.142    |-OK- |  98K|  13K| 112K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  3|10.1.16.143    |-OK- |    0|  44K|  44K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  4|10.1.16.144    |-OK- |    0|  512|  512| 2.0G/ 2.8G( 69%)|    (No SSDs)   
-------------------+-----+-----+-----+-----+-----------------+-----------------
Cluster Totals:          |  98K| 208K| 306K| 7.9G/  11G( 69%)|    (No SSDs)   
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only   
2. Double-check the health of your entire cluster after you have rebooted each node. Open an SSH connection to any node and run the isi status -q command. Verify that every node's Health DASR column reads -OK-.

3. Resolve any hardware issues uncovered by the reboot before proceeding to the next phase.
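The health verification above can also be scripted. A minimal sketch, assuming the `isi status -q` output format shown earlier (the file path is a hypothetical example; on the cluster, capture real output instead of the heredoc):

```shell
# Stand-in for:  isi status -q > /tmp/status.txt
cat > /tmp/status.txt <<'EOF'
  1|10.1.16.141    |-A-- |    0| 150K| 150K| 2.0G/ 2.8G( 69%)|    (No SSDs)
  2|10.1.16.142    |-OK- |  98K|  13K| 112K| 2.0G/ 2.8G( 69%)|    (No SSDs)
EOF
# Node rows start with an ID number followed by '|'; field 3 is the DASR flags.
awk -F'|' '/^ *[0-9]+\|/ && $3 !~ /-OK-/ {print "Node",$1,"needs attention:",$3}' /tmp/status.txt
```

Any line printed identifies a node to investigate before proceeding.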

 

NOTE: If time does not permit for a cold-reboot approach for each node, you can proactively uncover some latent hardware issues by instead performing a rolling reboot or "warm reboot" by running the following command for each node:

 

isi config
reboot <node_lnn>

However, Dell strongly recommends using the cold-reboot approach to more effectively identify latent hardware issues.

 

4. Schedule a maintenance window for a total cluster shutdown.

 

 Phase 2: Shut down each node in the cluster.

These steps are to be performed on the day that you shut down your Isilon cluster. During a cluster-wide shutdown, some factors may impact or delay the shutdown process. For example, outstanding data writes to a node might affect the shutdown. The purpose of steps 1-2 is to ensure that all clients are disconnected from the cluster and data is properly saved from node journals to the file system prior to running the shutdown command. If you have iSCSI clients, ensure you shut down clients before the iSCSI service is disabled.

Step 3 describes how to shut down each node in your cluster sequentially using a serial console. This method is recommended because it enables you to verify that each node is properly shut down before proceeding to the next node, and make adjustments or fix issues as needed to ensure a proper cluster shutdown. However, this method may be time-consuming because it requires connecting a serial console to each node to run the shutdown command. The section, "Shut down all nodes in your cluster simultaneously," describes how to use the OneFS command-line interface or the OneFS web administration interface to shut down your cluster. This method is less time-consuming than step 3, but makes it more challenging to identify nodes which encounter issues during the shutdown process.
 

1. Isilon recommends isolating the cluster from clients to ensure that write-heavy clients do not impede the shutdown procedure. You can do this by disabling the client-facing services running on your cluster. Perform the following procedure to disable client-facing services:

A. Identify the client-facing services or protocols that are running on your cluster by running the following commands for each client-facing service:

  • isi services apache2
  • isi services isi_hdfs_d
  • isi services isi_iscsi_d
  • isi services ndmpd
  • isi services nfs
  • isi services smb
  • isi services vsftpd

B. Document the services that are "enabled" on your cluster based on the output for each command. In the example below, the SMB service is enabled whereas the NFS service is disabled:

mycluster-4# isi services smb
Service 'smb' is enabled.
mycluster-4# isi services nfs
Service 'nfs' is disabled.
mycluster-4#
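Steps A and B can be combined into a small loop. A sketch, assuming the `Service '<name>' is enabled.` output format shown above; the stub `isi` function and the file path are hypothetical stand-ins so the loop can be tried off-cluster:

```shell
# Stub standing in for the real `isi` CLI; remove it when running on a node.
isi() { case "$2" in smb) echo "Service 'smb' is enabled.";; *) echo "Service '$2' is disabled.";; esac; }

# Record every enabled client-facing service so it can be re-enabled after the outage.
: > /tmp/enabled_services.txt
for svc in apache2 isi_hdfs_d isi_iscsi_d ndmpd nfs smb vsftpd; do
    if isi services "$svc" | grep -q "is enabled"; then
        echo "$svc" >> /tmp/enabled_services.txt
    fi
done
cat /tmp/enabled_services.txt
```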

C. Disable client-facing services. After this step, all clients immediately lose their connection to the cluster. To disable a service, run the command corresponding to each enabled service:

  • isi services apache2 disable
  • isi services isi_hdfs_d disable
  • isi services isi_iscsi_d disable
  • isi services ndmpd disable
  • isi services nfs disable
  • isi services smb disable
  • isi services vsftpd disable

If you have iSCSI clients, ensure that they have unmounted their LUNs prior to performing step 2. Run the isi iscsi list command to confirm that all iSCSI clients are disconnected from the cluster.

 

NOTE: If you are disabling the iSCSI service, make sure that you have shut down iSCSI clients before running the isi services isi_iscsi_d disable command. Disruption to a mounted iSCSI LUN could cause damage to the client, which typically requires recovery from backup.

 

2. Move data writes stored in node journals to the file system by running the isi_for_array isi_flush command. Output similar to the following appears on each node:

mycluster-4# isi_for_array isi_flush
mycluster-1: Flushing cache...
mycluster-1: Cache flushing complete.

 

NOTE: On a large cluster with a high number of outstanding writes, this step may take several minutes to complete.

If a node fails to flush its data, you receive output similar to the following, where node 1 and node 2 fail their flush commands:

 

mycluster-4# isi_for_array isi_flush
mycluster-1: Flushing cache...
vinvalbuf: flush failed, 1 clean and 0 dirty bufs remaining
mycluster-2: Flushing cache...
fsync: giving up on dirty

 

Run the isi_for_array isi_flush command again. If any node fails to flush, contact Dell Isilon Technical Support. All nodes must successfully flush before proceeding to the next step.
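The failure messages shown in the sample output can be checked for automatically. A sketch, assuming those message strings; the log path is a hypothetical example, and on the cluster you would capture real output instead of the heredoc:

```shell
# Stand-in for:  isi_for_array isi_flush > /tmp/flush.log 2>&1
cat > /tmp/flush.log <<'EOF'
mycluster-1: Flushing cache...
vinvalbuf: flush failed, 1 clean and 0 dirty bufs remaining
mycluster-2: Flushing cache...
mycluster-2: Cache flushing complete.
EOF
# Look for the failure messages shown in the sample output above.
if grep -Eq "flush failed|giving up on dirty" /tmp/flush.log; then
    echo "FLUSH FAILED: rerun isi_for_array isi_flush before proceeding"
else
    echo "All nodes flushed"
fi
```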

 

NOTE: If you remove a power source from a node that has not flushed data from its journal to the file system, the risk of data loss increases substantially. Contact Dell Isilon Technical Support if you need assistance with the shutdown procedure.

 

3. Shut down each node in the cluster sequentially and monitor the output. This approach is recommended because it enables you to identify and resolve any issues before shutting down the next node in the cluster. Shut down each node by performing the following steps:

 

CAUTION: Do not run the isi_for_array shutdown -p command to shut down your cluster.


Any node that panics or reboots at this step is a node that requires further investigation. In particular, all nodes must flush data from the node journal to the file system before proceeding.

 

WARNING: If you remove a power source from a node that has not flushed data from its journal to the file system, the risk of data loss increases substantially. Contact Dell Isilon Technical Support if you need assistance with the shutdown procedure.
 

 

A. Attach a serial console to each node.

B. Run the following command:

isi config
shutdown

 

When the node is successfully shut down, output similar to the following appears:

Powering the system off using ACPI

NOTE: If you do not have access to your nodes through a keyboard, video, mouse (KVM) switch and must use a laptop instead, this step may take hours to complete.
 

C. Watch the console and look for hardware-related failure events. Successful node journal saves are shown in the following output variations:

2014-03-22T00:35:19Z <1.5> mycluster-3(id11) isi_save_journal[44868]: Attempting to save journal to default location
2014-03-22T00:35:19Z <1.5> mycluster-3(id11) isi_save_journal[44868]: Saving journal to /var/journal/journal.gz
2014-03-22T00:35:19Z <1.5> mycluster-3(id11) isi_save_journal[44868]: All data saved successfully

2014-03-22T00:37:29Z <1.5> mycluster-3(id11) isi_save_journal[45074]: Attempting to save journal to default location
2014-03-22T00:37:29Z <1.5> mycluster-3(id11) isi_save_journal[45074]: A valid backup journal already exists. Not saving.

An example of a node journal save failure is highlighted in the output below:
2014-03-21T23:39:09Z <1.4> mycluster-3(id11) /sbin/shutdown: ERROR: Validation failed for backup journal. Shutdown aborted
2014-03-21T23:39:09Z <1.4> mycluster-3(id11) /sbin/shutdown: Failed command output:

 

If you receive an error that the node journal did not save, you can manually save the journal by performing the steps in Phase 3.
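If you capture the serial-console output to a file, the journal-save messages above can be checked with a quick scan. A sketch; the log path is a hypothetical example, and the heredoc stands in for a real captured console log:

```shell
# Stand-in for a captured serial-console log.
cat > /tmp/console.log <<'EOF'
2014-03-22T00:35:19Z <1.5> mycluster-3(id11) isi_save_journal[44868]: All data saved successfully
EOF
# Check for the success and failure messages shown in the sample output above.
if grep -Eq "All data saved successfully|A valid backup journal already exists" /tmp/console.log; then
    echo "journal saved"
elif grep -q "Validation failed for backup journal" /tmp/console.log; then
    echo "JOURNAL SAVE FAILED: save the journal manually before removing power"
fi
```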



Shut down all nodes in the cluster simultaneously


If there is an emergency, you can shut down all nodes in the cluster simultaneously. However, this method is not recommended because it does not enable you to monitor the status and output of each node if an issue occurs. If you choose to follow these steps, Dell strongly recommends following all the steps in Phase 3 to verify that all nodes have properly shut down after performing the procedures below.
 

NOTE: Any node that panics or reboots at this step is a node that requires further investigation. In particular, all nodes must flush data from the node journal to the file system before proceeding.
 
WARNING: If you remove a power source from a node that has not flushed data from its journal to the file system, the risk of data loss increases substantially. Contact Dell Isilon Technical Support if you need assistance with the shutdown procedure.
 


To shut down all nodes in your cluster, use the OneFS command-line interface or the OneFS web administration interface.


From the OneFS command-line interface

Run the following command:
# isi config shutdown all

 

NOTE: Do not run the isi_for_array shutdown -p command to shut down your cluster.
  

From the OneFS web administration interface

In OneFS 8.0 and later:

  1. Click Cluster Management > Hardware Configuration > Shutdown & Reboot Controls.

  2. Click Shut down, and then click Submit.

  3. Click Yes to confirm. A page appears stating that the cluster is now shutting down.

 

Phase 3: Verify that the nodes have successfully shut down.

Confirm that the nodes have properly shut down by looking at the power indicator light-emitting diode (LED) on the back of each node. All power indicator LEDs should be dark (OFF), indicating that the node has successfully shut down.
 

WARNING: If a node has not successfully shut down and you disconnect the power source to the node, the chance of data loss increases substantially. Recovering data requires a lengthy recovery procedure, and sometimes a complete cluster rebuild.
  

Contact Dell Isilon Technical Support if you have any doubts about the success of the shutdown operation, such as if the node does not shut down or the journal is not saved.
 

If the power indicator light on the back of the node is still illuminated, the node has not shut down. If the node has not shut down, or if you received console output indicating that the node journal did not save properly (from Phase 2, step 3C), you must manually save the journal to ensure that data is committed to disk before shutting down the node.


To manually save the journal and shut down the node, perform the following steps:
 

  1. Attach a serial console to the node. Determine if the node is responsive to the command-line interface.

A. If the node is responsive to the command-line interface, reboot the node by running the following command:

# isi config reboot

B. If the node is not responsive to the command-line interface, manually reboot the node by pressing and holding the power button on the back of the node. This causes the node to power off. Wait 30 seconds and then press the power button once to boot the node back up again. Go to the next step.
 

WARNING: Manually rebooting the node is advised for this step only. Do not manually shut down the node for any other condition. It can lead to data loss.
 

 

2. After rebooting the node, log back in and use the following steps to save the journal:

A. Attempt to gracefully shut down the node again by running the following command:

# isi config shutdown

B. If the output still indicates that the journal did not save, manually save the journal by running the following command:

# isi_save_journal

C. If the journal still does not save, unmount the /ifs file system and then force save the journal. Unmount /ifs by running the following command:

# isi_kill_busy && umount /ifs


Then force save the journal by running the isi_save_journal command again.

D. Verify that the journal is saved by running the isi_checkjournal command.

# isi_checkjournal


 

E. Do not go to the next step until output indicates that the journal is successfully saved.

Contact Dell Isilon Technical Support if needed.

 

 Phase 4: Disconnect the power source.

After your cluster has successfully shut down and the nodes are powered off, you can disconnect the power source from the cluster.
 

WARNING: If a node has not been successfully shut down, do not disconnect the node's power source. Doing so may result in data loss, a lengthy recovery procedure, and sometimes a complete cluster rebuild.


NVRAM batteries

When a client writes a file to a node, the writes are first stored in nonvolatile RAM (NVRAM) hosted on the node's journal card. Sometime later, OneFS commits those writes to disk. To protect the data stored in NVRAM if an unscheduled power outage, each node is equipped with NVRAM batteries (two for redundancy). A node that is powered off but remains connected to a power source continues to refresh its NVRAM batteries. When the power source is disconnected from the node, the NVRAM batteries start to drain. Battery life in the current generation of nodes (X200, S200, X400, and NL400) is approximately five days. In the previous generation of nodes, NVRAM battery life is approximately three days.

Dell Technologies recommends properly shutting down nodes to avoid relying on NVRAM batteries for a substantial length of time during a power outage.

NOTE: For more information about how Isilon uses NVRAM to preserve data integrity, see the "Structure of the file system" section in the OneFS web administration and CLI administration guides.

 

If the NVRAM batteries on a node drain completely, the node boots to read-only mode and stays in read-only mode for approximately 30 minutes until the NVRAM batteries fully charge. When the batteries are recharged, the node automatically returns to normal read/write mode.

WARNING: If data is still stored in NVRAM because of an improper shutdown, and a node is without system power for longer than the NVRAM battery life, you experience data loss, a lengthy recovery procedure, and sometimes a complete cluster rebuild.

 

Phase 5: Power on each node in the cluster

These steps are to be performed when you are ready to restart your Isilon cluster.

  1. Restore the power source to each node.

  2. Press the power button on the front panel or the back of each node to boot them.

  3. After all nodes have been powered on, run the isi status -q command to review the health of your cluster. Verify that all nodes are OK in the Health DASR column and are not in a read-only (R) mode before proceeding. For a healthy cluster, output similar to the following should appear:
Cluster Name: mycluster
Cluster Health:     [ OK ]
Cluster Storage:  HDD                 SSD           
Size:             11G (23G Raw)       0 (0 Raw)     
VHS Size:         11G                
Used:             7.9G (69%)          0 (n/a)       
Avail:            3.5G (31%)          0 (n/a)       
                   Health  Throughput (bps)  HDD Storage      SSD Storage
ID |IP Address     |DASR |  In   Out  Total| Used / Size     |Used / Size
-------------------+-----+-----+-----+-----+-----------------+-----------------
  1|10.1.16.141    |-OK- |    0| 150K| 150K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  2|10.1.16.142    |-OK- |  98K|  13K| 112K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  3|10.1.16.143    |-OK- |    0|  44K|  44K| 2.0G/ 2.8G( 69%)|    (No SSDs)   
  4|10.1.16.144    |-OK- |    0|  512|  512| 2.0G/ 2.8G( 69%)|    (No SSDs)   
-------------------+-----+-----+-----+-----+-----------------+-----------------
Cluster Totals:          |  98K| 208K| 306K| 7.9G/  11G( 69%)|    (No SSDs)   
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only   

 

4. See the list of enabled services that was created in Phase 2, Step 1b and enable the services that were disabled by running one or more of the following commands:
  • isi services apache2 enable
  • isi services isi_hdfs_d enable
  • isi services isi_iscsi_d enable
  • isi services ndmpd enable
  • isi services nfs enable
  • isi services smb enable
  • isi services vsftpd enable
5. Verify that your clients can connect to the cluster and perform their usual workflows. Your cluster should be functioning normally.
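If the enabled services from Phase 2, step 1B were recorded in a file, step 4 can be scripted. A sketch; the file path and the stub `isi` function are hypothetical stand-ins so the loop can be tried off-cluster (remove the stub on a real node):

```shell
# Stand-in for the Phase 2 record of enabled services (one name per line).
printf 'smb\nnfs\n' > /tmp/enabled_services.txt
# Stub standing in for the real `isi` CLI; remove it when running on a node.
isi() { echo "would run: isi $*"; }

# Re-enable only the services that were enabled before the outage.
while read -r svc; do
    isi services "$svc" enable
done < /tmp/enabled_services.txt
```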


Phase 6: POST CHECK - Run a Health Check on the cluster

  1. Upload a full log gather:

# isi_gather_info --esrs

  2. Perform or request an Isilon Health Check from the Remote Reactive (Customer Support) Team.

Steps to run health checks:
Isilon: How to Run the Isilon On-Cluster Analysis Tool or Dell Technologies PowerScale HealthCheck - PowerScale Info Hub

  3. Request a health check from the Remote Reactive Support Team.

This is available to all customers with an active maintenance agreement for clusters on supported code versions.
If you meet these requirements, open a Service Request (SR) on the Dell Online Support site requesting an "Isilon Health Check."


*The Health Check is not intended to fix cluster issues, or assess the cluster's configuration, performance, or workflow.

 

Additional Information

Article Properties


Affected Product

Isilon

Product

Isilon

Last Published Date

10 Apr 2024

Version

11

Article Type

How To