How to: Identify drive order in an XFS NAS

Generally, NAS storage devices like the Buffalo Terastation, Iomega StorCenter and Synology rely on software RAID configurations built on the data partitions – the largest partitions – of each drive. These NASes use the XFS file system distributed across the data partitions.

To assemble the RAID configuration successfully for further data recovery, you need to know the correct order of the RAID disks in the NAS.

The article below explains how to identify the order of drives in a four-disk XFS-based NAS such as the Buffalo Terastation, Iomega StorCenter, Synology and similar NAS models.




Ways and means

Before you start recovering data from your XFS NAS and, if necessary, reconstructing its embedded RAID, you should know the RAID parameters and the order of the RAID drives.

The best way to identify the drive order is to analyze the RAID drive content using known data fragments at the start of each data partition. CI Hex Viewer offers effective means and techniques for such content analysis. At the same time, some powerful data recovery applications offer an easier way to identify RAID parameters – automatic RAID detection.

As NAS storages don't provide direct logical access to their file systems, and XFS NASes are no exception, you should begin by disassembling the storage and connecting its drives to a recovery PC. Please read HOW TO: Connect IDE/SATA drive to a recovery PC for instructions.




Automatic RAID detection

XFS-based NASes usually apply Multiple Devices (MD) software RAID configurations. Such RAID configurations are created with the well-known mdadm utility and can describe linear (JBOD), multipath, RAID0 (stripe), RAID1 (mirror), RAID5 and RAID6 layouts. The utility writes metadata to each member partition that is sufficient to assemble the RAID automatically.
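This MD metadata can also be inspected by hand before any recovery software is run. Below is a minimal sketch, assuming version 1.x metadata located 4 KiB into the member partition (the v1.2 default; older firmware often writes version 0.90 metadata near the end of the partition instead, where this check will simply find nothing). The field offsets follow the public mdp_superblock_1 layout from the Linux kernel headers; the function name is ours.

```python
import struct

MD_MAGIC = 0xA92B4EFC  # magic of an mdadm (MD) superblock, little-endian on disk

def read_md_v1_superblock(path, sb_offset=4096):
    """Try to read MD v1.2 RAID metadata from a data partition or its image.

    Returns a dict with the RAID level, member count and chunk (stripe)
    size, or None if no v1.x superblock is found at the expected offset.
    """
    with open(path, "rb") as f:
        f.seek(sb_offset)
        raw = f.read(256)
    if len(raw) < 96:
        return None
    magic, major = struct.unpack_from("<II", raw, 0)
    if magic != MD_MAGIC or major != 1:
        return None
    level, _layout = struct.unpack_from("<iI", raw, 72)
    chunk_sectors, raid_disks = struct.unpack_from("<II", raw, 88)
    return {
        "level": level,                            # e.g. 5 for RAID5
        "raid_disks": raid_disks,                  # number of RAID members
        "chunk_kib": chunk_sectors * 512 // 1024,  # stripe size in KiB
    }
```

If the dictionary reports, say, level 5 with four disks and a 64 KiB chunk, the automatic assembly described below should succeed without manual detection.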

SysDev Laboratories offers its UFS Explorer software as a set of powerful utilities that support automatic detection, reconstruction and data recovery from software RAID configurations. UFS Explorer RAID Recovery was specially developed to work with complex RAID systems. UFS Explorer Professional Recovery offers a professional approach to data recovery and features embedded tools for RAID reconstruction. Other UFS Explorer products work with RAID systems via plug-in modules. For more detailed information, please go to http://www.ufsexplorer.com/products.php.

We recommend UFS Explorer RAID Recovery for your NAS, as this software was specially created to work with RAID configurations.

To build RAID automatically with UFS Explorer RAID Recovery you should:

  • Run the software;

  • Make sure that all NAS drives (or drive image files) are opened;

  • Select ANY data partition of the software RAID to add it to the virtual RAID;

  • Once the partition is added and MD metadata is detected, the software will ask whether you want to assemble the RAID automatically;

  • Press 'Yes' to build the RAID automatically: the software will load the disk partitions in the correct order and with the correct RAID parameters;

  • Press 'Build' to add the RAID to UFS Explorer for further operations.

Note: If the RAID parameters of the NAS were later reset to a different RAID level, drive order or stripe size, the previous RAID configuration requires manual detection. Press 'No' in the software dialog to decline automatic RAID assembly and specify the RAID parameters manually.




Analyzing disk content

The most precise way to detect RAID parameters and identify the order of RAID drives is an in-depth analysis of the disk contents. CI Hex Viewer provides effective means for low-level data analysis. This software is distributed free of charge.

To prepare for content analysis you should carry out the following actions:

  1. Connect the drives to a recovery PC;
    Linux users: do not mount file systems from NAS drives!
    Mac users: avoid all diagnosis, repair and similar operations on disks using disk utilities!
  2. Boot the PC, install and run CI Hex Viewer software;
    Windows XP and below: run the software as Administrator;
    Windows Vista/7 with UAC: run the software as Administrator using the context menu;
    Mac OS: sign in as the system Administrator when the program starts;
    Linux: from the command line run 'sudo cihexview' or 'su root -c cihexview'.
  3. Click 'Open Disk Storage' (Ctrl+Shift+O) and open the data partition of each NAS drive.

Each NAS drive has the same partition structure: 1-3 small “system” partitions (with a total size of a few gigabytes) and a large data partition (usually over 95% of the total drive capacity). For further information about the partition layout please visit the following web-page.
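When working with whole-drive image files, the large data partition can also be located programmatically. A sketch assuming a classic MBR partition table (the function name is ours; drives partitioned with GPT would need a GPT parser instead):

```python
import struct

def largest_mbr_partition(path):
    """Find the largest partition in an MBR partition table.

    Returns (partition number, byte offset, size in sectors) of the
    biggest entry, which on these NASes is normally the data partition.
    """
    with open(path, "rb") as f:
        sector0 = f.read(512)
    if sector0[510:512] != b"\x55\xaa":
        raise ValueError("no MBR signature found")
    best = None
    for i in range(4):  # the MBR holds four 16-byte primary partition entries
        base = 0x1BE + 16 * i
        ptype = sector0[base + 4]                            # partition type byte
        lba_start, sectors = struct.unpack_from("<II", sector0, base + 8)
        if ptype and sectors and (best is None or sectors > best[2]):
            best = (i + 1, lba_start * 512, sectors)
    return best
```

The byte offset returned for the largest entry is where the data partition (and thus all the content analysis below) starts inside the drive image.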




RAID configuration and advanced detection of drive order

To start the disk content analysis, open the hexadecimal view of the data partition of each NAS drive in CI Hex Viewer.
Below is an example of content analysis for a default RAID5 configuration with a 64KB stripe size and the XFS file system.



Fig. 1. XFS file system start (superblock).


The start block (or superblock) of the XFS file system contains an “XFSB” string at the start, the values of file system parameters and many zero bytes. A valid superblock never contains non-zero data in the byte range 0x100-0x200. This property makes it easy to check superblock validity.
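This validity rule is easy to automate. A small sketch (function name ours) applying exactly the two checks above to the first 0x200 bytes of a data partition:

```python
def looks_like_xfs_superblock(block):
    """Check the two superblock properties described above: the "XFSB"
    magic at the start and only zero bytes in the 0x100-0x200 range."""
    return block[:4] == b"XFSB" and not any(block[0x100:0x200])
```

Running this on the first 512 bytes of each data partition should yield exactly one passing drive for RAID0 and RAID5 configurations, as explained below.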



Fig. 2. XFS I-nodes block.


In this XFS file system the I-nodes block lies at offset 64KB. In RAID0 and RAID5 layouts with the default 64KB stripe size, the I-nodes block is located at zero offset of the data partition of Drive 2.
I-nodes can be identified by the “IN” string (the “49 4E” byte sequence) at the start of each 256-byte (0x100) block. Each I-node describes a file system object.

The upper half-byte of the third byte defines the object type: a 4x byte indicates a directory and an 8x byte a file.
In Figure 2 the first I-node describes a directory and the second one a file.
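The same signature and type checks can be scripted. A sketch (function name ours) assuming the default 256-byte (0x100) I-node size:

```python
def scan_inodes(block, inode_size=0x100):
    """Classify XFS I-nodes in a block: the "IN" magic marks an I-node,
    and the upper half-byte of the third byte gives the object type
    (0x4 = directory, 0x8 = regular file)."""
    kinds = []
    for off in range(0, len(block) - inode_size + 1, inode_size):
        if block[off:off + 2] != b"IN":
            continue
        hi = block[off + 2] >> 4
        kinds.append("dir" if hi == 0x4 else "file" if hi == 0x8 else "other")
    return kinds
```

A start block whose first detected I-node is a directory is a candidate for the drive holding the root directory.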



Fig. 3. RAID5 parity block.


The parity block contains the XOR combination of the data blocks of the other drives. It may look like “trash” with visible fragments of data from those data blocks.

Even if the parity block contains a valid “XFSB” string, unlike the superblock it contains non-zero data in the 0x100-0x200 byte range, which distinguishes it from a real superblock. Please also note that a parity block usually contains many more non-zero bytes.

Now, using this known content and assuming that the start block is the first block of the data partition of the given drive, you can determine the RAID configuration:


RAID 5:
  • Only one first block will contain the superblock (Fig.1);

  • If the stripe size is 64KB (usual for the Terastation), one of the first blocks will contain I-nodes; the first I-node indicates a directory (the root directory). If the root directory contains only a few files, their names are stored in the I-node body (as in Fig. 2);

  • The start block of the third drive will contain data or an I-nodes table;

  • The start block of the fourth drive will contain parity (Fig. 3);

  • Applying the XOR operation to the bytes from the start blocks of all disks at the same byte position gives a zero result.

One can define the RAID5 configuration as a RAID with only one superblock among the start blocks, plus parity: the XOR of the bytes of all start blocks at the same byte position is zero.

The drive order is: the drive with the superblock is the first; the drive with the root directory is the second; the drive with parity is the fourth; the remaining drive is the third. The parity check procedure includes the following steps:

  1. Choosing a partition offset with non-zero data;

  2. Running a calculator (e.g. the standard Windows calculator);

  3. Choosing 'View' as 'Scientific' or 'Programming' and switching the mode from 'Dec' to 'Hex';

  4. Typing the hexadecimal value from the first drive and pressing the 'Xor' button;

  5. Typing the hexadecimal value from the next drive at exactly the same offset and pressing 'Xor' again;

  6. Repeating until the last drive: before you enter the value from the last drive, the calculator must show the same number as the one at that offset on the last disk, so the final 'Xor' gives zero.

A non-zero value for any of the offsets indicates either a calculation error or absence of parity.
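The same check can be run over whole blocks instead of single offsets. A sketch (function name ours) that XORs the start blocks of all members, exactly as the calculator procedure above does position by position:

```python
def parity_checks_out(blocks):
    """XOR the start blocks of all RAID members byte by byte; a zero
    result at every position indicates that one of the blocks is parity."""
    acc = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            acc[i] ^= byte
    return not any(acc)
```

Checking a few kilobytes this way is more reliable than checking a handful of offsets by hand, since a single matching position can occur by chance.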


RAID 0:
  • Only one first block contains the superblock (Fig.1);

  • If the stripe size is 64KB (usual for the Terastation), one of the first blocks will contain I-nodes; the first I-node must indicate a directory (the root directory). If the root directory contains files, their names are stored in the I-node body (as in Fig. 2);

  • The other first blocks contain neither superblocks nor parity;

  • Other drives may contain more I-nodes in the first block.

One can define the RAID0 configuration as a RAID with only one superblock among the start blocks and no parity.

The drive order is the following: the drive with the superblock is the first; the drive with the root directory is the second. The 3rd and 4th drives cannot always be identified immediately, but you can try both orders and see which one is correct.



RAID 10/0+1:
  • The first blocks of two drives contain a valid superblock (Fig.1);

  • The other two drives contain data in the start block and, for a 64KB stripe size, I-nodes.

One can define the RAID10/0+1 configuration as a RAID with two superblocks among the start blocks.

The drive order is as follows: a drive with a superblock is the first, a drive without a superblock (data or I-nodes) is the second. This configuration has two such pairs and either of them can be used for data recovery.



RAID 1 and multi-part storage:
  • The first blocks of all drives contain a valid superblock (Fig. 1).

One can define RAID1 and multi-part storage as a RAID with superblocks in all start blocks.

The drive order: any drive from a RAID1 gives all the data. In a multi-part storage, each drive holds a separate valid file system.

If the content analysis gives contradictory results and you are still unsure about the drive order, try all combinations and choose the one that matches.
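The decision rules above can be combined into one rough classifier. A sketch (function name ours) that counts valid superblocks among the start blocks and falls back on the parity test to separate RAID5 from RAID0; this is a heuristic only, and contradictory real-world data still calls for trying combinations by hand:

```python
def guess_raid_level(start_blocks):
    """Guess the RAID level from the start blocks of all data partitions,
    following the rules above: all superblocks -> RAID1/multi-part,
    two -> RAID10/0+1, one -> RAID5 (with parity) or RAID0 (without)."""
    def is_superblock(b):
        return bytes(b[:4]) == b"XFSB" and not any(b[0x100:0x200])

    def xor_is_zero(blocks):
        for i in range(len(blocks[0])):
            v = 0
            for b in blocks:
                v ^= b[i]
            if v:
                return False
        return True

    n_sb = sum(is_superblock(b) for b in start_blocks)
    if n_sb == len(start_blocks):
        return "RAID1 or multi-part storage"
    if n_sb == 2:
        return "RAID10/0+1"
    if n_sb == 1:
        return "RAID5" if xor_is_zero(start_blocks) else "RAID0"
    return "unknown"
```

Feeding the function the first 0x200 bytes of each data partition reproduces the classification tables above.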


Note: UFS Explorer software doesn't modify the data on the storage. You can try different RAID combinations until you get the appropriate one.





Final notes

In case of physical damage, it is strongly recommended to take your NAS to a specialized data recovery laboratory in order to avoid data loss.

If you feel unsure about recovering data from your NAS by yourself, or are not confident about the RAID configuration of your NAS, do not hesitate to turn to the professional services provided by SysDev Laboratories.

For data recovery professionals, SysDev Laboratories offers expert NAS storage analysis on a commercial basis.




Last update: 20.10.2016