
Mass NAS: Typical data organization and data recovery

The modern computer market offers an ample supply of mass-market NAS storages which, depending on the vendor, differ slightly in firmware, settings, actual data layout and other features. In principle, however, most widely-used NAS storages share the same data organization: a NAS consists of one to several disks, usually organized into a complex RAID system.

Major recovery techniques for NAS systems are based on the principles of data recovery from complex RAID systems. This article provides helpful information about mass-market NAS storages using the examples of Buffalo TeraStation, Iomega, Synology and other similar solutions.

Data organization

Primarily, NAS devices serve as shared storages providing access to data over a local network. In general, most NAS devices share a common storage structure and data organization. The actual data layout, however, depends on the NAS vendor and the embedded configuration.


Storage structure

Each NAS disk arranges data in four partitions:

  • Firmware-reserved partition. This partition contains technical information for the firmware. On a 1TB TeraStation, for example, it is 0.6GB in size, identified as 'Linux native' and formatted with the SGI XFS file system. It is available on the 1st and 2nd NAS disks only.

  • Swap partition. Contains the swap space for the NAS firmware.

  • Data partition. This partition stores user data. On a 1TB TeraStation, for example, it is a 232GB partition identified as 'Linux native'. The actual size depends on the NAS settings.

  • Padding partition. This partition is used to unify the Data partition size across different disks; its size depends on the disk model. On a 1TB TeraStation it is identified as 'Linux native' but contains no file system.

The disk partitioning style is the standard DOS style (MBR-based) and is readable by any software.
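
To illustrate, the layout can be inspected with any tool that decodes MBR partition tables. The following minimal Python sketch is a hypothetical example: it assumes you have already saved the first NAS disk as a raw image named 'disk1.img' and simply decodes the four 16-byte entries of the partition table:

    import struct

    SECTOR = 512
    IMAGE = "disk1.img"  # hypothetical raw image of the first NAS disk

    with open(IMAGE, "rb") as f:
        mbr = f.read(SECTOR)

    # A valid DOS-style MBR ends with the 0x55AA boot signature.
    assert mbr[510:512] == b"\x55\xaa", "not a DOS-style (MBR) disk"

    # Four 16-byte partition entries start at offset 446.
    for i in range(4):
        entry = mbr[446 + i * 16: 446 + (i + 1) * 16]
        ptype = entry[4]                              # partition type byte
        lba_start, sectors = struct.unpack_from("<II", entry, 8)
        if sectors == 0:
            continue                                  # unused slot
        size_gb = sectors * SECTOR / 1024 ** 3
        # Type 0x83 is 'Linux native', 0x82 is 'Linux swap'.
        print(f"Partition {i + 1}: type 0x{ptype:02X}, "
              f"start LBA {lba_start}, size {size_gb:.1f} GB")

On a NAS disk laid out as described above, such a listing should show the firmware-reserved, swap, Data and padding partitions in order.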



RAID configuration and data organization

Depending on the configuration, the RAID layer offers several possible data organization methods for the Data partitions:

  • RAID5. The most widely-used configuration. In RAID5 mode user data are distributed across the Data partitions of all four disks. The usual parity distribution is backward-dynamic (left-symmetric); a stripe-mapping sketch is given after this list. The stripe size depends on the settings (usually 64KB). The disk order in the RAID follows the physical order: the 1st disk of the NAS is the 1st disk of the RAID, etc. The Data partition on TeraStation, for example, is formatted as SGI XFS; on Synology, as Ext3.

  • RAID0. User data are usually arranged either as a single full-capacity storage or as a pair of RAID0 stripe sets with two independent partitions (different 'share' virtual folders on the NAS). Both contain the same file system type but different data.

  • RAID10 or RAID0+1. A mirror of two RAID0 stripe sets, or a stripe set of two mirrors. User data are arranged in the same way as in RAID0, but with only one 'share', and both stripe sets contain the same information.

  • JBOD. Data partitions are concatenated to yield the maximum storage capacity; user data span all Data partitions.

  • Individual drives. If the NAS drives are not organized into a RAID, each Data partition carries an independent file system.
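
For readers who want to see what 'backward-dynamic (left-symmetric)' means in practice, here is a minimal Python sketch of the stripe-to-disk mapping. It assumes four member disks and a 64KB stripe, matching the typical settings described above; it is an illustration of the layout only, not part of any recovery tool:

    DISKS = 4                 # four NAS disks, in their physical order
    STRIPE = 64 * 1024        # typical 64KB stripe size

    def locate(logical_stripe):
        """Map a logical stripe number to (data disk, parity disk, row)
        in a left-symmetric (backward-dynamic) RAID5 layout."""
        row = logical_stripe // (DISKS - 1)          # stripe row on each disk
        # Parity rotates backward: row 0 -> last disk, row 1 -> next-to-last...
        parity_disk = (DISKS - 1) - (row % DISKS)
        # Data continues on the disk right after parity, wrapping around.
        data_disk = (parity_disk + 1 + logical_stripe % (DISKS - 1)) % DISKS
        return data_disk, parity_disk, row

    for s in range(8):
        d, p, r = locate(s)
        print(f"logical stripe {s}: data on disk {d}, parity on disk {p}, "
              f"offset {r * STRIPE} bytes into each Data partition")

Running it shows parity on the last disk for the first row, on the next-to-last disk for the second row, and so on, which is the rotation pattern a recovery tool must match when reassembling such an array.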


Before you start data recovery from your NAS, you should identify the actual configuration of the storage. For more information about RAID systems, please refer to the 'RAID: structure and recovery' article.




When is recovery required?

Due to their evident advantages, NAS storages have become an essential part of everyday work for home users and SMBs. NAS vendors now offer quite cost-efficient solutions, which has increased their availability on the market. Despite the enhanced reliability of modern storages, they are still exposed to failures resulting in storage inaccessibility or even data loss.
The most common data loss causes include:

  • Loss of NAS link;
  • Offline array or 'four red lights';
  • Data corruption due to power outages;
  • Firmware crash or failed boot;
  • Disk(s) failure;
  • Controller failure;
  • Electrical or mechanical damages.

User errors causing data loss include:

  • A faulty firmware update resulting in a reset of the embedded RAID settings;
  • File deletion;
  • Rebuilding the embedded RAID configuration on live data, resulting in disk re-formatting.

If you are sure that the NAS disks didn't sustain any physical damage and remain operable, you may start data recovery following the instructions given below. If the disks have any physical defects caused by mechanical, thermal or electrical damage, it's strongly recommended to have your data recovered in a specialized data recovery laboratory.

For efficient recovery from NAS storages, SysDev Laboratories advises its UFS Explorer software. UFS Explorer RAID Recovery was specially designed to work with complex RAID systems, while UFS Explorer Professional Recovery offers a professional approach to the data recovery process. Other UFS Explorer products work with RAID systems via plug-in modules. All these products apply powerful mechanisms that allow you to achieve the maximum possible recovery result, and they are fully reliable, guaranteeing the complete safety of the data stored on your NAS. For more detailed information, please go to http://www.ufsexplorer.com/products.php.




Getting started

As NAS devices don't provide low-level access to data, you have to disassemble the storage and connect its hard drives to a recovery computer before you start data recovery. To do this:

Mark the order of NAS disks!
  • Remove the hard disk drives from the NAS;
  • Identify the interface type of the drives: modern NAS devices use SATA drives; older storages may still use PATA/IDE drives;
  • Connect the disks to a personal computer.

If the recovery computer doesn't provide a sufficient number of disk interfaces, you can:

  • Install an additional PCI hard disk adapter;
  • Use USB hard disk adapters;
  • Attach the disks one by one and make full disk images, as sketched below. This solution is recommended, provided that you have enough free disk space.
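
On Linux, a full image can be taken with a command like 'dd if=/dev/sdb of=nas_disk1.img bs=4M'. As a cross-platform alternative, the following minimal Python sketch performs the same raw copy; the device path and image name are hypothetical examples, and administrative privileges are required to read a raw device:

    import sys

    CHUNK = 4 * 1024 * 1024   # copy in 4MB chunks

    def image_disk(device, image_path):
        """Copy a raw device (e.g. /dev/sdb) into an image file, byte for byte."""
        copied = 0
        with open(device, "rb") as src, open(image_path, "wb") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:             # end of device reached
                    break
                dst.write(chunk)
                copied += len(chunk)
        print(f"{copied / 1024 ** 3:.1f} GB copied from {device}")

    if __name__ == "__main__":
        # Usage example: python image_disk.py /dev/sdb nas_disk1.img
        image_disk(sys.argv[1], sys.argv[2])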

Warning: Turn off the computer and unplug the power cable before you install any PCI device or connect/disconnect SATA/PATA drives to avoid electrical damage!

After you have ensured access to the NAS data, you can start the data recovery process.




Data recovery

Among all data recovery software, we advise UFS Explorer RAID Recovery for your NAS, as it is specially designed to work with complex RAID configurations.

The entire data recovery process with UFS Explorer RAID Recovery requires a few simple steps:

  • Get low-level access to the NAS disks: connect them to a recovery computer or save disk images as described above. To get direct access, log in with the rights of the local system Administrator;

  • Identify the type of RAID configuration: refer to 'RAID recovery' in the user manual for instructions;

  • Open the disks: refer to 'Operations' in the user manual;

  • If the RAID doesn't require reconstruction, start data recovery from the Data partition of the NAS drive (the Data partition is usually the largest one): refer to 'Operations: Lost files recovery' in the user manual;

  • If the RAID requires reconstruction, you have to build a virtual RAID storage before you start data recovery: refer to 'RAID recovery: Building RAID' in the user manual for instructions. The RAID reconstruction result appears as a 'Virtual RAID' in the list of storages; refer to 'Operations: Storages tree navigation' in the user manual for more information. After the virtual RAID reconstruction is completed, you may start data recovery as described in 'Operations: Lost files recovery' of the user manual. A sketch of the parity principle behind such reconstruction follows this list;

  • On completion of all required operations, copy your files to a safe location.
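
To illustrate the principle behind RAID5 reconstruction: XORing all surviving members (data and parity alike) restores a lost one. The minimal Python sketch below rebuilds the image of one completely missing member from the other three; the file names are hypothetical, the three images are assumed to be of equal size, and UFS Explorer performs this kind of reconstruction for you, so the sketch is for understanding only:

    from functools import reduce

    CHUNK = 1024 * 1024                     # process 1MB at a time

    # Hypothetical images of the three surviving Data partitions.
    SURVIVORS = ["disk1_data.img", "disk2_data.img", "disk4_data.img"]
    REBUILT = "disk3_data.img"              # output: the missing member

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together, byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

    sources = [open(p, "rb") for p in SURVIVORS]
    try:
        with open(REBUILT, "wb") as out:
            while True:
                blocks = [s.read(CHUNK) for s in sources]
                if not blocks[0]:           # all images end at the same offset
                    break
                out.write(xor_blocks(blocks))
    finally:
        for s in sources:
            s.close()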



Final notes

In case of any physical damage, it's strongly recommended to bring your NAS to a specialized data recovery laboratory in order to avoid data loss.

If you feel unsure about conducting data recovery from your NAS by yourself, or are not confident about the RAID configuration of your NAS, feel free to use the professional services provided by SysDev Laboratories.

For data recovery professionals, SysDev Laboratories offers expert NAS storage analysis on a commercial basis.




Last update: 20.10.2016