Tuesday, November 11, 2014

Standard RAID Levels

The standard RAID levels are a basic set of RAID configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from general-purpose computer hard disk drives. The most common types today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity) and RAID 6 (dual parity). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard.

#1  RAID 0


A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped), without parity information and with speed as the intended goal. RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones.

A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 240 GB (120 GB × 2).

 \begin{align} \mathrm{Size} & = 2 \cdot \min \left( 120\,\mathrm{GB}, 320\,\mathrm{GB} \right) \\
& = 2 \cdot 120\,\mathrm{GB} \\
& = 240\,\mathrm{GB} \end{align}

The diagram shows how the data is distributed in Ax stripes across the disks. Accessing the stripes in the order A1, A2, A3, ... provides the illusion of a larger and faster drive. The stripe size is defined when the array is created and must remain the same for the lifetime of the array.
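To make that mapping concrete, here is a minimal Python sketch of how a logical block address could be translated into a disk and stripe position in a RAID 0 array; the function name and layout are illustrative assumptions, not any particular controller's on-disk format.

# Illustrative sketch: map a logical block address to a (disk, stripe)
# position in an n-disk RAID 0 array.

def raid0_location(lba: int, num_disks: int) -> tuple[int, int]:
    """Return (disk index, stripe index) for a logical block address."""
    return lba % num_disks, lba // num_disks

# Blocks A1, A2, A3, ... alternate across the disks:
for lba in range(8):
    disk, stripe = raid0_location(lba, num_disks=2)
    print(f"A{lba + 1} -> disk {disk}, stripe {stripe}")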

Performance


RAID 0 is also used in areas where performance is desired and data integrity is not very important, for example in some computer gaming systems. Some real-world tests with computer games showed only a minimal performance gain from RAID 0, though certain desktop applications did benefit; another article examined these claims and concluded: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance."

Characteristics & Advantages


  • RAID 0 implements a striped disk array: the data is broken down into blocks and each block is written to a separate disk drive.
  • I/O performance is greatly improved by spreading the I/O load across many channels and drives.
  • Best performance is achieved when data is striped across multiple controllers with only one drive per controller.
  • No parity calculation overhead is involved.
  • Very simple design.
  • Easy to implement.

#2  RAID 1


RAID Level 1 provides redundancy by writing all data to two or more drives. The performance of a level 1 array tends to be faster on reads and slower on writes compared to a single drive, but if either drive fails, no data is lost. This is a good entry-level redundant system, since only two drives are required; however, since one drive is used to store a duplicate of the data, the cost per megabyte is high. This level is commonly referred to as mirroring.

There is no striping. RAID 1 provides the best fault tolerance of the basic levels and, in a multi-user system, strong read performance, since either disk of the pair can service a read.
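A minimal mirroring sketch in Python, to show the mechanism: every logical write becomes two physical writes, and a read can be satisfied by either surviving copy. The class and its in-memory "drives" are illustrative stand-ins, not a real driver.

# Minimal mirroring sketch (illustrative only).

class Mirror:
    def __init__(self) -> None:
        self.disks = [{}, {}]  # two block maps standing in for drives

    def write(self, block: int, data: bytes) -> None:
        for disk in self.disks:           # one logical write = two physical writes
            disk[block] = data

    def read(self, block: int) -> bytes:
        for disk in self.disks:           # either copy will do; tolerates one failure
            if block in disk:
                return disk[block]
        raise IOError("block lost on both members")

m = Mirror()
m.write(0, b"payload")
m.disks[0].clear()                        # simulate failure of one drive
assert m.read(0) == b"payload"            # data survives from the mirror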

Characteristics & Advantages


  • One Write or two Reads possible per mirrored pair.
  • Twice the Read transaction rate of single disks, same Write transaction rate as single disks.
  • 100% redundancy of data means no rebuild is necessary in case of a disk failure, just a copy to the replacement disk.
  • Transfer rate per block is equal to that of a single disk.
  • Under certain circumstances, RAID 1 can sustain multiple simultaneous drive failures.
  • Simplest RAID storage subsystem design.


#3  RAID 2


A RAID 2 stripes data at the bit (rather than block) level and uses a Hamming code for error correction; it was intended for use with drives that do not have built-in error detection. The disks are synchronized by the controller to spin at the same angular orientation (they reach index at the same time), so the array generally cannot service multiple requests simultaneously. Extremely high data transfer rates are possible, however. This is the only original RAID level that is not currently used.
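To illustrate the idea, here is a toy Hamming(7,4) encoder and corrector in Python. A real RAID 2 controller stripes the bits across separate drives, but the parity arithmetic behind its "on the fly" correction is the same; the function names and bit layout are illustrative.

# Toy Hamming(7,4) code: 4 data bits, 3 parity bits, corrects any 1-bit error.

def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits as 7 code bits (positions 1..7, parity at 1, 2, 4)."""
    c = [0] * 8                      # index 0 unused to keep 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code: list[int]) -> list[int]:
    """Locate and flip a single bad bit; the syndrome is the error position."""
    c = [0] + list(code)
    syndrome = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
             + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 \
             + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4
    if syndrome:
        c[syndrome] ^= 1
    return c[1:]

word = hamming74_encode([1, 0, 1, 1])
word[2] ^= 1                          # corrupt one bit in flight
assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])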

All hard disks eventually implemented Hamming code error correction. This made RAID 2 error correction redundant and unnecessarily complex. This level quickly became useless and is now obsolete. There are no commercial applications of RAID 2.

All SCSI drives support built-in error detection, so this level is of little use when using SCSI drives.

Characteristics & Advantages


  • "On the fly" data error correction.
  • Extremely high data transfer rates possible.
  • The higher the data transfer rate required, the better the ratio of data disks to ECC disks.
  • Relatively simple controller design compared to RAID levels 3, 4 & 5.

#4  RAID 3


A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously. This happens because any single block of data will, by definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on every disk and usually requires synchronized spindles.
RAID 3 setup of 6-byte blocks and two parity bytes; two blocks of data are shown in different colors.

This makes it suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.

The requirement that all disks spin synchronously, a.k.a. lockstep, added design considerations to a level that didn't give significant advantages over other RAID levels, so it quickly became useless and is now obsolete. Both RAID 3 and RAID 4 were quickly replaced by RAID 5. RAID 3 was usually implemented in hardware, and the performance issues were addressed by using large disk caches.

Characteristics & Advantages


  • Very high Read data transfer rate.
  • Very high Write data transfer rate.
  • Disk failure has an insignificant impact on throughput.
  • Low ratio of ECC (Parity) disks to data disks means high efficiency.

#5  RAID 4


A RAID 4 uses block-level striping with a dedicated parity disk.

In the example illustrated, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 4 setup with dedicated parity disk; each color represents the group of blocks in the respective parity block (a stripe).

RAID 4 is very uncommon, but one enterprise-level company that has used it is NetApp. The aforementioned performance problems were solved with its proprietary Write Anywhere File Layout (WAFL), an approach to writing data to disk locations that minimizes the conventional parity RAID write penalty. By storing system metadata (inodes, block maps, and inode maps) in the same way application data is stored, WAFL is able to write file system metadata blocks anywhere on the disk. This approach in turn allows multiple writes to be "gathered" and scheduled to the same RAID stripe, eliminating the traditional read-modify-write penalty prevalent in parity-based RAID schemes.
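The write penalty WAFL avoids is easy to state: a small write to one block of a parity-protected stripe must read the old data and old parity before it can write the new data and new parity. Below is a minimal Python sketch of the classic update rule (new parity = old parity XOR old data XOR new data); the block values are made up for illustration.

# The small-write (read-modify-write) penalty in parity RAID:
# two reads and two writes to update a single data block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = bytes([0b1010_1010] * 4)
new_data   = bytes([0b1111_0000] * 4)
old_parity = bytes([0b0101_0101] * 4)

# new parity = old parity XOR old data XOR new data
new_parity = xor_blocks(old_parity, xor_blocks(old_data, new_data))

# Sanity check: since parity is the XOR of all data blocks in the stripe,
# old_parity XOR old_data is the XOR of the untouched blocks, and XORing
# in the new data must reproduce the incrementally updated parity.
rest_of_stripe = xor_blocks(old_parity, old_data)
assert xor_blocks(rest_of_stripe, new_data) == new_parity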

Characteristics & Advantages


  • Very high Read data transaction rate.
  • Low ratio of ECC (Parity) disks to data disks means high efficiency.
  • High aggregate Read transfer rate.

#6  RAID 5


A RAID 5 comprises block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.

RAID 5 setup with distributed parity; each color represents the group of blocks in the respective parity block (a stripe). This diagram shows the left-asymmetric algorithm.

This can speed small writes in multiprocessing systems, since the parity disk does not become a bottleneck. Because parity data must be skipped on each drive during reads, however, the performance for reads tends to be considerably lower than that of a level 4 array. The cost per megabyte is the same as for level 4.
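As a rough sketch of the left-asymmetric layout named in the caption, the following Python snippet shows how the parity block's position rotates across a four-disk array, one disk to the left per stripe. This illustrates only the rotation pattern, not any vendor's exact data ordering.

# Parity rotation in a "left" RAID 5 layout: parity starts on the last
# disk and moves one disk to the left on each successive stripe.

def raid5_parity_disk(stripe: int, num_disks: int) -> int:
    return (num_disks - 1) - (stripe % num_disks)

num_disks = 4
for stripe in range(4):
    p = raid5_parity_disk(stripe, num_disks)
    row = ["P" if d == p else "D" for d in range(num_disks)]
    print(f"stripe {stripe}: {' '.join(row)}")
# stripe 0: D D D P
# stripe 1: D D P D
# stripe 2: D P D D
# stripe 3: P D D D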

Characteristics & Advantages


  • Highest Read data transaction rate.
  • Medium Write data transaction rate.
  • Low ratio of ECC (Parity) disks to data disks means high efficiency.
  • Good aggregate transfer rate.

#7  RAID 6


RAID 6 extends RAID 5 by adding an additional parity block; thus it uses block-level striping with two parity blocks distributed across all member disks.

Performance (speed)


RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture – in software, firmware or by using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID-5 system with one fewer drive (same number of data drives).
RAID 6 setup, which is identical to RAID 5 other than the addition of a second parity block.

Implementation


According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed-Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."

Computing Parity


Two different syndromes need to be computed in order to allow the loss of any two drives. One of them, P, can be the simple XOR of the data across the stripes, as with RAID 5. A second, independent syndrome is more complicated and requires the assistance of field theory.

To deal with this, the Galois field GF(m) is introduced with m=2^k, where GF(m) \cong F_2[x]/(p(x)) for a suitable irreducible polynomial p(x) of degree k. A chunk of data can be written as d_{k-1}d_{k-2}...d_0 in base 2 where each d_i is either 0 or 1. This is chosen to correspond with the element d_{k-1}x^{k-1} + d_{k-2}x^{k-2} + ... + d_1x + d_0 in the Galois field. Let D_0,...,D_{n-1} \in GF(m) correspond to the stripes of data across hard drives encoded as field elements in this manner (in practice they would probably be broken into byte-sized chunks). If g is some generator of the field and \oplus denotes addition in the field while concatenation denotes multiplication, then \mathbf{P} and \mathbf{Q} may be computed as follows (n denotes the number of data disks):


\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; ... \;\oplus\; \mathbf{D}_{n-1}

\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; ... \;\oplus\; g^{n-1}\mathbf{D}_{n-1}
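As a concrete sketch, the following Python snippet computes P and Q for byte-sized chunks, taking k = 8 with the irreducible polynomial p(x) = x^8 + x^4 + x^3 + x^2 + 1 (0x11d) and generator g = x, i.e. 2. These are common choices for GF(2^8) arithmetic, but the polynomial and helper names here are assumptions for illustration.

# P and Q syndromes over GF(2^8). Field addition is XOR; multiplying
# by g = x is one step of a linear feedback shift register.

POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul_by_g(a: int) -> int:
    """One LFSR step: multiply a field element by g = x modulo p(x)."""
    a <<= 1
    if a & 0x100:
        a ^= POLY
    return a

def syndromes(data: list[int]) -> tuple[int, int]:
    """P = XOR of all chunks; Q = XOR of g^i * D_i over the data disks."""
    p = q = 0
    for d in reversed(data):     # Horner's rule: q = (q * g) ^ D_i, high i first
        p ^= d
        q = gf_mul_by_g(q) ^ d
    return p, q

P, Q = syndromes([0x01, 0x02, 0x03])   # D_0, D_1, D_2 on three data disks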

For a computer scientist, a good way to think about this is that \oplus is a bitwise XOR operator and g^i is the action of a linear feedback shift register on a chunk of data. Thus, in the formula above, the calculation of P is just the XOR of each stripe. This is because addition in any characteristic-two finite field reduces to the XOR operation. The computation of Q is the XOR of a shifted version of each stripe.
Mathematically, the generator is an element of the field such that g^i is different for each nonnegative i satisfying i < n.
If one data drive is lost, the data can be recomputed from P just as with RAID 5. If two data drives are lost, or a data drive and the drive containing P are lost, the data can be recovered from P and Q, or from just Q, respectively, using a more complex process; the details can be worked out using field theory. Suppose that D_i and D_j are the lost values with i \neq j. Using the other values of D, constants A and B may be found so that D_i \oplus D_j = A and g^iD_i \oplus g^jD_j = B:


A = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{D_\ell} = \mathbf{P} \;\oplus\; \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \dots \;\oplus\; \mathbf{D}_{i-1} \;\oplus\;  \mathbf{D}_{i+1} \;\oplus\;  \dots \;\oplus\; \mathbf{D}_{j-1}  \;\oplus\; \mathbf{D}_{j+1} \;\oplus\;  \dots \;\oplus\;  \mathbf{D}_{n-1}


B = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{g^{\ell}D_\ell} = \mathbf{Q} \;\oplus\; g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; \dots \;\oplus\; g^{i-1}\mathbf{D}_{i-1} \;\oplus\;  g^{i+1}\mathbf{D}_{i+1} \;\oplus\;  \dots \;\oplus\; g^{j-1}\mathbf{D}_{j-1}  \;\oplus\; g^{j+1}\mathbf{D}_{j+1} \;\oplus\;  \dots \;\oplus\; g^{n-1}\mathbf{D}_{n-1}

Multiplying both sides of the equation for B by g^{-i} and adding it to the equation for A yields (g^{j-i}\oplus1)D_j = g^{-i}B\oplus A, and thus a solution for D_j, which may be used to compute D_i.
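Continuing the byte-sized sketch above (same assumed polynomial 0x11d and g = 2, both illustrative choices), here is a self-contained Python walk-through of the recovery algebra for two lost data drives:

# Recover D_i and D_j from P, Q and the surviving data drives, following
# the equations above: (g^{j-i} + 1) D_j = g^{-i} B + A, then D_i = A ^ D_j.

POLY = 0x11D

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication modulo p(x) (Russian-peasant style)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a: int, e: int) -> int:
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)        # a^(2^8 - 2) = a^{-1} in GF(2^8)

g = 2
data = [0x11, 0x22, 0x33, 0x44]                  # D_0 .. D_3
P = Q = 0
for idx, d in enumerate(data):                   # compute the syndromes
    P ^= d
    Q ^= gf_mul(gf_pow(g, idx), d)

i, j = 1, 3                                      # pretend drives i and j died (i < j)
A, B = P, Q
for idx, d in enumerate(data):
    if idx not in (i, j):                        # fold surviving drives into A and B
        A ^= d
        B ^= gf_mul(gf_pow(g, idx), d)

coeff = gf_pow(g, j - i) ^ 1                     # (g^{j-i} + 1), nonzero since i != j
rhs = gf_mul(gf_inv(gf_pow(g, i)), B) ^ A        # g^{-i} B + A
D_j = gf_mul(gf_inv(coeff), rhs)
D_i = A ^ D_j
assert (D_i, D_j) == (data[i], data[j])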

The computation of Q is CPU intensive compared to the simplicity of P. Thus, a RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.

Characteristics & Advantages


  • RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (dual parity).
  • Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures.
  • RAID 6 protects against multiple bad block failures while non-degraded.
  • RAID 6 protects against a single bad block failure while operating in a degraded mode.
  • Perfect solution for mission-critical applications.

RAID Level Comparison


A brief summary of the levels described above:

  Level    Technique                                       Redundancy
  RAID 0   Block-level striping, no parity                 None
  RAID 1   Mirroring                                       Loss of one drive per mirrored pair (more with extra mirrors)
  RAID 2   Bit-level striping with Hamming code            Single drive failure (obsolete)
  RAID 3   Byte-level striping, dedicated parity           Single drive failure
  RAID 4   Block-level striping, dedicated parity          Single drive failure
  RAID 5   Block-level striping, distributed parity        Single drive failure
  RAID 6   Block-level striping, dual distributed parity   Any two concurrent drive failures
