Redundant Arrays of Inexpensive Disks (RAIDs)
When we use a disk, we sometimes wish it to be faster; I/O operations are slow and thus can be the bottleneck for the entire system. When we use a disk, we sometimes wish it to be larger; more and more data is being put online and thus our disks are getting fuller and fuller. When we use a disk, we sometimes wish for it to be more reliable; when a disk fails, if our data isn't backed up, all that valuable data is gone.

In this note, we introduce the Redundant Array of Inexpensive Disks, better known as RAID [P+88], a technique to use multiple disks in concert to build a faster, bigger, and more reliable disk system. The term was introduced in the late 1980s by a group of researchers at U.C. Berkeley (led by Professors David Patterson and Randy Katz and then student Garth Gibson); it was around this time that many different researchers simultaneously arrived upon the basic idea of using multiple disks to build a better storage system [BG88, K86, K88, PB86, SG86].

From the outside, a RAID looks like a disk: a group of blocks, each of which one can read or write. Internally, however, the RAID is a complex beast, consisting of multiple disks, memory (both volatile and non-volatile), and one or more processors to manage the system. Thus, a hardware RAID box is very much like a computer system, but specialized for the task of managing a group of disks.

RAIDs offer a number of advantages over a single disk. One advantage is performance. Using multiple disks in parallel can greatly speed up I/O times. Another benefit is capacity. Large data sets
demand large disks. Finally, RAIDs can improve reliability; spreading data across multiple disks (without RAID techniques) makes the data vulnerable to the loss of a single disk; with some form of redundancy, RAIDs can tolerate the loss of a disk and keep operating as if nothing were wrong.

Amazingly, RAIDs provide these advantages transparently to systems that use them, i.e., a RAID just looks like a big disk to the host system. The beauty of transparency, of course, is that it enables one to simply replace a disk with a RAID and not change a single line of software; the operating system and client applications continue to operate without modification. In this manner, transparency greatly improves the deployability of RAID, enabling users and administrators to put a RAID to use without worries of software compatibility.

DESIGN TIP: TRANSPARENCY
When considering how to add new functionality to a system, one should always consider whether such functionality can be added transparently, in a way that demands no changes to the rest of the system. Requiring a complete rewrite of the existing software (or radical hardware changes) lessens the chance of impact of an idea.

We now discuss some of the important aspects of RAIDs. We begin with the interface and fault model, and then discuss how one can evaluate a RAID design along three important axes: capacity, reliability, and performance. We then discuss a number of other issues that are important to RAID design and implementation.
37.1 Interface And RAID Internals
For example, consider a RAID that keeps two copies of each block (each one on a separate disk); when writing to such a mirrored RAID system, the RAID will have to perform two physical I/Os for every one logical I/O it is issued.

A RAID system is often built as a separate hardware box, with a standard connection (e.g., SCSI, or SATA) to a host. Internally, however, RAIDs are fairly complex, consisting of a microcontroller that runs firmware to direct the operation of the RAID, volatile memory such as DRAM to buffer data blocks as they are read and written, and in some cases, non-volatile memory to buffer writes safely and perhaps even specialized logic to perform parity calculations (useful in some RAID levels, as we will also see below). At a high level, a RAID is very much a specialized computer system: it has a processor, memory, and disks; however, instead of running applications, it runs specialized software designed to operate the RAID.
37.2 Fault Model
To understand RAID and compare different approaches, we must have a fault model in mind. RAIDs are designed to detect and recover from certain kinds of disk faults; thus, knowing exactly which faults to expect is critical in arriving upon a working design.

The first fault model we will assume is quite simple, and has been called the fail-stop fault model [S84]. In this model, a disk can be in exactly one of two states: working or failed. With a working disk, all blocks can be read or written. In contrast, when a disk has failed, we assume it is permanently lost.

One critical aspect of the fail-stop model is what it assumes about fault detection. Specifically, when a disk has failed, we assume that this is easily detected. For example, in a RAID array, we would assume that the RAID controller hardware (or software) can immediately observe when a disk has failed. Thus, for now, we do not have to worry about more complex silent failures such as disk corruption. We also do not have to worry about a single block becoming inaccessible upon an otherwise working disk (sometimes called a latent sector error). We will consider these more complex (and unfortunately, more realistic) disk faults later.
37.3 How To Evaluate A RAID
37.4 RAID Level 0: Striping
    Disk 0    Disk 1    Disk 2    Disk 3
      0         1         2         3
      4         5         6         7
      8         9        10        11
     12        13        14        15

Table 37.1: RAID-0: Simple Striping

In the example, we have made the simplifying assumption that only 1 block (each of say size 4KB) is placed on each disk before moving on to the next. However, this arrangement need not be the case. For example, we could arrange the blocks across disks as in Table 37.2:
    Disk 0    Disk 1    Disk 2    Disk 3
      0         2         4         6      chunk size:
      1         3         5         7       2 blocks
      8        10        12        14
      9        11        13        15
Table 37.2: Striping with a Bigger Chunk Size

In this example, we place two 4KB blocks on each disk before moving on to the next disk. Thus, the chunk size of this RAID array is 8KB, and a stripe thus consists of 4 chunks or 32KB of data.
Chunk Sizes
Chunk size mostly affects performance of the array. For example, a small chunk size implies that many files will get striped across many disks, thus increasing the parallelism of reads and writes to a single file; however, the positioning time to access blocks across multiple disks increases, because the positioning time for the entire request is determined by the maximum of the positioning times of the requests across all drives. A big chunk size, on the other hand, reduces such intra-file parallelism, and thus relies on multiple concurrent requests to achieve high throughput. However, large chunk sizes reduce positioning time; if, for example, a single file fits within a chunk and thus is
placed on a single disk, the positioning time incurred while accessing it will just be the positioning time of a single disk.

Thus, determining the best chunk size is hard to do, as it requires a great deal of knowledge about the workload presented to the disk system [CL95]. For the rest of this discussion, we will assume that the array uses a chunk size of a single block (4KB); most arrays use larger chunk sizes (say around 64 KB), but for the issues we discuss below, the exact chunk size does not matter and thus we use a single block for the sake of simplicity.
ASIDE: THE RAID MAPPING PROBLEM
Before studying the capacity, reliability, and performance characteristics of the RAID, we first present an aside on what we call the mapping problem. This problem arises in all RAID arrays; simply put, given a logical block to read or write, how does the RAID know exactly which physical disk and offset to access?

For these simple RAID levels, we do not need much sophistication in order to correctly map logical blocks onto their physical locations. Take the first striping example above (chunk size = 1 block = 4KB). In this case, given a logical block address A, the RAID can easily compute the desired disk and offset with two simple equations:
    Disk   = A % number_of_disks
    Offset = A / number_of_disks
Note that these are all integer operations (e.g., 4 / 3 = 1 not 1.33333...). Let's see how these equations work for a simple example. Imagine in the first RAID above that a request arrives for block 14. Given that there are 4 disks, this would mean that the disk we are interested in is (14 % 4 = 2): disk 2. The exact block is calculated as (14 / 4 = 3): block 3. Thus, block 14 should be found on the fourth block (block 3, starting at 0) of the third disk (disk 2, starting at 0), which is exactly where it is.

You can think about how these equations would be modified to support different chunk sizes. Try it! It's not too hard.
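To make the arithmetic concrete, here is a minimal Python sketch of the mapping (the function name block_to_physical is ours, not part of any particular RAID implementation); it simply encodes the two equations above and reproduces the block-14 example:

    def block_to_physical(addr, num_disks):
        # RAID-0 with a chunk size of one block: the disk is chosen
        # round-robin, and the offset counts how many full stripes
        # precede this block.
        disk = addr % num_disks
        offset = addr // num_disks
        return disk, offset

    # The block-14 example from above, on a 4-disk array:
    print(block_to_physical(14, 4))   # -> (2, 3): disk 2, offset 3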
Sequential and random workloads result in widely different performance characteristics from a disk. With sequential access, a disk operates in its most efficient mode, spending little time seeking and waiting for rotation and most of its time transferring data. With random access, just the opposite is true: most time is spent seeking and waiting for rotation and relatively little time is spent transferring data. To capture this difference in our analysis, we will assume that a disk can transfer data at S MB/s under a sequential workload, and R MB/s when under a random workload. In general, S is much greater than R.

To make sure we understand this difference, let's do a simple exercise. Specifically, let's calculate S and R given the following disk characteristics. Assume a sequential transfer of size 10 MB on average, and a random transfer of 10 KB on average. Also, assume the following disk characteristics:

    Average seek time           7 ms
    Average rotational delay    3 ms
    Transfer rate of disk      50 MB/s

To compute S, we need to first figure out how time is spent in a typical 10 MB transfer. First, we spend 7 ms seeking, and then 3 ms rotating. Finally, transfer begins; 10 MB @ 50 MB/s leads to 1/5th of a second, or 200 ms, spent in transfer. Thus, for each 10 MB request, we spend 210 ms completing the request. To compute S, we just need to divide:

    S = Amount of Data / Time to access = 10 MB / 210 ms = 47.62 MB/s
As we can see, because of the large time spent transferring data, S is very near the peak bandwidth of the disk (the seek and rotational costs have been amortized).

We can compute R similarly. Seek and rotation are the same; we then compute the time spent in transfer, which is 10 KB @ 50 MB/s, or 0.195 ms.

    R = Amount of Data / Time to access = 10 KB / 10.195 ms = 0.981 MB/s
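The same arithmetic is easy to script. The following small Python sketch is our own (it treats 10 KB as 0.01 MB for simplicity) and reproduces both numbers to within rounding:

    def bandwidth_mb_per_s(xfer_mb, seek_ms, rotate_ms, rate_mb_per_s):
        # Total time = positioning (seek + rotation) + transfer time.
        xfer_ms = xfer_mb / rate_mb_per_s * 1000.0
        total_ms = seek_ms + rotate_ms + xfer_ms
        return xfer_mb / (total_ms / 1000.0)

    S = bandwidth_mb_per_s(10.0, 7.0, 3.0, 50.0)   # ~47.6 MB/s
    R = bandwidth_mb_per_s(0.01, 7.0, 3.0, 50.0)   # ~0.98 MB/s (10 KB transfer)
    print(S, R)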
The latency of a single-block request should be just about identical to that of a single disk; after all, RAID-0 will simply redirect that request to one of its disks. From the perspective of steady-state throughput, we'd expect to get the full bandwidth of the system. Thus, throughput equals N (the number of disks) multiplied by S (the sequential bandwidth of a single disk). For a large number of random I/Os, we can again use all of the disks, and thus obtain N · R MB/s. As we will see below, these values are both the simplest to calculate and will serve as an upper bound in comparison with other RAID levels.
37.5 RAID Level 1: Mirroring
    Disk 0    Disk 1    Disk 2    Disk 3
      0         0         1         1
      2         2         3         3
      4         4         5         5
      6         6         7         7

Table 37.3: Simple RAID-1: Mirroring

In the example, disk 0 and disk 1 have identical contents, and disk 2 and disk 3 do as well; the data is striped across these mirror pairs. In fact, you may have noticed that there are a number of different ways to place block copies across the disks. The arrangement above is a common one and is sometimes called RAID-10 (or RAID 1+0) because it uses mirrored pairs (RAID-1) and then stripes (RAID-0) on top of them; another common arrangement is RAID-01 (or RAID 0+1), which contains two large striping (RAID-0) arrays, and then mirrors (RAID-1) on top of them. For now, we will just talk about mirroring assuming the above layout.

When reading a block from a mirrored array, the RAID has a choice: it can read either copy. For example, if a read to logical block
5 is issued to the RAID, it is free to read it from either disk 2 or disk 3. When writing a block, though, no such choice exists: the RAID must update both copies of the data, in order to preserve reliability. Do note, though, that these writes can take place in parallel; for example, a write to logical block 5 could proceed to disks 2 and 3 at the same time.
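As a small illustration (ours, not code from any real controller), and assuming the RAID-10 layout of Table 37.3 with a chunk size of one block, the mapping from a logical block to its mirror pair might be sketched as:

    def mirror_map(addr, num_disks):
        # RAID-10 with mirroring level 2: blocks are striped across
        # num_disks/2 mirror pairs; each pair holds two identical copies.
        num_pairs = num_disks // 2
        pair = addr % num_pairs
        offset = addr // num_pairs
        return (2 * pair, 2 * pair + 1), offset

    # Logical block 5 in the four-disk example lives on disks 2 and 3:
    print(mirror_map(5, 4))   # -> ((2, 3), 2)

A read may then be serviced by either disk in the returned pair, while a write must go to both.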
ASIDE: THE RAID CONSISTENT UPDATE PROBLEM
Before analyzing RAID-1, let us first discuss a problem that arises in any multi-disk RAID system, known as the consistent update problem [DAA05]. The problem occurs on a write to any RAID that has to update multiple disks during a single logical operation. In this case, let us assume we are considering a mirrored disk array.

Imagine the write is issued to the RAID, and then the RAID decides that it must be written to two disks, disk 0 and disk 1. The RAID then issues the write to disk 0, but just before the RAID can issue the request to disk 1, a power loss (or system crash) occurs. In this unfortunate case, let us assume that the request to disk 0 completed (but clearly the request to disk 1 did not, as it was never issued).

The result of this untimely power loss is that the two copies of the block are now inconsistent; the copy on disk 0 is the new version, and the copy on disk 1 is the old. What we would like to happen is for the state of both disks to change atomically, i.e., either both should end up as the new version or neither.

The general way to solve this problem is to use a write-ahead log of some kind to first record what the RAID is about to do (i.e., update two disks with a certain piece of data) before doing it. By taking this approach, we can ensure that in the presence of a crash, the right thing will happen; by running a recovery procedure that replays all pending transactions to the RAID, we can ensure that no two mirrored copies (in the RAID-1 case) are out of sync. One last note: because logging to disk on every write is prohibitively expensive, most RAID hardware includes a small amount of non-volatile RAM (e.g., battery-backed) where it performs this type of logging. Thus, consistent update is provided without the high cost of logging to disk.
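To make the shape of this approach concrete, here is a toy Python sketch (ours alone, not real controller firmware); the dictionary nvram_log stands in for the battery-backed RAM, and recovery simply re-issues any write whose intent record is still present:

    # Toy model of the consistent-update protocol for a mirrored pair.
    nvram_log = {}                      # addr -> (pair, offset, data) not yet on both disks

    def mirrored_write(disks, pair, offset, addr, data):
        nvram_log[addr] = (pair, offset, data)   # 1. record intent first
        for d in pair:                           # 2. update both copies
            disks[d][offset] = data
        del nvram_log[addr]                      # 3. done; clear the record

    def recover(disks):
        # After a crash, replay every pending write so both copies match.
        for addr, (pair, offset, data) in list(nvram_log.items()):
            for d in pair:
                disks[d][offset] = data
            del nvram_log[addr]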
RAID-1 Analysis
Let us now assess RAID-1. From a capacity standpoint, RAID-1 is pretty expensive; with the mirroring level = 2, we only obtain half of our peak useful capacity. Thus, with N disks, the useful capacity of mirroring is N/2.

From a reliability standpoint, RAID-1 does well. It can tolerate the failure of any one disk. However, you may notice RAID-1 can actually do better than this, with a little luck. Imagine, in the figure above, that disk 0 and disk 2 both failed. In such a situation, there is still no data loss! More generally, a mirrored system (with mirroring level = 2) can tolerate 1 disk failure for certain, and up to N/2 failures depending on which disks fail. In real life, however, we generally don't like to leave things like this to chance, and thus most people consider mirroring to be good for handling a single failure.

Finally, we analyze performance. From the perspective of the latency of a single read request, we can see it is the same as the latency on a single disk; all the RAID-1 does is direct the read to one of its copies. A write is a little different: it requires two physical writes to complete before it is done. These two writes happen in parallel, and thus the time will be roughly equivalent to the time of a single write; however, because the logical write must wait for both physical writes to complete, it suffers from the worst-case seek and rotational delay of the two requests, and thus (on average) will be just a little bit higher than a single write to a single disk.

To analyze steady-state throughput, let us start with the sequential workload. When writing out to disk sequentially, each logical write must result in two physical writes; for example, when we write logical block 0 (in the figure above), the RAID internally would write it to both disk 0 and disk 1. Thus, we can conclude that the maximum bandwidth obtained during sequential writing to a mirrored array is (N/2) · S, or half the peak bandwidth.

Unfortunately, we obtain the exact same performance during a sequential read. One might think that a sequential read could do better, because it only needs to read one copy of the data, not both. However, let's use an example to illustrate why this doesn't help much. Imagine we need to read blocks 0, 1, 2, 3, 4, 5, 6, and 7. Let's say we issue the read of 0 to disk 0, the read of 1 to disk 2, the read of 2 to disk 1, and the read of 3 to disk 3. We continue by issuing reads to 4, 5, 6, and 7 to disks 0, 2, 1, and 3, respectively. One might naively
think that because we are utilizing all the disks in this example, we are achieving the full bandwidth of the array. To see that this is not the case, however, consider the requests a single disk receives (say disk 0). First, it gets a request for block 0; then, it gets a request for block 4 (skipping block 2). In fact, each disk receives a request for every other block. While it is rotating over the skipped block, it is not delivering useful bandwidth to the client. Thus, each disk will only deliver half its peak bandwidth. And thus, the sequential read will only obtain a bandwidth of (N/2) · S MB/s.

Random reads are the best case for a mirrored RAID. In this case, we can distribute the reads across all the disks, and thus obtain the full possible bandwidth. Thus, for random reads, RAID-1 delivers N · R MB/s.

Finally, random writes perform as you might expect: (N/2) · R MB/s. Each logical write must turn into two physical writes, and thus while all the disks will be in use, the client will only perceive this as half the available bandwidth. Even though a write to logical block X turns into two parallel writes to two different physical disks, the bandwidth of many small requests only achieves half of what we saw with striping. As we will soon see, getting half the available bandwidth is actually pretty good!
37.6 RAID Level 4: Saving Space With Parity
As you can see, for each stripe of data, we have added a single parity block that stores the redundant information for that stripe of
blocks. For example, parity block P1 has redundant information that it calculated from blocks 4, 5, 6, and 7. To compute parity, we need to use some kind of mathematical function that enables us to withstand the loss of any one block from our stripe. It turns out the simple function XOR does the trick quite nicely. For a given set of bits, the XOR of all of those bits returns a 0 if there are an even number of 1s in the bits, and a 1 if there are an odd number of 1s. For example:
    C0    C1    C2    C3    P
     0     0     1     1    XOR(0,0,1,1) = 0
     0     1     0     0    XOR(0,1,0,0) = 1
In the first row (0,0,1,1), there are two 1s (C2, C3), and thus XOR of all of those values will be 0 (P); similarly, in the second row there is only one 1 (C1), and thus the XOR must be 1 (P). You can remember this in a very simple way: the number of 1s in any row must be an even (not odd) number; that is the invariant that the RAID must maintain in order for parity to be correct.

From the example above, you might also be able to guess how parity information can be used to recover from a failure. Imagine the column labeled C2 is lost. To figure out what values must have been in the column, we simply have to read in all the other values in that row (including the XOR'd parity bit) and reconstruct the right answer. Specifically, assume the first row's value in column C2 is lost (it is a 1); by reading the other values in that row (0 from C0, 0 from C1, 1 from C3, and 0 from the parity column P), we get the values 0, 0, 1, and 0. Because we know that XOR keeps an even number of 1s in each row, we know what the missing data must be: a 1. And that is how reconstruction works in a XOR-based parity scheme! Note also how we compute the reconstructed value: we just XOR the data bits and the parity bits together, in the same way that we calculated the parity in the first place.

Now you might be wondering: we are talking about XORing all of these bits, and yet above we know that the RAID places 4KB (or larger) blocks on each disk; how do we apply XOR to a bunch of blocks to compute the parity? It turns out this is easy as well. Simply perform a bitwise XOR across each bit of the data blocks; put the result of each bitwise XOR into the corresponding bit slot in the parity block. For example, if we had blocks of size 4 bits (yes, this is still quite a bit smaller than a 4KB block, but you get the picture), they
might look something like this:
    Block0    Block1    Block2    Block3    Parity
      00        10        11        10        11
      10        01        00        01        10
As you can see from the figure, the parity is computed for each bit of each block and the result placed in the parity block.
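A short Python sketch (our own) shows both operations, computing the parity block with bitwise XOR and reconstructing a lost block, using the 4-bit blocks from the figure packed into single bytes:

    def xor_blocks(blocks):
        # Bitwise XOR across the corresponding bytes of each block.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    # The 4-bit blocks from the figure above, each packed into one byte:
    data = [bytes([0b0010]), bytes([0b1001]), bytes([0b1100]), bytes([0b1001])]
    parity = xor_blocks(data)
    assert parity == bytes([0b1110])      # matches the Parity column

    # Reconstruction: XOR the surviving blocks with the parity block.
    lost = 2
    survivors = [blk for i, blk in enumerate(data) if i != lost]
    assert xor_blocks(survivors + [parity]) == data[lost]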
RAID-4 Analysis
Let us now analyze RAID-4. From a capacity standpoint, RAID-4 uses 1 disk for parity information for every group of disks it is protecting. Thus, our useful capacity for a RAID group is (N − 1).

Reliability is also quite easy to understand: RAID-4 tolerates 1 disk failure and no more. If more than one disk is lost, there is simply no way to reconstruct the lost data.

Finally, there is performance. This time, let us start by analyzing steady-state throughput. Sequential read performance can utilize all of the disks except for the parity disk, and thus deliver a peak effective bandwidth of (N − 1) · S MB/s (an easy case).

To understand the performance of sequential writes, we must first understand how they are done. When writing a big chunk of data to disk, RAID-4 can perform a simple optimization known as a full-stripe write. For example, imagine the case where the blocks 0, 1, 2, and 3 have been sent to the RAID as part of a write request (Table 37.4).
    Disk 0    Disk 1    Disk 2    Disk 3    Disk 4
      0         1         2         3        P0
      4         5         6         7        P1
      8         9        10        11        P2
     12        13        14        15        P3
Table 37.4: Full-stripe Writes In RAID-4

In this case, the RAID can simply calculate the new value of P0 (by performing an XOR across the blocks 0, 1, 2, and 3) and then write all of the blocks (including the parity block) to the five disks above in parallel (the first stripe in the table above). Thus, full-stripe writes are the most efficient way for RAID-4 to write to disk.
Once we understand the full-stripe write, calculating the performance of sequential writes on RAID-4 is easy; the effective bandwidth is also (N − 1) · S MB/s. Even though the parity disk is constantly in use during the operation, the client does not gain any performance advantage from it.

Now let us analyze the performance of random reads. As you can also see from the figure above, a set of 1-block random reads will be spread across the data disks of the system but not the parity disk. Thus, the effective performance is: (N − 1) · R MB/s.

Random writes, which we have saved for last, present the most interesting case for RAID-4. Imagine we wish to overwrite block 1 in the example above. We could just go ahead and overwrite it, but that would leave us with a problem: the parity block P0 would no longer accurately reflect the correct parity value for the stripe. Thus, in this example, P0 must also be updated. But how can we update it both correctly and efficiently?

It turns out there are two methods. The first, known as additive parity, requires us to do the following. To compute the value of the new parity block, read in all of the other data blocks in the stripe in parallel (in the example, blocks 0, 2, and 3) and XOR those with the new block (1). The result is your new parity block. To complete the write, you can then write the new data and new parity to their respective disks, also in parallel.

The problem with this technique is that it scales with the number of disks, and thus in larger RAIDs requires a high number of reads to compute parity. Thus, the subtractive parity method. For example, imagine this string of bits (4 data bits, and one parity bit):
    C0    C1    C2    C3    P
     0     0     1     1    XOR(0,0,1,1) = 0
Let's imagine that we wish to overwrite bit C2 with a new value which we will call C2(new). The subtractive method works in three steps. First, we read in the old data at C2 (C2(old) = 1) and the old parity (P(old) = 0). Then, we compare the old data and the new data; if they are the same (e.g., C2(new) = C2(old)), then we know the parity bit will also remain the same (i.e., P(new) = P(old)). If, however, they are different, then we must flip the old parity bit to the opposite of its current state, that is, if (P(old) == 1), P(new) will be set to 0; if (P(old) == 0), P(new) will be set to 1. We can express this whole
mess neatly with XOR as it turns out (if you understand XOR, this will now make sense to you):
P(new) = (C(old) XOR C(new)) XOR P(old)
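Applied to whole blocks rather than single bits (as described next), the two update methods look roughly like this; the function names are our own, and xor_blocks is the bitwise-XOR helper from the earlier parity sketch:

    def additive_parity(other_data_blocks, new_block):
        # Read every other data block in the stripe, XOR them with the new data.
        return xor_blocks(other_data_blocks + [new_block])

    def subtractive_parity(old_block, new_block, old_parity):
        # P(new) = (C(old) XOR C(new)) XOR P(old), applied bit by bit.
        return xor_blocks([old_block, new_block, old_parity])

Either way, the new data block and the new parity block must then be written to their respective disks.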
Because we are dealing with blocks, not bits, we perform this calculation over all the bits in the block (e.g., 4096 bytes in each block multiplied by 8 bits per byte). Thus, in most cases, the new block will be different than the old block and thus the new parity block will too.

You should now be able to figure out when we would use the additive parity calculation and when we would use the subtractive method. Think about how many disks would need to be in the system so that the additive method performs fewer I/Os than the subtractive method, and vice-versa.

For this performance analysis, let us assume we are using the subtractive method. Thus, for each write, the RAID has to perform 4 physical I/Os (two reads and two writes). Now imagine there are lots of writes submitted to the RAID; how many can RAID-4 perform in parallel? To understand, let us again look at the RAID-4 layout (Table 37.5).
    Disk 0    Disk 1    Disk 2    Disk 3    Disk 4
      0         1         2         3        P0
     *4         5         6         7       +P1
      8         9        10        11        P2
     12       *13        14        15       +P3
Table 37.5: Example: Writes To 4, 13, And Respective Parity Blocks

Now imagine there were 2 small writes submitted to the RAID-4 at about the same time, to blocks 4 and 13 (marked with * in the diagram). The data for those disks is on disks 0 and 1, and thus the read and write to data could happen in parallel, which is good. The problem that arises is with the parity disk; both the requests have to read the related parity blocks for 4 and 13, parity blocks 1 and 3 (marked with +). Hopefully, the issue is now clear: the parity disk is a bottleneck under this type of workload; we sometimes thus call this the small-write problem for parity-based RAIDs. Thus, even though the data disks could be accessed in parallel, the parity disk prevents any parallelism from materializing; all writes to the system will be serialized because of the parity disk. Because the parity disk has
to perform two I/Os (one read, one write) per logical I/O, we can compute the performance of small random writes in RAID-4 by computing the parity disk's performance on those two I/Os, and thus we achieve (R/2) MB/s. RAID-4 throughput under random small writes is terrible; it does not improve as you add disks to the system.

We conclude by analyzing I/O latency in RAID-4. As you now know, a single read (assuming no failure) is just mapped to a single disk, and thus its latency is equivalent to the latency of a single disk request. The latency of a single write requires two reads and then two writes; the reads can happen in parallel, as can the writes, and thus total latency is about twice that of a single disk (with some differences because we have to wait for both reads to complete and thus get the worst-case positioning time, but then the updates don't incur seek cost and thus may be a better-than-average positioning cost).
37.7 RAID Level 5: Rotating Parity
    Disk 0    Disk 1    Disk 2    Disk 3    Disk 4
      0         1         2         3        P0
      5         6         7        P1         4
     10        11        P2         8         9
     15        P3        12        13        14
     P4        16        17        18        19

Table 37.6: RAID-5 With Rotated Parity

As you can see in the figure, the parity block for each stripe is now rotated across the disks, in order to remove the parity-disk bottleneck for RAID-4.
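For concreteness, here is one way (our own sketch, assuming the left-symmetric layout shown in Table 37.6) to compute where a logical block and its parity live in such a rotated-parity array:

    def raid5_map(addr, num_disks):
        # Left-symmetric rotated parity: each stripe holds (num_disks - 1)
        # data blocks; the parity disk shifts left by one disk per stripe,
        # and data fills the remaining disks starting just past the parity.
        stripe = addr // (num_disks - 1)
        parity_disk = (num_disks - 1 - stripe) % num_disks
        data_disk = (parity_disk + 1 + addr % (num_disks - 1)) % num_disks
        return data_disk, parity_disk, stripe    # stripe number is the offset

    # Consistent with the table and the analysis below: block 1 sits on
    # disk 1 (parity on disk 4); block 10 sits on disk 0 (parity on disk 2).
    print(raid5_map(1, 5))    # -> (1, 4, 0)
    print(raid5_map(10, 5))   # -> (0, 2, 2)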
RAID-5 Analysis
Much of the analysis for RAID-5 is identical to RAID-4. For example, the effective capacity and failure tolerance of the two levels are identical.
So are sequential read and write performance. The latency of a single request (whether a read or a write) is also the same as RAID-4. Random read performance is a little better, because we can utilize all of the disks. Finally, random write performance improves noticeably over RAID-4, as it allows for parallelism across requests. Imagine a write to block 1 and a write to block 10; this will turn into requests to disk 1 and disk 4 (for block 1 and its parity) and requests to disk 0 and disk 2 (for block 10 and its parity). Thus, they can proceed in parallel. In fact, we can generally assume that given a large number of random requests, we will be able to keep all the disks about evenly busy. If that is the case, then our total bandwidth for small writes will be (N/4) · R MB/s; the factor of four loss is due to the fact that each RAID-5 write still generates 4 total I/O operations.

Because RAID-5 is basically identical to RAID-4 except in the few cases where it is better, it has almost completely replaced RAID-4 in the marketplace. The only place where it has not is in systems that know they will never perform anything other than a large write, thus avoiding the small-write problem altogether [HLM94]; in those cases, RAID-4 is sometimes used as it is slightly simpler to build.
37.8 RAID Comparison: A Summary
                           RAID-0     RAID-1            RAID-4      RAID-5
    Capacity               N          N/2               N-1         N-1
    Reliability            0          1 (for sure)      1           1
                                      N/2 (if lucky)
    Throughput
      Sequential Read      N·S        (N/2)·S           (N-1)·S     (N-1)·S
      Sequential Write     N·S        (N/2)·S           (N-1)·S     (N-1)·S
      Random Read          N·R        N·R               (N-1)·R     N·R
      Random Write         N·R        (N/2)·R           R/2         (N/4)·R
    Latency
      Read                 D          D                 D           D
      Write                D          D                 2D          2D
Table 37.7: RAID Capacity, Reliability, and Performance

If you want random I/O performance and reliability, mirroring is the best; the cost you pay is in lost capacity. If capacity and reliability are your main goals, then RAID-5 is the winner; the cost you pay is in small-write performance. Finally, if you are always doing sequential I/O and want to maximize capacity, RAID-5 also makes the most sense.
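For quick what-if comparisons, the throughput rows of the table can be encoded directly; the sketch below is our own summary in Python, using the N, S, and R notation of this chapter:

    def raid_throughput(level, N, S, R):
        # Steady-state throughput formulas from the table above, in MB/s.
        formulas = {
            0: (N * S,       N * S,       N * R,       N * R),
            1: (N / 2 * S,   N / 2 * S,   N * R,       N / 2 * R),
            4: ((N - 1) * S, (N - 1) * S, (N - 1) * R, R / 2),
            5: ((N - 1) * S, (N - 1) * S, N * R,       N / 4 * R),
        }
        names = ("seq_read", "seq_write", "rand_read", "rand_write")
        return dict(zip(names, formulas[level]))

    # Example: 8 disks with the S and R computed earlier in the chapter.
    print(raid_throughput(5, 8, 47.62, 0.98))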
37.9 Other Interesting RAID Issues
37.10 Summary
We have discussed RAID. RAID transforms a number of independent disks into a large, more capacious, and more reliable single entity; importantly, it does so transparently, and thus hardware and software above is relatively oblivious to the change.

There are many possible RAID levels to choose from, and the exact RAID level to use depends heavily on what is important to the end-user. For example, mirrored RAID is simple, reliable, and generally provides good performance but at a high capacity cost. RAID-5, in contrast, is reliable and better from a capacity standpoint, but performs quite poorly when there are small writes in the workload. Picking a RAID and setting its parameters (chunk size, number of disks, etc.) properly for a particular workload is challenging, and thus still remains more of an art than a science.
References
[BG88] "Disk Shadowing"
D. Bitton and J. Gray
VLDB 1988
One of the first papers to discuss mirroring, herein called shadowing.

[CL95] "Striping in a RAID level 5 disk array"
Peter M. Chen, Edward K. Lee
SIGMETRICS 1995
A nice analysis of some of the important parameters in a RAID-5 disk array.

[DAA05] "Journal-guided Resynchronization for Software RAID"
Timothy E. Denehy, A. Arpaci-Dusseau, R. Arpaci-Dusseau
FAST 2005
Our own work on the consistent-update problem. Here we solve it for Software RAID by integrating the journaling machinery of the file system above with the software RAID beneath it.

[HLM94] "File System Design for an NFS File Server Appliance"
Dave Hitz, James Lau, Michael Malcolm
USENIX Winter 1994, San Francisco, California, 1994
The sparse paper introducing a landmark product in storage, the write-anywhere file layout or WAFL file system that underlies the NetApp file server.

[K86] "Synchronized Disk Interleaving"
M.Y. Kim
IEEE Transactions on Computers, Volume C-35:11, November 1986
Some of the earliest work on RAID is found here.

[K88] "Small Disk Arrays - The Emerging Approach to High Performance"
F. Kurzweil
Presentation at Spring COMPCON '88, March 1, 1988, San Francisco, California
Another early RAID reference.

[P+88] "Redundant Arrays of Inexpensive Disks"
D. Patterson, G. Gibson, R. Katz
SIGMOD 1988
This is considered the RAID paper, written by famous authors Patterson, Gibson, and Katz. The paper has since won many test-of-time awards and ushered in the RAID era, including the name RAID itself!

[PB86] "Providing Fault Tolerance in Parallel Secondary Storage Systems"
A. Park and K. Balasubramaniam
Department of Computer Science, Princeton, CS-TR-057-86, November 1986
Another early work on RAID.
Homework
This section introduces raid.py, a simple RAID simulator you can use to shore up your knowledge of how RAID systems work. It has a number of options, as we see below:
    Usage: raid2.py [options]

    Options:
      -h, --help            show this help message and exit
      -s SEED, --seed=SEED  the random seed
      -D NUMDISKS, --numDisks=NUMDISKS
                            number of disks in RAID
      -C CHUNKSIZE, --chunkSize=CHUNKSIZE
                            chunk size of the RAID
      -n NUMREQUESTS, --numRequests=NUMREQUESTS
                            number of requests to simulate
      -S SIZE, --reqSize=SIZE
                            size of requests
      -W WORKLOAD, --workload=WORKLOAD
                            either "rand" or "seq" workloads
      -w WRITEFRAC, --writeFrac=WRITEFRAC
                            write fraction (100->all writes, 0->all reads)
      -R RANGE, --randRange=RANGE
                            range of requests (when using "rand" workload)
      -L LEVEL, --level=LEVEL
                            RAID level (0, 1, 4, 5)
      -5 RAID5TYPE, --raid5=RAID5TYPE
                            RAID-5 left-symmetric "LS" or left-asym "LA"
      -r, --reverse         instead of showing logical ops, show physical
      -t, --timing          use timing mode, instead of mapping mode
      -c, --compute         compute answers for me
In its basic mode, you can use it to understand how the different RAID levels map logical blocks to underlying disks and offsets. For example, let's say we wish to see how a simple striping RAID (RAID-0) with four disks does this mapping.
    prompt> ./raid2.py -n 5 -L 0 -R 20
    ...
    LOGICAL READ from addr:16 size:4096
      Physical reads/writes?

    LOGICAL READ from addr:8 size:4096
      Physical reads/writes?

    LOGICAL READ from addr:10 size:4096
      Physical reads/writes?
    LOGICAL READ from addr:15 size:4096
      Physical reads/writes?

    LOGICAL READ from addr:9 size:4096
      Physical reads/writes?
In this example, we simulate five requests (-n 5), specifying RAID level zero (-L 0), and restrict the range of random requests to just the first twenty blocks of the RAID (-R 20). The result is a series of random reads to the first twenty blocks of the RAID; the simulator then asks you to guess which underlying disks/offsets were accessed to service the request, for each logical read. In this case, calculating the answers is easy: in RAID-0, recall that the underlying disk and offset that services a request is calculated via modulo arithmetic:
    disk   = address % number_of_disks
    offset = address / number_of_disks
Thus, the first request to 16 should be serviced by disk 0, at offset 4. And so forth. You can, as usual, see the answers (once you've computed them!) by using the handy -c flag to compute the results.
    prompt> ./raid2.py -R 20 -n 5 -L 0 -c
    ...
    LOGICAL READ from addr:16 size:4096
      read [disk 0, offset 4]
    LOGICAL READ from addr:8 size:4096
      read [disk 0, offset 2]
    LOGICAL READ from addr:10 size:4096
      read [disk 2, offset 2]
    LOGICAL READ from addr:15 size:4096
      read [disk 3, offset 3]
    LOGICAL READ from addr:9 size:4096
      read [disk 1, offset 2]
Because we like to have fun, you can also do this problem in reverse, with the -r flag. Running the simulator this way shows you the low-level disk reads and writes, and asks you to reverse engineer which logical request must have been given to the RAID:
You can again use -c to show the answers. To get more variety, a different random seed (-s) can be given. Even further variety is available by examining different RAID levels. In the simulator, RAID-0 (block striping), RAID-1 (mirroring), RAID-4 (block striping plus a single parity disk), and RAID-5 (block striping with rotating parity) are supported. In this next example, we show how to run the simulator in mirrored mode. We show the answers to save space:
    prompt> ./raid2.py -R 20 -n 5 -L 1 -c
    ...
    LOGICAL READ from addr:16 size:4096
      read [disk 0, offset 8]
    LOGICAL READ from addr:8 size:4096
      read [disk 0, offset 4]
    LOGICAL READ from addr:10 size:4096
      read [disk 1, offset 5]
    LOGICAL READ from addr:15 size:4096
      read [disk 3, offset 7]
    LOGICAL READ from addr:9 size:4096
      read [disk 2, offset 4]
You might notice a few things about this example. First, the mirrored RAID-1 assumes a striped layout (which some might call RAID-01), where logical block 0 is mapped to the 0th block of disks 0 and 1, logical block 1 is mapped to the 0th blocks of disks 2 and 3, and
so forth (in this four-disk example). Second, when reading a single block from a mirrored RAID system, the RAID has a choice of which of two blocks to read. In this simulator, we use a relatively silly way: for even-numbered logical blocks, the RAID chooses the even-numbered disk in the pair; the odd disk is used for odd-numbered logical blocks. This is done to make the results of each run easy to guess for you (instead of, for example, a random choice).

We can also explore how writes behave (instead of just reads) with the -w flag, which specifies the write fraction of a workload, i.e., the fraction of requests that are writes. By default, it is set to zero, and thus the examples so far were 100% reads. Let's see what happens to our mirrored RAID when some writes are introduced:
    prompt> ./raid2.py -R 20 -n 5 -L 1 -w 100 -c
    ...
    LOGICAL WRITE to addr:16 size:4096
      write [disk 0, offset 8]
      write [disk 1, offset 8]
    LOGICAL WRITE to addr:8 size:4096
      write [disk 0, offset 4]
      write [disk 1, offset 4]
    LOGICAL WRITE to addr:10 size:4096
      write [disk 0, offset 5]
      write [disk 1, offset 5]
    LOGICAL WRITE to addr:15 size:4096
      write [disk 2, offset 7]
      write [disk 3, offset 7]
    LOGICAL WRITE to addr:9 size:4096
      write [disk 2, offset 4]
      write [disk 3, offset 4]
With writes, instead of generating just a single low-level disk operation, the RAID must of course update both disks, and hence two writes are issued. Even more interesting things happen with RAID-4 and RAID-5, as you might guess; we'll leave the exploration of such things to you in the questions below.

The remaining options are discovered via the help flag. They are:
    Options:
      -h, --help            show this help message and exit
      -s SEED, --seed=SEED  the random seed
      -D NUMDISKS, --numDisks=NUMDISKS
                            number of disks in RAID
      -C CHUNKSIZE, --chunkSize=CHUNKSIZE
                            chunk size of the RAID
      -n NUMREQUESTS, --numRequests=NUMREQUESTS
The -C flag allows you to set the chunk size of the RAID, instead of using the default size of one 4-KB block per chunk. The size of each request can be similarly adjusted with the -S flag. The default workload accesses random blocks; use -W sequential to explore the behavior of sequential accesses. With RAID-5, two different layout schemes are available, left-symmetric and left-asymmetric; use -5 LS or -5 LA to try those out with RAID-5 (-L 5).

Finally, in timing mode (-t), the simulator uses an incredibly simple disk model to estimate how long a set of requests takes, instead of just focusing on mappings. In this mode, a random request takes 10 milliseconds, whereas a sequential request takes 0.1 milliseconds. The disk is assumed to have a tiny number of blocks per track (100), and a similarly small number of tracks (100). You can thus use the simulator to estimate RAID performance under some different workloads.
Questions
1. Use the simulator to perform some basic RAID mapping tests. Run with different levels (0, 1, 4, 5) and see if you can figure out the mappings of a set of requests. For RAID-5, see if you can figure out the difference between left-symmetric and left-asymmetric layouts. Use some different random seeds to generate different problems than above.

2. Do the same as the first problem, but this time vary the chunk size with -C. How does chunk size change the mappings?

3. Do the same as above, but use the -r flag to reverse the nature of each problem.

4. Now use the reverse flag but increase the size of each request with the -S flag. Try specifying sizes of 8k, 12k, and 16k, while varying the RAID level. What happens to the underlying I/O pattern when the size of the request increases? Make sure to try this with the sequential workload too (-W sequential); for what request sizes are RAID-4 and RAID-5 much more I/O efficient?

5. Use the timing mode of the simulator (-t) to estimate the performance of 100 random reads to the RAID, while varying the RAID levels, using 4 disks.

6. Do the same as above, but increase the number of disks. How does the performance of each RAID level scale as the number of disks increases?

7. Do the same as above, but use all writes (-w 100) instead of reads. How does the performance of each RAID level scale now? Can you do a rough estimate of the time it will take to complete the workload of 100 random writes?

8. Run the timing mode one last time, but this time with a sequential workload (-W sequential). How does the performance vary with RAID level, and when doing reads versus writes? How about when varying the size of each request? What size should you write to a RAID when using RAID-4 or RAID-5?