Chapter 10 - External Storage - Part 2
Disadvantage:
Significant IS applications in the real world very often require lots of data to process; external disk files are the only practical way. Disk processing is really the mainstay of information processing, whether the files are legacy type, database files, etc.!
Traditional disks (CAV) have the same number of bits per track, hence different densities! (The velocity of the outer tracks is much greater than that of the inner tracks.)
Disk organization:
Concentric platters, access arm(s), rotational speeds, seek time, data transfer times, etc. Disk controllers are small computers executing disk commands (shift registers, etc.).
1. Seek: mechanical movement of the access arm; the slowest component. 2. Head Select: electronic speeds; negligible. 3. Rotational Delay: half a rotation on average; maybe 5-10 msec. 4. Data Transfer: pretty quick too.
Tracks divided into fixed-sized sectors. Sector size predetermined at factory.
Disk access times of 10 or more msec are common. Important to note: disk access times are being reduced all the time, but so are main memory access times, and at a faster rate. The book points out that the disparity continues to grow!
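The components above can be added up with a little arithmetic. A minimal sketch, using illustrative assumed figures (9 msec average seek, a 7200 RPM spindle, a 100 MB/s transfer rate; none of these numbers come from the book):

```java
public class DiskAccess {
    // average rotational delay is half a rotation, in milliseconds
    static double rotationalDelayMs(double rpm) {
        double msPerRotation = 60_000.0 / rpm;
        return msPerRotation / 2.0;
    }

    // total time to fetch one block: seek + rotational delay + transfer
    static double accessTimeMs(double seekMs, double rpm,
                               int blockBytes, double mbPerSec) {
        double transferMs = blockBytes / (mbPerSec * 1_000_000.0) * 1000.0;
        return seekMs + rotationalDelayMs(rpm) + transferMs;
    }

    public static void main(String[] args) {
        // 7200 RPM is 8.33 msec per rotation, so ~4.17 msec average delay
        System.out.printf("rotational delay: %.2f ms%n", rotationalDelayMs(7200));
        // seek dominates; moving a 4K block at 100 MB/s takes only ~0.04 msec
        System.out.printf("total access:     %.2f ms%n",
                          accessTimeMs(9.0, 7200, 4096, 100.0));
    }
}
```

Note how the mechanical parts (seek and rotation) dwarf the electronic transfer, which is why totals in the 10+ msec range are common.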
Your book does NOT say (for reasons I cannot fathom) that a block is a physical record (the terms are interchangeable). Another vitally important term, also missing from the book:
Blocking Factor = number of logical records per physical record; one physical record = one block
Blocks
Blocking Factor = number of logical records per physical record (block). Physically, a disk access retrieves a block. Blocksize is the number of bytes, say 4K, 8K, or 64K. A block contains a number of logical records; the blocking factor is the number of logical records per physical record, or equivalently, the number of logical records in a block. Sometimes we use sectors.
So we say a file is blocked at 100, or some such words.
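The arithmetic is simple enough to sketch. A minimal example, assuming fixed-length records that do not span blocks (the block and record sizes below are illustrative, not from the book):

```java
public class Blocking {
    // blocking factor = logical records per block
    // (integer division: a record does not span blocks in this model)
    static int blockingFactor(int blockSizeBytes, int recordSizeBytes) {
        return blockSizeBytes / recordSizeBytes;
    }

    // physical blocks needed to hold n logical records (round up)
    static long blocksNeeded(long records, int blockingFactor) {
        return (records + blockingFactor - 1) / blockingFactor;
    }

    public static void main(String[] args) {
        int bf = blockingFactor(8192, 80);   // 8K blocks, 80-byte records
        System.out.println("blocked at " + bf);           // blocked at 102
        System.out.println(blocksNeeded(1_000_000, bf));  // 9804 blocks
    }
}
```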
A logical record is what you process each time you do a Read, Write, Get, Put, br1.nextLine(), scan.nextLine(), etc. Every time you get a logical input, you read the next logical record into your process area.
Each Read after the first Read would NOT cause a physical read, but rather the movement of a pointer within a buffer in primary memory (RAM) that contains your block, advancing it to the next logical record. You get the illusion that a physical read took place. In the real world, we block to avoid the incredibly degraded performance that would result if each logical read in your program really corresponded to a physical read from disk (sometimes called unblocked, or Blocking Factor 1). This, however, is what we have done in our programs.
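The buffering trick above can be sketched in a few lines. This is a toy model, not any real file system's API: "disk" is just a list of blocks, and only block-boundary reads count as physical:

```java
import java.util.ArrayList;
import java.util.List;

public class BufferedFile {
    private final List<String[]> blocksOnDisk; // each block = array of logical records
    private String[] buffer;                   // current block, held in RAM
    private int next = 0;                      // pointer within the buffer
    private int blockIndex = 0;
    int physicalReads = 0;

    BufferedFile(List<String[]> blocks) { this.blocksOnDisk = blocks; }

    // returns the next logical record; does a physical read
    // only when the buffer is empty or exhausted
    String readLogical() {
        if (buffer == null || next == buffer.length) {
            buffer = blocksOnDisk.get(blockIndex++);   // the physical read
            physicalReads++;
            next = 0;
        }
        return buffer[next++];                         // just move the pointer
    }

    public static void main(String[] args) {
        List<String[]> disk = new ArrayList<>();
        disk.add(new String[]{"r1", "r2", "r3", "r4"}); // blocking factor 4
        disk.add(new String[]{"r5", "r6", "r7", "r8"});
        BufferedFile f = new BufferedFile(disk);
        for (int i = 0; i < 8; i++) f.readLogical();    // 8 logical reads...
        System.out.println(f.physicalReads + " physical reads"); // 2 physical reads
    }
}
```

Eight logical reads cost only two physical reads: that is the illusion the access method maintains for you.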
Sequential Files
So records (logical) are organized into blocks. Fine. Sequential files are normally ordered, but need not be. Sequential files are normally best used for batch processing (not mentioned in your book!), or for processing when the entire file, or much of it, is used/needed, like when running a payroll. In this modus operandi, transactions are often built up (batched up) during the day; then, in the evening, they are edited, sorted, and updated into the master file. Transaction File; Old Master File; New Master File. Great for generating reports, listings, etc.
Sequential Files
Looking for a specific record in a sequential file:
While one can search for a specific record in a sequential file, for a large file this is very painful. Individual record searches are NOT practical, but can be done. The entire file might need to be searched! The time to access and read in the block the record may be in is much more time consuming than the search of the block for the record once the block is in primary memory. If your blocking factor is 16 (it may be MUCH higher), you are getting 16 records per block, okay, but the disk accesses (the Input/Output) may kill you!
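The pain is easy to quantify. A rough worst-case estimate, using the slide's assumed figures (blocking factor 16, about 10 msec per disk access) plus an assumed file of one million records:

```java
public class SeqSearchCost {
    // worst case: every block of the file must be read from disk
    static long worstCaseIoMs(long records, int blockingFactor, int msPerAccess) {
        long blocks = (records + blockingFactor - 1) / blockingFactor; // round up
        return blocks * msPerAccess;
    }

    public static void main(String[] args) {
        // 1,000,000 records blocked at 16 = 62,500 blocks;
        // at 10 msec per access that is 625,000 msec, over 10 minutes of I/O
        System.out.println(worstCaseIoMs(1_000_000, 16, 10) + " ms");
    }
}
```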
B-Trees
We need a different kind of tree than 2-3-4 trees. 2-3-4 trees:
Fine for in-memory operations, but volume and persistence needs limit their applicability to large files.
For large files we need more data items (records) per node, so when we retrieve a node from disk, we retrieve into RAM (and store to disk) more records per block. The idea is to have few disk retrievals (much time) and then do sequential searches of the retrieved blocks (nodes) in RAM for the desired record (very quick). Know that, once a block is in RAM, we can search for records within it.
Equivalently, we want to minimize the File I/Os!!!
B-Trees Searching
The number of levels in a B-Tree is relatively small, and the number of records in a node is relatively large.
This implies searching is fast once we have retrieved the node (block).
A block (node) is randomly accessed, read into memory, and searched sequentially for the right record. If the record is found, we are done. If the search for the desired record gets a high result, then the record is not in this node, and the access method retrieves the next block/node using the child pointer just prior to the record in the block that gave us the high. Eventually the record is found, or a leaf is reached without finding the desired record. The search is top-down. B-Trees have data at many levels.
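The descent just described can be sketched as code. The node layout here is an assumption for illustration: children[i] holds keys smaller than keys[i], and the last child holds keys larger than all of them:

```java
public class BTreeSearch {
    static class Node {
        int[] keys;       // sorted keys in this block
        Node[] children;  // null in a leaf; length = keys.length + 1 otherwise
        Node(int[] k, Node[] c) { keys = k; children = c; }
    }

    static boolean search(Node node, int key) {
        while (node != null) {
            int i = 0;
            // sequential search within the block; cheap once it is in RAM
            while (i < node.keys.length && node.keys[i] < key) i++;
            if (i < node.keys.length && node.keys[i] == key) return true; // found
            if (node.children == null) return false; // leaf reached without finding it
            node = node.children[i]; // "high" result: follow the child just before it
        }
        return false;
    }

    public static void main(String[] args) {
        Node left = new Node(new int[]{20, 40}, null);
        Node right = new Node(new int[]{70, 80}, null);
        Node root = new Node(new int[]{60}, new Node[]{left, right});
        System.out.println(search(root, 70)); // true
        System.out.println(search(root, 50)); // false
    }
}
```

Each iteration of the outer loop stands for one disk access; the inner loop is the in-RAM sequential search, which is negligible by comparison.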
B-Trees Insertions - 1
Want nodes reasonably full
This increases the likelihood of finding the correct record once we retrieve the node (block).
To Insert:
A node split divides the data items equally:
half go to a newly created node and half remain in the old node.
Node splits are performed bottom up, unlike 2-3-4 trees, where splits go top down. The middle item in the sequence (including the new item) is promoted upward.
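The split rule can be shown directly. A minimal sketch of splitting one full leaf (arrays of ints stand in for records; the node capacity is just whatever the array holds):

```java
import java.util.Arrays;

public class NodeSplit {
    // insert newKey into a full, sorted node and split:
    // returns {left half, {promoted middle}, right half}
    static int[][] splitWith(int[] fullNode, int newKey) {
        int[] all = Arrays.copyOf(fullNode, fullNode.length + 1);
        all[fullNode.length] = newKey;
        Arrays.sort(all);                 // place the new key in the sequence
        int mid = all.length / 2;
        return new int[][] {
            Arrays.copyOfRange(all, 0, mid),             // remains in the old node
            { all[mid] },                                // promoted upward
            Arrays.copyOfRange(all, mid + 1, all.length) // goes to the new node
        };
    }

    public static void main(String[] args) {
        int[][] r = splitWith(new int[]{20, 40, 60, 80}, 70);
        System.out.println(Arrays.deepToString(r)); // [[20, 40], [60], [70, 80]]
    }
}
```

This reproduces the example on the next slides: inserting 70 into the full leaf 20 40 60 80 promotes 60 and leaves two half-full nodes.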
B-Trees Insertions - 2
The book goes into a good example showing how a new record is inserted. Do go through this; you may see it again. The approach is pretty simple, though. Starting with a leaf node, we fill it up:
20 40 60 80

Inserting 70 overfills this leaf, so it splits: 60 is promoted into a new root, 20 and 40 remain in the old leaf, and 70 and 80 go to the new leaf:

      [60]
     /    \
[20 40]  [70 80]
Pretty straightforward. Adding 10 and 30 causes no problems; they are added to the lower left leaf node. Note that the records are maintained in ascending (sorted) order:
10 20 30 40
Inserting 15 into the now-full leaf 10 20 30 40 forces another split: 20 is promoted into the root, giving:

      [20  60]
     /    |    \
[10 15] [30 40] [70 80]
The process continues. Ultimately, the root node above will also become full and, using the exact same algorithm, the tree grows by one level upward! Clearly, the larger the node, the fewer the levels needed and the more records within a node. This is both good and bad: larger blocks must be retrieved from disk, and more RAM (buffer space) is needed to hold the larger node, but we can search more items per node. Note too: no node except the root will ever be less than half full.
B-Tree Efficiency
Very fast organization and access for retrievals. Because there are so many records per node, the number of levels is relatively small.
If we have a B-Tree of order 9, the height of the tree is roughly log base 9 of n; that is, 9 raised to what power equals n? That number is the maximum number of levels. Example: let n = 500,000 records. 9^6 = 531,441, so the tree will have at most six levels. This means that, at most, only six accesses are necessary to find the correct node in a file of 500,000 records.
Book: at 10 msec per access, 60 msec to get the right block (node); the internal comparisons and searches are quite negligible compared with the disk accesses. You cannot even compare this to the figures for retrieving a record in a sequential file.
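The level count above can be checked mechanically. A small sketch that finds the smallest h with order^h >= n, and prices it at the slide's assumed 10 msec per access:

```java
public class BTreeHeight {
    // smallest h such that order^h >= n: the max number of node accesses
    static int levelsNeeded(long n, int order) {
        int h = 0;
        long reach = 1;                 // records reachable in h levels
        while (reach < n) { reach *= order; h++; }
        return h;
    }

    public static void main(String[] args) {
        int levels = levelsNeeded(500_000, 9);
        System.out.println(levels + " levels");        // 6 levels (9^6 = 531,441)
        System.out.println(levels * 10 + " ms worst case"); // 60 ms worst case
    }
}
```

With a realistic order of 100 or more instead of 9, the same file would need only three levels, which is why real B-Trees use large nodes.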
B-Tree Efficiency 2
Remember: there is no free lunch. The book hedges on the size of the node, and it is NOT free! The biggest advantage of B-Trees is for adding and deleting records. This is very significant!
Of course we need to search to find the correct node into which we insert or from which we delete.
Remember too: the vast majority of the time, in a B-Tree, a node will not have to be split!
Splitting a node requires time and disk accesses.
Supplementary Information
A B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic amortized time. The B-tree is a generalization of a binary search tree in that a node can have more than two children.
Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and filesystems. In B-trees, internal (non-leaf) nodes can have a variable number of child nodes within some pre-defined range.
When data is inserted into or removed from a node, the number of child nodes changes. In order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees, such as red-black trees, but they may waste some space, since nodes are not entirely full.
Supplementary Information
A B-tree is kept balanced by requiring that all leaf nodes be at the same depth. This depth will increase slowly as elements are added to the tree, but an increase in the overall depth is infrequent and results in all leaf nodes being one node farther from the root. B-trees have substantial advantages over alternative implementations when node access times far exceed access times within nodes. This usually occurs when nodes are in secondary storage such as disk drives.
By maximizing the number of child nodes within each internal node, the height of the tree decreases and the number of expensive node accesses is reduced. In addition, rebalancing the tree occurs less often.
The maximum number of child nodes depends on the information that must be stored for each child node and the size of a full disk block or an analogous size in secondary storage.
In the narrow sense, a B-tree stores keys in its internal nodes but need not store those keys in the records at the leaves.
The general class includes variations such as the B+-tree and the B*-tree.
In the B+-tree, copies of the keys are stored in the internal nodes, while the keys and records are stored in the leaves; in addition, a leaf node may include a pointer to the next leaf node to speed sequential access. This is IBM's VSAM KSDS.
The B*-tree balances more neighboring internal nodes to keep the internal nodes more densely packed. For example, a non-root node of a B-tree need be only half full, but a non-root node of a B*-tree must be at least two-thirds full.
The primary value of a B+ tree is in storing data for efficient retrieval in a block-oriented storage context, in particular, filesystems. This is primarily because, unlike binary search trees, B+ trees have very high fanout (typically on the order of 100 or more), which reduces the number of I/O operations required to find an element in the tree.
Self-balancing binary search trees: Red-black tree, AVL tree, AA tree, Splay tree, Scapegoat tree, Treap.
B-trees: B+ tree, B*-tree, UB-tree, 2-3 tree, 2-3-4 tree, (a,b)-tree, Dancing tree, Htree, Bx-tree.
Tries: Suffix tree, Radix tree, Ternary search tree.
Binary space partitioning (BSP) trees: Quadtree, Octree, kd-tree (implicit), VP-tree.
Non-binary trees: Exponential tree, Fusion tree, Interval tree, PQ tree, Range tree, SPQR tree.
Spatial data partitioning trees: R-tree, R+ tree, R* tree, X-tree, M-tree, Segment tree, Fenwick tree, Hilbert R-tree.
Other trees: Heap, Hash tree, Finger tree, Metric tree, Cover tree, BK-tree, Doubly-chained tree.
Databases
A Database is a collection of data organized in a fashion that facilitates updating, retrieving, and managing the data.
A database can consist of anything, including, but not limited to, names, addresses, pictures, and numbers. Databases are commonplace and are used every day.
Databases are often maintained electronically using a database management system. Database management systems are essential components of many everyday business operations.
It is very common for a database to contain millions of records requiring many gigabytes of storage. And, because databases typically cannot be maintained entirely in memory, B-trees are often used to index the data and provide fast access.
Searching an unindexed and unsorted database containing n key values will have a worst-case running time of O(n); if the same data is indexed with a B-tree, the same search operation will run in O(log n). To perform a search for a single key among one million keys (1,000,000), a linear search will require at most 1,000,000 comparisons; with the data indexed with a B-tree, on the order of log n (roughly 20) comparisons suffice.
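The two figures above are easy to reproduce. A small sketch comparing the linear worst case with the logarithmic one (using base 2 for the log, the most conservative case):

```java
public class SearchCost {
    // smallest c such that 2^c >= n: comparisons for a log2-style search
    static int logComparisons(long n) {
        int c = 0;
        while ((1L << c) < n) c++;
        return c;
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        System.out.println("linear worst case: " + n);                 // 1000000
        System.out.println("logarithmic case:  " + logComparisons(n)); // 20
    }
}
```

A real B-tree with high fanout does even better per disk access, since each node access resolves many keys at once.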
Indexed File
The index file consists of small records containing keys and pointers.
Blocks of records (control intervals) are indexed by the keys; a pointer points to the control interval (block) where the record with the desired key value may be located.
Theoretically, a random access requires at least two accesses: one for the index and one for the node of records!
In truth, the index file is usually brought into memory during these operations and maintained in a cache until we are done doing what we are doing.
Here is what the VSAM organization looks like for Key Sequenced Data Sets (KSDSs):
[Figure: VSAM KSDS organization. An index component, whose lowest level is the sequence set, points into the data component; the data component consists of control areas, each made up of control intervals.]
[Figure: index-set records hold key values plus pointers to sequence-set blocks (S1, S2, S3); each sequence-set entry holds the high key of a control interval plus a pointer to it (e.g., S2 holds keys 36 and 62, pointing to control intervals D3 and D4), with FREE space remaining in each node. Note the pointers that support sequential operations.]
Random Access of Records: records can be accessed randomly (via the indices and record key) and sequentially via the nodes and pointers. Sequential Access: only the leaf nodes contain records, but each in turn points to the next node containing the next group of sequential records, in order to support sequential processing. Random processing does not need those pointers to the next node; the search proceeds as previously described using index sets, sequence sets, and pointers to control intervals.
Directories
OK. Nice, but where's the catch? Well, there are one or two. In an Indexed Sequential File, there is NO one-to-one correspondence between indices and actual records; there IS a correspondence between indices and nodes. Sometimes we need a directory: if there is a one-to-one correspondence between an index record and a data record, we call the index file a directory.
Directories are VERY fast, but incur a lot of overhead to maintain this one-to-one correspondence between indices and data records.
In practice (not in the book), most index files are not organized as directories; instead, each index record contains the largest key of a control interval (node). Then, in looking for a record, we search for the first index record whose key is >= the search key, which points us to the node where the desired record may or may not be.
This reduces maintenance on the index file; maintaining a directory would slow down insertions/deletions dramatically.
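The largest-key scheme just described can be sketched in a few lines. The keys and node layout here are made up for illustration:

```java
public class LargestKeyIndex {
    // indexKeys[i] is the largest key stored in node i, sorted ascending;
    // return the node that may contain searchKey, or -1 if the key is
    // larger than every key in the file
    static int candidateNode(int[] indexKeys, int searchKey) {
        for (int i = 0; i < indexKeys.length; i++)
            if (indexKeys[i] >= searchKey) return i; // first entry >= search key
        return -1;
    }

    public static void main(String[] args) {
        int[] index = {9, 62, 95};       // largest key of each of three nodes
        System.out.println(candidateNode(index, 36)); // 1: may be in node 1
        System.out.println(candidateNode(index, 70)); // 2
        System.out.println(candidateNode(index, 99)); // -1: not in the file
    }
}
```

Note that the index only narrows the search to one node; the node itself must still be read and searched, and the record may turn out not to be there.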