AOS all units

The document discusses load distribution algorithms in heterogeneous and homogeneous distributed systems, focusing on the transfer of tasks from overloaded to underloaded nodes. It outlines various types of load distribution methods, including static, dynamic, and adaptive algorithms, as well as the components involved in these processes. Additionally, it touches on the concepts of task migration, distributed shared memory (DSM), and the differences between message passing and shared memory communication.

Q6 : What are the various types of load distributing algorithms ?
Ans. : • The basic function of a load distributing algorithm is to transfer load (tasks) from heavily loaded computers to idle or lightly loaded computers. Such algorithms can be characterized as static, dynamic or adaptive.
• Static load distribution algorithms : decisions are hard-coded into the algorithm using a priori knowledge of the system.
• Dynamic load distribution algorithms : use system state information such as task queue length and processor utilization.
• Adaptive load distribution algorithms : adapt their approach based on the system state.
• Dynamic distribution algorithms collect load information from nodes even at very high system load. Load information collection itself adds load to the system, since messages need to be exchanged.

Why load distribution helps :
• Systems may be heterogeneous in terms of CPU speed and resources, and a distributed system may also be heterogeneous in terms of the loads on its nodes. Even in a homogeneous distributed system, one node may be idle while a task is waiting for service at another node.
• Consider a system of N identical, independent M/M/1 servers. Let P be the probability that the system is in a state in which at least one task is waiting for service and at least one server is idle, and let p be the utilization of each server. P can be estimated using probabilistic analysis and plotted against system utilization. For moderate system utilization the value of P is high, i.e. at least one node is idle while tasks wait elsewhere, so performance can be improved by sharing tasks among nodes. (A small simulation sketch of this estimate appears below, after the notes on task transfers.)

Load balancing versus load sharing :
• Load-sharing is much simpler than load-balancing, since it only attempts to ensure that no node is idle while a heavily loaded node exists; it is necessary and sufficient to prevent nodes from being idle while some other nodes have more than two processes.
• The priority assignment policy and the migration limiting policy are the same as those used for load-balancing.

Task transfers :
• Non-preemptive transfers move only tasks that have not yet begun execution. Preemptive transfers move partially executed tasks together with their state and are therefore more expensive.
• The simplest approach is to select a newly originated task, i.e. the task whose arrival just made the node overloaded; transferring such a task is relatively cheap because the transfer is effectively non-preemptive.
• Task transfer should incur low overhead compared with the expected improvement in response time, so the selected task should be long lived enough to make the transfer worthwhile.
• Anticipatory task transfers : transfer tasks from overloaded nodes to nodes that are likely to become idle or lightly loaded soon.
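The following is a minimal Monte Carlo sketch (not from the text) of the probability estimate mentioned above: N independent M/M/1 servers, each with utilization rho, where "waiting" is taken to mean a queue length of at least two (one task in service, one queued) — an interpretation assumed here for illustration.

```python
# Minimal sketch: estimate P = Pr{at least one task waits while at least one
# server is idle} for N independent M/M/1 servers, each with utilization rho.
# The stationary queue-length distribution of an M/M/1 queue is geometric:
# Pr{n = k} = (1 - rho) * rho**k.
import random

def sample_queue_length(rho: float) -> int:
    """Draw a queue length from the M/M/1 stationary (geometric) distribution."""
    n = 0
    while random.random() < rho:
        n += 1
    return n

def estimate_p(n_servers: int, rho: float, trials: int = 20_000) -> float:
    hits = 0
    for _ in range(trials):
        lengths = [sample_queue_length(rho) for _ in range(n_servers)]
        some_idle = any(n == 0 for n in lengths)       # at least one idle server
        some_waiting = any(n >= 2 for n in lengths)    # at least one task queued
        if some_idle and some_waiting:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for rho in (0.2, 0.5, 0.8):
        print(f"rho={rho:.1f}  P~{estimate_p(20, rho):.3f}")
```

For moderate utilizations the estimate comes out close to 1, which is the observation the text uses to motivate task sharing.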
5.2 : Components of a Load Distributing Algorithm

Q40 : Explain the components of a load distributing algorithm.
Ans. : A load distributing algorithm has four components : a transfer policy, a selection policy, a location policy and an information policy.
1. Transfer policy : Determines when a node needs to send tasks to other nodes or can receive tasks from other nodes, i.e. the condition under which a task should be transferred. A typical transfer policy is threshold based, with the threshold expressed in terms of the number of tasks : when the load on a node exceeds a threshold T the node becomes a sender; when it falls below a threshold it becomes a receiver. When two different thresholds are used for these two decisions, the scheme is called a double-threshold policy. A simple policy counts the total number of processes on a node, but in modern systems, where several processes may exist permanently even on an idle node, algorithms measure CPU utilization to estimate the load of a node.
2. Selection policy : Once the transfer policy decides that a host is a sender, the selection policy selects a task for transfer. The simplest and most popular approach is to select a newly originated task, i.e. the task whose arrival just turned the host into a sender. Transferring such a task is relatively cheap, since the transfer is effectively non-preemptive. The selected task should be long lived so that it is worthwhile to incur the transfer overhead, and the transfer overhead should be small compared with the expected reduction in response time. Other factors include the location of the resources the task needs and its communication requirements.
3. Location policy : Selects the node to which the task will be transferred. A simple location policy selects a host at random and polls it to determine whether transferring a task would keep its queue length below the threshold level; if so, the polled host receives the task, otherwise another node is polled, up to a fixed poll limit. A node could also be chosen through a central coordinator, but that creates a single point of failure; an alternative is broadcasting a query.
4. Information policy : Decides when information about the states of other nodes should be collected, where it should be collected from, and what information to collect. There are a number of approaches :
   (i) Demand driven : a node collects the state of other nodes only when it wishes to become involved in either sending or receiving tasks.
   (ii) Periodic : nodes exchange state information at fixed intervals.
   (iii) State-change driven : nodes disseminate information whenever their own state changes.

Load distributing algorithms can also be classified as centralized versus distributed, optimal versus sub-optimal, and local versus global, and by which side initiates the transfer :
• Sender-initiated : the load distributing activity is initiated by an overloaded node (sender) that attempts to send a task to an underloaded node (receiver). A host is identified as a sender if its queue length exceeds the predefined threshold value T. The location policy polls randomly selected nodes up to a poll limit; if all polls fail to find a receiver, the task is processed locally.
• Receiver-initiated : the load distributing activity is initiated by an underloaded node (receiver) that tries to obtain a task from an overloaded node. A problem with this location policy is that if all polls fail to find a sender, the processing power available at the receiver is lost; the host then waits until another task departs, or for a predetermined period, before retrying. This severely affects performance in systems where only a few hosts generate most of the load.
• Symmetrically initiated : initiated by both senders and receivers; a symmetrically initiated policy is a combination of the sender-initiated and receiver-initiated components.
• Adaptive : sensitive to the state of the system; the policy adjusts its behaviour as the load changes.

A minimal sketch of a sender-initiated transfer step follows.
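The sketch below illustrates the sender-initiated policy described above: the transfer policy compares queue length against a threshold T, and the location policy polls up to a poll limit of randomly chosen nodes. The class and function names (Node, sender_initiated_transfer, would_accept) are illustrative, not from the text.

```python
# Sketch of one sender-initiated load-sharing step (illustrative names).
# Transfer policy: a node whose queue length exceeds threshold T is a sender.
# Location policy: poll up to POLL_LIMIT random nodes; transfer the newly
# arrived task to the first polled node that would stay below T after accepting.
import random

T = 3            # queue-length threshold
POLL_LIMIT = 5   # maximum number of polls per transfer attempt

class Node:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.queue = []          # pending tasks, newest appended last

    def is_sender(self) -> bool:
        return len(self.queue) > T

    def would_accept(self) -> bool:
        # Accept only if taking one more task keeps the queue below T.
        return len(self.queue) + 1 < T

def sender_initiated_transfer(sender: Node, all_nodes: list) -> bool:
    """Try to move the newly arrived task off an overloaded node."""
    if not sender.is_sender():
        return False
    candidates = [n for n in all_nodes if n is not sender]
    for polled in random.sample(candidates, min(POLL_LIMIT, len(candidates))):
        if polled.would_accept():                     # poll succeeded
            polled.queue.append(sender.queue.pop())   # non-preemptive transfer
            return True
    return False  # all polls failed; process the task locally
```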
Stable symmetrically initiated algorithm :
• Each node classifies every other node in the system as Sender/overloaded, Receiver/underloaded, or OK. The knowledge concerning the states of other nodes is maintained by a data structure at each node : a senders list, a receivers list and an OK list. Initially, each node assumes that every other node is a receiver. (A sketch of this list bookkeeping appears after the task-migration notes below.)
• Sender-initiated component : the sender polls the node at the head of its receivers list. The polled node removes the sender's ID from whichever list it is presently in and puts it in its senders list, then returns its own status (receiver, sender or OK) to the sender. The sender transfers a task to the polled node if it is a receiver, and otherwise puts the polled node in the appropriate list based on its reply. This process may continue, working through the receivers list, until a receiver is found or the poll limit is reached. The sender-initiated component considers only newly arrived tasks for transfer.
• Receiver-initiated component : it starts only after a host becomes a receiver. The receiver polls the nodes in its senders list; if the polled node is a sender it transfers a task, and if it is not a sender both nodes update their lists accordingly. Because polling is initiated by the receiver, it is not limited to finding senders with newly arrived tasks.
• At high loads, a receiver finds a sender with high probability using only a small number of polls. At low loads most polls fail, but this is not a problem since spare CPU cycles are available to absorb the polling cost. The algorithm is stable because the lists capture the history of global resource usage and guide the location decisions, instead of the nodes polling indiscriminately.

Benefits of task migration :
• Load balancing : improves overall performance by moving tasks from overloaded to underloaded nodes.
• A task can be moved closer to the resources it needs for its execution.
• Fault tolerance : long-running processes can be moved away from a host that is failing or being shut down.
• Copy-on-reference : instead of precopying the bulk of the task's memory, pages are copied only when the migrated task actually references them, reducing the cost of migration.
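As promised above, here is a sketch of the list bookkeeping used by the stable symmetrically initiated algorithm: each node keeps a senders list, a receivers list and an OK list, initially assuming every other node is a receiver, and moves node IDs between the lists as poll replies arrive. The class and method names are illustrative only.

```python
# Sketch of the per-node classification lists used by the stable
# symmetrically initiated algorithm (illustrative names).
class NodeState:
    SENDER, RECEIVER, OK = "sender", "receiver", "ok"

class ClassificationLists:
    def __init__(self, self_id: int, all_ids: list):
        # Initially every other node is assumed to be a receiver.
        self.senders = []
        self.ok = []
        self.receivers = [i for i in all_ids if i != self_id]

    def _remove_everywhere(self, node_id: int) -> None:
        for lst in (self.senders, self.receivers, self.ok):
            if node_id in lst:
                lst.remove(node_id)

    def record_reply(self, node_id: int, state: str) -> None:
        """Move node_id to the list matching the state it reported."""
        self._remove_everywhere(node_id)
        {NodeState.SENDER: self.senders,
         NodeState.RECEIVER: self.receivers,
         NodeState.OK: self.ok}[state].append(node_id)

    def next_receiver_to_poll(self):
        # A sender polls the head of its receivers list first.
        return self.receivers[0] if self.receivers else None
```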
Q20 : List the steps involved in migrating a task.
Ans. : The steps are as follows :
1. Suspending (freezing) the task on the source node.
2. Obtaining and transferring the task's state to the destination node.
3. Reconstructing the state on the destination node.
4. Resuming the task on the destination node.

Q21 : Discuss the various issues of task migration.
Ans. : There are three main issues in task migration :
1. State transfer : the cost to support remote execution, which includes freezing the task (for as little time as possible) and obtaining and transferring its state, such as the virtual memory image and other kernel-held state.
2. Location transparency : a migrated task should continue to execute correctly and remain reachable regardless of the node on which it currently resides.
3. Structure of the migration mechanism : the migration mechanism and the other kernel mechanisms can be designed to be independent of one another, so that if one mechanism's protocol changes the others are unaffected, and one mechanism can be turned off without interfering with the others.

5.5 : Distributed Shared Memory

Q26 : Explain distributed shared memory (DSM).
Ans. : • Distributed shared memory is a mechanism allowing end-user processes to access shared data without using interprocess communication explicitly. In other words, the goal of a DSM system is to make interprocess communication transparent to end users.
• Each node consists of one or more CPUs and a memory unit, and the nodes are connected by a high-speed communication network. A simple message-passing system lets processes on different nodes exchange messages; in a DSM system, a mapping manager instead maps the shared virtual address space onto the physical memories of the nodes, and data moves between the main memories of different nodes on demand.

Q23 : What are the main goals of DSM ?
Ans. : a. To overcome the architectural limitations (such as memory size) of a single machine.
b. To support a better programming paradigm than explicit message passing.

Q : List the advantages of DSM.
Ans. : Advantages :
• Shields the programmer from Send/Receive primitives.
• Provides a single address space, so passing complex data structures and passing data by reference become simpler.
• Exploits locality of reference when a block of memory is moved.
• Provides a large virtual memory space and can be built from cheaper off-the-shelf hardware than a tightly coupled multiprocessor.
• No single memory-access bottleneck such as a shared bus.
• DSM programs are portable, as they use a common DSM programming interface.

Q25 : List the disadvantages of DSM.
Ans. : Disadvantages :
• Programmers need to understand consistency models to write correct programs.
• DSM implementations use asynchronous message passing underneath and so cannot be more efficient than carefully written message-passing programs.
• By yielding control to the DSM manager software, programmers cannot use their own message-passing solutions and optimizations.

DSM architectures :
• DSM architecture types are on-chip memory, bus-based multiprocessors, ring-based multiprocessors and switched multiprocessors.
• On-chip memory : several processors and the shared memory are on the same chip. A single address space is divided into a private part and a shared part; the private part is divided into regions so that each machine has a piece for its own stacks and other unshared data.

Q28 : Explain the difference between message passing and DSM.
Ans. : • Shared memory allows multiple processes to read and write data at the same location. Message passing is another way for processes to communicate : each process can send messages to other processes.
• There is no delay with shared memory : if one process writes, the other processes can read the new value immediately. With message passing, delay can occur and different messages may even have different delays; messages may be slow, or the message buffer of a process may overflow, leading to lost messages.
• With message passing, several messages may be sent at the same time and the order in which they are received may differ from the order in which they were sent. With shared memory, the value of a location is always the value that was written last.

Q30 : Explain different implementation approaches to DSM.
Ans. : DSM uses the following approaches for implementation :
1. Hardware : e.g. a shared-memory multiprocessor implements the shared address space directly.
2. Paged virtual memory : DSM is a region of virtual memory that occupies the same address range in the address space of each process. This approach requires homogeneous computers (the same data types and paging formats).
3. Middleware : the shared memory abstraction is implemented at each node by a middleware layer; heterogeneous systems can be handled, and the programming primitives must explicitly request data from the shared memory.

5.6 : Algorithms for Implementing DSM

Q31 : Define granularity.
Ans. : Granularity refers to the size of the shared memory unit, i.e. the block of data that is fetched from its current location and placed on the machine making the reference. It is an important design parameter of a DSM system.

• DSM can be integrated with virtual memory at each node by making the DSM page size a multiple of the VM page size, so that shared pages can be mapped into each process's address space. If a referenced page is not local, the page-fault handler migrates the page to the faulting node (see the sketch after this answer).
• For a write operation, the holder updates the data and sends an acknowledgement to the client. A timeout is used to resend the request if the acknowledgement is lost, and a sequence number is used to detect duplicate requests.
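A minimal sketch of the page-fault handling described above, assuming a shared directory that records which node currently holds each page. The class names and the directory structure are illustrative assumptions, not a real DSM implementation.

```python
# Sketch of a DSM page-fault handler for the "DSM integrated with virtual
# memory" idea: a reference to a page that is not resident locally triggers a
# fault, and the handler migrates the page from the node that currently holds
# it, then updates the (shared, hypothetical) ownership directory.
class DSMNode:
    def __init__(self, node_id: int, directory: dict):
        self.node_id = node_id
        self.local_pages = {}        # page_number -> bytes held locally
        self.directory = directory   # page_number -> owning DSMNode (shared map)

    def read(self, page_number: int, offset: int) -> int:
        if page_number not in self.local_pages:
            self._page_fault(page_number)            # page not local
        return self.local_pages[page_number][offset]

    def _page_fault(self, page_number: int) -> None:
        owner = self.directory[page_number]
        # In a real system this would be a network request; here we copy directly.
        self.local_pages[page_number] = owner.local_pages.pop(page_number)
        self.directory[page_number] = self           # ownership migrates too
```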
Q34 : Explain the central server algorithm.
Ans. : • A central server maintains the shared memory, and all memory access requests go through this server; to locate a remote data object, clients consult the server (Fig. Q34.1 shows the central-server algorithm).
• For a read, the server returns the requested data. For a write, the server updates the data and sends an acknowledgement to the client. A timeout is used to resend a request whose acknowledgement is lost, and sequence numbers are used to detect duplicate requests.
• The central server is a performance bottleneck and a single point of failure. A partial solution is to partition the shared data between several servers and use a mapping function to locate the appropriate server for each item.

Q35 : Write a short note on the migration algorithm.
Ans. : • In the migration algorithm, the data being accessed is migrated to the accessing node (Fig. Q35.1 shows the migration algorithm), and only one node can access a shared data item at a time.
• The algorithm takes advantage of locality of reference : once a block has been migrated, subsequent accesses by the same node are local. It integrates well with the virtual memory system of the host, since the unit of migration can be a page and ordinary page mapping directs accesses to the physical location. The DSM keeps track of the current location of each data block.
• The algorithm is susceptible to thrashing, where pages frequently migrate between nodes while servicing only a few requests at each; to minimize thrashing, a minimum residence time can be imposed before a block may migrate again.

Q : Write a short note on the read-replication algorithm.
Ans. : • The read-replication algorithm replicates data objects at multiple nodes : many nodes may hold read-only copies, or a single node may hold a read-write copy.
• A read is satisfied from a local copy; a write operation first invalidates all other copies.
• This improves system performance by allowing multiple nodes to read the same data concurrently.

Q : What is granularity and how should the unit of sharing be chosen ?
Ans. : • Granularity is the size of the unit of sharing : a word, a block or a page. By using the virtual memory page size (or a multiple of it) as the unit of sharing, the DSM can use the existing memory-management hardware to map accesses to physical locations.
• If the granularity is too small, many small messages are needed, which increases communication cost, and the whole block containing a data item migrates instead of only the individual item requested.
• If the granularity is too large, a whole page (or more) is sent for an update to a single byte, reducing efficiency. False sharing of a page occurs when two different data items that are not actually shared are placed in the same page, causing the page to move back and forth between nodes.
• Advantages of using a large page size : 1. It exploits locality of reference. 2. There is less overhead in page transport per byte moved.

Q : Write a short note on the full-replication algorithm.
Ans. : • The full-replication algorithm replicates data blocks at multiple nodes and allows multiple nodes to have both read and write access to shared data.
• To maintain consistency, writes are ordered globally : a node wishing to write sends its modification to a sequencer, which assigns an increasing sequence number and multicasts the modification with that number to all nodes holding copies. Each node applies modifications in sequence-number order; timestamps or sequence numbers handle the access ordering.
• A gap in the sequence numbers indicates a missing write request, and the node asks for retransmission of the missing write request. (A sketch of this gap-detection bookkeeping follows.)
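Below is a small sketch of the sequence-number bookkeeping used with the full-replication algorithm: a replica applies writes in sequence order, detects gaps, requests retransmission of missing writes, and ignores duplicates. The retransmission callback is a hypothetical stand-in for the real messaging layer.

```python
# Sketch of gap detection for the full-replication algorithm: every write is
# stamped with an increasing sequence number by a (hypothetical) sequencer;
# each replica applies writes in order and asks for retransmission on a gap.
class Replica:
    def __init__(self):
        self.data = {}
        self.next_seq = 1          # next sequence number we expect
        self.pending = {}          # out-of-order writes, keyed by seq number

    def on_write(self, seq: int, key, value, request_retransmit) -> None:
        if seq > self.next_seq:
            # Gap detected: buffer this write and ask for the missing ones.
            self.pending[seq] = (key, value)
            request_retransmit(range(self.next_seq, seq))
            return
        if seq < self.next_seq:
            return                 # duplicate request (already applied); ignore
        self.data[key] = value
        self.next_seq += 1
        # Drain any buffered writes that are now in order.
        while self.next_seq in self.pending:
            k, v = self.pending.pop(self.next_seq)
            self.data[k] = v
            self.next_seq += 1
```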
5.7 : Coherence Protocols and Consistency Models

Q : Define sequential consistency.
Ans. : • A system is sequentially consistent if the result of any execution of the operations of all the processors is the same as if the operations were executed in some single sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.
• Any read of a memory location x should return (in the actual execution) the value stored by the most recent write operation to x in this sequential order.
• In this model, writes must occur in the same order on all copies; reads, however, can be interleaved on each system as convenient.
• A DSM system is said to be sequentially consistent if, for any execution, there is some interleaving of the operations issued by all the processes that satisfies the two criteria above. The sequential consistency model is the one most commonly used in practice, because it presents the programmer with a memory that behaves as expected.

Weak consistency :
• Accesses to synchronization variables are sequentially consistent, and no access to a synchronization variable may be performed until all previous writes have been performed.
• Shared data can only be counted on to be consistent after synchronization is done; a synchronization access "flushes the pipeline", forcing outstanding writes to complete.
• By doing synchronization before reading shared data, a process can be sure of getting the most recent values. Weak consistency requires the programmer to use locks to ensure that reads and writes are done in the proper order for data that needs it.

Q45 : Explain release consistency in distributed shared memory.
Ans. : • Reads and writes are allowed to bypass both reads and writes; any order that satisfies control flow and data flow is allowed. The model assumes explicit synchronization operations, acquire and release.
• For correct operation, the flag example must become :
  P1 : data = 1 ; release ; flag = 1 ;
  P2 : acquire ; while (!flag) ; print(data) ;
• Constraints : all previous writes (and previous reads) must complete before a release can complete, and the programmer must place acquire and release operations at appropriate points.
• Release consistency is like weak consistency, but there are two operations, "acquire" (lock) and "release" (unlock), for synchronization. When an acquire is done, the local copies of the protected data are brought up to date, i.e. made consistent with the remote ones if need be. When a release is done, protected data that have been changed are propagated out to the other copies of the data store.
• In release consistency, accesses inside the critical section do not wait for, or delay, LOAD/STORE operations outside the critical section. (A small sketch of the acquire/release idea appears at the end of this block.)

Q47 : During the discussion of memory consistency models, we often referred to the contract between software and memory. Why is such a contract needed ?
Ans. : In distributed systems, a consistency model is a contract between the system and the developer who uses it. A system is said to support a certain consistency model if all operations on memory respect the rules defined by the model. From a practical point of view, the contract tells the programmer exactly which guarantees memory provides, while giving the implementation some leeway in how it provides them.

Write-invalidate protocol :
• Before a write proceeds, invalidations are sent to all nodes that hold copies of the data; the writer then updates its own copy.
• Advantage : good performance when a node performs many updates between reads by other nodes.
• Disadvantage : invalidations must be sent to all nodes that have copies, which is inefficient if many nodes repeatedly access the same object.
• Examples : most DSM systems, including IVY, Clouds, Dash, Memnet, Mermaid and Mirage.
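The sketch below illustrates the acquire/release idea from the release-consistency example, with a threading.Lock standing in for the DSM acquire and release operations; it shows the visibility guarantee, not an actual DSM protocol.

```python
# Sketch of the release-consistency flag example: the writer's updates to the
# protected data become visible once the reader has performed an acquire.
import threading

lock = threading.Lock()
shared = {"data": 0, "flag": 0}

def writer():
    with lock:                 # acquire ... release around the protected writes
        shared["data"] = 1
        shared["flag"] = 1     # propagated when the release completes

def reader():
    while True:
        with lock:             # acquire brings protected data up to date
            if shared["flag"]:
                print(shared["data"])   # prints 1, never a stale 0
                return

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
t2.start(); t1.start(); t1.join(); t2.join()
```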
• Consistency models are a powerful abstraction which helps to describe a system in terms of its observable properties.

Write-update protocol :
• A write to shared data causes all copies to be updated : the new value is sent to every holder, instead of an invalidation.
• Each update is multicast to all replicas; reads are performed on the local copy of the data. (A small sketch contrasting write-invalidate and write-update follows.)

Coherence versus consistency :
• Coherence deals with maintaining a global order in which writes to a single location are seen by all processors.
• Consistency deals with the ordering of operations to multiple locations with respect to all processors.
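A small sketch contrasting the two protocols from the point of view of the node performing a write; the ReplicaNode class and its per-node cache dictionary are illustrative assumptions, not a real DSM interface.

```python
# Sketch contrasting write-invalidate and write-update coherence protocols.
class ReplicaNode:
    def __init__(self):
        self.cache = {}   # locally held copies of shared items

def write_invalidate(writer: ReplicaNode, replicas: list, key, value) -> None:
    # Invalidate every other copy, then update only the writer's copy.
    for node in replicas:
        if node is not writer:
            node.cache.pop(key, None)
    writer.cache[key] = value

def write_update(writer: ReplicaNode, replicas: list, key, value) -> None:
    # Multicast the new value so every copy is updated in place.
    for node in replicas:
        node.cache[key] = value
```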
