Least Slack Time (LST) Scheduling Algorithm in Real-Time Systems

Least Slack Time (LST) is a dynamic, priority-driven scheduling algorithm used in real-time systems. In LST, every task in the system is assigned a priority according to its slack time: the task with the least slack time has the highest priority, and vice versa. Priorities are assigned dynamically and re-evaluated as tasks execute.

Slack time is calculated using the equation:

slack_time = (D - t - e')

Here
D : deadline of the task
t : current time (the instant at which the slack is evaluated)
e' : remaining execution time of the task

The task with the minimal slack time is dispatched to the CPU for execution, since it has the highest priority. The hyper period (HP) is the time span of the Gantt chart, equal to the LCM of the periods of all the tasks in the system. In other words, at time t the slack of a task is (D - t) minus the time required to complete the remaining portion of the task.

LST is a comparatively complex algorithm because it requires extra information about the tasks, namely their execution times as well as their deadlines. It works optimally only when preemption is allowed, and in that case it can produce a feasible schedule if and only if a feasible schedule exists for the set of runnable tasks. It differs from Earliest Deadline First (EDF) in that it needs the execution times of the tasks to be scheduled; this sometimes makes LST impractical to implement, because the burst times of tasks in real-time systems are difficult to predict. Unlike EDF, LST may also under-utilize the CPU, decreasing efficiency and throughput. If two or more ready tasks have the same slack (laxity) value, they are dispatched to the processor on a FCFS (First Come, First Serve) basis.

Example:
The walkthrough below uses three tasks whose parameters can be read off the slack calculations: T1 arrives at t=0 with execution time 10 and deadline 33, T2 arrives at t=4 with execution time 3 and deadline 28, and T3 arrives at t=5 with execution time 10 and deadline 29.

At time t=0: Only task T1 has arrived, so T1 executes until t=4.

At time t=4: T2 has arrived.
Slack time of T1: 33 - 4 - 6 = 23
Slack time of T2: 28 - 4 - 3 = 21
Hence T2 executes until t=5, when T3 arrives.

At time t=5:
Slack time of T1: 33 - 5 - 6 = 22
Slack time of T2: 28 - 5 - 2 = 21
Slack time of T3: 29 - 5 - 10 = 14
Hence T3 executes until t=13.

At time t=13:
Slack time of T1: 33 - 13 - 6 = 14
Slack time of T2: 28 - 13 - 2 = 13
Slack time of T3: 29 - 13 - 2 = 14
Hence T2 executes until t=15.

At time t=15:
Slack time of T1: 33 - 15 - 6 = 12
Slack time of T3: 29 - 15 - 2 = 12
Hence T3 executes until t=16.

At time t=16:
Slack time of T1: 33 - 16 - 6 = 11
Slack time of T3: 29 - 16 - 1 = 12
Hence T1 executes until t=18, and so on.
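The slack formula and the priority rule above fit in a few lines of code. The sketch below is illustrative only: the `Task` record and the `slack` and `pick_next` names are assumptions made for this example, not part of any standard scheduler API. Ties are broken FCFS by arrival time, as described above.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    arrival: int    # time at which the task becomes ready
    remaining: int  # remaining execution time e'
    deadline: int   # absolute deadline D

def slack(task: Task, t: int) -> int:
    # slack_time = D - t - e', evaluated at the current time t
    return task.deadline - t - task.remaining

def pick_next(ready: list, t: int) -> Task:
    # Least slack wins; equal slack falls back to FCFS (earlier arrival first).
    return min(ready, key=lambda task: (slack(task, t), task.arrival))

# At t=5 in the worked example the slacks are 22, 21 and 14, so T3 is chosen.
ready = [Task("T1", 0, 6, 33), Task("T2", 4, 2, 28), Task("T3", 5, 10, 29)]
print(pick_next(ready, 5).name)   # -> T3
```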
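To see how the schedule in the walkthrough unfolds tick by tick, here is a small, self-contained simulation of the same task set. It is a sketch under two assumed conventions that are not spelled out above: the running task is preempted only when another ready task has strictly smaller slack, and when the CPU becomes free a tie in slack is resolved in favour of the earlier deadline. With those conventions the printed trace reproduces the decisions made at t = 4, 5, 13, 15 and 16 in the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    arrival: int
    remaining: int
    deadline: int

def slack(task, t):
    return task.deadline - t - task.remaining

def simulate(tasks, horizon):
    running = None
    for t in range(horizon):
        ready = [x for x in tasks if x.arrival <= t and x.remaining > 0]
        if not ready:
            running = None
            continue
        if running is None or running.remaining == 0:
            # CPU is free: dispatch the least-slack task,
            # breaking ties by earlier deadline (assumed convention).
            running = min(ready, key=lambda x: (slack(x, t), x.deadline))
            print(f"t={t}: dispatch {running.name}")
        else:
            # Preempt only if another ready task has strictly smaller slack
            # (assumed convention).
            challenger = min(ready, key=lambda x: (slack(x, t), x.deadline))
            if slack(challenger, t) < slack(running, t):
                print(f"t={t}: {challenger.name} preempts {running.name}")
                running = challenger
        running.remaining -= 1   # run the chosen task for one time unit

tasks = [Task("T1", 0, 10, 33), Task("T2", 4, 3, 28), Task("T3", 5, 10, 29)]
simulate(tasks, 26)
```

Running this prints dispatch/preemption events at t = 0, 4, 5, 13, 15, 16, 18 and 19, matching the schedule derived by hand above (T1, T2, T3, T2, T3, T1, then T3 and finally T1 to completion), with all three tasks meeting their deadlines.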