4 CPU Scheduling – I

Mary Anitha Rajam

 

7.1 Introduction 

 

CPU scheduling is used in operating systems to make maximum use of processor time and to give multiple users the impression that each is the sole user of the system. This module explains the basic concepts of CPU scheduling, the criteria used to evaluate CPU scheduling algorithms, and the working of the First-Come, First-Served (FCFS) and non-preemptive Shortest-Job-First (SJF) CPU scheduling algorithms.

 

7.2 Basic Concepts 

 

In a multiprogramming system, many processes are kept in memory at the same time. Any process, during its execution, may have to wait for the completion of some I/O request, and during this time the CPU would remain idle. But the objective of multiprogramming is to have some program running at all times and hence maximize CPU utilization. Therefore, when a process waits for I/O, the operating system takes the CPU away from that process and assigns it to another process. But of the many processes residing in memory, which process should be assigned the CPU next? This selection is done by the CPU scheduler.

 

The success of CPU scheduling depends on the following property of processes: Process execution comprises a cycle of CPU execution (CPU burst) and I/O wait (I/O burst). Any process alternates between these two states. Process execution begins with a CPU burst. This CPU burst is followed by an I/O burst, then a CPU burst, then an I/O burst and so on. Figure 7.1 shows an example of how a process alternates between CPU and I/O bursts, during its execution.

 

Consider the following program in C that finds the sum of two numbers:

#include <stdio.h>   /* needed for printf and scanf */

int main(void)
{
    int a, b, sum;

    /* I/O burst */
    printf("Enter the two numbers to be added\n");
    scanf("%d %d", &a, &b);

    /* CPU burst */
    sum = a + b;

    /* I/O burst */
    printf("The sum of %d and %d is %d\n", a, b, sum);

    return 0;
}

 

It can be seen that the first two statements (the printf and scanf calls) are I/O-bound, the next statement is CPU-bound, and the final printf is again I/O-bound. Thus any program has alternating CPU-bound and I/O-bound instructions.

Fig. 7.1 Alternating sequence of CPU and I/O bursts (Source: [1])

 

If a program has a few very long CPU bursts, it is a CPU-bound program. An I/O-bound program has many very short CPU bursts. The durations of CPU bursts vary from process to process and from computer to computer. Still, CPU bursts tend to have a frequency curve similar to that shown in Figure 7.2. It can be seen from Figure 7.2 that very short CPU bursts are much more frequent than long CPU bursts. Whenever a CPU burst ends, there is an I/O burst. During an I/O burst, the process has to wait for the I/O to complete and is moved to the waiting state. The CPU scheduler then has to run to select the next process that should be assigned the CPU. Since CPU bursts are frequent, the CPU scheduler has to run frequently.

Fig. 7.2 Histogram of CPU-burst times (Source: [1])

 

7.3 CPU Scheduler

 

Whenever the CPU becomes idle, the operating system should select one of the processes in the ready queue and assign the CPU to it. This selection is done by the CPU scheduler, which is also called the short-term scheduler. The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. The ready queue from which the processes are selected need not always be a first-in, first-out queue; it may instead be a priority queue, a tree or simply an unordered linked list.
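As a small illustration (a sketch, not from the source; the structure and function names are hypothetical), the ready queue could be kept as a simple unordered linked list of process control blocks:

#include <stddef.h>

/* Hypothetical process control block kept on the ready queue. */
struct pcb {
    int pid;                  /* process identifier                    */
    int predicted_burst;      /* expected length of the next CPU burst */
    struct pcb *next;         /* link to the next ready process        */
};

/* Head of an unordered, singly linked ready queue. */
static struct pcb *ready_queue = NULL;

/* Add a process that has become ready to the front of the list.
 * A FIFO ready queue would insert at the tail instead, and a
 * priority queue would insert in order of priority. */
static void make_ready(struct pcb *p)
{
    p->next = ready_queue;
    ready_queue = p;
}

A scheduler could then scan this list, for example for the process with the smallest predicted_burst, before dispatching it.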

 

Figure 7.3 shows the changes in state during the lifetime of a process. CPU scheduling decisions may take place when a process:

 

1. Switches from the running state to the waiting state (for example, an I/O request or a wait system call)

2. Switches from the running state to the ready state (for example, a timer interrupt)

3. Switches from the waiting state to the ready state (completion of I/O)

4. Terminates

Fig. 7.3 Process state transition diagram

 

When the process switches from the running state to the waiting state or when the process terminates, the process relinquishes the CPU on its own; it is not forced to relinquish the CPU. When scheduling takes place only in these circumstances, it is called non-preemptive CPU scheduling. When a process moves from the waiting state to the ready state or from the running state to the ready state, the currently running process can be forced to give up the CPU. The scheduling that happens in these circumstances is called preemptive scheduling.

 

7.4  Dispatcher 

 

The CPU scheduler only selects which process will use the CPU next. The dispatcher is the module that actually gives control of the CPU to the process selected by the short-term scheduler. The dispatcher takes care of the following:

 

1. Switching context

2. Switching to user mode

3. Jumping to the proper location in the user program to restart that program

Thus, it takes some time for the dispatcher to stop one process and start another. This time is called the dispatch latency. The dispatch latency should be as small as possible because the CPU does not do any useful work during this period.

 

7.5 Scheduling Criteria 

 

Many CPU scheduling algorithms have been proposed. To select the best CPU scheduling algorithm, various criteria can be considered. The following are some of the criteria that are used to analyze the performance of any CPU scheduling algorithm:

 

1. CPU utilization – keep the CPU as busy as possible. CPU utilization should be maximized.

2. Throughput – the number of processes that complete their execution per unit time. Throughput should be maximized.

3. Turnaround time – the amount of time taken to execute a particular process, from submission to completion. Turnaround time should be as small as possible.

4. Waiting time – the amount of time a process has spent waiting in the ready queue. Waiting time should be as small as possible.

5. Response time – the amount of time from when a request is submitted until the first response is produced, not until the output is complete (important in a time-sharing environment). Response time should be as small as possible.
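As a small illustration of how the time-based criteria are related (a sketch, not part of the source; the variable names are hypothetical), the fragment below computes the turnaround and waiting times of a single process from its arrival time, completion time and total CPU-burst time.

#include <stdio.h>

int main(void)
{
    /* Hypothetical values for one process, all in the same time unit. */
    int arrival    = 0;    /* time the process entered the ready queue */
    int burst      = 24;   /* total CPU time the process needs         */
    int completion = 24;   /* time at which the process finished       */

    int turnaround = completion - arrival;  /* total time spent in the system  */
    int waiting    = turnaround - burst;    /* time spent waiting in the queue */

    printf("turnaround = %d, waiting = %d\n", turnaround, waiting);
    return 0;
}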

 

7.6 Scheduling algorithms 

 

In this section, we will learn two CPU scheduling algorithms: First-Come, First-Served (FCFS) scheduling and Shortest-Job-First (SJF) scheduling.

 

7.6.1 First-Come, First-Served (FCFS) Scheduling 

 

In FCFS, the process that arrives first and requests the CPU first is assigned the CPU first. The ready queue can be maintained as a FIFO queue: when processes arrive, they are added to the tail of the queue, and when the CPU becomes free, the process at the head of the queue is taken up for execution. FCFS is a non-preemptive algorithm. Once a process is assigned the CPU, it continues to use the CPU until its CPU burst is completed. When its CPU burst is over, the process relinquishes the CPU on its own. FCFS is explained with the example given below.

 

Consider three processes P1, P2 and P3 with CPU-burst times of 24 milliseconds (ms), 3 ms and 3 ms respectively. All three processes arrive at the same time, in the order P1, P2, P3.

 

The Gantt chart that shows the order in which the processes are assigned the CPU is shown below. Since process P1 is the first process in the queue, P1 is assigned the CPU first. P1 has a CPU-burst length of 24 ms. P1 uses the CPU for 24 ms and then relinquishes the CPU, as shown in Figure 7.4.

Fig. 7.4 Process P1 using the CPU

 

The next process in the queue is P2, so P2 is scheduled next. P2 has a CPU-burst time of 3 ms and uses the CPU till time 27 ms, as shown in Figure 7.5.

Fig. 7.5 Process P2 using the CPU

 

After P2, P3 is the only process in the queue and it is assigned the CPU. P3 uses the CPU for a duration of 3 ms and then relinquishes the CPU, as shown in Figure 7.6.

Fig. 7.6 Process P3 using the CPU

 

The waiting time for each process and hence the average waiting time can be calculated. Since all the three processes arrive at the same time, the arrival time of the processes is taken as 0. Process P1 arrived at time 0 and was assigned the CPU immediately. Hence, the waiting time for P1 = 0. Process P2 arrived at time 0 and was assigned the CPU at time 24. Hence, the waiting time for P2 = 24. Process P3 arrived at time 0 and was assigned the CPU at time 27. Hence, the waiting time for P3 = 27. Hence, the average waiting time is (0 + 24 + 27) / 3 = 17.

 

Similarly, the turnaround time for each process and hence the average turnaround time can be calculated. Process P1 arrived at time 0 and completed its CPU burst at time 24. Hence, the turnaround time for P1 = 24. Process P2 arrived at time 0 and completed its CPU burst at time 27. Hence, the turnaround time for P2 = 27. Process P3 arrived at time 0 and completed its CPU burst at time 30. Hence, the turnaround time for P3 = 30. The average turnaround time is (24 + 27 + 30) / 3 = 27.

 

Suppose the three processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is shown in Figure 7.7.

Fig. 7.7 Gantt chart with the order of the processes changed

 

The waiting time of the processes can be calculated as follows: The arrival time for all the processes is the same and is taken as 0. Therefore, the waiting time for P1 = 6, P2 = 0 and P3 = 3. The average waiting time is (6 + 0 + 3)/3 = 3. The turnaround time for P1 = 30, P2 = 3 and P3 = 6. Therefore, the average turnaround time is (3 + 6 + 30)/3 = 13. It is seen that the average waiting time and the average turnaround time have improved when the order of the processes is changed. In the earlier case, process P1 had a long CPU burst and the other processes had to wait for P1 to complete its CPU burst. This increased the average waiting time and the average turnaround time.
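Both sets of averages can be reproduced with a short program. The sketch below (an illustration, not taken from the source) walks the ready queue in arrival order, assuming every process arrives at time 0, and accumulates the waiting and turnaround times; the two arrays mirror the two arrival orders used above.

#include <stdio.h>

/* Compute and print the average waiting and turnaround times under FCFS,
 * assuming every process arrives at time 0 and the array lists the CPU
 * bursts in arrival order. */
static void fcfs_averages(const char *label, const int burst[], int n)
{
    int clock = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;        /* process waited until the CPU became free */
        clock += burst[i];          /* process runs its entire CPU burst        */
        total_turnaround += clock;  /* turnaround = completion time - 0         */
    }
    printf("%s: average waiting = %.2f ms, average turnaround = %.2f ms\n",
           label, (double)total_wait / n, (double)total_turnaround / n);
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */

    fcfs_averages("P1, P2, P3", order1, 3);   /* prints 17.00 and 27.00 */
    fcfs_averages("P2, P3, P1", order2, 3);   /* prints 3.00 and 13.00  */
    return 0;
}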

 

7.6.2  Shortest-Job-First (SJF) Scheduling 

 

The SJF CPU scheduling algorithm associates with each process the length of its next CPU burst. These lengths are used to schedule the process with the shortest next CPU burst: when the CPU becomes available, the process with the shortest next CPU burst is assigned the CPU. There are two schemes in SJF. One is non-preemptive, in which once the CPU is assigned to a process, it cannot be preempted until it completes its CPU burst. The other is preemptive, in which, if a new process arrives with a CPU-burst length less than the remaining time of the currently executing process, the currently executing process is preempted. This scheme is known as Shortest-Remaining-Time-First (SRTF). In this module, we will learn how scheduling is done in non-preemptive SJF.

 

Consider four processes P1, P2, P3 and P4. The arrival times and the CPU-burst times of the four processes are given below:

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

 

At time 0.0, P1 is the only process that has arrived. Hence, P1 is assigned the CPU first. Since this is non-preemptive SJF scheduling, P1 continues to use the CPU till its CPU burst gets over, as shown in Figure 7.8.

Fig. 7.8 Process P1 using the CPU

 

After P1 finishes its CPU burst, the processes remaining in the ready queue are P2, P3 and P4, all of which have now arrived. Of these three processes, P3 has the shortest CPU burst. Hence, P3 is assigned the CPU. P3 makes use of the CPU for a duration equal to its CPU burst (1 ms) and then relinquishes the CPU, as shown in Figure 7.9.

Fig. 7.9 Process P3 using the CPU

 

P2 and P4 are the remaining processes. Both processes have the same CPU-burst time, so either one could be assigned the CPU. The tie is broken using FCFS. Of the two processes, P2 arrived earlier. Hence, P2 is assigned the CPU and executes till its CPU burst is completed, as shown in Figure 7.10.

Fig. 7.10 Process P2 using the CPU

 

After P2 finishes execution, the only process waiting is P4. P4 is now assigned the CPU. After P4 runs for its CPU-burst time, it relinquishes the CPU, as shown in Figure 7.11.

Fig. 7.11 Process P4 using the CPU

 

The average waiting time and the average turnaround time can be calculated as follows. The waiting time for process P1 is 0: P1 arrived at time 0 and was assigned the CPU immediately, so it did not wait. P2 arrived at time 2.0 and was assigned the CPU at time 8.0. Hence, the waiting time for process P2 is 8 – 2 = 6. P3 arrived at time 4.0 and was assigned the CPU at time 7.0. Hence, the waiting time for process P3 is 7 – 4 = 3. Process P4 arrived at time 5.0 and was assigned the CPU at time 12.0. Hence, the waiting time for process P4 is 12 – 5 = 7. Hence, the average waiting time = (0 + (8 – 2) + (7 – 4) + (12 – 5)) / 4 = 4.

 

P1 arrived at time 0 and completed execution at time 7. The turnaround time of process P1 is 7. P2 arrived at time 2.0 and completed execution at time 12.0. Hence, the turnaround time for process P2 is 12 – 2 = 10. P3 arrived at time 4.0 and completed execution at time 8.0. Hence, the turnaround time for process P3 is 8 – 4 = 4. Process P4 arrived at time 5.0 and completed execution at time 16.0. Hence, the turnaround time of process P4 is 16 – 5 = 11. Hence, the average turnaround time = ((7 – 0) + (12 – 2) + (8 – 4) + (16 – 5)) / 4 = 32 / 4 = 8.
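The non-preemptive SJF schedule can also be checked with a short simulation (a sketch, not from the source). The loop below repeatedly picks, among the processes that have arrived and not yet run, the one with the shortest CPU burst, breaking ties by earlier arrival (FCFS), and accumulates the waiting and turnaround times for the four processes in the example.

#include <stdio.h>

int main(void)
{
    /* P1..P4 from the example: arrival times and CPU-burst times. */
    int arrival[] = {0, 2, 4, 5};
    int burst[]   = {7, 4, 1, 4};
    int done[]    = {0, 0, 0, 0};
    int n = 4, clock = 0, completed = 0;
    double total_wait = 0.0, total_turnaround = 0.0;

    while (completed < n) {
        int pick = -1;
        /* Pick the shortest burst among arrived, unfinished processes;
         * break ties by earlier arrival time (FCFS). */
        for (int i = 0; i < n; i++) {
            if (done[i] || arrival[i] > clock)
                continue;
            if (pick < 0 || burst[i] < burst[pick] ||
                (burst[i] == burst[pick] && arrival[i] < arrival[pick]))
                pick = i;
        }
        if (pick < 0) {          /* no process has arrived yet: CPU stays idle */
            clock++;
            continue;
        }
        total_wait += clock - arrival[pick];   /* waited from arrival to dispatch   */
        clock += burst[pick];                  /* non-preemptive: run to completion */
        total_turnaround += clock - arrival[pick];
        done[pick] = 1;
        completed++;
    }

    printf("average waiting time    = %.2f\n", total_wait / n);
    printf("average turnaround time = %.2f\n", total_turnaround / n);
    return 0;
}

For the arrival and burst times above, the program prints an average waiting time of 4.00 and an average turnaround time of 8.00, matching the values calculated by hand.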

 

7.7  Summary 

 

In this module, we learnt what CPU scheduling is. Multiprogramming maximizes CPU utilization. The CPU scheduler selects the next process that should be assigned the CPU. Many CPU scheduling algorithms have been proposed, and they can be preemptive or non-preemptive. In this module, we learnt how the First-Come, First-Served (FCFS) and non-preemptive Shortest-Job-First (SJF) CPU scheduling algorithms work. We saw that SJF gives a smaller average waiting time and a smaller average turnaround time than FCFS.

 

References

1. Abraham Silberschatz, Peter B. Galvin, Greg Gagne, “Operating System Concepts”, Sixth Edition, John Wiley & Sons Inc., 2003.