11 Process Synchronization – Semaphores

Mary Anitha Rajam

 

14.1 Introduction

 

Cooperating processes share data. Hence, the processes that access the shared data need to be synchronized so that there is no data inconsistency. In the earlier modules, we saw a number of solutions that helped in achieving process synchronization. We learned solutions for two processes as well as for multiple processes. The solutions to the critical section problem learned in the earlier modules are not easy to generalize to more complex problems. To overcome this difficulty, we can use a synchronization tool called a semaphore. The objectives of this module are as follows:

 

To understand what a semaphore is

To learn how semaphores can be used to achieve synchronization

To understand the difference between counting and binary semaphores

To learn how a counting semaphore can be implemented using binary semaphores

 

14.2 Semaphores

 

A semaphore is a synchronization tool that does not require busy waiting. The semaphore is an integer variable. The semaphore, being an integer variable, is assigned an initial value. The initial value denotes the number of processes that can simultaneously access the shared resource guarded by the semaphore. For example, if the shared resource can be accessed by only one process, then the initial value is set to 1. The semaphore can be accessed only via two atomic operations, namely, wait (P operation) and signal (V operation).

 

The definition of the wait operation is given below:

wait(S):

while (S <= 0) do no-op;

S--;

 

Here S is the semaphore. Any process that wants to enter its critical section should first perform the wait operation on the semaphore. The wait operation works as follows: If the value of the semaphore is less than or equal to zero, the process waits in the while loop. If the value of the semaphore is greater than zero, then the process comes out of the while loop and the value of the semaphore is decremented. Any process that comes out of its critical section should perform the signal operation on the semaphore.

 

The definition of the signal operation is given below:

signal (S):

S++;

 

In the signal operation, the value of the semaphore is incremented.

 

The wait and signal operations are atomic, or indivisible. Indivisibility of the wait and signal operations is ensured by the programming language or the operating system that implements them. Atomicity ensures that race conditions cannot arise over a semaphore.
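The busy-waiting wait and signal operations above can be sketched as follows. This is a minimal illustration, not the text's pseudocode verbatim: Python threads stand in for processes, and a `threading.Lock` stands in for the atomicity that the OS or language must guarantee.

```python
import threading

class SpinSemaphore:
    """Busy-waiting semaphore sketch (illustrative only)."""
    def __init__(self, initial):
        self.value = initial             # number of processes that may proceed
        self._atomic = threading.Lock()  # makes test-and-decrement indivisible

    def wait(self):                      # P operation
        while True:
            with self._atomic:
                if self.value > 0:       # resource available?
                    self.value -= 1      # claim it and return
                    return
            # otherwise spin: keep re-checking until signal() raises the value

    def signal(self):                    # V operation
        with self._atomic:
            self.value += 1
```

A process calls `wait()` before its critical section and `signal()` after; while the value is 0 or below, `wait()` spins exactly as the `while (S <= 0) do no-op;` loop does.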

 

We can use semaphores to deal with the n-process critical-section problem. The following section explains how a semaphore can be used to solve the critical-section problem.

 

14.2.1 Critical Section of n Processes

 

Shared data:

semaphore mutex; //initially mutex = 1

 

The algorithm for process Pi is given below:

do {

 wait (mutex);

critical section

signal (mutex);

remainder section

} while (1);

 

A semaphore called ‘mutex’ is used here for guarding a shared resource. The initial value of ‘mutex’ is set to one. This means that only one process can use the shared resource at a particular time. Process Pi, before entering its critical section, waits on the semaphore mutex. Process Pi, after coming out of its critical section, signals the semaphore ‘mutex’.
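The do-while pattern above can be demonstrated with Python's built-in `threading.Semaphore` playing the role of `mutex` (the shared counter and worker function are illustrative names, not from the text):

```python
import threading

mutex = threading.Semaphore(1)   # initially mutex = 1
counter = 0                      # shared data guarded by mutex

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex)
        counter += 1             # critical section
        mutex.release()          # signal(mutex)
        # remainder section would go here

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Because only one thread holds `mutex` at a time, all 4 x 10000 increments survive; without the semaphore, concurrent updates to the shared counter could be lost.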

 

Let us now see a sequence of execution of two processes P0 and P1  wanting to enter their respective critical sections.

 

P0                               mutex                P1

                                   1
wait operation                     0
enters critical section                               wait operation
                                                      process waits
signal operation                   1
                                   0                  process enters critical section
                                   1                  signal operation

 

Initially, process P0 wants to enter its critical section. P0 executes the wait operation on the semaphore mutex. Since the value of mutex is 1, the wait operation decrements the value of mutex. The value of mutex becomes 0. P0 enters its critical section. In the meantime, if process P1 wants to enter its critical section, it executes the wait operation. Since the value of mutex is 0, process P1 continues to wait in the loop while (S <= 0) do no-op;. When process P0 comes out of its critical section, it executes the signal operation on the semaphore mutex. This signal operation increments the value of mutex. The value of mutex now becomes 1. Process P1, which is checking the value of mutex in the while loop, now comes out of the while loop, decrements the value of mutex to 0 and enters its critical section.

 

The implementation of the wait and signal operations on a semaphore that we have seen so far has a disadvantage: busy waiting. That is, when one process is in its critical section, any other process trying to enter its critical section continuously checks the value of the semaphore in the wait operation. Whenever the process waiting to enter its critical section gets the CPU, it executes while (S <= 0) do no-op;. That is, the process just keeps on checking whether S <= 0. The value of S will not change until this process relinquishes the CPU and some other process changes it. This busy waiting wastes CPU cycles in multiprogrammed systems with a single CPU. Such a semaphore is called a spinlock.

 

Let us look at an example to understand spinlock. Let the initial value of semaphore S be 1. Assume that one process acquired a resource after executing a wait on the semaphore S and is continuing to use the resource. Let there be 2 other processes waiting to access the shared resource. Let round robin scheduling be the CPU scheduling algorithm. When the second process gets its CPU time slice, it will execute the while loop continuously in wait till its CPU time slice gets over. During this CPU time slice, the CPU time is used without doing any useful work. After this time slice, the third process gets its time slice. The third process also uses its entire time slice by just checking in the while loop of wait. The CPU’s time is not used for any useful work. Thus it is seen that spinlock wastes CPU time.

 

Spinlocks are useful in multiprocessor systems. The advantage of a spinlock is that no context switch is required when a process must wait on a lock. This is beneficial when locks are expected to be held only for short durations.

 

To overcome this busy waiting, the definitions of the wait and signal operations of the semaphore are modified. The semaphore is implemented as a structure or a record, rather than as just a variable. The structure has two members, an integer value and a list of processes associated with the semaphore.

 

The semaphore is defined as follows:

typedef struct {

int value;

struct process *L;

} semaphore;

 

Two operations are defined on processes associated with the semaphore, the block and the wakeup operations. The block operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P.

 

The wait and signal operations on a semaphore are now defined as

 

wait(S):

S.value--;

if (S.value < 0)

{

add this process to S.L;

block;

}

 

Here S is the semaphore structure. It has an integer value (S.value) and a list of processes (S.L) associated with it. In the wait operation, the value of the semaphore is first decremented (S.value--;). Then the value of the semaphore is checked. If the value of the semaphore is less than zero, it means that the resource is not available. Therefore, the process executing wait is added to the list associated with the semaphore and is blocked (i.e., put into the waiting state). Compared to the earlier implementation of the wait operation, here the process does not keep on checking the value of the semaphore. Once the resource becomes available, the process is moved out of the waiting state and accesses the shared resource.

 

signal(S):

S.value++;

if (S.value <= 0)

{

remove a process P from S.L;

wakeup(P);

}

 

In the signal operation, the value of the semaphore is incremented. This means that one more process can access the resource. Then the value of the semaphore is checked. If the value of the semaphore is less than or equal to zero, one of the waiting processes is woken up. The process that is woken up is moved to the ready state and can access the resource.

 

The value of the semaphore is less than or equal to zero only if there are other processes waiting in the list associated with this semaphore. (The value had become less than zero when other processes had decremented the value of the semaphore earlier while executing wait).
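The blocking wait and signal operations above can be sketched as a class. This is an illustrative Python rendering, not the OS-level implementation: a `threading.Condition` supplies the block and wakeup mechanism, and the member names mirror the text's `value` and `L`.

```python
import threading
from collections import deque

class BlockingSemaphore:
    """Semaphore without busy waiting (sketch of the text's structure)."""
    def __init__(self, initial):
        self.value = initial
        self.L = deque()                     # FIFO list of waiting threads
        self._cond = threading.Condition()   # provides block / wakeup

    def wait(self):
        with self._cond:
            self.value -= 1
            if self.value < 0:               # resource not available
                me = threading.current_thread()
                self.L.append(me)            # add this process to S.L
                while me in self.L:          # block until signal removes us
                    self._cond.wait()

    def signal(self):
        with self._cond:
            self.value += 1
            if self.value <= 0:              # someone is waiting
                self.L.popleft()             # remove a process P from S.L
                self._cond.notify_all()      # wakeup(P)
```

Removing waiters from the front of `S.L` (first in, first out) also avoids the starvation scenario discussed later, where last-in-first-out removal could leave early arrivals waiting forever.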

 

14.3  Semaphore as a General Synchronization Tool 

 

In this section, we see how the semaphore can be used as a synchronization tool. There are two processes Pi and Pj that want to access a common resource. Let flag be the semaphore that guards the shared resource. The shared resource can be used by only one process at a time. Therefore, the value of the semaphore flag is initialized to 1.

 

Pi                            Semaphore                    Pj

                              flag = 1
wait(flag)                    flag = 0
access resource
                              flag = -1                    wait(flag); Pj waits
signal(flag)                  flag = 0                     Pj wakes up, accesses resource
                              flag = 1                     signal(flag)

 

Process Pi executes the wait operation on the semaphore. The wait operation decrements the value of the semaphore flag. flag becomes zero. Since the value of flag is not less than zero, Pi can access the resource. In the meantime, if Pj wants to access the resource, Pj executes the wait operation on the semaphore flag. flag is decremented and becomes -1. Since the value of flag is < 0, Pj is added to the list associated with the semaphore and is put to the waiting state.

 

When process Pi completes the usage of the resource, it executes the signal operation on flag. The value of flag is incremented and becomes 0. Process Pj is woken up from the sleeping state. Process Pj now can access the resource. After Pj completes using the resource, Pj executes the signal operation and the value of flag is incremented back to 1.

 

In this way, a semaphore can be used for safeguarding resources.

 

14.4  Deadlock and Starvation 

 

When semaphores are used for synchronization, it is possible to have deadlocks and starvation. Deadlock is a situation where two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. For example, when two or more processes are waiting for the release of some resource that is held by another waiting process, it is a deadlock. Every process in a set of processes will be waiting for an event caused by one of the processes in the set. Starvation means indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended and it starves.

 

Let us now see how deadlocks and starvation can happen with semaphores. Let S and Q be two semaphores initialized to 1. That is, semaphore S is safeguarding a resource that can be accessed by only one process at a time. Similarly, semaphore Q is also safeguarding another resource that can be accessed by only one process at a time.

P0                         P1

wait(S);            wait(Q);

wait(Q);           wait(S);

    :           :

signal(S);         signal(Q);

signal(Q)         signal(S);

 

Consider two processes P0 and P1. P0 executes wait(S) and, at the same time, P1 executes wait(Q). The values of the semaphores S and Q become 0. P0 then executes wait(Q) and waits till semaphore Q is signaled. P1 then executes wait(S) and waits till semaphore S is signaled. Here, process P0 is waiting for a resource held by P1, and process P1 is waiting for a resource held by P0. Each of the two processes is waiting for a resource held by the other process. Therefore, both are unable to proceed and wait indefinitely. Both processes are caught in a deadlock and starve.

 

Therefore, the order in which wait and signal are used in processes should be selected carefully. Else, it may result in deadlocks and starvation also.
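One standard way to choose the order carefully is to make every process acquire the semaphores in the same global order. The hypothetical sketch below reworks the P0/P1 example so that both take S before Q, which removes the circular wait (task names and the `completed` list are illustrative):

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
completed = []

def task(name):
    S.acquire()        # wait(S)  -- every task takes S first...
    Q.acquire()        # wait(Q)  -- ...then Q, so no circular wait can form
    completed.append(name)
    Q.release()        # signal(Q)
    S.release()        # signal(S)

t0 = threading.Thread(target=task, args=("P0",))
t1 = threading.Thread(target=task, args=("P1",))
t0.start(); t1.start()
t0.join(); t1.join()
```

With the original opposite orders (P0: S then Q, P1: Q then S), an unlucky interleaving deadlocks; with a single agreed order, whichever task gets S first can always go on to get Q.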

 

Starvation is indefinite blocking. Starvation can also happen when processes that are added to the semaphore queue are removed in a last in first out manner. That is, whenever processes wait for a resource guarded by a semaphore, the processes are added to the queue associated with the semaphore. When more processes wait for the same resource, all these processes are also added to the end of the semaphore’s queue. While removing processes from the queue, if the processes are removed from the end, the processes that were added first will starve.

 

14.5   Types of Semaphores 

 

There are two types of semaphores: counting semaphores and binary semaphores. A counting semaphore takes an integer value that can range over an unrestricted domain. The semaphores that we discussed in the previous sections are counting semaphores; they are so called because the value of the semaphore can be any integer. A binary semaphore takes an integer value that can range only between 0 and 1. Hence, a binary semaphore can be simpler to implement than a counting semaphore. If we have an implementation of binary semaphores, we can use it to implement counting semaphores.

 

We now see how a counting semaphore S can be implemented using binary semaphores. S1 and S2 are the binary semaphores using which the counting semaphore S is implemented. C is an ordinary variable and the initial value of C is set to the initial value of the counting semaphore S. The initial value of the counting semaphore S indicates the number of processes that can access the resource guarded by the semaphore S at the same time.

 

Data structures:

binary-semaphore S1, S2;

int C;

 

S1 and S2 are two binary semaphores and C is an integer variable.

 

Initialization:

S1 = 1
S2 = 0
C = initial value of semaphore S

C is assigned the initial value of the counting semaphore S. The initial values of S1 and S2 are assigned 1 and 0 respectively.

 

Wait and signal operations for binary semaphore: 

 

If S1 is a binary semaphore, then the wait operation on the binary semaphore is given below:

 

wait(S1)

{

while (S1 == 0)
    ;    /* no-op */
S1--;

}

 

If the value of the binary semaphore is zero, the process that executes the wait operation, waits in the while loop. If the value is not equal to zero, the process comes out of the while loop and decrements the value of the semaphore.

 

signal(S1)

{

S1++;

}

 

In the signal operation, the value of the binary semaphore is incremented.

 

The wait and signal operations of the counting semaphore are given below:

 

wait operation

The wait operation on the counting semaphore can be implemented using wait and signal operations on the binary semaphore as given below:

 

wait(S1);

C--;

if (C < 0) {

signal(S1);

wait(S2);

}

signal(S1); 

 

The initial value of S1 is 1. Therefore, the wait operation on S1 finds that the value of S1 is not zero and decrements it. Next, the value of C is decremented. The initial value of C is equal to the value of the counting semaphore S. If the value of C becomes less than zero, the process executing wait cannot proceed to its critical section; it has to wait. Therefore, the process signals the semaphore S1 (the value of S1 becomes 1) and then waits on the semaphore S2. Since the initial value of S2 is 0, the process blocks on S2. If the value of C is not less than 0, the process can enter its critical section. In this case, the process signals S1 and proceeds to enter its critical section. Here, we can see that the semaphore S1 guards the decrement of the variable C: the process waits on S1 before decrementing C and signals S1 after decrementing C. Semaphore S2 guards the shared resource.

 

signal operation

 

The signal operation on S is also implemented using wait and signal operations on S1 and S2. The implementation of signal(S) is given below:

 

wait(S1);

C ++;

if (C <= 0)

signal(S2);

else

signal(S1);

 

The executing process first waits on S1. If no other process is currently incrementing or decrementing C, the value of S1 will be 1, and wait(S1) decrements it to 0. Then C is incremented. If the value of C becomes less than or equal to zero, there are other processes waiting to use the shared resource. In that case, signal(S2) is executed. As mentioned earlier, S2 is the semaphore that guards the shared resource. When signal(S2) is executed, a process waiting on S2 comes out of waiting, signals S1 (this is the signal(S1) at the end of the wait operation) and proceeds to use the shared resource. If C > 0, signal(S1) (in the signal operation) is executed.
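The whole construction can be sketched as a class. This is an illustrative Python rendering of the scheme above: `threading.Semaphore(1)` and `threading.Semaphore(0)` stand in for the binary semaphores S1 and S2, and the class name is hypothetical.

```python
import threading

class CountingSemaphore:
    """Counting semaphore built from two binary semaphores (sketch)."""
    def __init__(self, initial):
        self.S1 = threading.Semaphore(1)  # binary; guards updates to C
        self.S2 = threading.Semaphore(0)  # binary; blocks excess waiters
        self.C = initial                  # initial value of counting semaphore S

    def wait(self):
        self.S1.acquire()      # wait(S1)
        self.C -= 1
        if self.C < 0:
            self.S1.release()  # signal(S1) before blocking
            self.S2.acquire()  # wait(S2): block until someone signals
        self.S1.release()      # signal(S1)

    def signal(self):
        self.S1.acquire()      # wait(S1)
        self.C += 1
        if self.C <= 0:
            self.S2.release()  # signal(S2): wake one waiter
        else:
            self.S1.release()  # signal(S1)
```

Note the hand-off: when a waiter is woken via S2, the signaler deliberately skips its own signal(S1); the woken waiter performs the final signal(S1) at the end of wait, exactly as in the pseudocode.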

 

Let us see with an example, how a counting semaphore is implemented using binary semaphores. Let C = 2 (initial value of semaphore S). Since the value of semaphore S is 2, two processes can use the shared resource at the same time. Let us see what happens when three processes want to use the shared resource at the same time.

 

When wait(S) is executed by the first process, say P0

 

wait(S) by P0

 

S1              C               S2
1               2               0      (initial values)
0                                      wait(S1)
                1                      C is decremented
1                                      signal(S1)

 

Initially, S1 is 1, C is 2 and S2 is 0. wait(S1) decrements the value of S1 to 0. Then C is decremented from 2 to 1. signal(S1) increments the value of S1 back to 1. P0 gets access to the shared resource.

 

In the meantime, suppose process P1  wants to access the shared resource. The sequence is shown below:

 

wait(S) by P1

 

S1              C               S2
1               1               0
0                                      wait(S1)
                0                      C is decremented
1                                      signal(S1)

 

wait(S1) decrements the value of S1 from 1 to 0. Then C is decremented from 1 to 0. signal(S1) increments the value of S1 back to 1. P1 gets access to the shared resource.

 

In the meantime, suppose the third process P2  wants to access the shared resource. The sequence is shown below:

 

wait(S) by P2

 

S1              C               S2
1               0               0
0                                      wait(S1)
                -1                     C is decremented
1                                      signal(S1); process P2 waits on S2

 

wait(S1) decrements the value of S1 from 1 to 0. Then C is decremented from 0 to -1. Since the value of C goes below 0, process P2 signals S1 and then executes wait(S2). P2 does not get access to the shared resource. Since the initial value of the semaphore S was 2, only two processes are able to access the shared resource. The other processes wait.

 

Let P0 complete the usage of the shared resource. P0 executes signal(S). The changes in the semaphores’ values are shown below:

 

When signal(S) is executed by P0

 

S1              C               S2
1               -1              0
0                                      wait(S1)
                0                      C is incremented
                                1      signal(S2); process P2 waiting on S2 is released
1                                      released process P2 signals S1

 

wait(S1) in the signal operation decrements the value of S1 from 1 to 0. Then C is incremented from -1 to 0. Since the value of C becomes 0, signal(S2) is executed. Process P2 waiting on S2 is released. Process P2 signals S1 (this is the signal(S1) at the end of the wait operation) and gains access to the shared resource.

 

When signal(S) is executed by P1:

 

S1              C               S2
1               0               0
0                                      wait(S1)
                1                      C is incremented
1                                      signal(S1)

wait(S1) in the signal operation decrements the value of S1 from 1 to 0. Then C is incremented from 0 to 1. Since the value of C becomes greater than 0, signal(S1) is executed. The value of S1 is changed from 0 to 1. Thus, we see that a counting semaphore can be implemented using binary semaphores.

 

14.6   Summary

 

In this module, we learnt how a semaphore can be used as a synchronization tool. We understood that there are two types of semaphores – counting and binary semaphores. We also learnt how a counting semaphore is implemented using binary semaphores.

 

 
