Tuesday, February 21, 2023

OS Unit-2

 

2.1.1 The Process

A process is a program in execution: an instance of a running program, or the entity that can be assigned to and executed on a processor. The memory layout of a process is typically divided into multiple sections, as shown in the figure below.

These sections include:

·         Text Section—the executable code. (The current activity of the process is represented by the value of the program counter and the contents of the processor's registers.)

·         Data Section—Global variables (This section contains the global and static variables.)

·         Heap Section—Memory that is dynamically allocated during program run time.

·         Stack Section—Temporary data storage when invoking functions (such as function parameters, return addresses, and local variables).
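As a quick illustration, consider where the variables of a small C program would live; this is a minimal sketch, and the variable names are purely illustrative:

#include <stdio.h>
#include <stdlib.h>

int global = 10;                        /* data section: global variable */

int main(void)                          /* machine code lives in the text section */
{
    int local = 20;                     /* stack section: local variable */
    int *dynamic = malloc(sizeof(int)); /* heap section: allocated at run time */
    *dynamic = 30;
    printf("%d %d %d\n", global, local, *dynamic);
    free(dynamic);
    return 0;
}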

 


Figure: Layout of a process in memory.

 

A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line.

 

2.1.2 Process State

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:

 

·         New State- When a program in secondary memory is started for execution, the process is said to be in a new state.

 

·         Ready State- After being loaded into the main memory and ready for execution, a process transitions from the new state to the ready state. The process is now waiting for the processor to execute it. Many processes may be in the ready state in a multiprogramming environment.

 

·         Running State - After being allotted the CPU for execution, a process passes from the ready state to the run state.

 

·         Block & Wait State - If a process requires an Input/Output operation or a blocked resource during execution, it changes from the run state to the block or wait state. The process advances to the ready state after the I/O operation is completed or the resource becomes available.

 

·         Terminate State - When a process’s execution is finished, it goes from the run state to the terminate state. The operating system deletes the process control block (PCB) after the process enters the terminate state.

 


 

Figure: Diagram of process state.

 

·        Suspend Ready State

If a process with a higher priority needs to be executed while the main memory is full, a process goes from the ready state to the suspend ready state. Moving a lower-priority process from the ready state to the suspend ready state frees up space in main memory for the higher-priority process. The process stays in the suspend ready state until the main memory becomes available, at which point it is brought back to the ready state.

 

·        Suspend Wait State

If a process with a higher priority needs to be executed while the main memory is full, a waiting process goes from the wait state to the suspend wait state. Moving a lower-priority process from the wait state to the suspend wait state frees up space in main memory for the higher-priority process. Once the resource it was waiting for becomes available, the process moves to the suspend ready state; once main memory is available, it is shifted to the ready state.


Figure: Process States in Operation System

 

2.1.3 Process Control Block

Each process is represented in the operating system by a process control block (PCB)—also called a task control block. A PCB is shown in the figure below. It contains many pieces of information associated with a specific process, including these:

 

·         Process state. The state may be new, ready, running, waiting, halted, and so on.

 

·         Program counter. The counter indicates the address of the next instruction to be executed for this process.


 

Figure: Process Control Block (PCB)

·         CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward when it is rescheduled to run.

 

·         CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

 

·         Memory-management information. This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.

 

·         Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

 

·         I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for all the data needed to start, or restart, a process, along with some accounting data.
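To make this concrete, a PCB can be pictured as a C structure. The following is a simplified sketch; the field names and sizes are illustrative and not taken from any particular operating system:

#define NUM_REGS   16   /* illustrative register count */
#define MAX_FILES  16   /* illustrative open-file limit */

struct pcb {
    int           pid;                      /* process identifier */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    unsigned long program_counter;          /* address of the next instruction */
    unsigned long registers[NUM_REGS];      /* saved CPU registers */
    int           priority;                 /* CPU-scheduling information */
    unsigned long base, limit;              /* memory-management information */
    unsigned long cpu_time_used;            /* accounting information */
    int           open_files[MAX_FILES];    /* I/O status information */
    struct pcb   *next;                     /* link for scheduling queues */
};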

 

2.2 Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

The objective of multiprogramming is to have some process running at all times so as to maximize CPU utilization. The objective of time sharing is to switch a CPU core among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on a core. The number of processes currently in memory is known as the degree of multiprogramming. In general, most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.

 

2.2.1 Categories of Scheduling

There are two categories of scheduling:

  1. Non-preemptive: Here the CPU cannot be taken away from a process until the process completes execution or voluntarily gives it up. Switching occurs only when the running process terminates or moves to a waiting state.
  2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. The process may be switched from the running state to the ready state, or from the waiting state to the ready state, before it finishes, because the CPU may be given to a higher-priority process, which replaces the currently running one.

 

2.2.2 Scheduling Queues

 

As processes enter the system, they are put into a ready queue, where they are ready and waiting to execute on a CPU’s core. This queue is generally stored as a linked list; a ready-queue header contains pointers to the first PCB in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
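A minimal sketch of how such a linked-list ready queue could be maintained, reusing the struct pcb sketched earlier (the enqueue and dequeue helpers are illustrative):

struct pcb *ready_head = NULL;   /* ready-queue header: first PCB */
struct pcb *ready_tail = NULL;   /* last PCB, for O(1) insertion */

void enqueue(struct pcb *p)      /* a process becomes ready */
{
    p->next = NULL;
    if (ready_tail)
        ready_tail->next = p;    /* link after the current tail */
    else
        ready_head = p;          /* queue was empty */
    ready_tail = p;
}

struct pcb *dequeue(void)        /* the scheduler picks the next process */
{
    struct pcb *p = ready_head;
    if (p) {
        ready_head = p->next;
        if (!ready_head)
            ready_tail = NULL;
    }
    return p;
}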

The system also includes other queues. Suppose the process makes an I/O request to a device such as a disk. Since devices run significantly slower than processors, the process will have to wait for the I/O to complete. Processes that are waiting for a certain event to occur — such as completion of I/O — are placed in a wait queue (see figure).

 


Figure: The ready queue & wait queues

 

A common representation of process scheduling is a queueing diagram, such as the one in the figure below. Two types of queues are present: the ready queue and a set of wait queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched. Once the process is allocated a CPU core and is executing, one of several events could occur:


Figure: Queueing-diagram representation of process scheduling

 

·         The process could issue an I/O request and then be placed in an I/O wait queue.

·         The process could create a new child process and then be placed in a wait queue while it awaits the child’s termination.

·         The process could be removed forcibly from the core, as a result of an interrupt or having its time slice expire, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

 

 

2.2.3 Context Switch

When an interrupt occurs, the system needs to save the current context of the process running on the CPU core so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process. It includes the value of the CPU registers, the process state and memory-management information. Generically, we perform a state save of the current state of the CPU core, be it in kernel or user mode, and then a state restore to resume operations.

Switching the CPU core to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Switching speed varies from machine to machine, depending on factors such as the memory speed and the number of registers that must be copied.


Inter Process Communication

Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transferring of data from one process to another.

 

A process can be of two types:

·         Independent process.

·         Co-operating process.

 

An independent process is one that is not affected by the other processes running in the system, and it does not affect the processing of any other process. This is because an independent process does not share any data with other processes.

A cooperating process is one that shares data with other processes in the system. Thus, it can be affected by the other processes, and it can affect their working as well.


Methods of Interprocess Communication

There are two methods or two models that operating systems can implement to achieve IPC.

1.      Shared Memory

2.      Message Passing

 

Shared Memory

Multiple processes can access a common region of shared memory and communicate through it: one process writes a change, and the others then read it. Once the shared region is set up, communication does not require kernel intervention.
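As an illustration, the following minimal C sketch shows the writer side of POSIX shared memory; the name /demo_shm and the 4096-byte size are arbitrary, error handling is omitted, and on Linux the program links with -lrt. A reader would shm_open the same name and mmap it to see the data:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* create (or open) a named shared-memory object */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);                     /* set the region's size */

    /* map the object into this process's address space */
    char *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);

    strcpy(ptr, "hello from the writer");    /* an ordinary memory write */

    munmap(ptr, 4096);
    close(fd);
    return 0;
}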

Message Passing

  • In IPC, message passing is used by processes for communication and synchronization.
  • Processes can communicate without any shared variables, so message passing can be used in a distributed environment on a network.
  • It is slower than the shared-memory technique.
  • It has two operations: sending and receiving messages (often of a fixed size). A minimal sketch using a pipe follows this list.
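As an illustration, the following minimal C sketch shows kernel-mediated message passing between a parent and a child process using a POSIX pipe (error handling omitted; the message text is arbitrary):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                 /* child acts as the receiver */
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* blocks until data */
        buf[n] = '\0';
        printf("child received: %s\n", buf);
    } else {                           /* parent acts as the sender */
        write(fd[1], "ping", 4);
    }
    return 0;
}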

 

Why is IPC required? There are several reasons for providing an environment that allows process cooperation:

 

·         Information sharing. Since several applications may be interested in the same piece of information (for instance, copying and pasting), we must provide an environment to allow concurrent access to such information.

·         Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.

·         Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

 

 

Process Synchronization

When two or more processes cooperate with each other, their order of execution must be preserved; otherwise there can be conflicts in their execution and inappropriate outputs can be produced.

A cooperative process is one that can affect the execution of other processes or can be affected by the execution of other processes. Such processes need to be synchronized so that their order of execution can be guaranteed.

The procedure involved in preserving the appropriate order of execution of cooperative processes is known as Process Synchronization. There are various synchronization mechanisms that are used to synchronize the processes.

 

Some of the methods to provide synchronization are as follows −

·        Semaphore A semaphore is a variable that controls the access to a common resource by multiple processes. The two types of semaphores are binary semaphores and counting semaphores.

·        Mutual Exclusion Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful for synchronization and also prevents race conditions.

·        Barrier A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel languages and collective routines impose barriers.

·        Spinlock This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock is available or not. This is known as busy waiting because the process is not doing any useful operation even though it is active.

 

Producer-Consumer problem

The Producer-Consumer problem is a classical multi-process synchronization problem; that is, we are trying to achieve synchronization among more than one process.

In the producer-consumer problem, a Producer produces items while a Consumer consumes the items the Producer has produced. Both share the same fixed-size memory buffer. The task of the Producer is to produce an item, put it into the memory buffer, and start producing again; the task of the Consumer is to consume items from the memory buffer.

Let's understand what the problem is.

Below are a few points that are considered the problems that occur in Producer-Consumer:

  • The producer should produce data only when the buffer is not full. In case it is found that the buffer is full, the producer is not allowed to store any data into the memory buffer.
  • Data can only be consumed by the consumer if and only if the memory buffer is not empty. In case it is found that the buffer is empty, the consumer is not allowed to use any data from the memory buffer.
  • The producer and the consumer should not access the memory buffer at the same time.

Let's see the code for the above problem:

 

Producer Code
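(A sketch reconstructed from the explanation below; buffer, n, in, and count follow the text, and produce_item() is an illustrative helper.)

while(TRUE)
{
    item = produce_item();
    while(count == n);        /* busy-wait while the buffer is full */
    buffer[in] = item;
    in = (in + 1) % n;
    /* count++ expands into three machine instructions: */
    Rp = m[count];
    Rp = Rp + 1;
    m[count] = Rp;
}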




Consumer Code
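(Again a sketch reconstructed from the explanation below; consume_item() is an illustrative helper.)

while(TRUE)
{
    while(count == 0);        /* busy-wait while the buffer is empty */
    item = buffer[out];
    out = (out + 1) % n;
    /* count-- expands into three machine instructions: */
    Rc = m[count];
    Rc = Rc - 1;
    m[count] = Rc;
    consume_item(item);
}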




 

Let's understand the above Producer and Consumer code:

Before starting an explanation of the code, first understand a few terms used in the above code:

  1. "in" used in a producer code represent the next empty buffer
  2. "out" used in consumer code represent first filled buffer
  3. “count” keeps the count number of elements in the buffer
  4. count is further divided into 3 lines code represented in the block in both the producer and consumer code.

 

If we talk about the Producer code first:

-- Rp is a register which holds the value of m[count]

-- Rp is incremented (as an element has been added to the buffer)

-- the incremented value of Rp is stored back to m[count]

Similarly, for the Consumer code:

-- Rc is a register which holds the value of m[count]

-- Rc is decremented (as an element has been removed from the buffer)

-- the decremented value of Rc is stored back to m[count].

If a context switch occurs between these three instructions, the producer's and consumer's updates to count can interleave and leave count with an incorrect value. This race condition is what the synchronization mechanisms discussed below are designed to prevent.



Critical Section | Critical Section Problem

 

Critical Section-

 We have discussed-

·         Process Synchronization controls the execution of processes running concurrently so as to produce consistent results.

·         Critical section is a part of the program where shared resources are accessed by the process.

 Critical Section Problem- 

·         If multiple processes access the critical section concurrently, then results produced might be inconsistent.

·         This problem is called the critical section problem.

 Synchronization Mechanisms- 

Synchronization mechanisms allow processes to access the critical section in a synchronized manner, avoiding inconsistent results.

 For every critical section in the program, a synchronization mechanism adds-

·         An entry section before the critical section

·         An exit section after the critical section

 


 

Entry Section- 

·         It acts as a gateway for a process to enter inside the critical section.

·         It ensures that only one process is present inside the critical section at any time.

·         It does not allow any other process to enter inside the critical section if one process is already present inside it.

 

 

Exit Section- 

·         It acts as an exit gate for a process to leave the critical section.

·         When a process takes exit from the critical section, some changes are made so that other processes can enter inside the critical section.

 

Criteria For Synchronization Mechanisms- 

Any synchronization mechanism proposed to handle the critical section problem should meet the following criteria-

 


 

1.      Mutual Exclusion

2.      Progress

3.      Bounded Wait

4.      Architectural Neutral

 

1. Mutual Exclusion- 

The mechanism must ensure-

·         The processes access the critical section in a mutual exclusive manner.

·         Only one process is present inside the critical section at any time.

·         No other process can enter the critical section until the process already present inside it completes.

 

2. Progress- 

The mechanism must ensure-

·         An entry of a process inside the critical section is not dependent on the entry of another process inside the critical section.

·         A process can freely enter inside the critical section if there is no other process present inside it.

·         A process enters the critical section only if it wants to enter.

·         A process is not forced to enter inside the critical section if it does not want to enter.

 

3. Bounded Wait- 

The mechanism should ensure-

·         The wait of a process to enter the critical section is bounded.

·         Every process gets a chance to enter the critical section within a bounded waiting time; no process waits indefinitely.

 

4. Architectural Neutral- 

The mechanism should ensure-

·         It can run on any architecture without any problem.

·         There is no dependency on the architecture.

 

Important Notes-

 

Note-01: 

·         Mutual Exclusion and Progress are the mandatory criteria.

·         They must be fulfilled by all the synchronization mechanisms.

 

Note-02: 

·         Bounded waiting and Architectural neutrality are the optional criteria.

·         However, it is recommended to meet these criteria if possible.

 

 

Two-Process Solutions-Peterson's Solution

Consider two processes, P0 and P1. The algorithms below are restricted to two processes; when presenting them for a generic process Pi, we use Pj to denote the other process (j = 1 - i).

Algorithm 1

repeat

    while turn ≠ i;   /* do nothing */

        critical section

    turn := j;

        remainder section

until false;

Let the processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section.

This guarantees that only one process at a time can be in its critical section. However, it requires strict alternation: if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section.

Algorithm 2

repeat

    flag[i] := true;

    while flag[j];   /* do nothing */

        critical section

    flag[i] := false;

        remainder section

until false;

The problem with algorithm 1 is that it does not retain sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section. To remedy this problem, we can replace the variable turn with the following array:

 boolean flag [2] ;

The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is ready to enter the critical section. The structure of process Pi is shown above. However, this algorithm does not satisfy the progress requirement: if both processes set their flags to true at the same time, each will loop forever in its while statement, and neither will ever enter the critical section.

Algorithm 3(Peterson's Solution)

By combining the key ideas of algorithm 1 and algorithm 2, we obtain a correct solution to the critical-section problem, where all three requirements are met. The processes share two variables: 

boolean flag [2];

int turn;

repeat

    flag[i] := true;

    turn := j;

    while (flag[j] and turn = j);   /* do nothing */

        critical section

    flag[i] := false;

        remainder section

until false;

Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but it is either 0 or 1). The structure of process Pi is shown above.

To enter the critical section, Pi sets flag[i] to true and asserts that it is the other process's turn to enter if it wishes (turn = j). If both processes try to enter at the same time, the eventual value of turn decides which of the two is allowed to enter its critical section first. We now argue that this solution is correct. We need to show that:

1.      Mutual exclusion is preserved,

2.      The progress requirement is satisfied,

3.      The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both.

Hence one of the processes, say Pj, must have successfully executed its while statement first. At that time flag[j] == true and turn == j, and this condition persists as long as Pj is in its critical section, so Pi stays blocked in its while loop: mutual exclusion is preserved. Properties 2 and 3 are shown by a similar argument.
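For readers who want to experiment, here is a minimal runnable sketch of Peterson's algorithm in C with POSIX threads. C11 atomics with sequentially consistent ordering stand in for the plain loads and stores above, since ordinary variables are not safe on modern out-of-order hardware; compile with gcc -std=c11 -pthread:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

atomic_bool flag[2];            /* flag[i]: Pi wants to enter */
atomic_int  turn;               /* whose turn it is to yield */
long counter = 0;               /* shared data protected by the lock */

void lock(int i)
{
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                       /* busy-wait */
}

void unlock(int i)
{
    atomic_store(&flag[i], false);
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;              /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}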


Dekker’s algorithm in Process Synchronization

To obtain mutual exclusion, bounded waiting, and progress, several algorithms have been implemented, one of which is Dekker’s Algorithm. To understand the algorithm, let’s first recall the structure of a solution to the critical section problem.
A process is generally represented as:
do {
    //entry section
        critical section
    //exit section
        remainder section
} while (TRUE);
The solution to critical section problem must ensure following three conditions:

  1. Mutual Exclusion
  2. Progress
  3. Bounded Waiting

One solution that ensures all of the above factors is Peterson’s solution.
Another is Dekker’s solution. Dekker’s algorithm was the first provably correct solution to the critical section problem. It allows two threads to share a single-use resource without conflict, using only shared memory for communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.
Although there are many versions of Dekker’s solution, the final (fifth) version is the one that satisfies all of the above conditions and is the most efficient of them all.

Note: Dekker’s Solution, mentioned here, ensures mutual exclusion between two processes only, it could be extended to more than two processes with the proper use of arrays and variables.

 

Dekker’s Algorithm (final and completed solution): The idea is to use the notion of a favoured thread to determine entry to the critical section. The favoured thread alternates between the two threads, providing mutual exclusion while avoiding deadlock, indefinite postponement, and lockstep synchronization.
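The final version can be sketched as follows for process Pi (a reconstruction consistent with the description above; j = 1 - i, both flags start as false, and turn starts at 0):

flag[i] = true;                 /* announce intent to enter */
while (flag[j]) {               /* the other process also wants in */
    if (turn == j) {            /* not favoured: back off */
        flag[i] = false;
        while (turn == j);      /* busy-wait until favoured */
        flag[i] = true;
    }
}
/* critical section */
turn = j;                       /* favour the other process */
flag[i] = false;
/* remainder section */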


Semaphores

Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization.

The definitions of wait and signal are as follows −

  • Wait

    This operation decrements the value of its argument S as soon as S becomes positive (greater than or equal to 1); while S is zero or negative, no decrement takes place and the caller keeps waiting. It mainly controls the entry of a task into the critical section. The wait() operation was originally termed P, so it is also known as the P(S) operation. The definition of the wait operation is as follows:

wait(S)
{
   while (S<=0);

   S--;
}
  • Signal

    This operation increments the value of its argument S, allowing a process waiting on the semaphore (if any) to proceed. It mainly controls the exit of a task from the critical section. The signal() operation was originally termed V, so it is also known as the V(S) operation. The definition of the signal operation is as follows:

signal(S)
{
   S++;
}

Also, note that all the modifications to the integer value of semaphore in the wait() and signal() operations must be executed indivisibly.
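As an illustration, a typical mutual-exclusion pattern with a binary semaphore initialized to 1 looks like this (a sketch in the same pseudocode style as above):

semaphore mutex = 1;

do
{
   wait(mutex);       /* entry section */
   /* critical section */
   signal(mutex);     /* exit section */
   /* remainder section */
} while(TRUE);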

Properties of Semaphores

  1. Semaphores are simple and always have a non-negative integer value.

  2. Works with many processes.

  3. Can have many different critical sections with different semaphores.

  4. Each critical section has unique access semaphores.

  5. Can permit multiple processes into the critical section at once, if desirable.

We will now cover the types of semaphores in the operating system.

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about these are given as follows −

  • Counting Semaphores

    These are used to implement bounded concurrency. Counting semaphores can range over an unrestricted domain. They can be used to control access to a given resource that consists of a finite number of instances: the semaphore count indicates the number of available resources. When a resource is released the count is incremented, and when a resource is acquired the count is decremented. A counting semaphore by itself provides no mutual exclusion.

  • Binary Semaphores
    A binary semaphore is a special form of semaphore used for implementing mutual exclusion, hence it is often called a mutex. It is initialized to 1 and only takes the values 0 and 1 during the execution of a program. In a binary semaphore, the wait operation proceeds only if the value of the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are easier to implement than counting semaphores.

Advantages of Semaphores:

Benefits of using Semaphores are as given below:

  • With the help of semaphores, there is flexible management of resources.

  • Semaphores are machine-independent, as they are implemented in the machine-independent code of the microkernel.

  • A binary semaphore allows only one process into the critical section at a time, while a counting semaphore can deliberately admit more than one process or thread when that is desirable.

  • Because semaphores follow the mutual exclusion principle strictly, they are much more efficient than some other methods of synchronization.

  • With a blocking implementation there is no waste of resources due to busy waiting: processor time is not spent needlessly checking whether a condition is fulfilled before a process may access the critical section.

Disadvantages of Semaphores:

  • One of the biggest limitations is that semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes access it later.

  • To avoid deadlocks in the semaphore, the Wait and Signal operations are required to be executed in the correct order.

  • Using semaphores on a large scale is impractical, as their use leads to a loss of modularity; this happens because the wait() and signal() operations prevent the creation of a structured layout for the system.

  • Their use is not enforced but is by convention only.

  • With improper use, a process may block indefinitely. Such a situation is called Deadlock. We will be studying deadlocks in detail in coming lessons.

Counting Semaphore vs. Binary Semaphore

Here are some major differences between counting and binary semaphores:

Counting Semaphore                       Binary Semaphore
No mutual exclusion                      Mutual exclusion
Any integer value                        Value only 0 and 1
More than one slot                       Only one slot
Provides access to a set of processes    Provides a mutual exclusion mechanism


Dining Philosophers Problem

The dining philosophers problem is another classic synchronization problem, used to evaluate situations where multiple resources must be allocated to multiple processes.

Problem Statement: 

Consider there are five philosophers sitting around a circular dining table. The dining table has five chopsticks and a bowl of rice in the middle as shown in the below figure.


At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks - one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks back down at their original places.

Solution:

From the problem statement, it is clear that a philosopher can think for an indefinite amount of time, but when a philosopher starts eating, he has to stop at some point in time. The philosopher is in an endless cycle of thinking and eating.

An array of five semaphores, stick[5], is used, one for each of the five chopsticks; each semaphore is initialized to 1.

The code for each philosopher looks like:

while(TRUE)
{
wait(stick[i]);
/*
mod is used because if i=4, the next
chopstick is 0 (the dining table is circular)
*/
wait(stick[(i+1) % 5]);
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}

When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to be available and picks it up too. After eating, he puts both chopsticks down.

But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, a deadlock occurs, because each will wait forever for the other chopstick.

The possible solutions for this are:

  • A philosopher must be allowed to pick up the chopsticks only if both the left and right chopsticks are available.
  • Allow six chopsticks to be used simultaneously at the table.
  • Allow only four philosophers to sit at the table. That way, if all four philosophers pick up four chopsticks, there will be one chopstick left on the table. So, one philosopher can start eating, and eventually two chopsticks will become available. In this way, deadlocks can be avoided.
  • An odd-numbered philosopher picks up his left chopstick first and then his right chopstick, whereas an even-numbered philosopher picks up his right chopstick first and then his left chopstick (see the sketch below).
  • One philosopher picks up his right chopstick first and then his left chopstick, i.e. the sequence is reversed for one philosopher.
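As an illustration, here is a sketch of the odd/even fix applied to the philosopher loop above (same semaphores as before; philosophers are numbered 0 to 4, so i % 2 distinguishes odd from even):

while(TRUE)
{
    if (i % 2 == 1)                 /* odd philosopher: left stick first */
    {
        wait(stick[i]);
        wait(stick[(i+1) % 5]);
    }
    else                            /* even philosopher: right stick first */
    {
        wait(stick[(i+1) % 5]);
        wait(stick[i]);
    }
    /* eat */
    signal(stick[i]);
    signal(stick[(i+1) % 5]);
    /* think */
}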


Sleeping Barber Problem

Problem : The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber chair, and n chairs for waiting customers, if there are any, to sit on.

  • If there is no customer, then the barber sleeps in his own chair.
  • When a customer arrives, he has to wake up the barber.
  • If there are many customers and the barber is cutting a customer’s hair, then the remaining customers either wait if there are empty chairs in the waiting room or they leave if no chairs are empty.



Solution : The solution to this problem uses three semaphores. The first, Customers, counts the number of customers present in the waiting room (the customer in the barber chair is not included, because he is not waiting). The second, Barber (0 or 1), tells whether the barber is idle or working. The third, a mutex, provides the mutual exclusion required by the processes. The solution also keeps a record of the number of customers waiting in the waiting room; if that number equals the number of chairs, an arriving customer leaves the barbershop.

When the barber shows up in the morning, he executes the procedure barber, causing him to block on the semaphore customers because it is initially 0. Then the barber goes to sleep until the first customer comes up.

When a customer arrives, he executes the customer procedure. The customer acquires the mutex before entering the critical region; if another customer enters thereafter, the second one will not be able to do anything until the first one has released the mutex. The customer then checks the chairs in the waiting room: if the number of waiting customers is less than the number of chairs, he sits down; otherwise he releases the mutex and leaves.



If a chair is available, the customer sits in the waiting room, increments the waiting count, and performs up on the Customers semaphore; this wakes up the barber if he is sleeping.

At this point, the customer and barber are both awake, and the barber is ready to give that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting room, the barber goes back to sleep.

Figure: Flow chart for the Sleeping Barber problem


Algorithm for Sleeping Barber problem:

Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex M = 1;
int FreeSeats = N;
  
Barber {
      while(true) {
            /* waits for a customer (sleeps). */
            down(Customers);
  
            /* mutex to protect the number of available seats.*/
            down(M);
  
            /* a chair gets free.*/
            FreeSeats++;
             
            /* bring customer for haircut.*/
            up(Barber);
             
            /* release the mutex on the chair.*/
            up(M);
            /* barber is cutting hair.*/
      }
}
  
Customer {
      while(true) {
            /* protects seats so only 1 customer tries to sit
               in a chair if that's the case.*/
            down(M);
            if(FreeSeats > 0) {
                   
                  /* sitting down.*/
                  FreeSeats--;
                   
                  /* notify the barber. */
                  up(Customers);
                   
                  /* release the lock */
                  up(M);
                   
                  /* wait in the waiting room if barber is busy. */
                  down(Barber);
                  // customer is having hair cut
            } else {
                  /* release the lock */
                  up(M);
                  // customer leaves
            }
      }
}

Readers Writer Problem

Readers writer problem is another example of a classic synchronization problem. There are many variants of this problem, one of which is examined below.

Statement

There is a shared resource which can be accessed by multiple processes. There are two types of processes in this context: readers and writers. Any number of readers can read from the shared resource simultaneously, but only one writer can write to it at a time. When a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource if a nonzero number of readers is accessing it at that time.

The Solution

From the above problem statement, it is evident that readers have higher priority than writers: if a writer wants to write to the resource, it must wait until there are no readers currently accessing the resource.

Here, we use one mutex m and a semaphore w. An integer variable read_count is used to maintain the number of readers currently accessing the resource; it is initialized to 0. Both m and w are initially given the value 1.

Instead of having each reader acquire a lock on the shared resource itself, we use the mutex m to make a process acquire and release a lock whenever it updates the read_count variable.

The code for the writer process looks like this:

while(TRUE)
{
wait(w);
/* perform the write operation */
signal(w);
}

And, the code for the reader process looks like this:

while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);
//release lock
signal(m);
/* perform the reading operation */
// acquire lock
wait(m);
read_count--;
if(read_count == 0)
signal(w);
// release lock
signal(m);
}

Code explanation

  • As seen above in the code for the writer, the writer just waits on the w semaphore until it gets a chance to write to the resource.
  • After performing the write operation, it increments w so that the next writer can access the resource.
  • On the other hand, in the code for the reader, the lock is acquired whenever the read_count is updated by a process.
  • When a reader wants to access the resource, first it increments the read_count value, then accesses the resource and then decrements the read_count value.
  • The semaphore w is used by the first reader which enters the critical section and the last reader which exits the critical section.
  • The reason for this is that when the first reader enters the critical section, the writer is blocked from the resource; only new readers can access the resource now.
  • Similarly, when the last reader exits the critical section, it signals the writer using the w semaphore because there are zero readers now and a writer can have the chance to access the resource.



Bounded Buffer Problem

The bounded buffer problem, also called the producer-consumer problem, is one of the classic problems of synchronization. Let's start by understanding the problem, before moving on to the solution and program code.

Problem Statement

There is a buffer of n slots and each slot is capable of storing one unit of data. There are two processes running, namely, producer and consumer, which are operating on the buffer.

Figure: Bounded Buffer Problem

A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove data from a filled slot in the buffer. As you might have guessed by now, those two processes won't produce the expected output if they are being executed concurrently.

There needs to be a way to make the producer and consumer work in an independent manner.

Solution

One solution of this problem is to use semaphores. The semaphores which will be used here are:

  • mutex, a binary semaphore which is used to acquire and release the lock.
  • empty, a counting semaphore whose initial value is the number of slots in the buffer, since, initially all slots are empty.
  • full, a counting semaphore whose initial value is 0.

At any instant, the current value of empty represents the number of empty slots in the buffer and full represents the number of occupied slots in the buffer.

The Producer Operation

The pseudocode of the producer function looks like this:

do
{
// wait until empty > 0 and then decrement 'empty'
wait(empty);
// acquire lock
wait(mutex);
/* perform the insert operation in a slot */
// release lock
signal(mutex);
// increment 'full'
signal(full);
}
while(TRUE);
  • Looking at the above code for a producer, we can see that a producer first waits until there is at least one empty slot.
  • Then it decrements the empty semaphore because there will now be one less empty slot, since the producer is going to insert data in one of those slots.
  • Then, it acquires lock on the buffer, so that the consumer cannot access the buffer until producer completes its operation.
  • After performing the insert operation, the lock is released and the value of full is incremented because the producer has just filled a slot in the buffer.

The Consumer Operation

The pseudocode for the consumer function looks like this:

do
{
// wait until full > 0 and then decrement 'full'
wait(full);
// acquire the lock
wait(mutex);
/* perform the remove operation in a slot */
// release the lock
signal(mutex);
// increment 'empty'
signal(empty);
}
while(TRUE);
  • The consumer waits until there is at least one full slot in the buffer.
  • Then it decrements the full semaphore because the number of occupied slots will be decreased by one, after the consumer completes its operation.
  • After that, the consumer acquires lock on the buffer.
  • Following that, the consumer completes the removal operation so that the data from one of the full slots is removed.
  • Then, the consumer releases the lock.
  • Finally, the empty semaphore is incremented by 1, because the consumer has just removed data from an occupied slot, thus making it empty.



Test and Set
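The test_and_set instruction is a hardware primitive that atomically reads a boolean variable and sets it to true, returning the old value. Its behavior can be sketched as follows, together with the entry and exit sections built on it; lock is a shared boolean initialized to false:

boolean test_and_set(boolean *target)
{
    boolean rv = *target;   /* the whole function executes atomically */
    *target = true;
    return rv;
}

/* structure of a process using the test_and_set lock */
do
{
    while (test_and_set(&lock));   /* entry section: busy-wait */
    /* critical section */
    lock = false;                  /* exit section */
    /* remainder section */
} while(TRUE);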

