Lamport's Distributed Mutual Exclusion Algorithm


Parallel Algorithms


Lamport's Distributed Mutual Exclusion Algorithm is a permission-based algorithm for achieving mutual exclusion in a distributed system, where multiple processes compete for access to a shared resource. It was proposed by Leslie Lamport in 1978, in the same paper that introduced logical clocks.

Basic Idea: The basic idea behind Lamport's algorithm is to timestamp every request using Lamport's logical clocks and to serve requests in timestamp order: the request with the lowest timestamp gets priority for the shared resource. If two requests carry the same timestamp, the tie is broken by comparing process IDs, with the lower ID winning.
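
The ordering rule is easy to demonstrate concretely. The short sketch below (the names LamportClock and Request are illustrative, not taken from any particular library) shows a Lamport logical clock and requests ordered by the pair (timestamp, process ID):

    from dataclasses import dataclass

    @dataclass(order=True)
    class Request:
        """A critical-section request, ordered by (timestamp, process_id)."""
        timestamp: int
        process_id: int

    class LamportClock:
        """Minimal Lamport logical clock."""
        def __init__(self):
            self.time = 0

        def tick(self):
            # Local event (e.g. issuing a request): advance the clock by one.
            self.time += 1
            return self.time

        def update(self, received_time):
            # On message receipt: jump past the sender's timestamp, then advance.
            self.time = max(self.time, received_time) + 1
            return self.time

    # Requests form a total order: lower timestamp first, lower process ID on a tie.
    print(sorted([Request(5, 2), Request(3, 7), Request(3, 1)]))
    # -> [Request(timestamp=3, process_id=1), Request(timestamp=3, process_id=7),
    #     Request(timestamp=5, process_id=2)]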

Algorithm Steps:

  1. Requesting Access:

    • When a process wants to access the shared resource, it increments its logical clock and timestamps a request.
    • It places the request in its own request queue and sends a request message, carrying that timestamp and its process ID, to all other processes in the system.
  2. Receiving Requests:

    • Upon receiving a request message from another process:
      • The receiving process updates its logical clock and places the request in its own request queue, which is kept ordered by timestamp, with process IDs breaking ties.
      • It then sends a timestamped reply message back to the sender, acknowledging the request. (Replies are never withheld; deferring replies is a feature of the later Ricart-Agrawala algorithm, not of Lamport's.)
  3. Entering Critical Section:

    • A process enters the critical section once two conditions hold: its own request is at the front of its request queue, and it has received a message with a timestamp larger than its request's timestamp from every other process.
  4. Releasing Access:

    • After completing its critical section, the process removes its request from its own queue and sends release messages to all other processes, informing them that it no longer requires the shared resource; each receiving process then removes that request from its queue. (A sketch covering all four steps appears after this list.)
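
The four steps map naturally onto a per-process object that reacts to incoming messages. The following is a minimal sketch rather than a production implementation: it assumes reliable, per-channel FIFO delivery through a caller-supplied send(dest_id, message) function, and the class and method names (LamportMutex, on_request, and so on) are illustrative.

    import heapq

    class LamportMutex:
        """State kept by one process in Lamport's mutual exclusion algorithm (sketch)."""

        def __init__(self, pid, peer_ids, send):
            self.pid = pid
            self.peers = list(peer_ids)
            self.send = send                     # send(dest_id, message), supplied by the caller
            self.clock = 0
            self.queue = []                      # min-heap of (timestamp, pid) requests
            self.last_seen = {p: 0 for p in self.peers}   # highest timestamp seen per peer
            self.my_request = None

        def _tick(self, received=0):
            # Lamport clock rule: advance past any timestamp just received.
            self.clock = max(self.clock, received) + 1
            return self.clock

        def _note(self, ts, sender):
            # Record the highest timestamp seen from this peer and update the clock.
            self.last_seen[sender] = max(self.last_seen[sender], ts)
            self._tick(ts)

        # Step 1: requesting access
        def request(self):
            self.my_request = (self._tick(), self.pid)
            heapq.heappush(self.queue, self.my_request)
            for p in self.peers:
                self.send(p, ("REQUEST", self.my_request[0], self.pid))

        # Step 2: receiving a request -- enqueue it and reply immediately
        def on_request(self, ts, sender):
            self._note(ts, sender)
            heapq.heappush(self.queue, (ts, sender))
            self.send(sender, ("REPLY", self._tick(), self.pid))

        def on_reply(self, ts, sender):
            self._note(ts, sender)

        # Step 3: entry condition -- own request at the head of the queue, and a
        # later-timestamped message already received from every other process
        def can_enter(self):
            return (self.my_request is not None
                    and self.queue and self.queue[0] == self.my_request
                    and all(self.last_seen[p] > self.my_request[0] for p in self.peers))

        # Step 4: releasing access
        def release(self):
            heapq.heappop(self.queue)            # own request is at the head
            self.my_request = None
            ts = self._tick()
            for p in self.peers:
                self.send(p, ("RELEASE", ts, self.pid))

        def on_release(self, ts, sender):
            self._note(ts, sender)
            pending = [r for r in self.queue if r[1] == sender]
            if pending:                          # drop the sender's oldest request
                self.queue.remove(min(pending))
                heapq.heapify(self.queue)

In use, a process would call request(), route incoming ("REQUEST", ts, pid), ("REPLY", ts, pid), and ("RELEASE", ts, pid) messages to the corresponding on_* handlers, enter the critical section once can_enter() returns True, and call release() on exit.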

Properties:

  • Mutual Exclusion: At most one process is in the critical section at any given time, because every process serves requests in the same total order given by (timestamp, process ID).
  • Deadlock Freedom: The algorithm cannot deadlock: the (timestamp, process ID) order is total, so some request is always at the front of every queue and its owner eventually proceeds.
  • Starvation Freedom: Requests are granted in timestamp order, so every pending request is eventually served; no process can be passed over indefinitely by later requests.

Limitations:

  • Message Overhead: Every request involves messages to and from all other processes, so each critical-section entry costs 3(N-1) messages in a system of N processes, and the cost grows with the number of processes (a quick count is shown below).
  • Failure and Delay Sensitivity: The algorithm assumes reliable, in-order (FIFO) message delivery and no process failures. In real distributed systems, lost messages or a single crashed process can block every other process, since a process must hear from all others before entering the critical section.
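
As a back-of-the-envelope illustration of the overhead (a simple count, not a benchmark), each critical-section entry costs N-1 request, N-1 reply, and N-1 release messages in a system of N processes:

    # 3*(N-1) messages per critical-section entry: requests + replies + releases.
    for n in (5, 10, 100):
        print(f"{n} processes -> {3 * (n - 1)} messages per entry")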
