Parallel and Distributed Computing — Complete BSCS Notes
Why Use Parallel & Distributed Systems?
Definition: Systems that use multiple processors or computers to solve problems faster.
Advantages:
• Faster execution (speedup over a single processor)
• Ability to handle large data sets
• Reliability (redundant machines can take over on failure)
Why NOT use?
• Complex programming
• Synchronization issues
• High cost
Speedup & Amdahl's Law
Definition: Speedup measures performance improvement using multiple processors.
Amdahl's Law: The sequential fraction of a program limits the maximum achievable speedup: Speedup(N) = 1 / (f + (1 - f)/N), where f is the sequential fraction and N is the number of processors.
Example:
If 30% of the code is sequential (f = 0.3), the maximum speedup is 1/0.3 ≈ 3.33x, no matter how many processors are added.
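To check that arithmetic, here is a minimal Python sketch of the formula (the helper name amdahl_speedup is ours, not from the course):

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: f = sequential fraction, n = number of processors."""
    return 1.0 / (f + (1.0 - f) / n)

# With f = 0.3 the speedup approaches 1/0.3 ≈ 3.33x as n grows.
for n in (2, 4, 16, 1024):
    print(n, round(amdahl_speedup(0.3, n), 2))  # 1.54, 2.11, 2.91, 3.33
```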
Hardware Architectures
Multiprocessors: Shared memory system
Distributed Systems: Network of computers
Clusters: Group of computers working together
Software Architectures
Threads: Execution within one shared address space (shared memory)
Processes: Separate address spaces that communicate by message passing (both contrasted in the sketch after this list)
DSM: Distributed shared memory
DSD: Distributed shared data
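A minimal Python sketch of the two models above, assuming only the standard library: threads mutate one shared object, while processes exchange a message through a queue.

```python
import threading
from multiprocessing import Process, Queue

counter = {"value": 0}          # shared by all threads in this process
lock = threading.Lock()

def thread_worker():
    with lock:                  # synchronize access to the shared object
        counter["value"] += 1

def process_worker(q):
    q.put("hello from the child process")  # explicit message, no sharing

if __name__ == "__main__":
    # Threads: shared memory; every thread sees the same 'counter'.
    threads = [threading.Thread(target=thread_worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared counter:", counter["value"])   # 4

    # Processes: message passing; data travels through a queue.
    q = Queue()
    p = Process(target=process_worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```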
Parallel Algorithms
Definition: Algorithms that run multiple tasks simultaneously.
Examples:
• Parallel Search
• Parallel Sorting
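As a hedged illustration of the Parallel Search example above (our own sketch, not an algorithm given in the notes): split the input into chunks and let each worker scan one chunk.

```python
from multiprocessing import Pool

def chunk_contains(args):
    chunk, target = args        # one worker scans one chunk
    return target in chunk

def parallel_search(data, target, workers=4):
    # Split the data into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return any(pool.map(chunk_contains, [(c, target) for c in chunks]))

if __name__ == "__main__":
    print(parallel_search(list(range(100_000)), 98_765))   # True
```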
Core Concepts
Concurrency: Multiple tasks in progress at the same time
Synchronization: Controlling the order in which tasks run and access shared data (demonstrated below)
Load Balancing: Distributing work evenly across processors
Granularity: The size of tasks relative to the communication and synchronization overhead
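A small sketch of why synchronization matters, using Python threads (the unsafe result varies by run and interpreter; the point is the principle):

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def unsafe():
    global counter
    for _ in range(N):
        counter += 1            # read-modify-write is not atomic

def safe():
    global counter
    for _ in range(N):
        with lock:              # only one thread updates at a time
            counter += 1

if __name__ == "__main__":
    for worker in (unsafe, safe):
        counter = 0
        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(worker.__name__, counter)  # 'unsafe' may print less than 400000
```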
Distributed Memory Programming
Message Passing: Processes exchange data by explicitly sending and receiving messages (no shared memory)
MPI: Message Passing Interface, the standard API for message-passing programs (example below)
PVM: Parallel Virtual Machine, an older system that makes a network of machines act as one parallel computer
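A minimal point-to-point MPI example, assuming the mpi4py bindings and an MPI runtime are installed:

```python
# Launch two copies of this program with: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # each copy of the program gets a unique rank

if rank == 0:
    comm.send({"data": [1, 2, 3]}, dest=1, tag=0)   # explicit message send
elif rank == 1:
    msg = comm.recv(source=0, tag=0)                # blocking receive
    print("rank 1 received:", msg)
```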
Advanced Systems
Distributed Shared Memory: Shared memory over network
Aurora: Abstract data types system
Enterprise: Process templates
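DSM makes physically distributed memories look like one shared address space. As a loose single-machine analogue only (not a real DSM system), Python's multiprocessing.Manager gives several processes one logical dictionary:

```python
from multiprocessing import Manager, Process

def worker(shared, key):
    shared[key] = key * key     # writes become visible to every process

if __name__ == "__main__":
    with Manager() as mgr:
        shared = mgr.dict()     # one logical store served by a manager process
        procs = [Process(target=worker, args=(shared, k)) for k in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(dict(shared))     # {0: 0, 1: 1, 2: 4, 3: 9}
```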
Research Topics
• Cloud computing
• GPU computing
• Big data processing
• AI parallelization