Source code
```cpp
#include <climits>
#include <mutex>

class NODE {
public:
    int key;
    NODE *next;
    NODE(int v) : key(v), next(nullptr) {}
    NODE() : next(nullptr) {}
};

class CLIST {
private:
    NODE head, tail;        // sentinel nodes bounding the list
    std::mutex glock;       // single global lock protecting the whole list
public:
    CLIST() {
        head.key = INT_MIN;     // smallest key (was written as 0x80000000)
        tail.key = INT_MAX;     // largest key  (was written as 0x7FFFFFFF)
        head.next = &tail;
    }
    // Reset the list, freeing every node between the sentinels.
    void Init() {
        while (head.next != &tail) {
            NODE *temp = head.next;
            head.next = head.next->next;
            delete temp;
        }
    }
    bool Add(int x) {
        NODE *pred = &head, *curr;
        glock.lock();
        curr = pred->next;
        while (curr->key < x) {     // find first node with key >= x
            pred = curr;
            curr = curr->next;
        }
        if (curr->key == x) {       // key already present
            glock.unlock();
            return false;
        }
        NODE *node = new NODE(x);   // splice new node between pred and curr
        node->next = curr;
        pred->next = node;
        glock.unlock();
        return true;
    }
    bool Remove(int x) {
        NODE *pred = &head, *curr;
        glock.lock();
        curr = pred->next;
        while (curr->key < x) {
            pred = curr;
            curr = curr->next;
        }
        if (curr->key == x) {       // unlink and free the node
            pred->next = curr->next;
            delete curr;
            glock.unlock();
            return true;
        }
        glock.unlock();
        return false;
    }
    bool Contains(int x) {
        glock.lock();
        NODE *curr = head.next;
        while (curr->key < x)
            curr = curr->next;
        bool found = (curr->key == x);
        glock.unlock();
        return found;
    }
};
```
The source code implements a linked list with coarse-grained synchronization: a single global lock (glock) protects the entire list, so only one thread can operate on the list at any given time.
Key Concepts in the Code:
Coarse-Grained Locking:
- A single lock (glock) is used for all operations (Add, Remove, Contains). This ensures mutual exclusion but prevents concurrency: only one thread can access the list at a time, even when different threads target different parts of the list.
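The explicit lock()/unlock() pairs in the source are easy to get wrong: an early return or an exception between them leaks the lock. The same coarse-grained pattern can be expressed with RAII. Here is a minimal sketch using std::lock_guard over a std::set as a stand-in container (the name CoarseSet is illustrative, not from the original code):

```cpp
#include <mutex>
#include <set>

// Illustrative coarse-grained set: one mutex guards every operation.
// std::lock_guard releases the lock automatically on every exit path.
class CoarseSet {
    std::set<int> data;
    std::mutex glock;   // single global lock, as in the original CLIST
public:
    bool Add(int x) {
        std::lock_guard<std::mutex> guard(glock);
        return data.insert(x).second;   // false if x was already present
    }
    bool Remove(int x) {
        std::lock_guard<std::mutex> guard(glock);
        return data.erase(x) == 1;
    }
    bool Contains(int x) {
        std::lock_guard<std::mutex> guard(glock);
        return data.count(x) == 1;
    }
};
```

The guard object ties the lock's lifetime to the scope, so there is no code path that can exit a method while still holding glock.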
Node Structure:
- The NODE class contains:
  - key: the value stored in the node.
  - next: a pointer to the next node in the list.
- The glock mutex is used to lock the entire list during each operation, ensuring no other thread can modify the list concurrently.
Add:
- The Add method inserts a new node with key x into the list, holding the lock for the entire operation. If a node with key x already exists it returns false; otherwise it links the new node in, unlocks the list, and returns true.
Remove:
- The Remove method removes the node with key x from the list. It locks the entire list, finds the node, unlinks and deletes it, then unlocks the list (returning false if the key is not present).
Contains:
- The Contains method checks whether a node with key x exists in the list. It locks the entire list, traverses it, unlocks, and returns the result.
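All three methods share the same pred/curr traversal: walk forward until curr->key >= x, then inspect curr. The sentinel keys guarantee the loop terminates without null checks. A stripped-down, single-threaded sketch of just that traversal logic (the names Node and SortedList are illustrative):

```cpp
#include <climits>

// Minimal single-threaded version of the pred/curr traversal used by
// Add, Remove, and Contains: walk until curr->key >= x, then act.
struct Node {
    int key;
    Node *next;
};

struct SortedList {
    Node head{INT_MIN, nullptr}, tail{INT_MAX, nullptr};  // sentinels
    SortedList() { head.next = &tail; }
    bool Add(int x) {
        Node *pred = &head, *curr = head.next;
        while (curr->key < x) { pred = curr; curr = curr->next; }
        if (curr->key == x) return false;   // duplicate key
        pred->next = new Node{x, curr};     // splice between pred and curr
        return true;
    }
    bool Contains(int x) {
        Node *curr = head.next;
        while (curr->key < x) curr = curr->next;
        return curr->key == x;
    }
};
```

Because the sentinels hold INT_MIN and INT_MAX, every search key falls strictly between them, so pred and curr always point at valid nodes.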
Execution Time Analysis:
The execution times of the coarse-grained linked list under different thread loads are measured. Here’s the performance table:
| Number of Threads | Execution Time (ms) |
|---|---|
| 1 | 1211 |
| 2 | 3265 |
| 4 | 3965 |
| 8 | 77828 |
| 16 | 79534 |
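The exact numbers depend on the workload, which the write-up does not specify. A harness along these lines could produce such a table: each thread performs a share of the operations against a shared, globally locked structure while the main thread times the run. This is a sketch, not the original benchmark; the operation mix, key range, and the std::set stand-in are all assumptions:

```cpp
#include <chrono>
#include <mutex>
#include <random>
#include <set>
#include <thread>
#include <vector>

std::set<int> list_;   // stand-in for the linked list
std::mutex glock;      // the single global lock

// Each thread performs `ops` operations: ~50% Contains, ~25% Add, ~25% Remove.
void worker(int ops, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> key(0, 999), op(0, 3);
    for (int i = 0; i < ops; ++i) {
        int x = key(rng);
        std::lock_guard<std::mutex> g(glock);   // entire list locked per op
        switch (op(rng)) {
            case 0: case 1: (void)list_.count(x); break;  // Contains
            case 2: list_.insert(x); break;               // Add
            default: list_.erase(x); break;               // Remove
        }
    }
}

// Run `total_ops` operations split across `nthreads` threads; return ms elapsed.
long long run(int nthreads, int total_ops) {
    list_.clear();
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> ts;
    for (int i = 0; i < nthreads; ++i)
        ts.emplace_back(worker, total_ops / nthreads, 42u + i);
    for (auto &t : ts) t.join();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}
```

Calling run(n, total_ops) for n = 1, 2, 4, 8, 16 with a fixed total_ops would yield one row of the table per call, though absolute times vary with hardware and workload.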
Key Insights:
Performance Scaling:
- Going from 1 thread to 2, execution time rises from 1211 ms to 3265 ms, roughly 2.7x. This is expected: with more threads, contention grows as they all compete for the same global lock (glock).
Performance Degradation at Higher Thread Counts:
- Performance degrades sharply at 4 threads and beyond: execution time jumps from 3965 ms at 4 threads to 77828 ms at 8 threads, an increase of nearly 20x, indicating that the global lock becomes the bottleneck as more threads attempt to acquire it simultaneously.
Lock Contention:
- The key issue here is lock contention: when multiple threads attempt to access the list at the same time, they must wait for the global lock to be released. As the number of threads grows, the amount of time spent waiting for the lock increases, leading to poor scalability and performance degradation.
Hardware Specifications:
- CPU: Intel Core i5-3570 (3.40 GHz)
- RAM: 8GB DDR3
The system specifications indicate a moderately powerful CPU and sufficient RAM, but the bottleneck in performance is clearly due to the coarse-grained locking mechanism, which limits concurrency.
Conclusion:
- Coarse-grained synchronization is a simple approach to thread safety but does not scale well in multi-threaded environments. The global lock prevents multiple threads from concurrently modifying or accessing different parts of the list, leading to significant performance degradation as the number of threads increases.
- For applications with high concurrency, fine-grained synchronization, optimistic synchronization, or lock-free techniques may offer better scalability and performance by reducing lock contention and allowing multiple threads to work concurrently without blocking each other.
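To make the contrast concrete, here is a minimal sketch of fine-grained (hand-over-hand) locking: each node carries its own mutex, and a traversal holds at most two node locks at a time, so threads working in different regions of the list can proceed in parallel. The names FNode and FineList are illustrative; sentinel keys follow the original code:

```cpp
#include <climits>
#include <mutex>

// Fine-grained list: one mutex per node instead of one global lock.
struct FNode {
    int key;
    FNode *next;
    std::mutex lock;
    FNode(int k, FNode *n) : key(k), next(n) {}
};

struct FineList {
    FNode tail{INT_MAX, nullptr};   // declared first so head can point to it
    FNode head{INT_MIN, &tail};
    bool Add(int x) {
        head.lock.lock();
        FNode *pred = &head;
        FNode *curr = pred->next;
        curr->lock.lock();
        while (curr->key < x) {     // hand-over-hand: lock the next node,
            pred->lock.unlock();    // then release the one two steps back
            pred = curr;
            curr = curr->next;
            curr->lock.lock();
        }
        bool added = false;
        if (curr->key != x) {       // splice in while holding both locks
            pred->next = new FNode(x, curr);
            added = true;
        }
        curr->lock.unlock();
        pred->lock.unlock();
        return added;
    }
    bool Contains(int x) {
        head.lock.lock();
        FNode *pred = &head;
        FNode *curr = pred->next;
        curr->lock.lock();
        while (curr->key < x) {
            pred->lock.unlock();
            pred = curr;
            curr = curr->next;
            curr->lock.lock();
        }
        bool found = (curr->key == x);
        curr->lock.unlock();
        pred->lock.unlock();
        return found;
    }
};
```

Hand-over-hand locking still serializes threads that pass through the same nodes (everyone enters via head), so optimistic or lock-free designs typically scale further; this sketch only illustrates the locking discipline.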
This analysis highlights the trade-offs associated with using coarse-grained locking in a linked list and underscores the need for more advanced synchronization techniques in high-concurrency scenarios.