Optimistic Linked List (Optimistic Synchronization Linked List) with Execution Time Analysis
Source code
```cpp
#include <climits>
#include <mutex>

class NODE {
public:
    int key;
    NODE *next;
    bool marked;           // logical-deletion flag (unused here; useful for lazy deletion)
    std::mutex m;          // per-node lock

    NODE(int v) { key = v; next = nullptr; marked = false; }
    NODE() { key = 0; next = nullptr; marked = false; }

    void lock() { m.lock(); }
    void unlock() { m.unlock(); }
};

class OLIST {
private:
    NODE head, tail;       // sentinel nodes holding the minimum and maximum keys

public:
    OLIST() {
        head.key = INT_MIN;
        tail.key = INT_MAX;
        head.next = &tail;
    }

    // Delete every node between the sentinels (call only while no other thread is active).
    void Init() {
        while (head.next != &tail) {
            NODE *temp = head.next;
            head.next = head.next->next;
            delete temp;
        }
    }

    // Re-traverse from the head to confirm that pred is still reachable
    // and still points to curr; otherwise the optimistic traversal is stale.
    bool Validate(NODE *pred, NODE *curr) {
        NODE *node = &head;
        while (node->key <= pred->key) {
            if (node == pred) {
                return pred->next == curr;
            }
            node = node->next;
        }
        return false;
    }

    bool Add(int x) {
        NODE *pred, *curr;
        while (true) {
            // Optimistic traversal: no locks are held while searching.
            pred = &head;
            curr = pred->next;
            while (curr->key < x) {
                pred = curr;
                curr = curr->next;
            }
            // Lock the window, then validate it.
            pred->lock();
            curr->lock();
            if (Validate(pred, curr)) {
                if (curr->key == x) {          // key already present
                    curr->unlock();
                    pred->unlock();
                    return false;
                } else {                       // splice the new node in
                    NODE *node = new NODE(x);
                    node->next = curr;
                    pred->next = node;
                    curr->unlock();
                    pred->unlock();
                    return true;
                }
            } else {                           // validation failed: retry
                curr->unlock();
                pred->unlock();
            }
        }
    }

    bool Remove(int x) {
        NODE *pred, *curr;
        while (true) {
            pred = &head;
            curr = pred->next;
            while (curr->key < x) {
                pred = curr;
                curr = curr->next;
            }
            pred->lock();
            curr->lock();
            if (Validate(pred, curr)) {
                if (curr->key == x) {
                    pred->next = curr->next;   // unlink the node
                    curr->unlock();
                    pred->unlock();
                    // delete curr; --> Memory Leak: freeing here is unsafe because
                    // another thread may still be traversing through curr.
                    return true;
                } else {
                    curr->unlock();
                    pred->unlock();
                    return false;
                }
            } else {
                curr->unlock();
                pred->unlock();
            }
        }
    }

    bool Contains(int x) {
        NODE *pred, *curr;
        while (true) {
            pred = &head;
            curr = pred->next;
            while (curr->key < x) {
                pred = curr;
                curr = curr->next;
            }
            pred->lock();
            curr->lock();
            if (Validate(pred, curr)) {
                if (curr->key == x) {
                    curr->unlock();
                    pred->unlock();
                    return true;
                } else {
                    curr->unlock();
                    pred->unlock();
                    return false;
                }
            } else {
                curr->unlock();
                pred->unlock();
            }
        }
    }
};
```
The source code implements an optimistic synchronization linked list, which employs a technique where operations are performed with the assumption that there will be no conflicts. If a conflict is detected, the operation is retried. This approach helps reduce the overhead of locking in scenarios where conflicts are rare, improving performance under low contention.
Key Concepts in the Code:
Optimistic Synchronization:
- The idea behind optimistic synchronization is to assume that no other thread will interfere with the operation. The `Validate` function checks whether the state of the list is still consistent before an operation such as adding or removing a node takes effect.
- If a thread detects a conflict (e.g., another thread has modified the list concurrently), the operation is retried. This is in contrast to more conservative locking approaches, where a thread locks up front and blocks until it can perform the operation. The sketch below condenses the pattern that all three methods share.
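Condensed into one place, the traverse-lock-validate-retry pattern shared by `Add`, `Remove`, and `Contains` looks like the following sketch, written as a free function over the `NODE` type from the source above. `OptimisticFind` is an illustrative name and not part of the original code; its validation loop repeats the check performed by `Validate`.

```cpp
// Minimal sketch of the optimistic pattern (illustrative, not original code).
bool OptimisticFind(NODE *head, int x) {
    while (true) {
        // 1. Optimistic phase: traverse the list without taking any locks.
        NODE *pred = head;
        NODE *curr = pred->next;
        while (curr->key < x) {
            pred = curr;
            curr = curr->next;
        }
        // 2. Lock the window [pred, curr].
        pred->lock();
        curr->lock();
        // 3. Validate: re-traverse from the head to confirm that pred is still
        //    reachable and still points to curr (same check as Validate above).
        bool valid = false;
        for (NODE *node = head; node->key <= pred->key; node = node->next) {
            if (node == pred) { valid = (pred->next == curr); break; }
        }
        if (valid) {
            // 4. The window is intact: perform the actual operation.
            bool found = (curr->key == x);
            curr->unlock();
            pred->unlock();
            return found;
        }
        // 5. Validation failed: another thread changed the window; retry.
        curr->unlock();
        pred->unlock();
    }
}
```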
Node Structure:
- The `NODE` class contains:
  - `key`: the value stored in the node.
  - `next`: a pointer to the next node in the list.
  - `marked`: a flag indicating whether the node is logically deleted (not used in the given code, but useful for lazy deletion).
  - a per-node lock exposed through `lock()` / `unlock()`, which the list operations take on `pred` and `curr` before validating.
Add:
- The `Add` method adds a new node with the key `x` into the list. It checks for conflicts by calling the `Validate` function; if no conflict is detected, it links the new node in (or returns `false` if the key is already present).
Remove:
- The `Remove` method removes the node with the key `x` from the list. It also uses the `Validate` function to ensure that the node has not been concurrently modified by another thread before unlinking it.
Contains:
- The `Contains` method checks whether a node with the given key exists in the list and returns `true` if it is found. A short usage sketch of all three operations follows.
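For reference, here is a minimal, hypothetical driver showing the return values of the three operations; no driver program appears in the original post.

```cpp
#include <iostream>

int main() {
    OLIST list;
    std::cout << list.Add(5)      << '\n';   // 1: 5 inserted
    std::cout << list.Add(5)      << '\n';   // 0: duplicate key rejected
    std::cout << list.Contains(5) << '\n';   // 1: found
    std::cout << list.Remove(5)   << '\n';   // 1: removed
    std::cout << list.Contains(5) << '\n';   // 0: no longer present
    list.Init();                             // free any remaining nodes
    return 0;
}
```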
Execution Time Analysis:
The execution time of the optimistic linked list was measured at different thread counts. Here is the performance table:
| Number of Threads | Execution Time (ms) |
|---|---|
| 1 | 1819 |
| 2 | 1452 |
| 4 | 1222 |
| 8 | 6330 |
| 16 | 10995 |
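The benchmark program itself is not included in the post. A harness along the following lines could produce comparable numbers; the total operation count, the operation mix, and the key range below are assumptions, not the actual parameters used.

```cpp
#include <chrono>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

// Assumed workload parameters (not taken from the original benchmark).
constexpr int NUM_OPS   = 4000000;  // total operations, split across threads
constexpr int KEY_RANGE = 1000;     // keys are drawn from [0, KEY_RANGE)

void Worker(OLIST *list, int num_threads) {
    thread_local std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> key(0, KEY_RANGE - 1);
    std::uniform_int_distribution<int> op(0, 2);
    for (int i = 0; i < NUM_OPS / num_threads; ++i) {
        int k = key(rng);
        switch (op(rng)) {
            case 0:  list->Add(k);      break;
            case 1:  list->Remove(k);   break;
            default: list->Contains(k); break;
        }
    }
}

int main() {
    OLIST list;
    for (int num_threads : {1, 2, 4, 8, 16}) {
        list.Init();  // start every run from an empty list
        auto start = std::chrono::high_resolution_clock::now();
        std::vector<std::thread> threads;
        for (int t = 0; t < num_threads; ++t)
            threads.emplace_back(Worker, &list, num_threads);
        for (auto &th : threads) th.join();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::high_resolution_clock::now() - start).count();
        std::cout << num_threads << " thread(s): " << ms << " ms\n";
    }
    return 0;
}
```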
Key Insights:
Performance Scaling:
- As the number of threads increases from 1 to 4, the execution time decreases significantly, indicating that optimistic synchronization is beneficial while conflicts between threads are rare.
Performance Degradation at Higher Thread Counts:
- At 8 threads, performance starts to degrade significantly, and it gets worse with 16 threads. This suggests that while optimistic synchronization works well with a few threads, the overhead of retrying operations grows as the number of threads, and with it contention, increases.
Optimistic Synchronization:
- The optimistic approach works best when conflicts are rare, as it avoids unnecessary locking. However, when many threads are involved and contention increases, the retries (due to validation failures) increase the overall execution time.
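One way to quantify those retries would be to count validation failures directly. The counter and helper names below are hypothetical instrumentation, not part of the original code; the increment would go in the else-branch each method executes when `Validate` returns false.

```cpp
#include <atomic>
#include <iostream>

// Hypothetical instrumentation: a process-wide count of validation failures.
std::atomic<long> g_retries{0};

// Call this in the retry branch of Add / Remove / Contains.
inline void RecordRetry() {
    g_retries.fetch_add(1, std::memory_order_relaxed);
}

// Call this after a benchmark run to see how retries grow with thread count.
void ReportRetries(int num_threads) {
    std::cout << num_threads << " threads: "
              << g_retries.exchange(0) << " validation failures\n";
}
```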
Hardware Specifications:
- CPU: Intel Core i5-3570 (3.40 GHz)
- RAM: 8GB DDR3
The system specifications indicate moderate processing power and memory. Note that the i5-3570 has four cores and no hyper-threading, so the 8- and 16-thread runs oversubscribe the CPU; the performance bottleneck appears to be the retry mechanism in the optimistic synchronization design, whose cost grows as contention rises at higher thread counts.
Conclusion:
- Optimistic synchronization offers improved performance at lower thread counts because it holds no locks during traversal and locks only the two nodes around the affected position. However, as the number of threads increases and contention rises, the cost of retries increases, leading to performance degradation.
- For applications with low contention, optimistic synchronization provides a good balance between concurrency and efficiency. However, for high-concurrency scenarios with many threads, alternative synchronization methods like lock-free or non-blocking approaches might perform better.
This analysis helps demonstrate the trade-offs of using optimistic synchronization in linked lists and highlights its strengths and weaknesses in different concurrency environments.