Sunday, August 24, 2014

A simple read-write lock implementation - Problem 1

Here I implemented a very basic read-write lock based on the pthread library.
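The my_rwlock_t structure itself is not shown in the snippets below, but a minimal sketch, inferred from the field names they use, would look like this:

#include <pthread.h>

typedef struct my_rwlock_ {
    pthread_mutex_t rw_mutex;    /* guards the whole structure */
    pthread_cond_t  rw_cond;     /* writers wait on this       */
    int             num_readers; /* count of active readers    */
} my_rwlock_t;

With that in mind, you may be wondering how the write-lock routine below works.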

void
my_rwlock_write_lock (my_rwlock_t *rw_lock)
{
    if (!rw_lock) {
        return;
    }

    pthread_mutex_lock(&rw_lock->rw_mutex);
    /* 'while', not 'if': pthread_cond_wait() may wake up spuriously,
     * so the predicate has to be rechecked after every wakeup. */
    while (rw_lock->num_readers) {
        pthread_cond_wait(&rw_lock->rw_cond, &rw_lock->rw_mutex);
    }
}

Hell, where is the unlock? Doesn't this cause a permanent deadlock?

If you look at the manpage of pthread_cond_wait(), the function atomically unlocks the mutex and waits on the condition variable. So there is no need to worry! Note also that when pthread_cond_wait() returns, the mutex is locked again, so a writer leaves this routine holding rw_mutex for the entire duration of the write.
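For completeness, the read-lock side of this scheme can be as simple as the following sketch (my reconstruction; it is not part of the snippets above). Readers just bump the count under the mutex, and since a writer keeps rw_mutex locked across the whole write, readers are naturally blocked out in the meantime:

void
my_rwlock_read_lock (my_rwlock_t *rw_lock)
{
    if (!rw_lock) {
        return;
    }

    /* Blocks while a writer is active, because the writer
     * holds rw_mutex until write_unlock(). */
    pthread_mutex_lock(&rw_lock->rw_mutex);
    rw_lock->num_readers++;
    pthread_mutex_unlock(&rw_lock->rw_mutex);
}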

Now coming to the problem. As you are aware, the previous implementation ran each thread routine in a continuous loop, and hence readers and writers were grabbing their time slices properly (even though not fairly). Now let me remove the for-loops from all the thread routines and examine the output.
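Without the loop, each thread routine boils down to something like this (an illustrative sketch; g_rw_lock and the routine name are placeholders for the real harness):

#include <stdio.h>

static my_rwlock_t g_rw_lock;   /* initialized elsewhere */

static void *
writer_thread (void *arg)
{
    long id = (long) arg;

    /* Grab the lock exactly once -- no surrounding for-loop. */
    my_rwlock_write_lock(&g_rw_lock);
    printf("Grabbed write-lock -- %ld\n", id);
    my_rwlock_write_unlock(&g_rw_lock);
    return NULL;
}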

nandakumar@heramba ~/rwlock $ ./a.out
Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 1


Voila, only one writer gets woken up while the other waits forever. What is the reason? There is no one to wake it up! pthread_cond_signal() wakes up only one thread, and hence only one of the writer threads gets woken up. How do we get around this problem? Simple: wake up all the writer threads waiting on the lock. The writer threads will then be scheduled as per the scheduling policy, but none of them will remain blocked on the condition variable.

The modified read_unlock() is below.

void
my_rwlock_read_unlock (my_rwlock_t *rw_lock)
{
    if (!rw_lock) {
        return;
    }

    pthread_mutex_lock(&rw_lock->rw_mutex);
    if (rw_lock->num_readers) {
        rw_lock->num_readers--;
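        /* The last reader out wakes up every waiting writer. */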
        if (!rw_lock->num_readers) {
            pthread_cond_broadcast(&rw_lock->rw_cond);
        }
    }
    pthread_mutex_unlock(&rw_lock->rw_mutex);
}


And here is the output:

nandakumar@heramba ~/rwlock $ ./a.out
Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 1
Grabbed write-lock -- 2


Works as expected! Another way is to wake up the threads waiting on the condition variable when the write lock is unlocked. Here is the modified code snippet and its output. Note that the wakeup in read_unlock() while releasing the final read lock is still required.

void
my_rwlock_write_unlock (my_rwlock_t *rw_lock)
{
    if (!rw_lock) {
        return;
    }

    /* rw_mutex was acquired in write_lock() and held across the
     * entire write; wake any remaining waiters, then release it. */
    pthread_cond_broadcast(&rw_lock->rw_cond);
    pthread_mutex_unlock(&rw_lock->rw_mutex);
}


Output:

nandakumar@heramba ~/rwlock $ gcc my_rwlock.c -lpthread
nandakumar@heramba ~/rwlock $ ./a.out
Grabbed read-lock -- 2
Grabbed read-lock -- 1
Grabbed write-lock -- 2
Grabbed write-lock -- 1


I still feel #1 is the better solution, since it is the reader's responsibility to wake up the writing threads.

Finally, here is a comparison between the initial implementation and the one with pthread_cond_broadcast() in read_unlock().

Output with pthread_cond_signal:

Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 1
Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 2
Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 1
Grabbed read-lock -- 1
Grabbed read-lock -- 2


Output with pthread_cond_broadcast:

Grabbed read-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 1
Grabbed read-lock -- 2
Grabbed write-lock -- 2
Grabbed read-lock -- 1
Grabbed write-lock -- 2
Grabbed read-lock -- 2
Grabbed write-lock -- 1
Grabbed read-lock -- 1
Grabbed write-lock -- 2
Grabbed write-lock -- 1
Grabbed read-lock -- 2
Grabbed read-lock -- 1


As you can see, the initial implementation does not guarantee every thread a fair chance, and it also suffers from the inherent problem mentioned above. With the second one, even though not in order, all threads get a fair chance to access the shared resource. The second run also confirms that the change introduces no regression!
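For reference, the looped harness that produced the two runs above would look roughly like this (a hypothetical reconstruction; the thread and iteration counts are guesses):

#include <stdio.h>
#include <pthread.h>

/* Assumes the my_rwlock_t layout sketched earlier, so a static
 * initializer works; otherwise use an init routine. */
static my_rwlock_t g_rw_lock = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
};

static void *
reader_thread (void *arg)
{
    long i, id = (long) arg;

    for (i = 0; i < 4; i++) {
        my_rwlock_read_lock(&g_rw_lock);
        printf("Grabbed read-lock -- %ld\n", id);
        my_rwlock_read_unlock(&g_rw_lock);
    }
    return NULL;
}

static void *
writer_thread (void *arg)
{
    long i, id = (long) arg;

    for (i = 0; i < 4; i++) {
        my_rwlock_write_lock(&g_rw_lock);
        printf("Grabbed write-lock -- %ld\n", id);
        my_rwlock_write_unlock(&g_rw_lock);
    }
    return NULL;
}

int
main (void)
{
    pthread_t tids[4];
    int       i;

    pthread_create(&tids[0], NULL, reader_thread, (void *) 1);
    pthread_create(&tids[1], NULL, reader_thread, (void *) 2);
    pthread_create(&tids[2], NULL, writer_thread, (void *) 1);
    pthread_create(&tids[3], NULL, writer_thread, (void *) 2);

    for (i = 0; i < 4; i++) {
        pthread_join(tids[i], NULL);
    }
    return 0;
}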

There may be far better solutions to circumvent the problem. The comment box is always at your service to help me out :)

Hope you enjoyed the sequel. By the way, there are still more problems. Stay tuned for further analysis!
