How to Resolve Multi-Threading Problems in Concurrent C Applications

5/18/2022 | By Maker.io Staff

Two earlier articles discussed the benefits and drawbacks of using multi-threading in programs, as well as potential problems to watch out for when writing concurrent applications. However, those articles didn't explain how to solve the issues. This article describes how to address them using the C programming language and standard POSIX threads.

Manage Concurrent Access with Mutual Exclusion and Semaphores

As discussed, concurrent write operations can cause a plethora of problems. As threads often operate on the same shared data, developers must pay close attention to how many threads they let access a variable or file at any given time. Mutual exclusion (mutex) using semaphores is one way to prevent too many threads from simultaneously accessing a file or variable.

In this example, each thread must completely finish writing values to a variable in a critical section before any other thread may enter the section. Whenever a thread enters or exits the section, it outputs its ID and a short status (running or ended). In the left image, two threads run at the same time, and the OS lets many other threads run before thread zero even finishes. In the right image, all threads start and stop as expected.

Semaphores are a simple construct to understand, and they are typically not part of a specific programming language; instead, the OS provides the mechanism. You can think of a semaphore as a counter with a queue. In your program, you define how many threads may simultaneously enter a critical section, for example, a file access operation in your code. Then, whenever a thread wants to enter the section, it asks the semaphore for permission. The semaphore lets the asking thread enter the region if the count is above zero. Otherwise, it places the thread at the end of a queue, and the OS suspends the thread until it's the thread's turn to enter the critical section. Once the thread enters the section and subsequently exits it again, it signals the semaphore that another thread may now execute the critical code.

Consider the following example:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 8
#define LOOP_ITERATIONS 1000

pthread_t threads[NUM_THREADS];
int result = 0;

void* createThread(void* id)
{
    int tmp;

    for (int i = 0; i < LOOP_ITERATIONS; i++)
    {
        tmp = result;       /* read the shared variable */
        tmp = tmp + 1;      /* modify the private copy */
        result = tmp;       /* write it back */
    }

    return NULL;
}

int main(int argc, char* argv[])
{
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, createThread, NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    printf("The result is: %d\n", result);
    return 0;
}

This multi-threaded program doesn't look like much, but it's enough to cause severe concurrent access issues. First, note the global result variable, and recall that all threads within this program share it. In contrast, each thread holds its own copy of the local tmp variable declared within the createThread function. Each thread reads the global variable into its local copy, modifies the copy, and then writes the result back to the global variable. However, the OS may pause a thread at any point in this sequence while others continue updating the global variable. As a result, many updates are lost, and the final value depends on the execution order:

The result should consistently be 8000 (eight threads, each adding 1000 to the shared variable). However, the actual result varies dramatically depending on the execution order of the threads.

Using semaphores, programmers can protect the critical region, so that only a single thread may access the global variable at any given time:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_THREADS 8
#define LOOP_ITERATIONS 1000

sem_t semaphore;

pthread_t threads[NUM_THREADS];
int result = 0;

void* createThread(void* id)
{
    int tmp;

    for (int i = 0; i < LOOP_ITERATIONS; i++)
    {
        sem_wait(&semaphore);   /* enter the critical section */

        tmp = result;
        tmp = tmp + 1;
        result = tmp;

        sem_post(&semaphore);   /* leave the critical section */
    }

    return NULL;
}

int main(int argc, char* argv[])
{
    sem_init(&semaphore, 0, 1); /* at most one thread in the section */

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, createThread, NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&semaphore);

    printf("The result is: %d\n", result);
    return 0;
}

The new version of this code example contains a global semaphore variable, and the first line of the main function initializes it. The last parameter of the initialization call defines how many threads may simultaneously enter the critical region. Then, the program creates multiple threads as before. However, inside the createThread function's loop, each thread calls sem_wait to request access before working with the shared variable and sem_post to signal that it has left the critical section. Lastly, the main function releases the semaphore before returning. Note that semaphores are OS-specific, so the exact names of the functions may vary. However, the general principles apply to all operating systems and implementations.

Implement Mutual Exclusion Using Locks

If semaphores seem too heavyweight for implementing simple mutual exclusion, you can use a dedicated mutex lock instead. A thread calls the lock function before it enters the critical region and frees the lock by calling the unlock function afterward:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 8
#define LOOP_ITERATIONS 1000

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_t threads[NUM_THREADS];
int result = 0;

void* createThread(void* id)
{
    int tmp;

    for (int i = 0; i < LOOP_ITERATIONS; i++)
    {
        pthread_mutex_lock(&lock);      /* acquire the lock */

        tmp = result;
        tmp = tmp + 1;
        result = tmp;

        pthread_mutex_unlock(&lock);    /* release the lock */
    }

    return NULL;
}

int main(int argc, char* argv[])
{
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, createThread, NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    printf("The result is: %d\n", result);
    return 0;
}

You can see that the lock calls happen in the same places as the binary-semaphore calls in the previous example. You might wonder why anyone would use semaphores instead of locks. Without going into too much detail, a lock is always associated with the thread that acquired it, and only that thread can unlock the region. So, if that thread crashes, freezes, or runs into a deadlock, neither the OS nor a higher-priority thread can unlock the critical section without ending the entire program. Semaphores are also more flexible, as they can allow multiple threads to enter a region at once rather than just one.

How to Implement Condition Synchronization in Multi-Threaded C Programs

The increased flexibility of semaphores becomes apparent when you recall the other big problem discussed in the previous article. Besides managing critical regions, concurrent threads must sometimes also run in a specific order. For example, suppose thread A must always finish task X before thread B can execute task Y. You can achieve this schedule by employing semaphores:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define LOOP_ITERATIONS 10000
#define MULTIPLIER 100

int result = 0;
sem_t semaphore;

void* createThreadB(void* id)
{
    sem_wait(&semaphore);   /* wait until thread A is done */
    result = result * MULTIPLIER;
    return NULL;
}

void* createThreadA(void* id)
{
    for (int i = 0; i < LOOP_ITERATIONS; i++)
        result = result + 1;

    sem_post(&semaphore);   /* signal thread B that A is done */
    return NULL;
}

int main(int argc, char* argv[])
{
    pthread_t threadA;
    pthread_t threadB;

    sem_init(&semaphore, 0, 0);

    pthread_create(&threadA, NULL, createThreadA, NULL);
    pthread_create(&threadB, NULL, createThreadB, NULL);
    pthread_join(threadA, NULL);
    pthread_join(threadB, NULL);

    sem_destroy(&semaphore);

    // Should be 1,000,000
    printf("The result is: %d\n", result);

    return 0;
}

Note how the main function creates a semaphore with an initial value of zero in this example. Thread A never waits for the semaphore, while thread B waits on it immediately. Therefore, thread A can always run the code within the createThreadA function without waiting for any other thread. Once thread A finishes executing its loop, it increments the semaphore, which enables thread B to run as well. As a result, thread A always fully completes its task before thread B executes any code. This straightforward example illustrates how you can achieve condition synchronization using semaphores.

Download the Complete Code Examples

You can download all code examples mentioned in this article from this GitHub repository. It also contains a bonus example that illustrates how you can use semaphores to make your application alternate between two threads in a fixed order.

Summary

This article discussed a few ways to implement mutual exclusion and condition synchronization in multi-threaded C programs using POSIX threads. As demonstrated, semaphores are a compelling concept that high-level operating systems implement. In essence, a semaphore contains a counter variable and a queue. Whenever a thread wants to enter a critical region, it requests permission from the semaphore. If the semaphore counter is greater than zero, the thread may enter. Otherwise, the OS places the thread on a waiting list and lets it continue as soon as another thread exits the critical region. Binary semaphores can model mutual exclusion, and you can achieve the same result using simple locks.

However, semaphores are more flexible than simple locks, and you can use them to model various other situations. Semaphores can, for example, also implement condition synchronization. Here, you initialize the semaphore’s counter with a value of zero. Then, one of the threads may continue right away, while the other requests access to a critical region from the semaphore. Naturally, the semaphore blocks access for the second thread. Once the first thread finishes its task, it increments the semaphore counter, and the second thread can run. This procedure guarantees that the first thread always runs before the second thread.

TechForum

Have questions or comments? Continue the conversation on TechForum, Digi-Key's online community and technical resource.

Visit TechForum