Name

cyg_mutex_init, cyg_mutex_destroy, cyg_mutex_lock, cyg_mutex_timed_lock, cyg_mutex_trylock, cyg_mutex_unlock, cyg_mutex_release, cyg_mutex_set_ceiling, cyg_mutex_set_protocol — Synchronization primitive

Synopsis

#include <cyg/kernel/kapi.h>
      
void cyg_mutex_init (cyg_mutex_t* mutex );
 
void cyg_mutex_destroy (cyg_mutex_t* mutex );
 
cyg_bool_t cyg_mutex_lock (cyg_mutex_t* mutex );
 
cyg_bool_t cyg_mutex_timed_lock (cyg_mutex_t* mutex ,
 cyg_tick_count_t abstime );
 
cyg_bool_t cyg_mutex_trylock (cyg_mutex_t* mutex );
 
void cyg_mutex_unlock (cyg_mutex_t* mutex );
 
void cyg_mutex_release (cyg_mutex_t* mutex );
 
void cyg_mutex_set_ceiling (cyg_mutex_t* mutex ,
 cyg_priority_t priority );
 
void cyg_mutex_set_protocol (cyg_mutex_t* mutex ,
 enum cyg_mutex_protocol protocol );
 

Description

The purpose of mutexes is to let threads share resources safely. If two or more threads attempt to manipulate a data structure with no locking between them then the system may run for quite some time without apparent problems, but sooner or later the data structure will become inconsistent and the application will start behaving strangely and is quite likely to crash. The same can apply even when manipulating a single variable or some other resource. For example, consider:

static volatile int counter = 0;

void
process_event(void)
{
    …

    counter++;
}

Assume that after a certain period of time counter has a value of 42, and two threads A and B running at the same priority call process_event. Typically thread A will read the value of counter into a register, increment this register to 43, and write this updated value back to memory. Thread B will do the same, so usually counter will end up with a value of 44. However if thread A is timesliced after reading the old value 42 but before writing back 43, thread B will still read back the old value and will also write back 43. The net result is that the counter only gets incremented once, not twice, which depending on the application may prove disastrous.

Sections of code like the above which involve manipulating shared data are generally known as critical regions. Code should claim a lock before entering a critical region and release the lock when leaving. Mutexes provide an appropriate synchronization primitive for this.

static volatile int counter = 0;
static cyg_mutex_t  lock;

void
process_event(void)
{
    …

    cyg_mutex_lock(&lock);
    counter++;
    cyg_mutex_unlock(&lock);
}

A mutex must be initialized before it can be used, by calling cyg_mutex_init. This takes a pointer to a cyg_mutex_t data structure which is typically statically allocated, and may be part of a larger data structure. If a mutex is no longer required and there are no threads waiting on it then cyg_mutex_destroy can be used.
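
As an illustration, a minimal sketch of setting up and tearing down a statically allocated mutex might look like the following; the function names are purely illustrative:

#include <cyg/kernel/kapi.h>

static cyg_mutex_t lock;

void
module_init(void)
{
    /* The mutex must be initialized before any thread locks it. */
    cyg_mutex_init(&lock);
}

void
module_fini(void)
{
    /* Only destroy a mutex that is unlocked and has no waiting threads. */
    cyg_mutex_destroy(&lock);
}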

The main functions for using a mutex are cyg_mutex_lock and cyg_mutex_unlock. In normal operation cyg_mutex_lock will return success after claiming the mutex lock, blocking if another thread currently owns the mutex. However the lock operation may fail if other code calls cyg_mutex_release or cyg_thread_release, so if these functions may get used then it is important to check the return value. The current owner of a mutex should call cyg_mutex_unlock when a lock is no longer required. This operation must be performed by the owner, not by another thread.
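
If cyg_mutex_release or cyg_thread_release may be used elsewhere in the application, the earlier process_event example could check the return value along the following lines (a sketch only):

void
process_event(void)
{
    if (cyg_mutex_lock(&lock)) {
        /* Lock claimed: it is safe to manipulate the shared data. */
        counter++;
        cyg_mutex_unlock(&lock);
    } else {
        /* The wait was aborted by cyg_mutex_release() or
           cyg_thread_release(); the critical region was not entered. */
    }
}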

The kernel supplies a variant of cyg_mutex_lock, cyg_mutex_timed_lock, which can be used to wait for the lock or until some number of clock ticks have passed. The number of ticks is specified as an absolute, not relative, tick count, so to wait for a relative number of ticks the desired interval should be added to the return value of cyg_current_time(). If this function returns true then the mutex has been claimed; if it returns false then either a timeout has occurred or the thread has been released.
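
For example, to wait for at most 100 clock ticks the relative interval can be added to the current time; the interval of 100 ticks and the function name are purely illustrative:

#include <cyg/kernel/kapi.h>

cyg_bool_t
claim_with_timeout(cyg_mutex_t* mutex)
{
    /* cyg_mutex_timed_lock() expects an absolute tick count, so add
       the relative timeout to the current time. */
    cyg_tick_count_t abstime = cyg_current_time() + 100;

    if (!cyg_mutex_timed_lock(mutex, abstime)) {
        /* Either the timeout expired or the thread was released. */
        return 0;
    }
    /* The mutex has been claimed: enter the critical region, then unlock. */
    cyg_mutex_unlock(mutex);
    return 1;
}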

cyg_mutex_trylock is a variant of cyg_mutex_lock that will always return immediately, returning success or failure as appropriate. This function is rarely useful. Typical code locks a mutex just before entering a critical region, so if the lock cannot be claimed then there may be nothing else for the current thread to do. Use of this function may also cause a form of priority inversion if the owner runs at a lower priority, because the priority inheritance code will not be triggered. Instead the current thread continues running, preventing the owner from getting any CPU time, completing the critical region, and releasing the mutex.
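
In the rare cases where the current thread genuinely has other useful work to do, trylock might be used along these lines (again just a sketch, reusing the counter example):

void
poll_shared_state(void)
{
    if (cyg_mutex_trylock(&lock)) {
        /* The lock was free: update the shared data and release it. */
        counter++;
        cyg_mutex_unlock(&lock);
    } else {
        /* Another thread owns the mutex. Do unrelated work and retry
           later rather than spinning, to avoid starving the owner. */
    }
}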

cyg_mutex_release can be used to wake up all threads that are currently blocked inside a call to cyg_mutex_lock for a specific mutex. These lock calls will return failure. The current mutex owner is not affected.

Priority Inversion

The use of mutexes gives rise to a problem known as priority inversion. In a typical scenario this requires three threads A, B, and C, running at high, medium and low priority respectively. Thread A and thread B are temporarily blocked waiting for some event, so thread C gets a chance to run, needs to enter a critical region, and locks a mutex. At this point threads A and B are woken up - the exact order does not matter. Thread A needs to claim the same mutex but has to wait until C has left the critical region and can release the mutex. Meanwhile thread B works on something completely different and can continue running without problems. Because thread C is running at a lower priority than B it will not get a chance to run until B blocks for some reason, and hence thread A cannot run either. The overall effect is that a high-priority thread A cannot proceed because of a lower-priority thread B, and priority inversion has occurred.

In simple applications it may be possible to arrange the code such that priority inversion cannot occur, for example by ensuring that a given mutex is never shared by threads running at different priority levels. However this may not always be possible even at the application level. In addition mutexes may be used internally by underlying code, for example the memory allocation package, so careful analysis of the whole system would be needed to be sure that priority inversion cannot occur. Instead it is common practice to use one of two techniques: priority ceilings and priority inheritance.

Priority ceilings involve associating a priority with each mutex. Usually this will match the highest-priority thread that will ever lock the mutex. When a thread running at a lower priority makes a successful call to cyg_mutex_lock or cyg_mutex_trylock its priority will be boosted to that of the mutex. For example, in the scenario above the priority associated with the mutex would be that of thread A, so for as long as it owns the mutex thread C will run in preference to thread B. When C releases the mutex its priority drops to the normal value again, allowing A to run and claim the mutex. Setting the priority for a mutex involves a call to cyg_mutex_set_ceiling, which is typically called during initialization. It is possible to change the ceiling dynamically but this will only affect subsequent lock operations, not the current owner of the mutex.
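
As a sketch, a ceiling might be set during initialization as follows; the priority value 4 is purely illustrative and should match the highest-priority thread that will ever lock the mutex:

#include <cyg/kernel/kapi.h>

static cyg_mutex_t shared_lock;

void
shared_init(void)
{
    cyg_mutex_init(&shared_lock);
    /* Any lower-priority thread that claims this mutex will be
       boosted to priority 4 while it holds the lock. */
    cyg_mutex_set_ceiling(&shared_lock, 4);
}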

Priority ceilings are very suitable for simple applications, where for every thread in the system it is possible to work out which mutexes will be accessed. For more complicated applications this may prove difficult, especially if thread priorities change at run-time. An additional problem occurs for any mutexes outside the application, for example used internally within eCos packages. A typical eCos package will be unaware of the details of the various threads in the system, so it will have no way of setting suitable ceilings for its internal mutexes. If those mutexes are not exported to application code then using priority ceilings may not be viable. The kernel does provide a configuration option CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT_PRIORITY that can be used to set the default priority ceiling for all mutexes, which may prove sufficient.

The alternative approach is to use priority inheritance: if a thread calls cyg_mutex_lock for a mutex that is currently owned by a lower-priority thread, then the owner will have its priority raised to that of the current thread. Often this is more efficient than priority ceilings because priority boosting only happens when necessary, not for every lock operation, and the required priority is determined at run-time rather than by static analysis. However there are complications when multiple threads running at different priorities try to lock a single mutex, or when the current owner of a mutex then tries to lock additional mutexes, and this makes the implementation significantly more complicated than priority ceilings.

There are a number of configuration options associated with priority inversion. First, if after careful analysis it is known that priority inversion cannot arise then the component CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL can be disabled. More commonly this component will be enabled, and one of either CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT or CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_CEILING will be selected, so that one of the two protocols is available for all mutexes. It is possible to select multiple protocols, so that some mutexes can have priority ceilings while others use priority inheritance or no priority inversion protection at all. Obviously this flexibility will add to the code size and to the cost of mutex operations. The default for all mutexes will be controlled by CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT, and can be changed at run-time using cyg_mutex_set_protocol.
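
When more than one protocol is enabled in the configuration a mutex's protocol can be changed at run-time, for example as in the sketch below, which assumes the CYG_MUTEX_INHERIT enumerator from enum cyg_mutex_protocol in cyg/kernel/kapi.h:

#include <cyg/kernel/kapi.h>

static cyg_mutex_t shared_lock;

void
switch_to_inheritance(void)
{
    /* Affects subsequent lock operations on this mutex only;
       the current owner, if any, keeps its present priority. */
    cyg_mutex_set_protocol(&shared_lock, CYG_MUTEX_INHERIT);
}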

Priority inversion problems can also occur with other synchronization primitives such as semaphores. For example there could be a situation where a high-priority thread A is waiting on a semaphore, a low-priority thread C needs to do just a little bit more work before posting the semaphore, but a medium priority thread B is running and preventing C from making progress. However a semaphore does not have the concept of an owner, so there is no way for the system to know that it is thread C which would next post to the semaphore. Hence there is no way for the system to boost the priority of C automatically and prevent the priority inversion. Instead situations like this have to be detected by application developers and appropriate precautions have to be taken, for example making sure that all the threads run at suitable priorities at all times.

Warning

The default implementation of priority inheritance within the eCos kernel has been simplified in a way that may cause behaviour which is unexpected for developers. Problems will only arise if a thread owns one mutex, then attempts to claim another mutex, and there are other threads attempting to lock these same mutexes. Although the system will continue running, the current owners of the various mutexes involved may not run at the priority they should.

The reason for this is that, with the default implementation of priority inheritance, a thread which has its priority boosted due to it having locked two or more mutexes will not have its priority reduced until both mutexes are unlocked. In other words, that thread will continue running at the highest priority of any of the threads waiting for a mutex it holds, and will keep running at that priority until it has unlocked all the mutexes which it holds.

This situation rarely arises in real-world code because a mutex should generally only be locked for a small critical region, and there is no need to manipulate other shared resources inside this region. However eCosPro offers an alternative implementation which does allow priorities to be reduced in a fair and accurate way when mutexes are unlocked. This alternative is not enabled by default, but can be enabled with the "Fair priority inheritance semantics" (CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT_ACCURATE) CDL configuration option. However the trade-off of providing a fair and accurate behaviour of priority inheritance is that mutex unlock operations then have a non-deterministic element as each mutex held by a thread must be examined to determine the priority of the highest waiting thread. Fortunately, it is unlikely in real-world applications that more than a few mutexes will be held simultaneously, so that list should be short; and therefore this trade-off may be acceptable for many developers.

Warning

Support for priority ceilings and priority inheritance is not implemented for all schedulers. In particular neither priority ceilings nor priority inheritance are currently available for the bitmap scheduler.

Alternatives

In nearly all circumstances, if two or more threads need to share some data then protecting this data with a mutex is the correct thing to do. Mutexes are the only primitive that combine a locking mechanism and protection against priority inversion problems. However this functionality is achieved at a cost, and in exceptional circumstances such as an application's most critical inner loop it may be desirable to use some other means of locking.

When a critical region is very small it is possible to lock the scheduler, thus ensuring that no other thread can run until the scheduler is unlocked again. This is achieved with calls to cyg_scheduler_lock and cyg_scheduler_unlock. If the critical region is sufficiently small then this can actually improve both performance and dispatch latency because cyg_mutex_lock also locks the scheduler for a brief period of time. This approach will not work on SMP systems because another thread may already be running on a different processor and accessing the critical region.
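
For instance, the counter update from the earlier example could be protected by locking the scheduler instead of a mutex; as noted, this sketch is only valid on uniprocessor systems:

void
process_event(void)
{
    /* No other thread can be scheduled between these two calls,
       so the increment cannot be interrupted by a context switch. */
    cyg_scheduler_lock();
    counter++;
    cyg_scheduler_unlock();
}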

Another way of avoiding the use of mutexes is to make sure that all threads that access a particular critical region run at the same priority and configure the system with timeslicing disabled (CYGSEM_KERNEL_SCHED_TIMESLICE). Without timeslicing a thread can only be preempted by a higher-priority one, or if it performs some operation that can block. This approach requires that none of the operations in the critical region can block, so for example it is not legal to call cyg_semaphore_wait. It is also vulnerable to any changes in the configuration or to the various thread priorities: any such changes may now have unexpected side effects. It will not work on SMP systems.

Recursive Mutexes

The implementation of mutexes within the eCos kernel does not support recursive locks. If a thread has locked a mutex and then attempts to lock the mutex again, typically as a result of some recursive call in a complicated call graph, then either an assertion failure will be reported or the thread will deadlock. This behaviour is deliberate. When a thread has just locked a mutex associated with some data structure, it can assume that that data structure is in a consistent state. Before unlocking the mutex again it must ensure that the data structure is again in a consistent state. Recursive mutexes allow a thread to make arbitrary changes to a data structure, then in a recursive call lock the mutex again while the data structure is still inconsistent. The net result is that code can no longer make any assumptions about data structure consistency, which defeats the purpose of using mutexes.
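
For example, the following sketch will deadlock or trigger an assertion failure because update_all still owns the mutex when its helper tries to lock it again; the function names are purely illustrative:

static void
update_entry(void)
{
    cyg_mutex_lock(&lock);   /* Deadlocks if the caller already owns the lock. */
    counter++;
    cyg_mutex_unlock(&lock);
}

void
update_all(void)
{
    cyg_mutex_lock(&lock);
    update_entry();          /* Attempts to claim the same mutex a second time. */
    cyg_mutex_unlock(&lock);
}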

Valid contexts

cyg_mutex_init, cyg_mutex_set_ceiling and cyg_mutex_set_protocol are normally called during initialization but may also be called from thread context. The remaining functions should only be called from thread context. Mutexes serve as a mutual exclusion mechanism between threads, and cannot be used to synchronize between threads and the interrupt handling subsystem. If a critical region is shared between a thread and a DSR then it must be protected using cyg_scheduler_lock and cyg_scheduler_unlock. If a critical region is shared between a thread and an ISR, it must be protected by disabling or masking interrupts. Obviously these operations must be used with care because they can affect dispatch and interrupt latencies.
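
As a sketch, data shared between a thread and a DSR could be protected with the scheduler lock; data shared with an ISR would instead need cyg_interrupt_disable and cyg_interrupt_enable. The variable and function names here are purely illustrative:

#include <cyg/kernel/kapi.h>

static volatile int pending_events = 0;

/* Thread context: collect work queued by a DSR. DSRs do not run while
   the scheduler is locked, so the shared counter stays consistent. */
void
drain_events(void)
{
    int events;

    cyg_scheduler_lock();
    events = pending_events;
    pending_events = 0;
    cyg_scheduler_unlock();

    /* ... process the collected events ... */
}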