Class Note for CMPSCI 377 at UMass(49)
These 10 pages of class notes for CMPSCI 377 at the University of Massachusetts were uploaded on Friday, February 6, 2015, for a Fall-semester course.
CMPSCI 377 Operating Systems
Chapter 7: Synchronization

7.1 Synchronization

As we already know, threads must synchronize in order to ensure consistency; otherwise, race conditions (incorrect, nondeterministic results) might happen. Most of what we are concerned with is constraining threads so that they produce correct results. If two threads do not synchronize, they may attempt to update the same data structure simultaneously. Such conflicts usually result in corrupt data structures, which produce crashes and incorrect results.

Let's consider the "too much milk" problem: two people share the same fridge and must guarantee that there is always milk, but not too much milk. How do we solve it? First, we consider some important concepts and their definitions:

- Mutual exclusion: prevents threads from operating on the same data at the same time.
- Critical section: a piece of code that only one thread can execute at a time.
- Lock: a mechanism for mutual exclusion; the program locks on entering a critical section, accesses the shared data, and then unlocks. A program also waits if it tries to acquire an already-locked lock.
- Invariant: something that must always be true when the lock is not held.

For the problem above, we want to ensure some correctness properties. First, we want to guarantee that only one person buys milk when it is needed; this is the safety property (informally, "nothing bad happens"). We also want to ensure that someone does buy milk when needed; this is the progress property (informally, "something good eventually happens"). Now assume that we can use the following atomic operations when writing the code for the problem:

- leave a note (equivalent to a lock)
- remove a note (equivalent to an unlock)
- don't buy milk if there is a note (equivalent to a wait)

Our first try could be to use the following code in both threads:

    if (no milk and no note) {
        leave note
        buy milk
        remove note
    }

Unfortunately, this doesn't work, because both threads could simultaneously verify that there is no note and no milk, and then both would simultaneously leave a note and buy more milk.
The problem in this case is that we end up with too much milk: the safety property is not met.

Now consider solution 2:

    Thread A:
        leave note "A"
        if (no note "B")
            if (no milk)
                buy milk
        remove note "A"

    Thread B:
        leave note "B"
        if (no note "A")
            if (no milk)
                buy milk
        remove note "B"

The problem now is that if both threads leave notes at the same time, neither will ever do anything. We then end up with no milk at all, which means that the progress property is not met.

Let us now consider an approach that does work:

    Thread A:
        leave note A
        while (note B)
            do nothing
        if (no milk)
            buy milk
        remove note A

    Thread B:
        leave note B
        if (no note A)
            if (no milk)
                buy milk
        remove note B

This approach, unlike the two previous examples, does work. However, it is not easy to convince oneself that these two algorithms, taken together, always produce the desired behavior. Moreover, these pieces of code have some drawbacks. First, notice that Thread A goes into a loop waiting for B to release its note. This is called busy waiting, and it is generally not a good idea, both because Thread A wastes a lot of CPU and because it cannot execute anything useful while B is not done. Also notice that even though both threads try to do exactly the same thing, they do it in very different ways. This is a problem especially if we were to write, say, a third thread: that thread would probably look very different from both A and B, and this type of asymmetric code does not scale well.

So the question is: how can we guarantee correctness and at the same time avoid these drawbacks? The answer is that we can augment the programming language with high-level constructs capable of solving synchronization problems. Currently, the best-known constructs used to deal with concurrency problems are locks, semaphores, and monitors.

7.1.1 Locks/Mutexes

Locks, also known as mutexes, provide mutual exclusion to shared data inside a critical section.
Locks are implemented as two atomic routines: acquire, which waits for the lock and takes it when possible, and release, which unlocks the lock and wakes up the waiters. The rules (semantics) of locks are the following:

1. At most one thread can hold the lock.
2. Locks must be acquired before accessing shared data.
3. Locks must be released after use.
4. Locks are initially released.

The syntax for using locks in C/C++ is the following:

    pthread_mutex_init(&l, NULL);
    pthread_mutex_lock(&l);
    /* update data: this is the critical section */
    pthread_mutex_unlock(&l);

Let us now rewrite the "too much milk" problem in a cleaner and more symmetric way, using locks. The code for Thread A (and also for Thread B) becomes:

    pthread_mutex_lock(&l);
    if (no milk)
        buy milk
    pthread_mutex_unlock(&l);

This is clearly much easier to understand than the previous solutions; it is also more scalable, since all threads are implemented in exactly the same way.

Now, how could one go about implementing locks? No matter how we choose to implement them, we must have some hardware support. One possibility is to disable interrupts, since interrupts are the only way the CPU has to change what it is doing; in other words, if we keep the CPU from switching processes, we can guarantee that only one process (the active one) will have access to the shared data. Another option is to use atomic operations such as test&set. This operation, which usually corresponds to an assembly instruction, is such that test&set(x) returns 1 if x == 1; otherwise (if x == 0), it returns 0 and sets x to 1. All of this is implemented atomically. With this type of atomic operation, one could implement thread_lock simply as

    while (test&set(l) == 1)
        do nothing

and thread_unlock simply as setting l back to 0.

To summarize:

- Communication between threads is done implicitly, via shared variables.
- Critical sections are regions of code that access shared variables.
- Critical sections must be protected by synchronization.
- We need primitives that ensure mutual exclusion.
- Writing personalized solutions to concurrency problems is tricky and error-prone.
- The solution is to introduce general high-level constructs, such as locks, into the language.
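As a concrete illustration, the test&set lock sketched above can be written in C11 using `atomic_flag`, whose `atomic_flag_test_and_set` has exactly the test&set semantics. This is a minimal sketch; the names (`spin_lock`, `spin_lock_demo`) and the thread/iteration counts are ours, not from the notes:

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static long counter = 0;

/* Spin until test-and-set returns 0, i.e. we flipped the flag from clear to set. */
static void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock_flag)) {
        /* do nothing: busy wait */
    }
}

/* The unlock is just resetting the flag to 0. */
static void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;              /* critical section */
        spin_unlock();
    }
    return NULL;
}

/* Runs 4 threads; returns the final counter (4 * 100000 if the lock works). */
long spin_lock_demo(void) {
    pthread_t t[4];
    counter = 0;
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

Without the lock, the unsynchronized `counter++` would be a race and the final value would usually be less than 400000.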
7.2 Advanced Synchronization

Remember that synchronization serves two purposes: (1) to ensure safety for updates on shared data, e.g. to avoid data races; and (2) to coordinate actions taken by threads, e.g. handling different clients in parallel at the same time. One of the most important aspects of parallel programs is that all their possible interleavings must be correct. One possible way to guarantee this is simply to put one lock at the beginning of each thread; however, it is also clear that we want to use as few constraints as possible, in order to effectively exploit the available concurrency. Thus, placing locks optimally is usually very difficult.

7.2.1 Example

Consider the following example of a multithreaded program that spawns N threads; each thread performs some expensive computation, expComp(v), and safely adds its result to a global variable:

    pthread_t threads[N];
    pthread_mutex_t myLock;
    int total = 0;

    void main() {
        pthread_mutex_init(&myLock, NULL);
        for (int i = 0; i < N; i++) {
            int *ptr = new int;
            *ptr = i;
            pthread_create(&threads[i], NULL, expensiveComputation, (void *) ptr);
        }
        for (int i = 0; i < N; i++)
            pthread_join(threads[i], NULL);
        printf("total %d\n", total);
    }

    void *expensiveComputation(void *x) {
        int v = *(int *) x;
        delete (int *) x;
        int res = expComp(v);
        pthread_mutex_lock(&myLock);
        total += res;
        pthread_mutex_unlock(&myLock);
        return NULL;
    }

7.3 Thread safety

What does it mean for something to be thread-safe? By saying that X is thread-safe, we mean that if multiple threads use X at the same time, we don't have to worry about concurrency problems. The STL, for instance, is not thread-safe: if we were to create an STL queue and have two threads operate on it simultaneously, we would have to perform all locking operations manually. The cout instruction is also not thread-safe. Suppose now that we want to build a thread-safe queue.
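The example above can be made concrete as follows. This is a minimal C sketch under our own assumptions: `exp_comp` is a stand-in cost function (here, squaring), and N is fixed at 8; the note's version computes something genuinely expensive instead:

```c
#include <pthread.h>
#include <stdlib.h>

#define N 8

static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;
static long total = 0;

/* Stand-in for the expensive computation: just square the input. */
static long exp_comp(int v) { return (long) v * v; }

static void *worker(void *arg) {
    int v = *(int *) arg;
    free(arg);
    long res = exp_comp(v);       /* compute OUTSIDE the critical section */
    pthread_mutex_lock(&my_lock);
    total += res;                 /* only the shared update is protected */
    pthread_mutex_unlock(&my_lock);
    return NULL;
}

/* Spawns N threads and returns the accumulated total. */
long run_sum(void) {
    pthread_t threads[N];
    total = 0;
    for (int i = 0; i < N; i++) {
        int *ptr = malloc(sizeof *ptr);
        *ptr = i;
        pthread_create(&threads[i], NULL, worker, ptr);
    }
    for (int i = 0; i < N; i++)
        pthread_join(threads[i], NULL);
    return total;
}
```

Note the design choice, matching the note's code: the expensive computation happens outside the lock, and only the cheap `total += res` is serialized, so the threads actually run in parallel.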
The methods we want the queue to have are insert(item), remove(), and empty(). The first question to ask is: what should remove() do when the queue is empty? One solution would be for it to return -1, or to throw an exception; however, it would be much more elegant and useful to make that call wait until something actually appears in the queue. By implementing this type of blocking system, we are in fact implementing part of a producer/consumer system.

Now let us think about how to make the function wait. We can spin, i.e. write something like

    while (empty()) {
        /* do nothing */
    }

This, however, obviously doesn't work, since the test of emptiness needs to read shared data: we need to put locks somewhere! Several other naive approaches also do not work (see the slides for examples). The conclusion is that we need some way of going to sleep and having someone wake us up when there's something interesting to do. Let us now present two possible implementations of this system and discuss why they do not work. The first possibility is:

    dequeue() {
        lock();            // need to lock before checking if it's empty
        if (queue empty)
            sleep();
        ...
        unlock();
    }

    enqueue() {
        lock();
        insert item;
        if (thread waiting)
            wake up dequeuer;
        unlock();
    }

The problem with this approach is that the dequeuer goes to sleep while holding the lock. How about the second possible approach?

    dequeue() {
        lock();            // need to lock before checking if it's empty
        if (queue empty) {
            unlock();
            sleep();
        } else {
            ...
            unlock();
        }
    }

    enqueue() {
        lock();
        insert item;
        if (thread waiting)
            wake up dequeuer;
        unlock();
    }

The problem here is that the dequeuer might unlock and, before it actually sleeps, the enqueuer could get the CPU back. In that case, the enqueuer would acquire the lock and insert the new item, but since no one would be sleeping yet, it would unlock and never wake up the dequeuer.

The general solution to this type of problem is to make "unlock; sleep" atomic. We use condition variables to do exactly that: condition variables make it possible (and easy) to go to sleep, by atomically releasing the lock, putting the thread on the waiting queue, and going to sleep.
Each condition variable has a queue of threads waiting on it. There are three important functions for dealing with condition variables:

- wait(lock l, cv c): atomically releases the lock and goes to sleep. When calling wait, we must be holding the lock. Wait may or may not reacquire the lock when awakened; in pthreads, pthread_cond_wait does reacquire the mutex before returning.
- signal(cv c): wakes up one waiting thread, if any.
- broadcast(cv c): wakes up all waiting threads.

Abstractly, you use a condition variable to encapsulate a condition, such as whether or not a queue is non-empty. When a thread requires that the queue be non-empty, it can check, and if the queue is empty, it can wait on the CV. Other threads must signal on the CV when they make the queue non-empty. That is, whenever the state of the condition changes, the CV must be signalled. Notice, however, that the relation between the CV and the actual condition is not explicit; it must be carefully maintained by the programmer.

7.3.1 Producer/consumer using condition variables

Now let us present an implementation of a producer/consumer system using condition variables:

    dequeue() {
        lock(A);
        while (queue empty)
            wait(A, C);    // releases lock A; C is the condition variable
        // If the thread wakes and by chance the queue is empty, there is
        // no problem: that's why we need the "while" loop. When the thread
        // wakes up here, it holds the lock and the queue is not empty.
        remove item;
        unlock(A);
    }

    enqueue() {
        lock(A);
        insert item;
        signal(C);
        unlock(A);
    }

We can ask a couple of questions about this implementation. First: can we use an "if" in the dequeuer instead of a "while"? We are guaranteed that some item has been enqueued since we checked whether the queue was empty and went to sleep. But still, the answer is no. The problem is that another consumer can come in after the enqueuer and make the queue empty again. So we have to recheck the condition after we come back from the call to wait(), to make sure that the condition we were waiting for does indeed still hold.
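A runnable sketch of this pattern, using a fixed circular array as the queue; the names and the capacity of 64 are our assumptions, not from the notes:

```c
#include <pthread.h>

#define CAP 64

static int buf[CAP];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

void cv_enqueue(int item) {
    pthread_mutex_lock(&a);
    buf[tail] = item;
    tail = (tail + 1) % CAP;
    count++;
    pthread_cond_signal(&c);          /* wake one dequeuer, if any */
    pthread_mutex_unlock(&a);
}

int cv_dequeue(void) {
    pthread_mutex_lock(&a);
    while (count == 0)                /* "while", not "if": recheck on wakeup */
        pthread_cond_wait(&c, &a);    /* atomically releases a and sleeps */
    int item = buf[head];
    head = (head + 1) % CAP;
    count--;
    pthread_mutex_unlock(&a);
    return item;
}
```

Note that `pthread_cond_wait` returns holding the mutex, so the removal after the loop is safely inside the critical section.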
Second question: if we reverse signal and unlock in the enqueuer, does the system still work? Yes, since the worst thing that can happen is that some other consumer removes the item before we signal the threads waiting on the CV. But the waiting thread will then recheck the condition, notice that the queue is still empty, and go back to sleep.

7.3.2 Bounded buffers

Let us now discuss how to implement a more realistic producer/consumer, in the sense that it only has a bounded (finite) buffer. Assume we are dealing with a Coke machine: no one can buy a Coke while someone else is using the machine; the delivery person needs to wait until there is free space before putting more Cokes into the machine; and the buyer needs to wait until there is at least one Coke available. For this system we are going to use two condition variables. The code is as follows:

    consumer() {
        lock(A);
        while (cokes == 0)
            wait(A, coke_in_machine);
        buy coke;    // works because when we get out of the wait,
                     // we are absolutely sure there is a coke
        signal(coke_bought);
        unlock(A);
    }

    producer() {
        lock(A);
        while (cokes == max_cokes)
            wait(A, coke_bought);
        put coke in machine;
        signal(coke_in_machine);
        unlock(A);
    }

7.4 Semaphores and advanced locks

Synchronization can be achieved by means other than locks and condition variables; we can use, for instance, semaphores. A semaphore is basically used to regulate traffic in a critical section. A semaphore is implemented as a non-negative integer counter with atomic increment and decrement operations; whenever a decrement would make the counter negative, the operation blocks instead. The decrement operation is called P, and the increment is called V:

- P(sem): tries to decrease the counter by one; if the counter is zero, blocks until it becomes greater than zero. P can be interpreted as "wait"; it is also called "down".
- V(sem): increments the counter and wakes up one waiting process. V can be interpreted as "signal"; it is also called "up".

Notice that we can use semaphores to implement both mutual exclusion and ordering constraints.
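The P/V semantics just described can themselves be sketched on top of a mutex and a condition variable. This is a simplified illustration of the semantics, not how production semaphores are implemented; the type and function names are ours:

```c
#include <pthread.h>

typedef struct {
    int count;                  /* non-negative counter */
    pthread_mutex_t m;
    pthread_cond_t nonzero;
} sema_t;

void sema_init(sema_t *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* P ("down"/wait): block while the counter is zero, then decrement. */
void sema_P(sema_t *s) {
    pthread_mutex_lock(&s->m);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->m);
    s->count--;
    pthread_mutex_unlock(&s->m);
}

/* V ("up"/signal): increment the counter and wake one waiter. */
void sema_V(sema_t *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->m);
}
```

A decrement that would make the counter negative simply blocks in the while loop until some V makes the counter positive again, exactly as the definition requires.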
For example, by initializing a semaphore to 0, threads can wait for an event to occur:

    Thread A (waits for thread B):
        P(sem);         // blocks until B performs V
        do stuff;

    Thread B:
        do stuff, then wake up A:
        V(sem);

Let us now see how to implement a bounded buffer with semaphores:

    semaphore sem_mutex  = 1;    // this is for mutual exclusion
    semaphore slots_left = N;
    semaphore cokes_left = 0;

    producer() {
        P(slots_left);
        P(sem_mutex);
        insert coke into machine;
        V(sem_mutex);
        V(cokes_left);
    }

    consumer() {
        P(cokes_left);
        P(sem_mutex);
        buy coke;
        V(sem_mutex);
        V(slots_left);
    }

7.5 Other types of locks

We can think of other types of locks. For example, a blocking lock suspends the thread immediately and lets the scheduler execute another thread. This type of lock minimizes the amount of time spent waiting: imagine that each thread gets 10 seconds of CPU; why keep looping, doing nothing, for 10 seconds when we could be doing something useful? Notice, however, that since blocking locks always cause a context switch, which can be a costly operation, they might not be suitable in all cases.

Another way of implementing locks is via spin locks. Instead of blocking, spin locks loop until released. For example:

    void spin_lock(lock *l) {
        while (test_and_set(&l->v) == 1) {
            /* do nothing */
        }
    }

Notice, however, that the whole justification for blocking locks was exactly that spin locks (i.e. locks that loop doing nothing) were bad. Why would we ever want to use them? In which situations would we prefer to waste time instead of doing something useful? Basically, we should prefer spin locks whenever it is cheaper to wait than to switch contexts. One situation in which this happens is when we know we are not going to have to wait long: whenever we know that all other threads that might hold the lock will run very fast, we might as well keep waiting (for what is arguably a short period of time) instead of paying the price of a context switch.
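The semaphore-based bounded buffer above can be sketched with POSIX semaphores (`sem_init`/`sem_wait`/`sem_post`, which play the roles of P and V). The buffer layout, names, and capacity of 4 are our assumptions:

```c
#include <semaphore.h>

#define NSLOTS 4

static int machine[NSLOTS];
static int in = 0, out = 0;
static sem_t sem_mutex, slots_left, cokes_left;

void machine_init(void) {
    sem_init(&sem_mutex, 0, 1);        /* mutual exclusion */
    sem_init(&slots_left, 0, NSLOTS);  /* free slots available */
    sem_init(&cokes_left, 0, 0);       /* cokes available */
}

void put_coke(int c) {
    sem_wait(&slots_left);             /* P(slots_left): wait for a free slot */
    sem_wait(&sem_mutex);              /* P(sem_mutex) */
    machine[in] = c;
    in = (in + 1) % NSLOTS;
    sem_post(&sem_mutex);              /* V(sem_mutex) */
    sem_post(&cokes_left);             /* V(cokes_left): one more coke */
}

int buy_coke(void) {
    sem_wait(&cokes_left);             /* P(cokes_left): wait for a coke */
    sem_wait(&sem_mutex);
    int c = machine[out];
    out = (out + 1) % NSLOTS;
    sem_post(&sem_mutex);
    sem_post(&slots_left);             /* V(slots_left): one slot freed */
    return c;
}
```

Note the ordering: each thread waits on its counting semaphore before taking the mutex; doing it the other way around could block while holding the mutex and deadlock the system.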
7.5.1 Reader/writer locks

What if we have a program in which many threads read some common data, but only rarely does a thread write to it? It is usually OK to allow concurrent readers, but not OK to allow a writer concurrently with any other operation. Reader/writer locks are a method of allowing this: RW locks allow any number of readers in simultaneously, but grant exclusive access to writers. The lock has two lock operations, depending on whether the thread needs read or write access. RW locks can increase concurrency in systems where few threads write but many read; most database systems fit this description.

7.6 Major locking errors

When programming with threads, there are three very common mistakes that programmers make:

1. failure to unlock;
2. locking twice (on a non-recursive lock this can, depending on the system, crash or do bizarre things);
3. deadlock.

Of these problems, locking twice is probably the easiest to detect. Failure to unlock, on the other hand, can be more difficult: it can occur, for example, if the programmer forgets to release the lock on one of the possible execution branches of a function:

    function() {
        lock();
        if (x == 0)
            return;    // should also unlock here!
        // do something
        unlock();
    }

7.7 Deadlocks

We now discuss the last type of logical error that can occur when programming with threads. A deadlock (also known as the "deadly embrace") happens when two things (threads, processes, etc.) wait on each other. For example:

    Thread A:            Thread B:
        lock(printer);       lock(disk);
        lock(disk);          lock(printer);

In this case, both threads may lock their first resource, but then each waits on the other's resource. Neither can ever make progress.

Yet another example of a deadlock is known as the Dining Philosophers problem. In this abstract problem, philosophers alternate between thinking and eating, and each philosopher needs two forks to eat. The problem is that each philosopher only picks up one fork at a time: if every philosopher picks up one fork, each is left waiting for a neighbor's fork, circularly, and we get a deadlock. The philosophers will starve. One of the main things to notice about this problem is that we must necessarily have threads competing for a finite number of resources: if we had an infinite amount of forks, there would be no deadlock.

We now enumerate the conditions needed for a deadlock to occur; notice that all of them are necessary, and none alone is sufficient:

1. finite resources;
2. hold-and-wait: each thread holds one resource while waiting for another;
3. no preemption: a thread can only release resources voluntarily; no other thread or the OS can force it to release;
4. circular wait.

7.7.1 Deadlock detection

Deadlocks can be detected on the fly by running cycle-detection algorithms on the graph that defines the current use of resources. Let that graph have one vertex for each resource r1, ..., rm and one vertex for each thread t1, ..., tn. There is an edge from a thread to a resource if the thread is waiting for that resource, and an edge from a resource to a thread if the resource is owned by that thread. Given this graph, we can run any cycle-detection algorithm. If a cycle is found, we have a deadlock; we might then either kill all threads in the cycle, or kill threads one at a time (thus forcing them to give up resources) and hope that we need to kill only a few before the deadlock is resolved. However, few real systems do this, since killing threads tends to result in everything crashing.

7.7.2 Deadlock prevention

Preventing deadlocks is fairly easy in principle. Remember that the list presented above enumerates conditions that are all necessary for a deadlock to occur; it therefore suffices to ensure that at least one of those conditions does not hold. For example, we can eliminate the possibility of circular waiting by always acquiring locks in a canonical order (e.g. in increasing order: lock1, then lock2, then lock3, and so on). By doing so, we ensure a cycle-free, and thus deadlock-free, system.
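The canonical-ordering rule can be sketched as follows: `transfer` moves money between two accounts and always locks the lower-numbered account first, so two concurrent transfers over the same pair of accounts agree on lock order and cannot wait on each other circularly. The `account` structure and function names are our illustration, not from the notes:

```c
#include <pthread.h>

typedef struct {
    int id;                 /* fixed global ordering key */
    long balance;
    pthread_mutex_t m;
} account;

void account_init(account *a, int id, long balance) {
    a->id = id;
    a->balance = balance;
    pthread_mutex_init(&a->m, NULL);
}

/* Acquire locks in increasing id order, breaking the circular-wait
   condition: transfer(a, b, ...) and transfer(b, a, ...) both lock
   the lower-id account first, so neither can hold what the other needs
   while waiting. */
void transfer(account *from, account *to, long amount) {
    account *first  = (from->id < to->id) ? from : to;
    account *second = (from->id < to->id) ? to   : from;
    pthread_mutex_lock(&first->m);
    pthread_mutex_lock(&second->m);
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(&second->m);
    pthread_mutex_unlock(&first->m);
}
```

Had `transfer` simply locked `from` then `to`, two opposite transfers could each grab their first lock and deadlock, exactly like the printer/disk example above.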