Week 6.b CS3650 02/14 2024
https://naizhengtan.github.io/24spring/

1. Managing concurrency
2. Mutexes
3. Condition variables

----

Admin & recall:
 - attendance survey
 - non-determinism [show an example]
 - concurrency is hard [handout, panel 1]

1. Managing concurrency

 A. the concept of *critical sections*

    --Regard accesses of shared variables (for example, "head" in the
      linked list example) as being in a _critical section_

    --Critical sections will be protected from concurrent execution

      [draw: program {A; c.s.; B} and three threads running it
        T1 ---[A][c.s.]-----[ B ]-->
        T2 ---[A]------[c.s.]---[ B ]----->
        T3 ---[A]------------[c.s.]-----[B]----->
      ]

    --analogy: a room that allows only one person at a time
      --people: threads
      --room: critical section

    --Solution must satisfy 3 properties:

      1. mutual exclusion
           only one thread can be in the c.s. at a time
           [this is the notion of atomicity]

      2. progress
           if no thread is executing in the c.s., one of the threads
           trying to enter a given c.s. will eventually get in
           (sometimes called "no deadlock")

      3. bounded waiting
           once a thread T starts trying to enter the critical section,
           there is a bound on the number of other threads that may
           enter the critical section before T enters
           (sometimes called "no starvation" or "no lockout")

 B. protecting critical sections

    --want lock()/unlock() or enter()/leave() or acquire()/release()
      --lots of names for the same idea
      --in each case, the semantics are that once a thread of execution
        is executing inside the critical section, no other thread of
        execution is executing there

    --analogy: mutex => door for the room
      --lock/acquire => close the door
      --unlock/release => open the door (let the next person enter the room)

 C. implementing critical sections

    many possible ways (usually called synchronization primitives);
    this course studies one of them: the mutex

2. Mutexes

 --Mutex: mutual exclusion object

 --Usage:
      mutex_t m
      mutex_init(mutex_t* m)
      acquire(mutex_t* m)
      release(mutex_t* m)
      ....

 --pthread implementation: pthread_mutex_init(), pthread_mutex_lock(), ...
   (see the sketch at the end of this section)

 --using critical sections [see handout]
    --linked list example (handout from last time)
    --producer/consumer example (handout, panel 2.a)

 --why are we doing this?
    --because *atomicity* is required if you want to reason about the
      correctness of concurrent code
    --atomicity requires mutual exclusion, aka a solution to critical
      sections
    --mutexes provide that solution
    --once you have mutexes, you don't have to worry about arbitrary
      interleavings. critical sections are interleaved, but those are
      much easier to reason about than individual operations.
    --why? because of _invariants_. examples of invariants:

          "list structure has integrity"
          "'count' reflects the number of entries in the buffer"

      the meaning of lock.acquire() is that if and only if you get past
      that line, it's safe to violate the invariants.

      the meaning of lock.release() is that right _before_ that line,
      any invariants need to be restored.

      the above is abstract. let's make it concrete:

          invariant: "list structure has integrity"
          so protect the list with a mutex
          only after acquire() is it safe to manipulate the list

 [skipped]
 --Question: by the way, why aren't we worried about *processes*
   trashing each other's memory?
   [answer: because the OS, with the help of the hardware, arranges for
   two different processes to have isolated memory spaces. such
   isolation is one of the uses of virtual memory, which we will study
   in a few weeks.]
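 --to make the pthread calls concrete, here is a minimal sketch of
   protecting the linked list's shared "head" with a mutex. (This is
   illustrative, not the handout's code: the node type and the
   list_push() helper are made up for this example.)

       #include <pthread.h>
       #include <stdlib.h>

       struct node {
           int value;
           struct node *next;
       };

       static struct node *head = NULL;                      /* shared state */
       static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER; /* protects head */

       /* insert at the front of the list; the lines between lock and
          unlock are the critical section */
       void list_push(int value)
       {
           struct node *n = malloc(sizeof *n);
           if (n == NULL)
               return;                /* allocation failed; nothing to insert */
           n->value = value;

           pthread_mutex_lock(&m);    /* "close the door" */
           n->next = head;            /* ok to break the list invariant here... */
           head = n;                  /* ...but restore it before releasing */
           pthread_mutex_unlock(&m);  /* "open the door" */
       }

   note how the invariant ("list structure has integrity") may be
   temporarily violated between lock and unlock, but must hold again by
   the time the mutex is released.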
3. Condition variables

 A. Motivation

    --producer/consumer queue
      --a very common paradigm; also called a "bounded buffer":
         --producer puts things into a shared buffer
         --consumer takes them out
         --producer must wait if the buffer is full; consumer must wait
           if the buffer is empty
      --shows up everywhere
         * soda machine: producer is the delivery person, consumers are
           the soda drinkers, shared buffer is the machine
         * OS implementation of pipe()
         * DMA buffers

    --producer/consumer queue using mutexes (see handout, panel 2.a)
      --what's the problem with that?
        --answer: a form of *busy waiting*
        --analogy: the next person keeps knocking on the door

    --It is convenient to break synchronization into two types:
      --*mutual exclusion*: allow only one thread to access a given set
        of shared state at a time
      --*scheduling constraints*: wait for some other thread to do
        something (finish a job, produce work, consume work, accept a
        connection, get bytes off the disk, etc.)

 B. Usage

    --API (a pthread-based sketch appears after the list below)
      --void cond_init(Cond*, ...);
          --Initialize
      --void cond_wait(Cond* c, Mutex* m);
          --Atomically unlock m and sleep until c is signaled
          --Then re-acquire m and resume executing
      --void cond_signal(Cond* c);
          --Wake one thread waiting on c
            [in some pthreads implementations, the analogous call wakes
            *at least* one thread waiting on c. Check the documentation
            (or source code) to be sure of the semantics. But, actually,
            your implementation shouldn't change, since you need to be
            prepared to be "woken" at any time, not just when another
            thread calls signal(). More on this below.]
      --void cond_broadcast(Cond* c);
          --Wake all threads waiting on c

    [handout, panel 2b]
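 --to connect the API to the bounded buffer above, here is a minimal
   sketch using pthread mutexes and condition variables. (This is
   illustrative, not the code on handout panel 2b: names such as
   buf_put(), buf_get(), and BUF_SIZE are made up for this example.)

       #include <pthread.h>

       #define BUF_SIZE 8

       static int buffer[BUF_SIZE];
       static int count = 0, in = 0, out = 0;  /* invariant: count = #items in buffer */

       static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
       static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
       static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

       /* producer: sleep (instead of busy waiting) while the buffer is full */
       void buf_put(int item)
       {
           pthread_mutex_lock(&m);
           while (count == BUF_SIZE)                /* re-check after every wakeup */
               pthread_cond_wait(&not_full, &m);    /* atomically unlock m and sleep */
           buffer[in] = item;
           in = (in + 1) % BUF_SIZE;
           count++;
           pthread_cond_signal(&not_empty);         /* wake a waiting consumer */
           pthread_mutex_unlock(&m);
       }

       /* consumer: sleep while the buffer is empty */
       int buf_get(void)
       {
           pthread_mutex_lock(&m);
           while (count == 0)
               pthread_cond_wait(&not_empty, &m);
           int item = buffer[out];
           out = (out + 1) % BUF_SIZE;
           count--;
           pthread_cond_signal(&not_full);          /* wake a waiting producer */
           pthread_mutex_unlock(&m);
           return item;
       }

   note the while (not if) around pthread_cond_wait(): as the warning
   about cond_signal() above says, a thread must re-check the waited-for
   condition every time it wakes up.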