Week 5.b
CS 5600 10/08 2021

On the board
------------
0. Lab2
1. Last time
2. Deadlock
3. Other concurrency issues

---------------------------------------------------------------------------

Admin

- midterm:
  -- time: 10/22 (Fri, Week7), 1:35pm (likely 75min)
  -- location: KA 110 (Kariotis Hall)

- review session (in-person):
  -- time: 10/21 (Thu, Week7), 1:30-3:00pm
  -- location: DG 470 (Dodge Hall)
  -- 30min review + 1hr Q&A; bring your questions
  -- no recording; good time to say hi to your neighbor :-)

- Lab0/HW1/HW2 grades released
  -- normal questions: Piazza and mailing list
  -- disagreement: mailing list or my office hours (Fri), with 1 day's
     advance notice

--------

0. Lab2

-- if you haven't started, start today!
-- why?
   -- week7 will be busy
   -- the midterm will cover Lab2

-- briefly describe how parsing and executing work
   -- read a string from the user (main.c)
   -- tokenize the string and organize the tokens into commands
      (cmdparse.c)
      -- read "command_t"!
   -- execute commands following the shell keywords (cmdrun.c)
      -- fork/exec
      -- pipe
      -- builtin cmds

-- not much to write, but a lot to read
   -- common feeling: "I want to write code now, right now"
   -- reading code is sometimes *more important* than writing it
      -- as with learning most skills (e.g., English)

1. Last time

-- condition variable: cond_wait(Cond, Mutex)
   -- always use "while" instead of "if"
   -- has to be a single interface (vs. release/wait/acquire)

-- Monitor
   -- an object (= mutex + condition variables)
   -- should be a language primitive
   -- in C: a programming convention

-- "a safe way": Monitor + 6 rules + design approach
   -- facing a dangerous concurrency world
   -- 6 rules:
      -- must memorize! (one of the few things I require you to recite)
      -- Question: tell me one rule
   -- 4-step design approach
      -- practice makes perfect (do your HW3!)

2.
Deadlock

--see handout: simple example based on two locks

--see handout: more complex example
    --M calls N
    --N waits
    --but let's say the condition can only become true if N is invoked
      through M
    --now the lock inside N is unlocked, but M remains locked; that is,
      no one is going to be able to enter M, and hence no one can enter N
    --can also get deadlocks with condition variables
    --lesson: it is dangerous to hold locks (M's mutex, in the case on
      the handout) when crossing abstraction barriers

--deadlocks without mutexes: the real issue is resources, and how/when
  they are required/acquired

    (a) [draw bridge example]
        --bridge only allows traffic in one direction
        --each section of the bridge can be viewed as a resource
        --if a deadlock occurs, it can be resolved if one car backs up
          (preempt resources and roll back)
        --several cars may have to be backed up if a deadlock occurs
        --starvation is possible

    (b) another example:
        --one thread/process grabs the disk and then tries to grab the
          scanner
        --another thread/process grabs the scanner and then tries to
          grab the disk

--when does deadlock happen? under four conditions. all of them must
  hold for deadlock to happen:

    1. mutual exclusion
    2. hold-and-wait
    3. no preemption
    4. circular wait

--what can we do about deadlock?

    (a) ignore it: worry about it when it happens. the so-called
        "ostrich solution"

    (b) detect and recover
        --could imagine attaching a debugger
        --not really viable for production software, but works well in
          development
        --the threads package can keep track of the resource-allocation
          graph
            --for each lock acquired, order it with the other locks held
            --if a cycle occurs, abort with an error
            --this detects potential deadlocks even if they do not occur

    (c) avoid algorithmically
        --banker's algorithm
        --need to know the maximum resources needed by each process
        --only allocate resources if the resulting state is safe
          (safe state: there are enough resources for *all* the
          executing processes to finish under some allocation schedule)
--elegant but impractical

--if you're using the banker's algorithm, it looks like this:

    ResourceMgr::Request(ResourceID resc, RequestorID thrd) {
        acquire(&mutex);
        assert(system in a safe state);
        while (state that would result from giving resc to thrd is not safe) {
            wait(&cv, &mutex);
        }
        update state by giving resc to thrd;
        assert(system in a safe state);
        release(&mutex);
    }

  Now we need to determine whether a state is safe....

--an example:
    -- a bank has 10 coins
    -- three customers (max coins needed): A (9), B (4), and C (7)
    -- current status:

               has   max
          A     3     9
          B     2     4
          C     2     7

          bank: 3

    -- Question: is the current state safe?
       [Answer: yes, because there is a way to let all finish: B->C->A]
    -- Question: if A requests and gets another coin, is the state then
       safe?
       [Answer: no, there is no *guarantee* that A and C can finish.
       Try it yourself.]
    -- so the banker should reject A's request, because the resulting
       state is not safe. See details at:
       https://en.wikipedia.org/wiki/Banker%27s_algorithm

--disadvantages of the banker's algorithm:
    --requires every single resource request to go through a single
      broker
    --requires every thread to state its maximum resource needs up
      front. unfortunately, if threads are conservative and claim they
      need huge quantities of resources, the algorithm will reduce
      concurrency

(d) negate one of the four conditions using careful coding:

    --can sort of negate 1 (mutual exclusion)
        --put a queue in front of resources, like the printer
        --virtualize memory

    --can sort of negate 2 (hold-and-wait)
        --either cancel "hold": abort when conflicting
        --or cancel "wait": acquire *all* resources at once

    --can sort of negate 3 (no preemption)
        --consider physical memory: virtualized with VM, the OS can take
          a physical page away and give it to another process!

    --what about negating 4 (circular wait)?
--in practice, this is what people do
    --idea: impose a partial order on locks
        --establish an order on all locks, and make sure that every
          thread acquires its locks in that order
    --why this works:
        --can view deadlock as a cycle in the resource-acquisition graph
        --a partial order implies no cycles, and hence no deadlock
    --two bummers:
        1. hard to represent CVs inside this framework; it works best
           only for locks
        2. picking and obeying the order on *all* locks requires that
           modules make their locking behavior public, and requires them
           to know about other modules' locking. this can be painful and
           error-prone.
    --see Linux's filemap.c example on the handout; this is complexity
      that arises from the need for a locking order

(e) static and dynamic detection tools

    --(if you're interested) see, for example, these citations,
      citations therein, and papers that cite them:

        Engler, D. and K. Ashcraft. RacerX: effective, static detection
        of race conditions and deadlocks. Proc. ACM Symposium on
        Operating Systems Principles (SOSP), October 2003, pp. 237-252.
        http://portal.acm.org/citation.cfm?id=945468

        Savage, S., M. Burrows, G. Nelson, P. Sobalvarro, and
        T. Anderson. Eraser: a dynamic data race detector for
        multithreaded programs. ACM Transactions on Computer Systems
        (TOCS), Volume 15, No. 4, November 1997, pp. 391-411.
        http://portal.acm.org/citation.cfm?id=265927

      there is a long literature on this stuff

    --disadvantage of dynamic checking: it slows the program down
    --disadvantage of static checking: many false alarms (the tool says
      "there is a deadlock" when in fact there is none) or else missed
      problems
    --note that these tools get better every year. I believe that
      Valgrind has a race and deadlock detection tool

3. Other concurrency issues

(a) progress issues

    Deadlock was one kind of progress (or liveness) issue. Here are
    others...
- Livelock
    -- two threads waiting on each other: deadlock
    -- suppose instead that each backs off when it finds the other
    -- a possibility: all processes keep aborting and retrying, and make
       no progress
    [draw a curly bridge]

- Starvation
    -- a thread waits indefinitely (if it has low priority and/or the
       resource is contended)
    -- for example, merging onto a busy road with a yield sign

- Priority inversion
    -- T1, T2, T3: (highest, middle, lowest priority)
    -- T1 wants to acquire a lock; T2 is runnable; T3 is runnable and
       holds the lock
    -- the system will preempt T3 and run the highest-priority runnable
       thread, namely T2
    -- solutions:
        -- priority inheritance: temporarily bump T3 to the highest
           priority of any thread that is ever waiting on the lock
        -- disable interrupts, so no preemption (T3 finishes) ... not
           great, because the OS sometimes needs control (not for
           scheduling, under this assumption, but for handling memory
           [page faults], etc.)
        -- don't handle it; structure the app so that only
           adjacent-priority processes/threads share locks
    -- happened in real life; see:
       https://www.microsoft.com/en-us/research/people/mbj/just-for-fun/
       (search for Pathfinder)
        -- Mars Pathfinder experienced total system resets
        -- what happened:
            -- data gathering thread (low priority)
            -- communication thread (medium priority)
            -- management thread (high priority)
            -- the data gathering and management threads may acquire the
               same mutex
            -- classic priority inversion

(b) other concurrency bugs

    - atomicity-violation bugs

        T1: if (thd->info) {
                ...
                use(thd->info);
                ...
            }

        T2: thd->info = NULL;

    - order-violation bugs

        T1: void init() { thd = createThread(...); }

        T2: void main() { state = thd->info; }

      "Learning from Mistakes -- A Comprehensive Study on Real World
      Concurrency Bug Characteristics"
      https://people.cs.uchicago.edu/~shanlu/preprint/asplos122-lu.pdf

(c) performance vs. complexity

    -- coarse locks limit available parallelism ....
       [(still, you should design this way at first!!!)]

       the fundamental issue with coarse-grained locking is that only
       one CPU can execute anywhere in the part of your code protected
       by a lock.
       If your critical-section code is called a lot, this may reduce
       the performance of an expensive multiprocessor to that of a
       single CPU.

       if this happens inside the kernel, it means that applications
       will inherit the performance problems from the kernel

    -- ... but fine-grained locking leads to complexity and hence bugs
       (like deadlock)

       --> although finer-grained locking can often lead to better
           performance, it also leads to increased complexity and hence
           risk of bugs (including deadlock)
       --> see handout
       --> also code here:
           https://github.com/torvalds/linux/blob/7cca308cfdc0725363ac5943dca9dcd49cc1d2d5/mm/filemap.c

[thanks to David Mazieres and Mike Dahlin for content in portions of
this lecture.]