I. For each of the 4 questions below, mark all the correct options. Every question has one or more correct options. (4 x 1 = 4)

1. Which of the following happen when fork() is invoked?
_X__ A new PCB is allocated
____ A new program is loaded into memory
_X__ A new address space is created

fork() creates a new process whose address space is a copy of the parent's. So, a PCB is created for the new process, along with a new address space. However, no new program is loaded into memory.

2. When a thread calls thread_yield, which of the following can potentially be the new state of the thread after the execution of thread_yield completes?
____ Waiting
_X__ Ready
_X__ Running

The yielding thread is typically placed on the ready list, but it could be scheduled again immediately by the scheduler (so Running is possible too). Waiting would require the thread to run until it reaches a point where it must block (e.g., waiting for I/O or a semaphore).

3. Which of the following statements are true in describing the various data structures that can be used for synchronization?
____ Lock.release() has no effect if no other thread is stuck on Lock.acquire()
____ Signal() on a condition variable can impact future calls to Wait()
_X__ An implementation of a lock that uses Test-and-Set wastes CPU

Lock.release() has an effect even if no other thread is waiting: it marks the lock state as available. Signal() on a condition variable, on the other hand, has no side effect if there are no threads waiting, so it does not affect future calls to Wait(). The Test-and-Set lock implementation we worked with is a spin lock: it busy-waits, repeatedly checking whether the lock is available, wasting CPU time.

4. Not covered

II. Hoare semantics specify that the waiting thread is scheduled immediately (and handed the mutex lock) when the other thread signals it to say that the condition has been met. As a result, it is safe to simply continue execution at that point.
In contrast, under Mesa semantics, the thread that signals continues execution. By the time the thread that was waiting is scheduled, the condition it was waiting for may no longer be true. So, Mesa semantics require a while loop so that the condition is checked again before the thread can proceed.

III. Similarity: both cause a mode switch to the OS, and both are handled through an interrupt handler. Difference: faults are synchronous (raised in response to an instruction being executed and depending only on the state of the CPU), while interrupts are asynchronous.

IV. P1 --> P2 --> P3

Explanation: follow the arrows where a process waits on a resource, and see which process holds that resource. The first process is waiting on the second (i.e., an arrow from the first to the second in the WFG). In our problem, P3 is waiting on nothing (no outgoing arrows). P2 is waiting on R4, which is held by P3, and on R5, which is also held by P3: P2 is waiting for P3, so there is an arrow from P2 to P3. Finally, P1 is waiting for R2, which is held by P2: P1 is waiting for P2, giving the final answer.

V.
J1 arrives 0, runtime 6
J2 arrives 2, runtime 3
J3 arrives 6, runtime 2

I am computing turnaround time (Tfinish - Tstart) rather than wait time, which was defined imprecisely in the slides as the average time spent in waiting queues (and therefore does not apply to this case as it is defined). If you get such a problem in the exam, I will give you a clear definition of any metrics you are asked to compute.

(A) SJF:

0-------------6-----8--------11
| J1          | J3  | J2     |
+----------------------------+

Explanation: Non-preemptive: once a job starts, it does not stop until it is done. So, J1 starts since it is the only job at time 0, and it is done at 6. At that point, both J3 and J2 are ready; we pick J3 because it is shorter.
Turnaround time: J1=6, J2=9, J3=2
Normalized turnaround = turnaround time / run time: J1=1, J2=3, J3=1

(B) SRT:

0----2-------5---6------8--------11
| J1 | J2    |J1 | J3   | J1     |
+--------------------------------+

Explanation: Preemptive. J1 starts, but at 2, J2 arrives. At this point, J1 has 4 units remaining while J2 has 3, so we pick J2. J2 runs to completion at time 5. At this point, only J1 is available, so we schedule it again. At 6, J3 arrives with length 2; J1 has 3 left, so we prefer J3. J3 runs until 8, and then J1 runs to completion.

Turnaround time: J1=11, J2=3, J3=2
Normalized turnaround time: J1=11/6, J2=1, J3=1 <--- SRT is optimal according to this metric

(C) Round Robin (quantum = 1):

0----2--3--4--5--6--7--8--9--10-11
| J1 |J2|J1|J2|J1|J2|J3|J1|J3|J1|
+-------------------------------+

Explanation: J1 is the only job until time 2, so it gets the first two slices. At time 2, J2 arrives, is scheduled for one quantum, and then alternates with J1 until time 6. At 6, J1 has just finished its quantum, and assuming J3 arrived just before that, J3 gets queued behind J2 and is followed by J1. J2 runs for its third second and finishes, then J3 and J1 alternate until they both finish.

Turnaround time: J1=11, J2=5, J3=4
Normalized turnaround: J1=11/6, J2=5/3, J3=2 <--- seemingly much worse, but if you care about response time (the wait between the periods a process runs), RR is much better.

VI.
(a) It waits at line 4, since the first reader holds the rmutex lock while it waits on wmutex.
(b) (i) False: it will not wait at line 6. num_readers is already 1, so line 6 is not executed.
(ii) False: line 4 ensures mutual exclusion among readers for access to the num_readers variable and for coordination with the writer. Since the other reader is in the perform-read section, the rmutex semaphore is available, and the reader will not wait there.
(c) False.
Writers may starve: as long as there are readers, we do not signal wmutex (we do so only when num_readers = 0), so we could have a situation where readers keep coming and writers starve waiting on wmutex.
(d) In this case, rmutex enforces mutual exclusion over the full reader code, meaning only one reader can be in the critical region at a time, significantly reducing concurrency.