Class Note for CMPSCI 377 at UMass(27)
CMPSCI 377 Operating Systems, Fall 2008

Lecture 6: September 18
Lecturer: Prashant Shenoy
Scribe: Shashi Singh

6.1 Interprocess Communication

Remember that by using fork() we can split one process into two, and that the initial state of each process is basically whatever the state of the program was right before the fork. Also notice that after the fork each process is free to follow its own path of execution, and that its output is whatever value is returned via the exit() call. Now, considering that a machine can have several processes running in parallel, we might want to make them communicate with each other. In the last class we discussed some of the possibilities for doing that: signals (sending/receiving simple integer numbers), mmap (implicit communication by memory sharing), pipes (unidirectional communication channels), and sockets (explicit message passing).

6.2 Threads

First, remember that different processes keep their own data in distinct address spaces. Threads, on the other hand, explicitly share their entire address space with one another. Although this can make things a lot faster, it comes with the cost of making programming a lot more complicated. In Unix/POSIX, the threads API is composed of two main calls:

• pthread_create, which starts a separate thread;
• pthread_join, which waits for a thread to complete.

The general syntax for using these is:

```cpp
pid = pthread_create(&tid, NULL, function_ptr, argument);
pthread_join(tid, &result);
```

Example:

```cpp
void *run(void *d) {
    int q = (int) d;
    int v = 0;
    for (int i = 0; i < q; i++)
        v = v + some_function_call();
    return (void *) v;
}

int main() {
    pthread_t t1, t2;
    int r1, r2;
    /* the last parameter is a hack: it should be a pointer, but we can
       pass the desired data (say, an int) as if it were a pointer */
    pthread_create(&t1, NULL, run, (void *) 100);
    pthread_create(&t2, NULL, run, (void *) 666);
    pthread_join(t1, (void **) &r1);
    pthread_join(t2, (void **) &r2);
    cout << "r1 = " << r1 << ", r2 = " << r2 << endl;
}
```

Notice that the threads above maintain different stacks and different sets of registers.
However, the threads share their entire address space. Also notice that if you were to run this code on a 2-core machine, you would expect it to run roughly twice as fast as on a single-core machine. If you ran it on a 4-core machine, however, it would run only as fast as on the 2-core machine, since there would not be enough threads to exploit the available parallelism.

6.2.1 Processes vs. threads

One might argue that, in general, processes are more flexible than threads. For one thing, they can live on two different machines and communicate via sockets, and they are easy to spawn remotely (e.g., ssh foo.cs.umass.edu ls -l). However, processes require explicit communication and risky hackery. Threads also have their own problems: because they communicate through shared memory, they must run on the same machine and require thread-safe code. So even though threads are faster, they are much harder to program. In a sense, we can say that processes are far more robust than threads, since they are completely isolated from one another. Threads, on the other hand, are not that safe, since whenever one thread crashes the whole process terminates.

When comparing processes and threads, we can also analyze the context-switch cost. Whenever we need to switch between two processes, we must invalidate the TLB cache (the so-called TLB shootdown). This, of course, makes everything slower. When we switch between two threads, on the other hand, there is no need to invalidate the TLB, because all threads share the same address space and thus have the same contents in the cache. In other words, the cost of switching between threads is much smaller than the cost of switching between processes.

6.2.2 Kernel Threads and User-Level Threads

OS-managed threads are called kernel-level threads, or lightweight processes. All thread operations are implemented in the kernel, and the OS schedules all of the threads in the system (example: Solaris lightweight processes, LWPs). Kernel-level threads make concurrency much cheaper than processes. This
is because, compared to processes, there is much less state to allocate and initialize. However, for fine-grained concurrency, kernel-level threads still suffer from too much overhead: thread operations still require system calls, whereas ideally we want thread operations to be as fast as a function call. Kernel-level threads also have to be general, to support the needs of all programmers, languages, runtimes, etc. For such fine-grained concurrency we need even cheaper threads.

To make threads cheap and fast, they need to be implemented at user level. User-level threads are managed entirely by the runtime system (a user-level library). A thread is simply represented by a program counter, registers, a stack, and a small thread control block (TCB). Creating a new thread, switching between threads, and synchronizing threads are done via function calls, without any kernel involvement. User-level threads are about 100 times faster than kernel threads.

But user-level threads are not a perfect solution: they are invisible to the OS. As a result, the OS can make poor decisions, like scheduling a process with only idle threads; blocking a process whose thread initiated an I/O, even though the process has other threads that can execute; or unscheduling a process with a thread holding a lock, even when other threads do not hold any locks. Solving this problem requires communication between the kernel and the user-level thread manager.