CMPSCI 377 Operating Systems                                      Spring 2009

Lecture 5: February 10

Lecturer: Mark Corner          Scribes: Bruno Silva, Partzm

5.1 Processes

5.1.1 Unix process API

Revisiting the Unix process API from the end of last lecture: is the sleep(1) call necessary to allow the child process to start? The answer is no, it is not necessary at all. In general, if you think you need to sleep in a program, you are probably doing something wrong and just slowing your program down. The call to waitpid() is a blocking wait: it will first wait to let the child process start, if it has not already, and will then wait until the child ends. (A sketch illustrating this appears at the end of this section.)

5.1.2 Interprocess Communication

Now we consider the following questions: how can the parent process communicate with its child? And how can child processes communicate with other children? The exact answer depends on the problem being treated, but in general the several different approaches to this question go by the name of Interprocess Communication (IPC).

One possibility for IPC is to use sockets. This approach is based on explicit message passing, and has the advantage that processes can be distributed anywhere on the Internet, among several different machines. An application designed to use sockets can fairly easily be redeployed as a multi-server application when its needs outgrow a single server.

Another possibility is the use of mmap, which is a hack, although a very common one. Mmap uses memory sharing as an indirect way of communicating: all processes map the same file into a fixed memory location. In this case, however, since objects are being read from and written to a shared memory region, we must use some kind of process-level synchronization. Unfortunately, such synchronization is in general much more expensive than simply using threads. (A sketch of the mmap approach appears at the end of this section.)

One can also use signals: processes can send and receive integers associated with particular signal numbers. To be able to receive and handle these numbers, a process must first set up signal handlers. This approach works best for rare events (for example, a process handling a SIGSEGV). In general, signals are not very useful for parallel or concurrent programming.

The other possibility is the use of pipes. These are unidirectional communication channels that make it possible for the output of one program to be used as input to another, just as in Unix when we use the pipe symbol, as in ls -l | wc -l. The advantage of pipes is that they are easy and fast. (A sketch of pipe-based communication appears at the end of this section as well.)
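To make the waitpid() point from Section 5.1.1 concrete, here is a minimal sketch (not from the lecture) of a parent that forks a child and blocks in waitpid(), with no sleep() call anywhere:

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: runs whenever the scheduler gets to it;
            // the parent does not need to sleep to "let" it start.
            printf("child running\n");
            return 7;
        }
        int status;
        waitpid(pid, &status, 0);   // blocks until the child has exited
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }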
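The following is a rough sketch of the mmap approach. For brevity it uses an anonymous shared mapping inherited across fork() rather than mapping a common file as described above (MAP_ANONYMOUS is an assumption here: a widespread extension on Linux and BSD rather than plain POSIX), and it sidesteps the synchronization issue by simply having the parent wait for the child to exit before reading:

    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // A shared page: writes by one process are visible to the other.
        int *shared = (int *) mmap(NULL, sizeof(int),
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;
        if (fork() == 0) {
            *shared = 42;          // child writes into the shared region
            return 0;
        }
        wait(NULL);                // crude synchronization: wait for exit
        printf("parent sees %d\n", *shared);   // prints 42
        munmap(shared, sizeof(int));
        return 0;
    }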
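And here is a minimal sketch of pipe-based communication between a parent and its child, using the POSIX pipe(), fork(), write(), and read() calls (again, this example is not from the lecture):

    #include <unistd.h>
    #include <sys/wait.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fds[2];
        pipe(fds);                  // fds[0]: read end, fds[1]: write end
        if (fork() == 0) {
            // Child: writes a message into the pipe, then exits.
            close(fds[0]);
            const char *msg = "hello from the child\n";
            write(fds[1], msg, strlen(msg));
            close(fds[1]);
            return 0;
        }
        // Parent: reads whatever the child wrote.
        close(fds[1]);
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("parent received: %s", buf);
        }
        close(fds[0]);
        wait(NULL);                 // reap the child
        return 0;
    }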
5.2 Threads

First, remember that different processes keep their own data in distinct address spaces. Threads, on the other hand, explicitly share their entire address space with one another. Although this can make things a lot faster, it comes at the cost of making programming a lot more complicated.

In Unix (POSIX), the threads API is composed of two main calls:

  * pthread_create(), which starts a separate thread;
  * pthread_join(), which waits for a thread to complete.

The general syntax for using these is:

    pid = pthread_create(&tid, NULL, function_ptr, argument);
    pthread_join(tid, &result);

Example:

    #include <pthread.h>
    #include <iostream>
    using namespace std;

    // Stand-in for the lecture's (undefined) expensive function.
    int some_expensive_function_call() { return 1; }

    void *run(void *d) {
        int q = *((int *) d);
        long v = 0;
        for (int i = 0; i < q; i++)
            v = v + some_expensive_function_call();
        return (void *) v;     // result is smuggled out through the void *
    }

    int main() {
        pthread_t t1, t2;
        long r1, r2;
        int arg1 = 100;
        int arg2 = 666;
        pthread_create(&t1, NULL, run, &arg1);
        pthread_create(&t2, NULL, run, &arg2);
        pthread_join(t1, (void **) &r1);
        pthread_join(t2, (void **) &r2);
        cout << "r1 = " << r1 << ", r2 = " << r2 << endl;
        return 0;
    }

Notice that the threads above maintain different stacks and different sets of registers; except for those, however, they share their entire address space. Also notice that if you were to run this code on a two-core machine, you would expect it to run roughly twice as fast as on a single-core machine. If you ran it on a four-core machine, however, it would run only as fast as on the two-core machine, since there would not be enough threads to exploit the available parallelism.
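As a follow-up to that last point, here is a minimal sketch (not from the lecture) of spawning one thread per core, so that a four-core machine does have enough threads to exploit; it assumes a POSIX-like system where sysconf(_SC_NPROCESSORS_ONLN) reports the number of online cores. Compile with g++ -pthread.

    #include <pthread.h>
    #include <unistd.h>
    #include <iostream>
    using namespace std;

    // Busy-loop stand-in for real per-thread work.
    void *work(void *arg) {
        long iters = (long) arg;
        volatile long v = 0;
        for (long i = 0; i < iters; i++)
            v += i;
        return NULL;
    }

    int main() {
        // One thread per online core; the sysconf call is widely
        // available but is an assumption, not part of the lecture.
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t *threads = new pthread_t[n];
        for (long i = 0; i < n; i++)
            pthread_create(&threads[i], NULL, work, (void *) 100000000L);
        for (long i = 0; i < n; i++)
            pthread_join(threads[i], NULL);
        cout << "ran " << n << " threads, one per core" << endl;
        delete [] threads;
        return 0;
    }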