Class Notes for CMPSCI 377 at UMass
These two pages of class notes were uploaded on Friday, February 6, 2015, and belong to a Fall-semester course at the University of Massachusetts.
CMPSCI 377 Operating Systems

Chapter 10: Distributed Parallel Programming

10.1 Distributed parallel programming

So far we have been focusing on how to use threads to exploit concurrency. The problem with using only threads is that eventually we run out of resources (cores, memory, etc.). So instead of programming with threads, that is, using shared memory on a single machine, we now focus on how to use message passing to distribute the processing across several machines. This approach works well for highly parallelizable problems, such as simulating the weather, simulating nuclear blasts, solving bioinformatics problems, etc. Notice that since this approach requires communication across the network, which is slow, we typically want each machine to perform the maximum amount of computation it can on its own, and only then to sparsely communicate the results to the other nodes in the network.

10.1.1 Message passing

Message passing is the mechanism that allows parallel computers to communicate with each other. The use of message passing assumes that we have a good way of partitioning the problem across a bunch of machines. In general, message passing is efficient since it makes data sharing explicit (contrary to threads, which implicitly share everything), and also because it can communicate only what is strictly necessary for performing the computation. However, because message passing requires the manual partitioning of the problem, its use is not trivial.

Message passing can be used on a variety of computer system architectures, from large clusters of machines, to NUMA supercomputers, to SMPs. Its advantage is that it performs well on all of these architectures. Shared-memory parallelism (threads) can perform well on SMPs, but does not perform well on distributed cluster systems.

The actual implementation of a message-passing architecture usually makes use of a Message Passing Interface (MPI). MPI is a language-independent communications protocol used to program parallel computers. MPI is implemented as a library, generally produced by machine vendors in a version optimized for their systems. For more details, please check the slides and also the Wikipedia entry for MPI: http://en.wikipedia.org/wiki/Message_Passing_Interface

MPI's execution model is what is called SPMD, standing for "Single Program, Multiple Data". Each machine in the cluster runs the same program, with different data and different local memory. Let us now see how we could use MPI to implement a program that runs in parallel on several machines:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;  /* size = number of machines that will run this
                            process; rank = which processor am I */
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("hello world from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

We could start this program on, say, 10 machines by running:

    mpirun -np 10 exampleProgram

Notice that the printf "magically" passes its output back to the machine that spawned mpirun.
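The hello-world program above initializes MPI but never actually exchanges a message. As a minimal sketch of the message passing the section describes, the standard MPI point-to-point calls MPI_Send and MPI_Recv could be used as follows (this example is not from the notes; it assumes the program is launched with at least two processes, e.g. mpirun -np 2 sendRecvExample):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int value = 42;  /* arbitrary payload chosen for illustration */
            /* MPI_Send(buffer, count, datatype, destination, tag, comm) */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Note how the data sharing is explicit: rank 1 receives only the one integer that rank 0 chose to send, which is exactly the "communicate only what is strictly necessary" property described above.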