PARALLEL AND DISTRIBUTED COMPUTING
The course introduces methods and tools for the development of parallel software.
Prerequisites: knowledge acquired in Mathematics I, Computer Programming I and Operating Systems.
Introduction to high-performance computing and evolution of supercomputers (2h).
Kinds of parallelism: temporal, spatial and asynchronous (2h). Flynn's classification (3h): MIMD shared memory (SM) and MIMD distributed memory (DM) (2h).
First and second types of on-chip parallelism (2h).
Cluster and multicore architectures (2h).
Basic differences between parallel computing and distributed computing concepts: cloud computing (1h).
Speed-up, overhead and efficiency of a parallel algorithm; the Ware-Amdahl law; communication overhead; scaled speed-up and efficiency; isoefficiency; scalability (5h).
Parallel summation: strategies I, II and III (in MIMD-SM and MIMD-DM environments) (4h).
Matrix-vector product: strategies I, II and III (in MIMD-SM and MIMD-DM environments) (4h).
The MPI library: basic features and functions, major routines for process management and communication. Virtual topologies: processor grids (5h).
The OpenMP library: processes and threads, synchronization and semaphores, the fork-join parallel execution model, compiler directives, constructs and clauses, runtime library routines and environment variables (6h).
Writing, compiling and running programs that use the MPI and OpenMP libraries in the C/C++ language (6h).
The cloud computing service AWS: login with account and configuration, management of instances (4h).
The laboratory activity uses the C/C++ programming language and introduces the standard parallel computing libraries MPI and OpenMP, applying them to parallel software development in different high-performance environments, such as clusters of multiprocessors and/or multicore CPUs.
Knowledge and understanding: the student must demonstrate knowledge of the fundamentals of parallel and distributed computing, particularly with regard to the different forms of hardware and software parallelism and the parallel strategies for some basic computational kernels.
Ability to apply knowledge and understanding: the student must demonstrate the ability to use the parallel strategies studied and the available standard libraries to develop algorithms in a high-performance environment, leveraging knowledge of parallel software evaluation parameters and of the kind of hardware available.
Autonomy of judgement: the student must be able to independently evaluate the results of a parallel algorithm by analyzing its speed-up and efficiency.
Communication skills: the student should be able to illustrate a parallel algorithm and document its implementation in a high-performance environment.
Learning skills: the student must be able to update and deepen topics and specific applications of numerical computing, including by accessing databases, on-line scientific software repositories and other tools available on the web.
A. Grama, G. Karypis, V. Kumar, A. Gupta: "Introduction to Parallel Computing", 2nd Edition, Addison-Wesley, 2003.
All lessons are available as slides (in PDF format) on the e-learning platform of the Department of Science and Technology, together with self-assessment exercises, library manuals, past exams and recent papers on the most innovative parallel computing topics.
The goal of the verification procedure is to quantify, for each student, the degree of achievement of the learning objectives listed above. Specifically, the exam consists of a laboratory test that verifies the ability to implement a simple high-performance computing program (30% of the grade), a written test assessing knowledge of the parallel strategies for the basic kernels of linear algebra computation (40% of the grade), and an oral test examining the capacity to analyze parallel software in terms of efficiency (30% of the grade).