Automatic parallelization of programs
Teaching Staff in Charge |
Aims |
By the end of the course, students are expected to understand the motivation for adopting specific approaches to parallelism. The course also gives a comprehensive introduction to parallelizing compilers: code generation and run-time optimization complete the route from sequential source code to an executable parallel program, a route widely used today for obtaining parallel computations. Finally, the course introduces the mathematical formalism that underlies automatic parallelization and parallel code generation.
Content |
1. Parallel processing: models, types of parallelism, and the motivation for the transition to massively parallel processing.
2. Parallel execution at the program level.
3. Models of parallel programming languages: procedure-oriented, message-oriented, and operation-oriented languages; functional and logic languages.
4. Performance and efficiency in programming languages.
5. Automatic parallelization of sequential languages.
5.1 Data dependence. Dependence vectors. Dependence directions.
5.2 Determining the data dependence relation. Decision algorithms.
5.3 Equivalence transformations of parallelism-oriented loops.
5.4 Parallel code generation: Lamport's and Feautrier's methods.
5.5 Case studies: the Omega Project, ADARP, NESL, and LOOPO systems.
6. Execution of parallel programs.
6.1 Load balancing: techniques and algorithms.
6.2 The mapping problem in multiprocessor systems: static and dynamic issues.
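As an illustration of the decision algorithms in topic 5.2, the sketch below implements the classic GCD dependence test (described, e.g., in Banerjee's and Wolfe's books from the reference list). The function name and the example subscripts are hypothetical choices for this sketch; the test itself is the standard one: for a loop where iteration i writes A[a*i + b] and reads A[c*i + d], a dependence can exist only if gcd(a, c) divides (d - b).

```python
from math import gcd

def gcd_test(a, b, c, d):
    """Conservative GCD dependence test for a loop of the form
        for i: A[a*i + b] = ...  and  ... = A[c*i + d]
    A dependence requires integers i1, i2 with a*i1 + b == c*i2 + d,
    i.e. a*i1 - c*i2 == d - b, which is solvable over the integers
    iff gcd(a, c) divides (d - b).
    Returns True when a dependence is POSSIBLE; a False result
    proves independence, so the loop may be run in parallel."""
    g = gcd(a, c)
    if g == 0:
        # Both subscripts are loop-invariant constants.
        return b == d
    return (d - b) % g == 0

# A[2*i] = A[2*i + 1]: writes touch even indices, reads touch odd
# ones; gcd(2, 2) = 2 does not divide 1, so no dependence exists.
print(gcd_test(2, 0, 2, 1))   # False -> parallelizable
# A[2*i] = A[2*i + 4]: gcd(2, 2) = 2 divides 4 -> dependence possible.
print(gcd_test(2, 0, 2, 4))   # True
```

Note that the test is one-sided: True only means a dependence cannot be ruled out, which is why compilers combine it with stronger decision procedures (such as the exact integer-programming tests used in the Omega Project).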
References
1. E. V. Krishnamurthy - Parallel Processing, Addison-Wesley, 1990.
2. Barr E. Bauer - Practical Parallel Programming, Academic Press, 1992.
3. Utpal Banerjee - Data Dependence Analysis and Loop Transformations, Kluwer Academic Publishers, 1997.
4. H. Zima and B. Chapman - Supercompilers for Parallel and Vector Computers, ACM Press, 1991.
5. C. D. Polychronopoulos - Parallel Programming and Compilers, Kluwer Academic Publishers, 1988.
6. Michael Wolfe - Optimizing Supercompilers for Supercomputers, MIT Press, 1990.
7. Internet resources.
Assessment |
Written exam. The final mark is an average of the mark obtained on the written paper, the mark for the laboratory work, and the mark for a small project illustrating current results in the field of parallel processing.