Message Passing Programming with MPI
The world's largest supercomputers are used almost exclusively to run applications that are parallelised using message passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.
Details
Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to a message-passing library that is entirely responsible for managing the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
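The model described above can be sketched in a few lines of MPI. This is a minimal illustration, not part of the course materials: each process discovers its own rank, and rank 0 sends an integer to rank 1 via an explicit, matched send/receive pair. The compile and launch commands (`mpicc`, `mpirun`) are typical but depend on your MPI installation.

```c
/* Minimal sketch of the message-passing model in MPI.
   Typically compiled with an MPI wrapper and run on two or
   more processes, e.g.:  mpicc hello.c && mpirun -n 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes   */

    if (rank == 0 && size > 1) {
        int value = 42;
        /* explicit message: one int to destination rank 1, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        /* matching receive: from source rank 0, tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                         /* shut down MPI cleanly       */
    return 0;
}
```

Note that the send and receive must agree on communicator, tag and datatype for the message to match; this matching is covered in detail in the point-to-point sessions.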
The course is normally delivered in an intensive format, in this case over two days. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.
This course is free to all academics.
Intended learning outcomes
On completion of this course students should be able to:
- Understand the message-passing model in detail.
- Implement standard message-passing algorithms in MPI.
- Debug simple MPI codes.
- Measure and comment on the performance of MPI codes.
- Design and implement efficient parallel programs to solve regular-grid problems.
Pre-requisites
Programming Languages:
- Fortran, C or C++.
It is not possible to do the exercises in Java or in Python.
Pre-course setup
All attendees should bring their own wireless-enabled laptop. Practical exercises will be done using a guest account on ARCHER. You will need an ssh client such as Terminal on a Mac or Linux machine, or PuTTY or MobaXterm on Windows. The course tutor will be able to assist with connection settings on the day. You should also have a web browser, a PDF reader and a simple text editor.
Timetable
Day 1
- 09:30 - 10:15 : Message-Passing Concepts
- 10:15 - 11:00 : Practical: Parallel Traffic Modelling
- 11:00 - 11:30 : Break
- 11:30 - 12:00 : MPI Programs
- 12:00 - 12:15 : MPI on ARCHER
- 12:15 - 13:00 : Practical: Hello World
- 13:00 - 14:00 : Lunch
- 14:00 - 14:30 : Point-to-Point Communication
- 14:30 - 15:30 : Practical: Pi
- 15:30 - 16:00 : Break
- 16:00 - 16:45 : Communicators, Tags and Modes
- 16:45 - 17:30 : Practical: Ping-Pong
Day 2
- 09:30 - 10:00 : Non-Blocking Communication
- 10:00 - 11:00 : Practical: Message Round a Ring
- 11:00 - 11:30 : Break
- 11:30 - 12:00 : Collective Communication
- 12:00 - 13:00 : Practical: Collective Communication
- 13:00 - 14:00 : Lunch
- 14:00 - 14:45 : Introduction to the Case Study
- 14:45 - 15:30 : Scaling and Performance Analysis
- 15:30 - 16:00 : Break
- 16:00 - 17:30 : Practical: Case Study Performance
Course Materials
http://www.archer.ac.uk/training/course-material/2017/09/mpi-york/index.php
Location
The course will be held at the University of York.
Registration
Please use the registration page to register for ARCHER courses.
Questions?
If you have any questions please contact the ARCHER Helpdesk.