====================
Distributed Matrices					[TOC:2]
====================

Class hpc::mpi::GeMatrix is supposed to represent an $M \times N$ matrix
whose blocks are stored locally on different compute nodes. The compute
nodes are organized in a two-dimensional $m \times n$ grid. Such a matrix
$B$ is partitioned as follows:

---- LATEX ---------------------------------------------------------------------
B = \left(\begin{array}{c|c|c|c}
B_{0,0}   & B_{0,1}   & \dots & B_{0,n-1}   \\ \hline
\vdots    &           &       & \vdots      \\ \hline
B_{m-1,0} & B_{m-1,1} & \dots & B_{m-1,n-1}
\end{array}\right)
--------------------------------------------------------------------------------

The partitioning is quasi-equidistant, such that all inner blocks have the
same dimension $\left\lceil \frac{M}{m} \right\rceil \times
\left\lceil \frac{N}{n} \right\rceil$. This is also the maximal dimension
required for storing a local matrix block.

Using this abstraction allows us to hide some technical details related to
MPI. For example:

- Assume A is of type hpc::matvec::GeMatrix storing the complete
  $M \times N$ matrix on a local (root) node.
- Then, for B of type hpc::mpi::GeMatrix representing a distributed matrix,
  the _scatter_ and _gather_ operations can be carried out by

==== CODE (type=cc) ============================================================
// Setup A locally on a single node
gecopy(A, B); // Scatter/distribute the matrix from root node to grid
// Perform some parallel operations on grid
gecopy(B, A); // Gather/collect blocks into local matrix on root node
================================================================================

Class hpc::mpi::Grid
====================
Information about the MPI grid and the MPI communicator is encapsulated in
an object of type hpc::mpi::Grid within the hpc::mpi::GeMatrix class. Your
first exercise will be the implementation of class hpc::mpi::Grid:

:import: session26/ex01/grid.hpp

Exercise
--------
Implement class hpc::mpi::Grid.
In the constructor, set up the two-dimensional grid with
MPI_Cart_create(__more__) from MPI_COMM_WORLD. Also set up the following
attributes:

- numNodeRows, numNodeCols are supposed to be the number of grid rows and
  columns, respectively.
- nodeRow, nodeCol are supposed to contain the row and column position of
  the process within the grid, respectively.

:links: more -> https://www.mpich.org/static/docs/v3.1/www3/MPI_Cart_create.html

Class hpc::mpi::GeMatrix
========================
Each matrix of type hpc::mpi::GeMatrix stores its local matrix block in a
matrix of type hpc::matvec::GeMatrix. Information regarding the
partitioning can be accessed through the following methods:

- rowOffset(p) internally maps the process id $p$ to $(r,c)$, where $r$ is
  the _node row_ and $c$ the _node col_ in the node grid. It then returns
  the *global row index* of the top-left element in $A_{r,c}$.
- colOffset(p) likewise maps the process id $p$ to $(r,c)$ and returns the
  *global column index* of the top-left element in $A_{r,c}$.

Analogously,

- numLocalRows(p) returns the number of rows in block $A_{r,c}$,
- numLocalCols(p) returns the number of columns in block $A_{r,c}$.

The number of rows and columns of the complete matrix can be accessed
through the attributes numRows and numCols.

Exercise
--------
Use the following skeleton for the implementation:

:import: session26/ex01/gematrix.hpp

Exercise: Scatter/Gather
========================
Implement the scatter/gather operations:

- For the scatter operation, the node with rank zero sends blocks.
- For the gather operation, the node with rank zero receives blocks.

:import: session26/ex01/copy.hpp

Program for testing
-------------------
:import: session26/ex01/test_gematrix.cpp

You can compile on theon with

---- SHELL (path=session26/ex01) -----------------------------------------------
mpic++ -g -std=c++17 -I.
-I/home/numerik/pub/hpc/ws18/session25 +++
   -o test_gematrix test_gematrix.cpp
--------------------------------------------------------------------------------
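The quasi-equidistant partitioning can be checked without MPI. The
following sketch computes, for one dimension, the global offset and the
local block size of node row $r$ when $M$ rows are split over $m$ node
rows; blockOffset and blockSize are hypothetical helpers (not part of the
skeleton) mirroring what rowOffset(p) and numLocalRows(p) return per node
row:

==== CODE (type=cc) ============================================================
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Inner blocks have ceil(M/m) rows; the last block gets the remainder.
std::size_t blockOffset(std::size_t M, std::size_t m, std::size_t r) {
    std::size_t b = (M + m - 1) / m;        // ceil(M/m)
    return r * b;                           // global index of first local row
}

std::size_t blockSize(std::size_t M, std::size_t m, std::size_t r) {
    std::size_t b   = (M + m - 1) / m;      // ceil(M/m)
    std::size_t off = r * b;
    return off >= M ? 0 : std::min(b, M - off);
}

int main() {
    // Example: M = 10 rows over m = 3 node rows gives blocks of 4, 4, 2
    for (std::size_t r = 0; r < 3; ++r) {
        std::printf("r=%zu offset=%zu rows=%zu\n",
                    r, blockOffset(10, 3, r), blockSize(10, 3, r));
    }
}
================================================================================

Note that ceil(M/m) is also the maximal block dimension, so every node can
allocate its local hpc::matvec::GeMatrix with this size independently of
its grid position.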