Transfer of self-defined data types using MPI

Content

Let us return to the interfaces of MPI_Send and MPI_Recv:

int MPI_Send(const void* buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);
int MPI_Recv(void* buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm,
             MPI_Status* status);

The 1:1 message exchange operations MPI_Send and MPI_Recv support the transfer of arrays but insist that the array elements are stored consecutively in memory. Hence, these operations do not allow us to transfer arbitrary objects of the types DenseVectorView or GeMatrixView, as the offsets between consecutive elements (which can be configured through inc or incRow and incCol, respectively) can be arbitrary.

MPI allows us to define objects of type MPI_Datatype that match the memory layout of our vector and matrix views. However, we should first learn a little bit more about data types in MPI.

MPI comes with a set of elementary types \(ET\) which includes, for example, MPI_DOUBLE and MPI_INT. Within the MPI library, a data type \(T\) of cardinality \(n\) is a sequence of tuples \(\left\{ (et_1, o_1), (et_2, o_2), \dots, (et_n, o_n) \right\}\) where \(et_i \in ET\) and the associated offsets \(o_i \in \mathbb{Z}\) for \(i = 1, \dots, n\). The offsets are interpreted in bytes relative to the start address.

Two types \(T_1\) and \(T_2\) are considered compatible with each other for MPI_Send and MPI_Recv if and only if the cardinalities \(n_1\) and \(n_2\) are equal and \(et_{1,i} = et_{2,i}\) for all \(i = 1, \dots, n_1\). Overlapping offsets are permitted for MPI_Send; in the case of MPI_Recv, their effect is undefined.

Consider, as an example, a \(3 \times 7\) matrix of double values that is stored in row-major order, i.e. consecutive elements of a row are adjacent in memory while consecutive elements of a column are separated by an entire row of \(7 \cdot 8 = 56\) bytes. A row vector of length 7 then corresponds to the tuple sequence {(MPI_DOUBLE, 0), (MPI_DOUBLE, 8), ..., (MPI_DOUBLE, 48)}, whereas a column vector of length 3 corresponds to {(MPI_DOUBLE, 0), (MPI_DOUBLE, 56), (MPI_DOUBLE, 112)}.

The type on the sending side need not be identical to the type on the receiving side; the two types just need to be compatible with each other. This means that the offsets are interpreted locally only. Hence, the receiving side is free to store transferred objects differently: row vectors can be received as column vectors, and matrices in row-major order can be received in column-major order.

In MPI, multiple functions support the construction of data types. All of them construct a sequence of tuples as described above. Let us start with the type construction function MPI_Type_vector:

int MPI_Type_vector(int count, int blocklength, int stride,
                    MPI_Datatype oldtype, MPI_Datatype* newtype);

The element type is always given as oldtype. The parameter blocklength specifies how many elements of this type are consecutive in memory; they are considered as a block. The offset between the beginnings of two consecutive blocks is specified as stride -- this value is implicitly multiplied by the size of the element type. In total, the data type covers count blocks.
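
For example, the column vector from above could be described as follows (a sketch, reusing the \(3 \times 7\) row-major double matrix):

   /* one column of a row-major 3x7 double matrix:
      3 blocks of 1 element each, where the beginnings of two
      consecutive blocks are 7 elements apart */
   MPI_Datatype coltype;
   MPI_Type_vector(3, 1, 7, MPI_DOUBLE, &coltype);

As the stride of 7 is implicitly multiplied by sizeof(double), this yields the tuple sequence {(MPI_DOUBLE, 0), (MPI_DOUBLE, 56), (MPI_DOUBLE, 112)}.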

Whenever a data type has been constructed using functions like MPI_Type_vector, it must be committed using the MPI_Type_commit function before it can be used for actual transfers; committing converts it into the flat sequence of tuples described above.

int MPI_Type_commit(MPI_Datatype* datatype);

Types which are no longer required can be released:

int MPI_Type_free(MPI_Datatype* datatype);

This affects neither any ongoing communication using this data type nor any other types constructed from it. Internally, the MPI library uses a mechanism similar to that of std::shared_ptr in C++, based on reference counting.
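
Putting everything together, the typical lifecycle of a derived data type looks as follows (a minimal sketch, assuming that A is the \(3 \times 7\) row-major matrix of double from above and that A(0, 0) yields a reference to its first element):

   MPI_Datatype coltype;
   MPI_Type_vector(3, 1, 7, MPI_DOUBLE, &coltype);
   MPI_Type_commit(&coltype); /* required before the first transfer */
   /* send the first column of A to process 1 */
   MPI_Send(&A(0, 0), 1, coltype, 1, 0, MPI_COMM_WORLD);
   MPI_Type_free(&coltype); /* release it once it is no longer needed */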

Exercise

The following source code is already prepared for a small test that transfers vectors. Construct the required MPI data types that allow each vector to be transferred with a single MPI_Send and a single MPI_Recv operation.

It is recommended to write a small function that constructs a matching MPI_Datatype object for a vector. Such a template function could be written as follows and put into a separate header file named hpc/mpi/vector.hpp. Then you need to add "-I." to the compilation options (in front of the other "-I" option).

template<typename T, template<typename> typename Vector,
   Require<Dense<Vector<T>>> = true>
MPI_Datatype get_type(const Vector<T>& vector) {
   MPI_Datatype datatype;
   /* ... */
   return datatype;
}

If you are looking for a general approach to map elementary types of C++ to elementary MPI data type objects, you are free to use #include <hpc/mpi/fundamental.hpp> where you will find the template function get_type which supports all elementary types of MPI. If you have, for example, double x; then hpc::mpi::get_type(x) will return MPI_DOUBLE.
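
One possible way to fill in the body (just a sketch; it assumes, as in our lecture library, that a dense vector view provides its length and stride through the member functions length() and inc()) combines MPI_Type_vector with hpc::mpi::get_type:

   #include <mpi.h>
   #include <hpc/mpi/fundamental.hpp>

   template<typename T, template<typename> typename Vector,
      Require<Dense<Vector<T>>> = true>
   MPI_Datatype get_type(const Vector<T>& vector) {
      MPI_Datatype datatype;
      /* each element forms a block of length 1; the beginnings of
         two consecutive blocks are vector.inc() elements apart */
      MPI_Type_vector(vector.length(), 1, vector.inc(),
         hpc::mpi::get_type(vector(0)), &datatype);
      MPI_Type_commit(&datatype);
      return datatype;
   }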

The library for this session is to be found at /home/numerik/pub/pp/ss19/lib on our hosts.

By default, mpic++ on our machines in E.44 selects g++ version 6.3.0, which is shipped with Debian stretch. Some of our libraries, however, need a more recent version. You can select another C++ compiler using the OMPI_CXX environment variable:

heim$ OMPI_CXX=g++-8.3 mpic++ --version
g++-8.3 (GCC) 8.3.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
heim$ 
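
A possible invocation (just an example; the exact compilation flags depend on your setup, but the structured bindings and the template declaration above require at least -std=c++17):

heim$ OMPI_CXX=g++-8.3 mpic++ -std=c++17 -I. -I/home/numerik/pub/pp/ss19/lib -o transfer_vectors transfer_vectors.cpp
heim$ mpirun -np 2 ./transfer_vectors
heim$ 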

Source code

#include <cassert>
#include <cstdlib>
#include <mpi.h>
#include <printf.hpp>
#include <hpc/matvec/densevector.hpp>
#include <hpc/matvec/iterators.hpp>
#include <hpc/matvec/gematrix.hpp>

int main(int argc, char** argv) {
   MPI_Init(&argc, &argv);

   int nof_processes; MPI_Comm_size(MPI_COMM_WORLD, &nof_processes);
   int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   assert(nof_processes == 2);

   using namespace hpc::matvec;
   using namespace hpc::aux;

   std::size_t nof_rows = 3;
   std::size_t nof_cols = 7;

   if (rank == 0) {
      GeMatrix<double> A(nof_rows, nof_cols, Order::RowMajor);
      for (auto [i, j, Aij]: A) {
         Aij = i * 100 + j;
      }
      auto row = A.row(2, 0);
      auto col = A.col(0, 0);

      MPI_Send(/* row to rank 1 */);
      MPI_Send(/* col to rank 1 */);

      /* receive it back for verification */
      DenseVector<double> vec1(nof_cols), vec2(nof_rows);
      MPI_Status status;
      MPI_Recv(/* vec1 from rank 1 */);
      MPI_Recv(/* vec2 from rank 1 */);

      /* verify it */
      for (auto [i, xi]: vec1) {
         if (vec1(i) != row(i)) {
            fmt::printf("verification failed for row(%d): %lg vs %lg\n",
               i, vec1(i), row(i));
         }
      }
      for (auto [i, xi]: vec2) {
         if (vec2(i) != col(i)) {
            fmt::printf("verification failed for col(%d): %lg vs %lg\n",
               i, vec2(i), col(i));
         }
      }
   } else {
      DenseVector<double> vec1(nof_cols), vec2(nof_rows);
      MPI_Status status;
      MPI_Recv(/* vec1 from rank 0 */);
      MPI_Recv(/* vec2 from rank 0 */);

      /* send it back for verification */
      MPI_Send(/* vec1 to rank 0 */);
      MPI_Send(/* vec2 to rank 0 */);
   }
   MPI_Finalize();
}
The header file hpc/mpi/fundamental.hpp mentioned above:

#ifndef HPC_MPI_FUNDAMENTAL_HPP
#define HPC_MPI_FUNDAMENTAL_HPP 1

#include <mpi.h>
#include <complex>
#include <type_traits>

namespace hpc { namespace mpi {

template<typename T> struct fundamental_type {
   static constexpr bool defined = false;
};

template<>
struct fundamental_type<char> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_CHAR; }
};
template<>
struct fundamental_type<signed char> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_SIGNED_CHAR; }
};
template<>
struct fundamental_type<unsigned char> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_UNSIGNED_CHAR; }
};
template<>
struct fundamental_type<short> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_SHORT; }
};
template<>
struct fundamental_type<unsigned short> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_UNSIGNED_SHORT; }
};
template<>
struct fundamental_type<int> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_INT; }
};
template<>
struct fundamental_type<unsigned> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_UNSIGNED; }
};
template<>
struct fundamental_type<long> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_LONG; }
};
template<>
struct fundamental_type<unsigned long> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_UNSIGNED_LONG; }
};
template<>
struct fundamental_type<long long> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_LONG_LONG; }
};
template<>
struct fundamental_type<unsigned long long> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_UNSIGNED_LONG_LONG; }
};
template<>
struct fundamental_type<float> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_FLOAT; }
};
template<>
struct fundamental_type<double> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_DOUBLE; }
};
template<>
struct fundamental_type<long double> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_LONG_DOUBLE; }
};

/* the following types are not supported by older
   MPI implementations; hence we make it dependent
   on the corresponding preprocessor symbols */
#ifdef MPI_CXX_FLOAT_COMPLEX
template<>
struct fundamental_type<std::complex<float>> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_CXX_FLOAT_COMPLEX; }
};
#endif
#ifdef MPI_CXX_DOUBLE_COMPLEX
template<>
struct fundamental_type<std::complex<double>> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_CXX_DOUBLE_COMPLEX; }
};
#endif
#ifdef MPI_CXX_LONG_DOUBLE_COMPLEX
template<>
struct fundamental_type<std::complex<long double>> {
   static constexpr bool defined = true;
   constexpr MPI_Datatype get() const { return MPI_CXX_LONG_DOUBLE_COMPLEX; }
};
#endif

template<typename T>
typename std::enable_if<
   fundamental_type<T>::defined,
   MPI_Datatype>::type
get_type(const T& value) {
   return fundamental_type<T>().get();
}

} } // namespaces mpi, hpc

#endif

(Please note that transfer_vectors does not print a message if the tests succeed: no news is good news.)