MPI functions

This page details the interface with MPI. The interface has been written so that the same functions can be called to transfer a vector of integers, doubles, complex doubles, or multiple precision numbers, as illustrated by the sketch after the list below.

GetMpiDataType returns the MPI type of the data
GetRatioMpiDataType returns the number of elementary MPI values per element to transfer
MpiIsend calls MPI_Isend (non-blocking transfer)
MpiIrecv calls MPI_Irecv (non-blocking transfer)
MpiCompleteIrecv completes a transfer after MPI_Irecv and MPI_Wait
MpiSsend calls MPI_Ssend (synchronous transfer)
MpiSend calls MPI_Send (standard blocking transfer)
MpiGather calls MPI_Gather
MpiAllreduce calls MPI_Allreduce
MpiReduce calls MPI_Reduce
MpiRecv calls MPI_Recv
MpiBcast calls MPI_Bcast
MPI_Bcast_string broadcasts a string to the other processors
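
As an illustration of this genericity, the same send/receive calls can be used whatever the type of the elements. Below is a minimal sketch (the ranks, sizes and tags are arbitrary) transferring a vector of integers and a vector of complex doubles with the same functions:

// the same MpiSend/MpiRecv calls work for any supported type
Vector<int> iv(10);
Vector<complex<double> > zv(10);
Vector<int64_t> xtmp; // only used for multiple precision numbers
int tag = 17;
if (MPI::COMM_WORLD.Get_rank() == 0)
  {
    MpiSend(MPI_COMM_WORLD, iv, xtmp, iv.GetM(), 1, tag);
    MpiSend(MPI_COMM_WORLD, zv, xtmp, zv.GetM(), 1, tag+1);
  }
else if (MPI::COMM_WORLD.Get_rank() == 1)
  {
    MpiRecv(MPI_COMM_WORLD, iv, xtmp, iv.GetM(), 0, tag);
    MpiRecv(MPI_COMM_WORLD, zv, xtmp, zv.GetM(), 0, tag+1);
  }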

GetMpiDataType

Syntax :

  MPI_Datatype GetMpiDataType(Vector<T>&)
  MPI_Datatype GetMpiDataType(T&)
 

This function returns the MPI data type associated with the class T (e.g. MPI_DOUBLE for T=double, MPI_INT for T=int). The implemented types are bool, int, float, double, long double, __float128, complex<float>, complex<double>, complex<long double> and complex<__float128>. Usually this function does not need to be called directly, since MpiSend, MpiRecv, etc. already handle any type T thanks to GetMpiDataType.

Example :

// data to send
Vector<complex<double> > x;

// this function can be used if you want to overload an MPI routine
// in MpiSend, we call it :
int target = 2, tag = 33;
MPI_Send(x.GetData(), x.GetM()*GetRatioMpiDataType(x), GetMpiDataType(x), target, tag, MPI_COMM_WORLD);

Location :

share/MpiCommunicationInline.cxx

GetRatioMpiDataType

Syntax :

  int GetRatioMpiDataType(Vector<T>&)
  int GetRatioMpiDataType(T&)
 

This function returns the number of elements of the type returned by GetMpiDataType that compose the type T. It is used to handle transfers of complex numbers: since a complex number is made of two real numbers, GetMpiDataType for T=complex<double> returns MPI_DOUBLE and GetRatioMpiDataType returns 2. Usually this function does not need to be called directly, since MpiSend, MpiRecv, etc. already handle any type T thanks to GetRatioMpiDataType.

Example :

// data to send
Vector<complex<double> > x;

// this function can be used if you want to overload an MPI routine
// sending 10 complex values is equivalent to sending 20 real values
// x.GetM() is multiplied by GetRatioMpiDataType
// in MpiSend, we call it :
int target = 2, tag = 33;
MPI_Send(x.GetData(), x.GetM()*GetRatioMpiDataType(x), GetMpiDataType(x), target, tag, MPI_COMM_WORLD);

Location :

share/MpiCommunicationInline.cxx

MpiIsend

Syntax :

  MPI_Request MpiIsend(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc, int tag);
  MPI_Request MpiIsend(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc, int tag);
 

This function sends the data x (n is the number of values to send) to the processor proc. This operation is performed by calling MPI_Isend; xtmp is needed for multiple precision numbers. The function returns an MPI request.

Example :

// data to send from proc 0 to proc 2
Vector<complex<double> > x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
MPI_Request request = MPI_REQUEST_NULL; int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  request = MpiIsend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  request = MpiIrecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

// we wait until the transfer is completed (immediate on processors with a null request)
MPI_Status status;
MPI_Wait(&request, &status);

// for multiple precision numbers MpiCompleteIrecv is mandatory
// it is useless for other types
if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiCompleteIrecv(x, xtmp, x.GetM());

Location :

share/MpiCommunication.cxx

MpiIrecv

Syntax :

  MPI_Request MpiIrecv(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc, int tag);
  MPI_Request MpiIrecv(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc, int tag);
 

This function receives the data x (n is the number of values to receive) from the processor proc. This operation is performed by calling MPI_Irecv; xtmp is needed for multiple precision numbers. The vector x must be allocated with a size large enough to store the received values. The function returns an MPI request.

Example :

// data to send from proc 0 to proc 2
Vector<complex<double> > x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
MPI_Request request = MPI_REQUEST_NULL; int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  request = MpiIsend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  request = MpiIrecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

// we wait until the transfer is completed (immediate on processors with a null request)
MPI_Status status;
MPI_Wait(&request, &status);

// for multiple precision numbers MpiCompleteIrecv is mandatory
// it is useless for other types
if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiCompleteIrecv(x, xtmp, x.GetM());
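
For a two-sided exchange, the receive can be posted before the send on each processor so that the communication cannot deadlock. A minimal sketch, assuming the communicator contains exactly two processors:

// each processor posts the receive first, then the send
Vector<double> xs(10), xr(10);
Vector<int64_t> xtmp_s, xtmp_r;
int other = 1 - MPI::COMM_WORLD.Get_rank(); int tag = 41;
MPI_Request req_recv = MpiIrecv(MPI_COMM_WORLD, xr, xtmp_r, xr.GetM(), other, tag);
MPI_Request req_send = MpiIsend(MPI_COMM_WORLD, xs, xtmp_s, xs.GetM(), other, tag);

// we wait for both transfers to complete
MPI_Status status;
MPI_Wait(&req_recv, &status);
MPI_Wait(&req_send, &status);

// mandatory only for multiple precision numbers
MpiCompleteIrecv(xr, xtmp_r, xr.GetM());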

Location :

share/MpiCommunication.cxx

MpiCompleteIrecv

Syntax :

  void MpiCompleteIrecv(T* x, Vector<int64_t>& xtmp, int n);
  void MpiCompleteIrecv(Vector<T>& x, Vector<int64_t>& xtmp, int n);
 

This function completes a receiving operation initiated with MpiIrecv. n is the number of received values and x the received data; xtmp is needed for multiple precision numbers. This function is actually only required for transfers of multiple precision numbers.

Example :

// data to send from proc 0 to proc 2
Vector<Complex_wp> x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
MPI_Request request = MPI_REQUEST_NULL; int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  request = MpiIsend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  request = MpiIrecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

// we wait until the transfer is completed (immediate on processors with a null request)
MPI_Status status;
MPI_Wait(&request, &status);

// for multiple precision numbers MpiCompleteIrecv is mandatory
// it is useless for other types
if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiCompleteIrecv(x, xtmp, x.GetM());

Location :

share/MpiCommunication.cxx

MpiSsend

Syntax :

  void MpiSsend(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc, int tag);
  void MpiSsend(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc, int tag);
 

This function sends the data x (n is the number of values to send) to the processor proc. This operation is performed by calling MPI_Ssend; xtmp is needed for multiple precision numbers.

Example :

// data to send from proc 0 to proc 2
Vector<complex<double> > x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  MpiSsend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiRecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

Location :

share/MpiCommunication.cxx

MpiSend

Syntax :

  void MpiSend(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc, int tag);
  void MpiSend(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc, int tag);
 

This function sends the data x (n is the number of values to send) to the processor proc. This operation is performed by calling MPI_Send; xtmp is needed for multiple precision numbers.

Example :

// data to send from proc 0 to proc 2
Vector<complex<double> > x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  MpiSend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiRecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

Location :

share/MpiCommunication.cxx

MpiRecv

Syntax :

  void MpiRecv(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc, int tag, MPI_Status&);
  void MpiRecv(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc, int tag, MPI_Status&);
 

This function receives the data x (n is the number of values to receive) from the processor proc. This operation is performed by calling MPI_Recv; xtmp is needed for multiple precision numbers. x must be allocated with a size large enough to contain the received values.

Example :

// data to send from proc 0 to proc 2
Vector<complex<double> > x(10);

// we want to send x to processor 2
Vector<int64_t> xtmp;
int tag = 23;
if (MPI::COMM_WORLD.Get_rank() == 0)
  MpiSsend(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2, tag);
else if (MPI::COMM_WORLD.Get_rank() == 2)
  MpiRecv(MPI_COMM_WORLD, x, xtmp, x.GetM(), 0, tag);

Location :

share/MpiCommunication.cxx

MpiGather

Syntax :

  void MpiGather(MPI_Comm, T* x, Vector<int64_t>& xtmp, T* y, int n, int proc);
  void MpiGather(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, Vector<T>& y, int n, int proc);
 

This function gathers the data contained in x (n is the number of values of x to be sent); the result is placed in y on the processor proc. This operation is performed by calling MPI_Gather; xtmp is needed for multiple precision numbers. y must be allocated on the processor proc with a size large enough to contain the received values.

Example :

// data to gather
Vector<complex<double> > x(10);
Vector<complex<double> > y;
if (MPI::COMM_WORLD.Get_rank() == 2)
  y.Reallocate(10*MPI::COMM_WORLD.Get_size());

// we want to gather the vectors x (for each processor) in the vector y for processor 2
Vector<int64_t> xtmp;
MpiGather(MPI_COMM_WORLD, x, xtmp, y, x.GetM(), 2);

// you can display the result y = (x0, x1, x2, ..., x_{N-1}) where N is the number of processors in the communicator
if (MPI::COMM_WORLD.Get_rank() == 2)
  DISP(y);
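
The pointer overload can also be used to gather a single value from each processor; a minimal sketch:

// gather one integer per processor into all_val on processor 0
int my_val = MPI::COMM_WORLD.Get_rank();
Vector<int> all_val(MPI::COMM_WORLD.Get_size());
Vector<int64_t> xtmp2;
MpiGather(MPI_COMM_WORLD, &my_val, xtmp2, all_val.GetData(), 1, 0);

// all_val = (0, 1, 2, ..., N-1) on processor 0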

Location :

share/MpiCommunication.cxx

MpiReduce

Syntax :

  void MpiReduce(MPI_Comm, T* x, Vector<int64_t>& xtmp, T* y, int n, MPI_Op, int proc);
  void MpiReduce(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, Vector<T>& y, int n, MPI_Op, int proc);
 

This function reduces the data contained in x (n is the number of values of x to be reduced); the result is placed in y on the processor proc. This operation is performed by calling MPI_Reduce; xtmp is needed for multiple precision numbers. y must be allocated on the processor proc with a size large enough to contain the result.

Example :

// data to reduce
Vector<complex<double> > x(10);
Vector<complex<double> > y;
if (MPI::COMM_WORLD.Get_rank() == 2)
  y.Reallocate(10);

// we want to reduce the vectors x (for each processor) in the vector y for processor 2
Vector<int64_t> xtmp;
MpiReduce(MPI_COMM_WORLD, x, xtmp, y, x.GetM(), MPI_SUM, 2);

// you can display the result y = x0 + x1 + x2 + ... + x_{N-1}
// where N is the number of processors in the communicator
// here we have a sum since MPI_SUM has been selected as the reduction operator
if (MPI::COMM_WORLD.Get_rank() == 2)
  DISP(y);

Location :

share/MpiCommunication.cxx

MpiAllreduce

Syntax :

  void MpiAllreduce(MPI_Comm, T* x, Vector<int64_t>& xtmp, T* y, int n, MPI_Op);
  void MpiAllreduce(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, Vector<T>& y, int n, MPI_Op);
 

This function reduces the data contained in x (n is the number of values of x to be reduced) and broadcasts the result in y to all the processors of the communicator. This operation is performed by calling MPI_Allreduce; xtmp is needed for multiple precision numbers. y must be allocated with a size large enough to contain the result.

Example :

// data to reduce
Vector<complex<double> > x(10);
Vector<complex<double> > y(10);

// we want to reduce the vectors x (for each processor) in the vector y
Vector<int64_t> xtmp;
MpiAllreduce(MPI_COMM_WORLD, x, xtmp, y, x.GetM(), MPI_SUM);

// you can display the result y = x0 + x1 + x2 + ... + x_{N-1}
// where N is the number of processors in the communicator
// here we have a sum since MPI_SUM has been selected as the reduction operator
DISP(y);
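
A typical use of MpiAllreduce is the assembly of a global dot product from local contributions; a minimal sketch using the pointer overload (the local vectors u and v are arbitrary here):

// local contribution of this processor to the dot product
Vector<double> u(5), v(5);
u.FillRand(); v.FillRand();
double local_dot = 0.0;
for (int i = 0; i < u.GetM(); i++)
  local_dot += u(i)*v(i);

// sum of the local contributions, available on all the processors
double global_dot = 0.0;
Vector<int64_t> xtmp2;
MpiAllreduce(MPI_COMM_WORLD, &local_dot, xtmp2, &global_dot, 1, MPI_SUM);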

Location :

share/MpiCommunication.cxx

MpiBcast

Syntax :

  void MpiBcast(MPI_Comm, T* x, Vector<int64_t>& xtmp, int n, int proc);
  void MpiBcast(MPI_Comm, Vector<T>& x, Vector<int64_t>& xtmp, int n, int proc);
 

This function broadcasts the data contained in x (n is the number of values of x to be broadcast) from the processor proc. This operation is performed by calling MPI_Bcast; xtmp is needed for multiple precision numbers. x must be allocated on all the processors of the communicator with a size large enough to contain the received values.

Example :

// data to broadcast
Vector<complex<double> > x(10);
if (MPI::COMM_WORLD.Get_rank() == 2)
  x.FillRand();

// we want to broadcast the vector x to all the processors
Vector<int64_t> xtmp;
MpiBcast(MPI_COMM_WORLD, x, xtmp, x.GetM(), 2);
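
The pointer overload can broadcast a single value, e.g. a size computed on the root processor; a minimal sketch:

// broadcast one integer from processor 2 to all the processors
int nb = 0;
if (MPI::COMM_WORLD.Get_rank() == 2)
  nb = 100;

Vector<int64_t> xtmp2;
MpiBcast(MPI_COMM_WORLD, &nb, xtmp2, 1, 2);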

Location :

share/MpiCommunication.cxx

MPI_Bcast_string

Syntax :

  void MPI_Bcast_string(string& s, int root, const MPI_Comm& comm);
 

This function broadcasts the string s from the root processor to all processors of the communicator comm.

Example :

string s;
// s is constructed only on a given processor
if (MPI::COMM_WORLD.Get_rank() == 0)
 s = "toto";

// then s can be broadcasted to other processors of the communicator
MPI_Bcast_string(s, 0, MPI_COMM_WORLD);

Location :

CommonMontjoie.cxx