Every C/C++ MPI program must include the MPI header file (which contains the MPI function type declarations)
#include "mpi.h" |
int MPI_Xxxxx( param1, .... );
Every MPI function returns an integer ERROR CODE (MPI_SUCCESS when the call succeeds)
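For example (a minimal sketch, not part of the original notes): checking the error code returned by MPI_Init(). By default MPI aborts the program on errors, so explicit checks like this mainly matter if you change the error handler, but it shows that every call returns an int that can be compared against MPI_SUCCESS:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
   int rc = MPI_Init(&argc, &argv);    // every MPI function returns an int error code
   if (rc != MPI_SUCCESS)              // MPI_SUCCESS means the call worked
   {
      fprintf(stderr, "MPI_Init failed with error code %d\n", rc);
      return 1;
   }
   MPI_Finalize();
   return 0;
}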
#include "mpi.h" int main( int argc, char *argv[] ) { MPI_Init( &argc, &argv ); ..... ..... MPI_Finalize(); } |
MPI_Init( &argc, &argv ); |
should be the first statement of the program; the MPI standard only requires that it is called before any other MPI function.
In other words: you can write ordinary C/C++ statements before it, but to make sure your MPI program will run under future MPI implementations, it's a good idea to follow the standard and call MPI_Init() as early as possible....
#include "mpi.h" #include <stdio.h> int main(int argc, char **argv) { MPI_Init( &argc, & argv ); printf("Hello World !\n"); MPI_Finalize(); } |
mpicc -o Hello Hello.c
mpirun -np 4 ./Hello
Output:
$ mpirun -np 4 ./Hello
Hello World !
Hello World !
Hello World !
Hello World !
The message "Hello World !" is printed 4 times, because the program is run on 4 processors (using the option -np 4)
A Communicator Group is a set of MPI processes that can exchange messages with one another. The predefined communicator MPI_COMM_WORLD contains all the processes started by mpirun.
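As a minimal sketch (assuming the default MPI_COMM_WORLD communicator, and using the two query functions described in the table below), each process can ask for the size of its group and for its own rank inside it:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
   int size, rank;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes are in the group
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // my own id within the group: 0 .. size-1
   printf("I am process %d of %d\n", rank, size);
   MPI_Finalize();
   return 0;
}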
Function Name | Usage |
---|---|
MPI_Init(int *argc, char ***argv) | Initializes the MPI system - must be called before any other MPI function |
MPI_Finalize() | Terminates the MPI system |
MPI_Comm_size(MPI_Comm comm, int *size) | Determines the size of the group associated with the communicator comm. The size is returned in the integer variable size |
MPI_Comm_rank(MPI_Comm comm, int *rank) | Determines the rank of the calling process in the communicator comm. The rank is returned in the integer variable rank |
MPI_Send(void *buff, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm) | Sends a point-to-point message to process dest in the communicator comm. The message is stored at memory location buff and consists of count items of datatype type. The message is tagged with the tag value tag. MPI_Send() returns only when the send buffer buff can safely be reused, i.e., when the message has been copied out by the MPI system or received by the destination |
MPI_Recv(void *buff, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Status *status) | Receives a point-to-point message. The message MUST BE from the process source in the communicator comm AND the message MUST BE tagged with the tag value tag. The received message is stored at memory location buff, which must have space for count items of datatype type. Information about the received message (e.g., its actual source status.MPI_SOURCE and tag status.MPI_TAG) is returned in status. In most cases you know the structure of the data received and can ignore this information; if you pass MPI_STATUS_IGNORE as the status parameter, MPI will not fill in a status. MPI_Recv() returns only when the desired message (from source with tag tag) has been received - or it exits with an error code |
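To illustrate the status parameter (a sketch, not from the original notes; run it with at least 2 processes): process 0 below accepts a message from any source with any tag, then inspects the MPI_Status fields to see who actually sent it, which tag it carried, and how many items arrived:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
   int rank, value, count;
   MPI_Status stat;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if (rank == 0)
   {  // Accept a message from ANY sender with ANY tag...
      MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
      MPI_Get_count(&stat, MPI_INT, &count);   // ...then ask the status what actually arrived
      printf("Got %d int(s) from process %d with tag %d\n", count, stat.MPI_SOURCE, stat.MPI_TAG);
   }
   else if (rank == 1)
   {  // One sender is enough for the illustration; other ranks do nothing
      value = 42;
      MPI_Send(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD);
   }

   MPI_Finalize();
   return 0;
}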
Well, printing "Hello World !" is not very illustrative in MPI...
We will make the processes send "hello" messages to each other...
#include "mpi.h" #include <stdio.h> #include <string.h> int main(int argc, char **argv) { char idstr[32]; char buff[128]; int numprocs; int myid; int i; MPI_Status stat; /********************************************* Begin program *********************************************/ MPI_Init(&argc,&argv); // Initialize MPI_Comm_size(MPI_COMM_WORLD, &numprocs); // Get # processors MPI_Comm_rank(MPI_COMM_WORLD, &myid); // Get my rank (id) if( myid == 0 ) { // Master printf("WE have %d processors\n", numprocs); for( i = 1; i < numprocs; i++) { sprintf(buff, "Hello %d", i); MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD); } for( i = 1; i < numprocs; i++) { MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat); printf("%s\n", buff); } } else { // Slave MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat); sprintf(idstr, "Hello 0, Processor %d is present and accounted for !", myid); MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD); } MPI_Finalize(); } |
To compile the program (as before):
mpicc -o <program> <program>.c
To run the program on 4 processors:
mpirun -np 4 ./<program>
Each computer (node) in the MPI cluster has a unique identifier.
The user interacts with the MPI console process (which always runs on MPI node 0); it reads the user's commands and executes them.
E.g., here is how node 0 sends a message to node 2:
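As a minimal sketch of the idea (assuming the program is started with at least 3 processes), node 0 sends one integer to node 2:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
   int myid, data = 123;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   if (myid == 0)
   {  // node 0 sends one int, tagged 0, to node 2
      MPI_Send(&data, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
   }
   else if (myid == 2)
   {  // node 2 receives it from node 0
      MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      printf("Node 2 received %d from node 0\n", data);
   }

   MPI_Finalize();
   return 0;
}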
Note:
mpirun -np 4 ./Hello
The SAME program (./Hello) will be run on 4 different MPI processors (because of the option -np 4)