3.1.8.4.7.3. Hybrid parallel
The following is an example of compiling and running a C program as a multi-node, hybrid-parallel (MPI + OpenMP) job.
- Prepare the source program. A sample program is provided as /home/system/sample/C/mpi/clang_sample_hybrid.c.
 1  #include <stdio.h>
 2  #include "mpi.h"
 3  #define SIZE 9000
 4
 5  int main(int argc, char *argv[])
 6  {
 7      int rank, size, root;
 8      int data, result;
 9      int i, j;
10      double a[SIZE][SIZE], b[SIZE][SIZE], c[SIZE][SIZE];
11
12      result = 0;
13
14      MPI_Init(&argc, &argv);
15      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
16      MPI_Comm_size(MPI_COMM_WORLD, &size);
17
18      for (i = 0; i < SIZE; i++) {
19          for (j = 0; j < SIZE; j++) {
20              a[i][j] = (double)(i+j*0.5);
21              b[i][j] = (double)(i+j/(rank+1));
22              c[i][j] = a[i][j] + b[i][j];
23          }
24      }
25
26      data = c[1][1]/(rank+1);
27
28      if (rank == 0) {
29          fprintf(stdout, "MPI communication start. size=%d\n", size);
30          fflush(stdout);
31      }
32
33      root = 0;
34      MPI_Reduce(&data, &result, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
35
36      if (rank == 0) {
37          fprintf(stdout, "MPI communication end\n");
38          fprintf(stdout, "result(%d)\n", result);
39          fflush(stdout);
40      }
41
42      MPI_Finalize();
43      return 0;
44  }
- Compile the sample program.

[_LNlogin]$ mpifccpx -Nclang -Ofast -fopenmp -Rpass=.* -ffj-lst=t -o sample_mpi clang_sample_hybrid.c
clang_sample_hybrid.c:21:25: remark: sinking zext [-Rpass=licm]
        a[i][j] = (double)(i+j*0.5);
                        ^
clang_sample_hybrid.c:21:25: remark: sinking zext [-Rpass=licm]
clang_sample_hybrid.c:21:44: remark: hoisting sitofp [-Rpass=licm]
        a[i][j] = (double)(i+j*0.5);
                        ^
(omitted)
- Prepare the job script. A sample job script is provided as /home/system/sample/C/mpi/clang_job_mpi.sh.
#!/bin/sh
#PJM -L "node=2"
#PJM -L "rscgrp=small"
#PJM -L "elapse=10:00"
#PJM --mpi max-proc-per-node=4
#PJM -x PJM_LLIO_GFSCACHE=/vol000N
#PJM -g groupname
#PJM -s

# execute job
export OMP_NUM_THREADS=12
mpiexec -n 8 ./sample_mpi
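The resource requests in the script combine as follows: 2 nodes with 4 processes per node gives the 8 ranks passed to mpiexec -n 8, and 4 processes of 12 threads each occupy 48 cores on every node (a 48-core compute node is an assumption here, not stated in the script). The arithmetic can be sketched as:

```shell
# Parallelism implied by the job script above.
nodes=2                # #PJM -L "node=2"
procs_per_node=4       # #PJM --mpi max-proc-per-node=4
threads_per_proc=12    # export OMP_NUM_THREADS=12

total_ranks=$((nodes * procs_per_node))
cores_per_node=$((procs_per_node * threads_per_proc))

echo "mpiexec -n ${total_ranks}"              # matches mpiexec -n 8
echo "cores used per node: ${cores_per_node}" # 48
```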
- Submit the job with the pjsub command.

[_LNlogin]$ pjsub clang_job_mpi.sh
[INFO] PJM 0000 pjsub Job 26 submitted.
- Check the execution result. The standard output is written to a file named "job name.job ID.out".
[_LNlogin]$ cat clang_job_mpi.sh.26.out
MPI communication start. size=8
MPI communication end
result(4)