INTRO_SHMEM(3) | Open MPI | INTRO_SHMEM(3)
intro_shmem - Introduction to the OpenSHMEM programming model
The SHMEM programming model consists of library routines that provide low-latency, high-bandwidth communication for use in highly parallelized scalable programs. The routines in the OpenSHMEM application programming interface (API) provide a programming model for exchanging data between cooperating parallel processes. The resulting programs are similar in style to Message Passing Interface (MPI) programs. The SHMEM API can be used either alone or in combination with MPI routines in the same parallel program.
An OpenSHMEM program is SPMD (single program, multiple data) in style. The SHMEM processes, called processing elements or PEs, all start at the same time and they all run the same program. Usually the PEs perform computation on their own subdomains of the larger problem and periodically communicate with other PEs to exchange information on which the next computation phase depends.
The OpenSHMEM routines minimize the overhead associated with data transfer requests, maximize bandwidth and minimize data latency. Data latency is the period of time that starts when a PE initiates a transfer of data and ends when a PE can use the data. OpenSHMEM routines support remote data transfer through put operations, which transfer data to a different PE, get operations, which transfer data from a different PE, and remote pointers, which allow direct references to data objects owned by another PE. Other operations supported are collective broadcast and reduction, barrier synchronization, and atomic memory operations. An atomic memory operation is an atomic read-and-update operation, such as a fetch-and-increment, on a remote or local data object.
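For example, an atomic fetch-and-add can hand out a unique value to every PE. The following minimal C sketch (assuming the OpenSHMEM 1.3 routines shmem_init, shmem_long_fadd, and shmem_finalize; the variable names are illustrative) has every PE atomically increment a counter that lives on PE 0:

#include <stdio.h>
#include <shmem.h>

long counter = 0;   /* global variable, so symmetric on every PE */

int main(void)
{
    shmem_init();

    /* atomically add 1 to the counter stored on PE 0; the old
       value comes back as this PE's unique ticket */
    long ticket = shmem_long_fadd(&counter, 1, 0);
    printf("PE %d drew ticket %ld\n", shmem_my_pe(), ticket);

    shmem_barrier_all();
    shmem_finalize();
    return 0;
}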
This section lists the significant OpenSHMEM message-passing routines:

- Data transfer: put routines such as shmem_put(3) and shmem_iput(3), and get routines such as shmem_get(3) and shmem_iget(3).
- Atomic memory operations: shmem_add(3), shmem_cswap(3), shmem_fadd(3), shmem_finc(3), shmem_inc(3), and shmem_swap(3).
- Collective operations: shmem_broadcast(3), shmem_collect(3), and the reduction routines shmem_and(3), shmem_max(3), shmem_min(3), shmem_or(3), shmem_prod(3), shmem_sum(3), and shmem_xor(3).
- Synchronization and ordering: shmem_barrier(3), shmem_barrier_all(3), shmem_fence(3), shmem_quiet(3), and shmem_wait(3).
- Accessibility queries: shmem_pe_accessible(3) and shmem_addr_accessible(3).
Consistent with the SPMD nature of the OpenSHMEM programming model is the concept of symmetric data objects. These are arrays or variables that exist with the same size, type, and relative address on all PEs. Another term for symmetric data objects is "remotely accessible data objects". In the interface definitions for OpenSHMEM data transfer routines, one or more of the parameters are typically required to be symmetric or remotely accessible.
The following kinds of data objects are symmetric:

- Fortran data objects in common blocks or with the SAVE attribute. These data objects must not be defined in a dynamic shared object (DSO).
- Non-stack C and C++ variables. These data objects must not be defined in a DSO.
- Fortran arrays allocated with shpalloc(3F).
- C and C++ data allocated by shmem_malloc(3C).
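For instance, a minimal C sketch of which declarations produce remotely accessible objects (the variable names are illustrative):

#include <shmem.h>

long counter;             /* global variable: symmetric */
static long table[100];   /* static variable: symmetric */

int main(void)
{
    long local = 0;       /* stack variable: NOT remotely accessible */

    shmem_init();

    /* symmetric heap storage has the same relative address on all PEs */
    long *heap = (long *) shmem_malloc(100 * sizeof(long));

    /* data transfer routines may target counter, table, or heap on a
       remote PE, but never local */

    shmem_free(heap);
    shmem_finalize();
    return 0;
}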
OpenSHMEM collective routines use symmetric work arrays, conventionally named pSync, to synchronize the PEs in the active set. Multiple pSync arrays are often needed if a particular PE calls an OpenSHMEM collective routine twice without intervening barrier synchronization. Problems would occur if some PEs in the active set for call 2 arrive at call 2 before processing of call 1 is complete by all PEs in the call 1 active set. You can use shmem_barrier(3) or shmem_barrier_all(3) to perform a barrier synchronization between consecutive calls to OpenSHMEM collective routines.
There are two special cases:

- The shmem_barrier(3) routine allows the same pSync array to be used on consecutive calls as long as the active PE set does not change.
- If the same collective routine is called multiple times with the same active set, the calls may alternate between two pSync arrays. The routines guarantee that a first call is completely finished by all PEs by the time processing of a third call begins on any PE.
Because the SHMEM routines restore pSync to its original contents, multiple calls that use the same pSync array do not require that pSync be reinitialized after the first call.
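For instance, a minimal C sketch (assuming the OpenSHMEM 1.3 constants SHMEM_BCAST_SYNC_SIZE and SHMEM_SYNC_VALUE) that initializes a pSync array and places a barrier between two consecutive broadcasts so the same array can be reused:

#include <shmem.h>

long pSync[SHMEM_BCAST_SYNC_SIZE];   /* work array: must be symmetric */
long source[4], dest[4];             /* global, hence symmetric */

int main(void)
{
    shmem_init();

    for (int i = 0; i < SHMEM_BCAST_SYNC_SIZE; i++)
        pSync[i] = SHMEM_SYNC_VALUE;
    shmem_barrier_all();   /* pSync is now initialized on every PE */

    /* broadcast 4 longs from PE 0 to the active set of all PEs */
    shmem_broadcast64(dest, source, 4, 0, 0, 0, shmem_n_pes(), pSync);

    shmem_barrier_all();   /* barrier before reusing the same pSync */
    shmem_broadcast64(dest, source, 4, 0, 0, 0, shmem_n_pes(), pSync);

    shmem_finalize();
    return 0;
}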
This section lists the significant SHMEM environment variables.

SMA_VERSION - print the library version at start-up.

SMA_INFO - print helpful text about all these environment variables.

SMA_SYMMETRIC_SIZE - number of bytes to allocate for the symmetric heap.

SMA_DEBUG - enable debugging messages.
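For example, assuming the launcher forwards environment variables to the PEs, the symmetric heap could be enlarged at launch like this (the byte count is arbitrary):

SMA_SYMMETRIC_SIZE=1073741824 oshrun -np 4 ./a.out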
The first call to SHMEM must be start_pes(3); this routine initializes the SHMEM runtime. (Newer OpenSHMEM revisions name the initializer shmem_init(3), as in Example 2 below.) Calling any other SHMEM routine beforehand has undefined behavior, and multiple calls to the initialization routine are not allowed.
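A minimal C sketch of the required call order, using the legacy start_pes(3) interface:

#include <stdio.h>
#include <shmem.h>

int main(void)
{
    start_pes(0);   /* must precede every other SHMEM call */

    /* only after initialization may a PE query its identity */
    printf("Hello from PE %d of %d\n", shmem_my_pe(), shmem_n_pes());

    return 0;
}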
The OpenSHMEM specification is silent regarding how OpenSHMEM programs are compiled, linked and run. This section shows some examples of how wrapper programs could be utilized to compile and launch applications. The commands are styled after wrapper programs found in many MPI implementations.
The following sample command lines demonstrate compiling OpenSHMEM programs using wrapper compilers (oshcc for C and oshfort for Fortran in this case):
oshcc c_program.c
oshfort fortran_program.f
The following sample command line demonstrates running an OpenSHMEM program, assuming the library provides a wrapper script for that purpose (named oshrun in this example):
oshrun -np 32 ./a.out
Example 1: The following Fortran OpenSHMEM program directs all PEs to simultaneously sum the values of the VALUES variable across all PEs:
      PROGRAM REDUCTION
      REAL VALUES, SUM
      COMMON /C/ VALUES
      REAL WORK

      CALL START_PES(0)
      VALUES = MY_PE()
      CALL SHMEM_BARRIER_ALL            ! Synchronize all PEs
      SUM = 0.0
      DO I = 0, NUM_PES()-1
         CALL SHMEM_REAL_GET(WORK, VALUES, 1, I)   ! Get next value
         SUM = SUM + WORK                          ! Sum it
      ENDDO
      PRINT *, 'PE ', MY_PE(), ' COMPUTED SUM=', SUM
      CALL SHMEM_BARRIER_ALL
      END
Example 2: The following C OpenSHMEM program transfers an array of 10 longs from PE 0 to PE 1:
#include <stdio.h>
#include <shmem.h>

int main(void)
{
    long source[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    static long target[10];   /* static, so remotely accessible */

    shmem_init();

    if (shmem_my_pe() == 0) {
        /* put 10 elements into target on PE 1 */
        shmem_long_put(target, source, 10, 1);
    }

    shmem_barrier_all();   /* sync sender and receiver */

    if (shmem_my_pe() == 1)
        printf("target[0] on PE %d is %ld\n", shmem_my_pe(), target[0]);
    return 0;
}
The following man pages also contain information on OpenSHMEM routines. See the specific man pages for implementation information.
shmem_add(3), shmem_and(3), shmem_barrier(3), shmem_barrier_all(3), shmem_broadcast(3), shmem_cache(3), shmem_collect(3), shmem_cswap(3), shmem_fadd(3), shmem_fence(3), shmem_finc(3), shmem_get(3), shmem_iget(3), shmem_inc(3), shmem_iput(3), shmem_lock(3), shmem_max(3), shmem_min(3), shmem_my_pe(3), shmem_or(3), shmem_prod(3), shmem_put(3), shmem_quiet(3), shmem_short_g(3), shmem_short_p(3), shmem_sum(3), shmem_swap(3), shmem_wait(3), shmem_xor(3), shmem_pe_accessible(3), shmem_addr_accessible(3), shmem_init(3), shmem_malloc(3C), shmem_my_pe(3I), shmem_n_pes(3I)
October 29, 2018 | 3.1.3