MPI4PY(1)                           MPI for Python                           MPI4PY(1)
mpi4py - MPI for Python
This document describes the MPI for Python package. MPI for Python provides Python bindings for the Message Passing Interface (MPI) standard, allowing Python applications to exploit multiple processors on workstations, clusters and supercomputers.
This package builds on the MPI specification and provides an object oriented interface resembling the MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communication of any picklable Python object, as well as efficient communication of Python objects exposing the Python buffer interface (e.g. NumPy arrays and builtin bytes/array/memoryview objects).
Over recent years, high performance computing has become an affordable resource for many more researchers in the scientific community than ever before. The conjunction of quality open source software and commodity hardware strongly influenced the now widespread popularity of Beowulf class clusters and clusters of workstations.
Among many parallel computational models, message-passing has proven to be an effective one. This paradigm is especially suited for (but not limited to) distributed memory architectures and is used in today’s most demanding scientific and engineering applications related to modeling, simulation, design, and signal processing. However, portable message-passing parallel programming used to be a nightmare in the past because of the many incompatible options developers were faced with. Fortunately, this situation definitely changed after the MPI Forum released its standard specification.
High performance computing is traditionally associated with software development using compiled languages. However, in typical application programs, only a small part of the code is time-critical enough to require the efficiency of compiled languages. The rest of the code is generally related to memory management, error handling, input/output, and user interaction, and those are usually the most error prone and time-consuming lines of code to write and debug in the whole development process. Interpreted high-level languages can be really advantageous for these kinds of tasks.
For implementing general-purpose numerical computations, MATLAB [1] is the dominant interpreted programming language. On the open source side, Octave and Scilab are well known, freely distributed software packages providing compatibility with the MATLAB language. In this work, we present MPI for Python, a new package enabling applications to exploit multiple processors using the standard MPI “look and feel” in Python scripts.
MPI, [mpi-using] [mpi-ref] the Message Passing Interface, is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++).
Since its release, the MPI specification [mpi-std1] [mpi-std2] has become the leading standard for message-passing libraries for parallel computers. Implementations are available from vendors of high-performance computers and from well known open source projects like MPICH [mpi-mpich] and Open MPI [mpi-openmpi].
Python is a modern, easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming with dynamic typing and dynamic binding. It supports modules and packages, which encourages program modularity and code reuse. Python’s elegant syntax, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed. It is easily extended with new functions and data types implemented in C or C++. Python is also suitable as an extension language for customizable applications.
Python is an ideal candidate for writing the higher-level parts of large-scale scientific applications [Hinsen97] and driving simulations in parallel architectures [Beazley97] like clusters of PCs or SMPs. Python codes are quickly developed, easily maintained, and can achieve a high degree of integration with other libraries written in compiled languages.
As this work started and evolved, some ideas were borrowed from well known MPI and Python related open source projects from the Internet.
Additionally, we would like to mention some available tools for scientific computing and software development with Python.
MPI for Python provides an object oriented approach to message passing which is grounded in the standard MPI-2 C++ bindings. The interface was designed with a focus on translating the syntax and semantics of the standard MPI-2 C++ bindings to Python. Any user of the standard C/C++ MPI bindings should be able to use this module without needing to learn a new interface.
The Python standard library supports different mechanisms for data persistence. Many of them rely on disk storage, but pickling and marshaling can also work with memory buffers.
The pickle module provides user-extensible facilities to serialize general Python objects using ASCII or binary formats. The marshal module provides facilities to serialize built-in Python objects using a binary format specific to Python, but independent of machine architecture issues.
MPI for Python can communicate any built-in or user-defined Python object taking advantage of the features provided by the pickle module. These facilities will be routinely used to build binary representations of objects to communicate (at sending processes), and restoring them back (at receiving processes).
Although simple and general, the serialization approach (i.e., pickling and unpickling) previously discussed imposes significant overheads in both memory and processor usage, especially in the scenario of objects with large memory footprints being communicated. Pickling general Python objects, ranging from primitive or container built-in types to user-defined classes, necessarily requires computer resources. Processing is also needed for dispatching the appropriate serialization method (which depends on the type of the object) and doing the actual packing. Additional memory is always needed, and if its total amount is not known a priori, many reallocations can occur. Indeed, in the case of large numeric arrays, this is certainly unacceptable, and it precludes communication of objects occupying half or more of the available memory resources.
MPI for Python supports direct communication of any object exporting the single-segment buffer interface. This interface is a standard Python mechanism provided by some types (e.g., strings and numeric arrays), allowing access in the C side to a contiguous memory buffer (i.e., address and length) containing the relevant data. This feature, in conjunction with the capability of constructing user-defined MPI datatypes describing complicated memory layouts, enables the implementation of many algorithms involving multidimensional numeric arrays (e.g., image processing, fast Fourier transforms, finite difference schemes on structured Cartesian grids) directly in Python, with negligible overhead, and almost as fast as compiled Fortran, C, or C++ codes.
In MPI for Python, Comm is the base class of communicators. The Intracomm and Intercomm classes are subclasses of the Comm class. The Comm.Is_inter method (and Comm.Is_intra, provided for convenience but not part of the MPI specification) is defined for communicator objects and can be used to determine the particular communicator class.
Two predefined intracommunicator instances are available: COMM_SELF and COMM_WORLD. From them, new communicators can be created as needed.
The number of processes in a communicator and the calling process rank can be respectively obtained with the Comm.Get_size and Comm.Get_rank methods. The associated process group can be retrieved from a communicator by calling the Comm.Get_group method, which returns an instance of the Group class. Set operations with Group objects like Group.Union, Group.Intersection and Group.Difference are fully supported, as well as the creation of new communicators from these groups using Comm.Create and Comm.Create_group.
New communicator instances can be obtained with the Comm.Clone, Comm.Dup and Comm.Split methods, as well as with the Intracomm.Create_intercomm and Intercomm.Merge methods.
Virtual topologies (Cartcomm, Graphcomm and Distgraphcomm classes, which are specializations of the Intracomm class) are fully supported. New instances can be obtained from intracommunicator instances with factory methods Intracomm.Create_cart and Intracomm.Create_graph.
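For instance, a minimal sketch of creating a two-dimensional, fully periodic Cartesian topology (the use of the Compute_dims helper to pick a balanced process grid is just one convenient choice):

from mpi4py import MPI

comm = MPI.COMM_WORLD
# ask MPI for a balanced 2D process grid over all available processes
dims = MPI.Compute_dims(comm.Get_size(), 2)
# create a Cartesian topology, periodic in both directions
cart = comm.Create_cart(dims, periods=[True, True], reorder=False)
coords = cart.Get_coords(cart.Get_rank())
# ranks of the neighbor processes along the first axis
left, right = cart.Shift(direction=0, disp=1)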
Point to point communication is a fundamental capability of message passing systems. This mechanism enables the transmission of data between a pair of processes, one side sending, the other receiving.
MPI provides a set of send and receive functions allowing the communication of typed data with an associated tag. The type information enables the conversion of data representation from one architecture to another in the case of heterogeneous computing environments; additionally, it allows the representation of non-contiguous data layouts and user-defined datatypes, thus avoiding the overhead of (otherwise unavoidable) packing/unpacking operations. The tag information allows selectivity of messages at the receiving end.
MPI provides basic send and receive functions that are blocking. These functions block the caller until the data buffers involved in the communication can be safely reused by the application program.
In MPI for Python, the Comm.Send, Comm.Recv and Comm.Sendrecv methods of communicator objects provide support for blocking point-to-point communications within Intracomm and Intercomm instances. These methods can communicate memory buffers. The variants Comm.send, Comm.recv and Comm.sendrecv can communicate general Python objects.
On many systems, performance can be significantly increased by overlapping communication and computation. This is particularly true on systems where communication can be executed autonomously by an intelligent, dedicated communication controller.
MPI provides nonblocking send and receive functions. They allow the possible overlap of communication and computation. Nonblocking communication always comes in two parts: posting functions, which begin the requested operation; and test-for-completion functions, which allow the user to discover whether the requested operation has completed.
In MPI for Python, the Comm.Isend and Comm.Irecv methods initiate send and receive operations, respectively. These methods return a Request instance, uniquely identifying the started operation. Its completion can be managed using the Request.Test, Request.Wait and Request.Cancel methods. The management of Request objects and associated memory buffers involved in communication requires a careful, rather low-level coordination. Users must ensure that objects exposing their memory buffers are not accessed at the Python level while they are involved in nonblocking message-passing operations.
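As a minimal sketch (assuming a run with at least two processes; the buffer size and tag are illustrative), overlapping communication with computation could look like this:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.arange(10, dtype='d') if rank == 0 else np.empty(10, dtype='d')
if rank == 0:
    req = comm.Isend([buf, MPI.DOUBLE], dest=1, tag=5)
elif rank == 1:
    req = comm.Irecv([buf, MPI.DOUBLE], source=0, tag=5)
# ... unrelated computation may proceed here, but buf must not be touched ...
if rank in (0, 1):
    req.Wait()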
Often a communication with the same argument list is repeatedly executed within an inner loop. In such cases, communication can be further optimized by using persistent communication, a particular case of nonblocking communication allowing the reduction of the overhead between processes and communication controllers. Furthermore, this kind of optimization can also alleviate the extra call overheads associated with interpreted, dynamic languages like Python.
In MPI for Python, the Comm.Send_init and Comm.Recv_init methods create persistent requests for a send and receive operation, respectively. These methods return an instance of the Prequest class, a subclass of the Request class. The actual communication can be effectively started using the Prequest.Start method, and its completion can be managed as previously described.
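A minimal sketch of reusing persistent requests inside an inner loop (assuming at least two processes; the count, tag, and loop length are illustrative):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(10, dtype='d')
if rank == 0:
    req = comm.Send_init([buf, MPI.DOUBLE], dest=1, tag=7)
elif rank == 1:
    req = comm.Recv_init([buf, MPI.DOUBLE], source=0, tag=7)
if rank in (0, 1):
    for step in range(5):
        if rank == 0:
            buf[:] = step    # refresh the data to send
        req.Start()          # start the communication set up at creation time
        req.Wait()           # complete it; buf may be reused afterwards
    req.Free()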
Collective communications allow the transmittal of data between multiple processes of a group simultaneously. The syntax and semantics of collective functions are consistent with point-to-point communication. Collective functions communicate typed data, but messages are not paired with an associated tag; selectivity of messages is implied in the calling order. Additionally, collective functions come in blocking versions only.
The most commonly used collective communication operations are the following: barrier synchronization across all group members; global communication functions such as broadcast, scatter, and gather; and global reduction operations such as sum, maximum, minimum, etc.
In MPI for Python, the Comm.Bcast, Comm.Scatter, Comm.Gather, Comm.Allgather and Comm.Alltoall methods provide support for collective communications of memory buffers. The lower-case variants Comm.bcast, Comm.scatter, Comm.gather, Comm.allgather and Comm.alltoall can communicate general Python objects. The vector variants (which can communicate different amounts of data to each process) Comm.Scatterv, Comm.Gatherv, Comm.Allgatherv, Comm.Alltoallv and Comm.Alltoallw are also supported; they can only communicate objects exposing memory buffers.
Global reduction operations on memory buffers are accessible through the Comm.Reduce, Comm.Reduce_scatter, Comm.Allreduce, Intracomm.Scan and Intracomm.Exscan methods. The lower-case variants Comm.reduce, Comm.allreduce, Intracomm.scan and Intracomm.exscan can communicate general Python objects; however, the actual required reduction computations are performed sequentially at some process. All the predefined (i.e., SUM, PROD, MAX, etc.) reduction operations can be applied.
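For example, a global sum across all processes, sketched both with memory buffers and with a general Python object:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# buffer variant: element-wise sum of arrays across all processes
sendbuf = np.full(5, comm.Get_rank(), dtype='d')
recvbuf = np.empty_like(sendbuf)
comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)

# lower-case variant: reduce a general Python object
total = comm.allreduce(comm.Get_rank(), op=MPI.SUM)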
Several MPI implementations, including Open MPI and MVAPICH, support passing GPU pointers to MPI calls to avoid explicit data movement between the host and the device. On the Python side, GPU arrays are provided by many libraries targeting GPU computation, such as CuPy, Numba, PyTorch, and PyArrow. To increase library interoperability, two kinds of zero-copy data exchange protocols have been defined and agreed upon: DLPack and CUDA Array Interface. For example, a CuPy array can be passed to a Numba CUDA-jit kernel.
MPI for Python provides experimental support for GPU-aware MPI. This feature requires mpi4py to be built against a GPU-aware MPI library, and the GPU arrays being communicated to comply with one of the protocols mentioned above.
See the Tutorial section for further information.
In the context of the MPI-1 specification, a parallel application is static; that is, no processes can be added to or deleted from a running application after it has been started. Fortunately, this limitation was addressed in MPI-2. The new specification added a process management model providing a basic interface between an application and external resources and process managers.
This MPI-2 extension can be really useful, especially for sequential applications built on top of parallel modules, or parallel applications with a client/server model. The MPI-2 process model provides a mechanism to create new processes and establish communication between them and the existing MPI application. It also provides mechanisms to establish communication between two existing MPI applications, even when one did not start the other.
In MPI for Python, new independent process groups can be created by calling the Intracomm.Spawn method within an intracommunicator. This call returns a new intercommunicator (i.e., an Intercomm instance) at the parent process group. The child process group can retrieve the matching intercommunicator by calling the Comm.Get_parent class method. At each side, the new intercommunicator can be used to perform point to point and collective communications between the parent and child groups of processes.
Alternatively, disjoint groups of processes can establish communication using a client/server approach. Any server application must first call the Open_port function to open a port and the Publish_name function to publish a provided service, and next call the Intracomm.Accept method. Any client application can first find a published service by calling the Lookup_name function, which returns the port where a server can be contacted, and next call the Intracomm.Connect method. Both the Intracomm.Accept and Intracomm.Connect methods return an Intercomm instance. When the connection between client and server processes is no longer needed, all of them must cooperatively call the Comm.Disconnect method. Additionally, server applications should release resources by calling the Unpublish_name and Close_port functions.
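The following sketch illustrates the calling sequence on each side (the service name 'my-service' is illustrative; info and root arguments and error handling are omitted):

from mpi4py import MPI

# server side
port = MPI.Open_port()
MPI.Publish_name('my-service', port)
intercomm = MPI.COMM_WORLD.Accept(port)
# ... point-to-point/collective calls over intercomm ...
intercomm.Disconnect()
MPI.Unpublish_name('my-service', port)
MPI.Close_port(port)

# client side
port = MPI.Lookup_name('my-service')
intercomm = MPI.COMM_WORLD.Connect(port)
# ... point-to-point/collective calls over intercomm ...
intercomm.Disconnect()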
One-sided communication (also called Remote Memory Access, RMA) supplements the traditional two-sided, send/receive based MPI communication model with a one-sided, put/get based interface. One-sided communication can take advantage of the capabilities of highly specialized network hardware. Additionally, this extension lowers latency and software overhead in applications written using a shared-memory-like paradigm.
The MPI specification revolves around the use of objects called windows; they intuitively specify regions of a process’s memory that have been made available for remote read and write operations. The published memory blocks can be accessed through three functions for put (remote send), get (remote read), and accumulate (remote update or reduction) data items. A much larger number of functions support different synchronization styles; the semantics of these synchronization operations are fairly complex.
In MPI for Python, one-sided operations are available by using instances of the Win class. New window objects are created by calling the Win.Create method at all processes within a communicator and specifying a memory buffer. When a window instance is no longer needed, the Win.Free method should be called.
The three one-sided MPI operations for remote write, read and reduction are available through calling the Win.Put, Win.Get, and Win.Accumulate methods, respectively, on a Win instance. These methods need an integer rank identifying the target process and an integer offset relative to the base address of the remote memory block being accessed.
The one-sided operations read, write, and reduction are implicitly nonblocking, and must be synchronized using one of two primary modes. In active target synchronization, the origin process calls the Win.Start and Win.Complete methods, and the target process cooperates by calling the Win.Post and Win.Wait methods. There is also a collective variant provided by the Win.Fence method. Passive target synchronization is more lenient: only the origin process calls the Win.Lock and Win.Unlock methods. Locks are used to protect remote accesses to the locked remote window and to protect local load/store accesses to a locked local window.
The POSIX standard provides a model of a widely portable file system. However, the optimization needed for parallel input/output cannot be achieved with this generic interface. In order to ensure efficiency and scalability, the underlying parallel input/output system must provide a high-level interface supporting partitioning of file data among processes and a collective interface supporting complete transfers of global data structures between process memories and files. Additionally, further efficiencies can be gained via support for asynchronous input/output, strided accesses to data, and control over physical file layout on storage devices. This scenario motivated the inclusion in the MPI-2 standard of a custom interface in order to support more elaborate parallel input/output operations.
The MPI specification for parallel input/output revolves around the use of objects called files. As defined by MPI, files are not just contiguous byte streams. Instead, they are regarded as ordered collections of typed data items. MPI supports sequential or random access to any integral set of these items. Furthermore, files are opened collectively by a group of processes.
The common patterns for accessing a shared file (broadcast, scatter, gather, reduction) are expressed by using user-defined datatypes. Compared to the communication patterns of point-to-point and collective communications, this approach has the advantage of added flexibility and expressiveness. Data access operations (read and write) are defined for different kinds of positioning (using explicit offsets, individual file pointers, and shared file pointers), coordination (non-collective and collective), and synchronism (blocking, nonblocking, and split collective with begin/end phases).
In MPI for Python, all MPI input/output operations are performed through instances of the File class. File handles are obtained by calling the File.Open method at all processes within a communicator and providing a file name and the intended access mode. After use, they must be closed by calling the File.Close method. Files can even be deleted by calling the File.Delete method.
After creation, files are typically associated with a per-process view. The view defines the current set of data visible and accessible from an open file as an ordered set of elementary datatypes. This data layout can be set and queried with the File.Set_view and File.Get_view methods respectively.
Actual input/output operations are achieved by many methods combining read and write calls with different behavior regarding positioning, coordination, and synchronism. Summing up, MPI for Python provides the thirty (30) methods defined in MPI-2 for reading from or writing to files using explicit offsets or file pointers (individual or shared), in blocking or nonblocking and collective or noncollective versions.
Module functions Init or Init_thread and Finalize provide MPI initialization and finalization respectively. Module functions Is_initialized and Is_finalized provide the respective tests for initialization and finalization.
MPI timer functionalities are available through the Wtime and Wtick functions.
In order to facilitate handle sharing with other Python modules interfacing MPI-based parallel libraries, the predefined MPI error handlers ERRORS_RETURN and ERRORS_ARE_FATAL can be assigned to and retrieved from communicators using the Comm.Set_errhandler and Comm.Get_errhandler methods, and similarly for windows and files.
When the predefined error handler ERRORS_RETURN is set, errors returned from MPI calls within Python code will raise an instance of the exception class Exception, which is a subclass of the standard Python exception RuntimeError.
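A minimal sketch of catching an MPI error as a Python exception (using an out-of-range destination rank as an illustrative way to trigger an error):

from mpi4py import MPI

comm = MPI.COMM_WORLD
comm.Set_errhandler(MPI.ERRORS_RETURN)

try:
    # sending to a rank outside the communicator raises an error
    comm.Send([bytearray(4), MPI.BYTE], dest=comm.Get_size())
except MPI.Exception as exc:
    print(exc.Get_error_class(), exc.Get_error_string())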
MPI for Python supports convenient, pickle-based communication of generic Python objects as well as fast, near C-speed, direct array data communication of buffer-provider objects (e.g., NumPy arrays).
You have to use methods with all-lowercase names, like Comm.send, Comm.recv, Comm.bcast, Comm.scatter, Comm.gather. An object to be sent is passed as a parameter to the communication call, and the received object is simply the return value.
The Comm.isend and Comm.irecv methods return Request instances; completion of these methods can be managed using the Request.test and Request.wait methods.
The Comm.recv and Comm.irecv methods may be passed a buffer object that can be repeatedly used to receive messages avoiding internal memory allocation. This buffer must be sufficiently large to accommodate the transmitted messages; hence, any buffer passed to Comm.recv or Comm.irecv must be at least as long as the pickled data transmitted to the receiver.
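A minimal sketch (the buffer size and message are illustrative; the buffer must be larger than the incoming pickle stream):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = bytearray(1 << 16)  # 64 KiB scratch space, reused across receives
if rank == 0:
    comm.send({'a': 7, 'b': 3.14}, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(buf, source=0, tag=11)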
Collective calls like Comm.scatter, Comm.gather, Comm.allgather and Comm.alltoall expect a single value or a sequence of Comm.size elements at the root or all processes, depending on the operation. They return a single value, a list of Comm.size elements, or None.
You have to use method names starting with an upper-case letter, like Comm.Send, Comm.Recv, Comm.Bcast, Comm.Scatter, Comm.Gather.
In general, buffer arguments to these calls must be explicitly specified by using a 2/3-list/tuple like [data, MPI.DOUBLE], or [data, count, MPI.DOUBLE] (the former one uses the byte-size of data and the extent of the MPI datatype to define count).
For vector collective communication operations like Comm.Scatterv and Comm.Gatherv, buffer arguments are specified as [data, count, displ, datatype], where count and displ are sequences of integral values.
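For instance, a sketch of Comm.Scatterv distributing a different number of elements to each process (the counts and displacements below are illustrative):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

counts = [i + 1 for i in range(size)]            # process i gets i+1 items
displs = [sum(counts[:i]) for i in range(size)]  # offsets into sendbuf
sendbuf = None
if rank == 0:
    sendbuf = np.arange(sum(counts), dtype='d')
recvbuf = np.empty(counts[rank], dtype='d')
comm.Scatterv([sendbuf, counts, displs, MPI.DOUBLE], recvbuf, root=0)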
Automatic MPI datatype discovery for NumPy/GPU arrays and PEP-3118 buffers is supported, but limited to basic C types (all C/C99-native signed/unsigned integral types and single/double precision real/complex floating types) and availability of matching datatypes in the underlying MPI implementation. In this case, the buffer-provider object can be passed directly as a buffer argument, the count and MPI datatype will be inferred.
If mpi4py is built against a GPU-aware MPI implementation, GPU arrays can be passed to upper-case methods as long as they have either the __dlpack__ and __dlpack_device__ methods or the __cuda_array_interface__ attribute that are compliant with the respective standard specifications. Moreover, only C-contiguous or Fortran-contiguous GPU arrays are supported. It is important to note that GPU buffers must be fully ready before any MPI routines operate on them to avoid race conditions. This can be ensured by using the synchronization API of your array library. mpi4py does not have access to any GPU-specific functionality and thus cannot perform this operation automatically for users.
Most MPI programs can be run with the command mpiexec. In practice, running Python programs looks like:
$ mpiexec -n 4 python script.py
to run the program with 4 processes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    req = comm.isend(data, dest=1, tag=11)
    req.wait()
elif rank == 1:
    req = comm.irecv(source=0, tag=11)
    data = req.wait()
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# passing MPI datatypes explicitly
if rank == 0:
    data = numpy.arange(1000, dtype='i')
    comm.Send([data, MPI.INT], dest=1, tag=77)
elif rank == 1:
    data = numpy.empty(1000, dtype='i')
    comm.Recv([data, MPI.INT], source=0, tag=77)

# automatic MPI datatype discovery
if rank == 0:
    data = numpy.arange(100, dtype=numpy.float64)
    comm.Send(data, dest=1, tag=13)
elif rank == 1:
    data = numpy.empty(100, dtype=numpy.float64)
    comm.Recv(data, source=0, tag=13)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {'key1' : [7, 2.72, 2+3j],
            'key2' : ( 'abc', 'xyz')}
else:
    data = None
data = comm.bcast(data, root=0)
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [(i+1)**2 for i in range(size)]
else:
    data = None
data = comm.scatter(data, root=0)
assert data == (rank+1)**2
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

data = (rank+1)**2
data = comm.gather(data, root=0)
if rank == 0:
    for i in range(size):
        assert data[i] == (i+1)**2
else:
    assert data is None
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = np.arange(100, dtype='i')
else:
    data = np.empty(100, dtype='i')
comm.Bcast(data, root=0)
for i in range(100):
    assert data[i] == i
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = None
if rank == 0:
    sendbuf = np.empty([size, 100], dtype='i')
    sendbuf.T[:,:] = range(size)
recvbuf = np.empty(100, dtype='i')
comm.Scatter(sendbuf, recvbuf, root=0)
assert np.allclose(recvbuf, rank)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = np.zeros(100, dtype='i') + rank
recvbuf = None
if rank == 0:
    recvbuf = np.empty([size, 100], dtype='i')
comm.Gather(sendbuf, recvbuf, root=0)
if rank == 0:
    for i in range(size):
        assert np.allclose(recvbuf[i,:], i)
from mpi4py import MPI
import numpy

def matvec(comm, A, x):
    m = A.shape[0]  # local rows
    p = comm.Get_size()
    xg = numpy.zeros(m*p, dtype='d')
    comm.Allgather([x, MPI.DOUBLE],
                   [xg, MPI.DOUBLE])
    y = numpy.dot(A, xg)
    return y
from mpi4py import MPI
import numpy as np

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "./datafile.contig", amode)

buffer = np.empty(10, dtype=np.intc)
buffer[:] = comm.Get_rank()

offset = comm.Get_rank() * buffer.nbytes
fh.Write_at_all(offset, buffer)

fh.Close()
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "./datafile.noncontig", amode)

item_count = 10

buffer = np.empty(item_count, dtype='i')
buffer[:] = rank

filetype = MPI.INT.Create_vector(item_count, 1, size)
filetype.Commit()

displacement = MPI.INT.Get_size() * rank
fh.Set_view(displacement, filetype=filetype)

fh.Write_all(buffer)
filetype.Free()
fh.Close()
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys

comm = MPI.COMM_SELF.Spawn(sys.executable,
                           args=['cpi.py'],
                           maxprocs=5)

N = numpy.array(100, 'i')
comm.Bcast([N, MPI.INT], root=MPI.ROOT)
PI = numpy.array(0.0, 'd')
comm.Reduce(None, [PI, MPI.DOUBLE],
            op=MPI.SUM, root=MPI.ROOT)
print(PI)

comm.Disconnect()
#!/usr/bin/env python
from mpi4py import MPI
import numpy

comm = MPI.Comm.Get_parent()
size = comm.Get_size()
rank = comm.Get_rank()

N = numpy.array(0, dtype='i')
comm.Bcast([N, MPI.INT], root=0)
h = 1.0 / N; s = 0.0
for i in range(rank, N, size):
    x = h * (i + 0.5)
    s += 4.0 / (1.0 + x**2)
PI = numpy.array(s * h, dtype='d')
comm.Reduce([PI, MPI.DOUBLE], None,
            op=MPI.SUM, root=0)

comm.Disconnect()
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

sendbuf = cp.arange(10, dtype='i')
recvbuf = cp.empty_like(sendbuf)
assert hasattr(sendbuf, '__cuda_array_interface__')
assert hasattr(recvbuf, '__cuda_array_interface__')
cp.cuda.get_current_stream().synchronize()
comm.Allreduce(sendbuf, recvbuf)

assert cp.allclose(recvbuf, sendbuf*size)
import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = 10
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(win_size, comm=comm)

buf = np.empty(N, dtype=np_dtype)
if rank == 0:
    buf.fill(42)
    win.Lock(rank=0)
    win.Put(buf, target_rank=0)
    win.Unlock(rank=0)
    comm.Barrier()
else:
    comm.Barrier()
    win.Lock(rank=0)
    win.Get(buf, target_rank=0)
    win.Unlock(rank=0)
    assert np.all(buf == 42)
import numpy as np
from mpi4py import MPI
from mpi4py.util import dtlib

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

datatype = MPI.FLOAT
np_dtype = dtlib.to_numpy_dtype(datatype)
itemsize = datatype.Get_size()

N = comm.Get_size() + 1
win_size = N * itemsize if rank == 0 else 0
win = MPI.Win.Allocate(
    size=win_size,
    disp_unit=itemsize,
    comm=comm,
)
if rank == 0:
    mem = np.frombuffer(win, dtype=np_dtype)
    mem[:] = np.arange(len(mem), dtype=np_dtype)
comm.Barrier()

buf = np.zeros(3, dtype=np_dtype)
target = (rank, 2, datatype)
win.Lock(rank=0)
win.Get(buf, target_rank=0, target=target)
win.Unlock(rank=0)
assert np.all(buf == [rank, rank+1, 0])
/* file: helloworld.c */
#include <stdio.h>
#include <mpi.h>

void sayhello(MPI_Comm comm)
{
  int size, rank;
  MPI_Comm_size(comm, &size);
  MPI_Comm_rank(comm, &rank);
  printf("Hello, World! "
         "I am process %d of %d.\n",
         rank, size);
}
// file: helloworld.i
%module helloworld
%{
#include <mpi.h>
#include "helloworld.c"
%}
%include mpi4py/mpi4py.i
%mpi4py_typemap(Comm, MPI_Comm);
void sayhello(MPI_Comm comm);
>>> from mpi4py import MPI
>>> import helloworld
>>> helloworld.sayhello(MPI.COMM_WORLD)
Hello, World! I am process 0 of 1.
! file: helloworld.f90
subroutine sayhello(comm)
  use mpi
  implicit none
  integer :: comm, rank, size, ierr
  call MPI_Comm_size(comm, size, ierr)
  call MPI_Comm_rank(comm, rank, ierr)
  print *, 'Hello, World! I am process ', rank, ' of ', size, '.'
end subroutine sayhello
$ f2py -c --f90exec=mpif90 helloworld.f90 -m helloworld
>>> from mpi4py import MPI
>>> import helloworld
>>> fcomm = MPI.COMM_WORLD.py2f()
>>> helloworld.sayhello(fcomm)
Hello, World! I am process 0 of 1.
This is the MPI for Python package.
The Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The MPI standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++). Since its release, the MPI specification has become the leading standard for message-passing libraries for parallel computers.
MPI for Python provides MPI bindings for the Python programming language, allowing any Python program to exploit multiple processors. This package builds on the MPI specification and provides an object oriented interface which closely follows the MPI-2 C++ bindings.
Attributes Summary
initialize | Automatic MPI initialization at import |
threads | Request initialization with thread support |
thread_level | Level of thread support to request |
finalize | Automatic MPI finalization at exit |
fast_reduce | Use tree-based reductions for objects |
recv_mprobe | Use matched probes to receive objects |
errors | Error handling policy |
Attributes Documentation
Example
MPI for Python features automatic initialization and finalization of the MPI execution environment. By using the mpi4py.rc object, MPI initialization and finalization can be handled programmatically:
import mpi4py
mpi4py.rc.initialize = False  # do not initialize MPI automatically
mpi4py.rc.finalize = False    # do not finalize MPI automatically

from mpi4py import MPI  # import the 'MPI' module

MPI.Init()      # manual initialization of the MPI environment
...             # your finest code here ...
MPI.Finalize()  # manual finalization of the MPI environment
The following environment variables override the corresponding attributes of the mpi4py.rc and MPI.pickle objects at import time of the MPI module.
Whether to automatically initialize MPI at import time of the mpi4py.MPI module.
New in version 3.1.0.
Whether to automatically finalize MPI at exit time of the Python process.
New in version 3.1.0.
Whether to initialize MPI with thread support.
New in version 3.1.0.
The level of required thread support.
New in version 3.1.0.
Whether to use tree-based reductions for objects.
New in version 3.1.0.
Whether to use matched probes to receive objects.
Controls the default MPI error handling policy.
New in version 3.1.0.
Controls the default pickle protocol to use when communicating Python objects.
New in version 3.1.0.
Controls the default buffer size threshold for switching from in-band to out-of-band buffer handling when using pickle protocol version 5 or higher.
New in version 3.1.2.
Extension modules that need to compile against mpi4py should use this function to locate the appropriate include directory. Using Python distutils (or perhaps NumPy distutils):
import mpi4py
Extension('extension_name', ...,
          include_dirs=[..., mpi4py.get_include()])
Ancillary
Datatype([datatype]) | Datatype object |
Status([status]) | Status object |
Request([request]) | Request handle |
Prequest([request]) | Persistent request handle |
Grequest([request]) | Generalized request handle |
Op([op]) | Operation object |
Group([group]) | Group of processes |
Info([info]) | Info object |
Communication
Comm([comm]) | Communicator |
Intracomm([comm]) | Intracommunicator |
Topocomm([comm]) | Topology intracommunicator |
Cartcomm([comm]) | Cartesian topology intracommunicator |
Graphcomm([comm]) | General graph topology intracommunicator |
Distgraphcomm([comm]) | Distributed graph topology intracommunicator |
Intercomm([comm]) | Intercommunicator |
Message([message]) | Matched message handle |
One-sided operations
Win([win]) | Window handle |
Input/Output
File([file]) | File handle |
Error handling
Errhandler([errhandler]) | Error handler |
Exception([ierr]) | Exception class |
Auxiliary
Pickle([dumps, loads, protocol]) | Pickle/unpickle Python objects |
memory(buf) | Memory buffer |
Version inquiry
Get_version() | Obtain the version number of the MPI standard supported by the implementation as a tuple (version, subversion) |
Get_library_version() | Obtain the version string of the MPI library |
Initialization and finalization
Init() | Initialize the MPI execution environment |
Init_thread([required]) | Initialize the MPI execution environment |
Finalize() | Terminate the MPI execution environment |
Is_initialized() | Indicates whether Init has been called |
Is_finalized() | Indicates whether Finalize has completed |
Query_thread() | Return the level of thread support provided by the MPI library |
Is_thread_main() | Indicate whether this thread called Init or Init_thread |
Memory allocation
Alloc_mem(size[, info]) | Allocate memory for message passing and RMA |
Free_mem(mem) | Free memory allocated with Alloc_mem() |
Address manipulation
Get_address(location) | Get the address of a location in memory |
Aint_add(base, disp) | Return the sum of base address and displacement |
Aint_diff(addr1, addr2) | Return the difference between absolute addresses |
Timer
Wtick() | Return the resolution of Wtime |
Wtime() | Return an elapsed time on the calling processor |
Error handling
Get_error_class(errorcode) | Convert an error code into an error class |
Get_error_string(errorcode) | Return the error string for a given error class or error code |
Add_error_class() | Add an error class to the known error classes |
Add_error_code(errorclass) | Add an error code to an error class |
Add_error_string(errorcode, string) | Associate an error string with an error class or errorcode |
Dynamic process management
Open_port([info]) | Return an address that can be used to establish connections between groups of MPI processes |
Close_port(port_name) | Close a port |
Publish_name(service_name, port_name[, info]) | Publish a service name |
Unpublish_name(service_name, port_name[, info]) | Unpublish a service name |
Lookup_name(service_name[, info]) | Lookup a port name given a service name |
Miscellanea
Attach_buffer(buf) | Attach a user-provided buffer for sending in buffered mode |
Detach_buffer() | Remove an existing attached buffer |
Compute_dims(nnodes, dims) | Return a balanced distribution of processes per coordinate direction |
Get_processor_name() | Obtain the name of the calling processor |
Register_datarep(datarep, read_fn, write_fn, ...) | Register user-defined data representations |
Pcontrol(level) | Control profiling |
Utilities
get_vendor() | Information about the underlying MPI implementation |
UNDEFINED | int UNDEFINED |
ANY_SOURCE | int ANY_SOURCE |
ANY_TAG | int ANY_TAG |
PROC_NULL | int PROC_NULL |
ROOT | int ROOT |
BOTTOM | Bottom BOTTOM |
IN_PLACE | InPlace IN_PLACE |
KEYVAL_INVALID | int KEYVAL_INVALID |
TAG_UB | int TAG_UB |
HOST | int HOST |
IO | int IO |
WTIME_IS_GLOBAL | int WTIME_IS_GLOBAL |
UNIVERSE_SIZE | int UNIVERSE_SIZE |
APPNUM | int APPNUM |
LASTUSEDCODE | int LASTUSEDCODE |
WIN_BASE | int WIN_BASE |
WIN_SIZE | int WIN_SIZE |
WIN_DISP_UNIT | int WIN_DISP_UNIT |
WIN_CREATE_FLAVOR | int WIN_CREATE_FLAVOR |
WIN_FLAVOR | int WIN_FLAVOR |
WIN_MODEL | int WIN_MODEL |
SUCCESS | int SUCCESS |
ERR_LASTCODE | int ERR_LASTCODE |
ERR_COMM | int ERR_COMM |
ERR_GROUP | int ERR_GROUP |
ERR_TYPE | int ERR_TYPE |
ERR_REQUEST | int ERR_REQUEST |
ERR_OP | int ERR_OP |
ERR_BUFFER | int ERR_BUFFER |
ERR_COUNT | int ERR_COUNT |
ERR_TAG | int ERR_TAG |
ERR_RANK | int ERR_RANK |
ERR_ROOT | int ERR_ROOT |
ERR_TRUNCATE | int ERR_TRUNCATE |
ERR_IN_STATUS | int ERR_IN_STATUS |
ERR_PENDING | int ERR_PENDING |
ERR_TOPOLOGY | int ERR_TOPOLOGY |
ERR_DIMS | int ERR_DIMS |
ERR_ARG | int ERR_ARG |
ERR_OTHER | int ERR_OTHER |
ERR_UNKNOWN | int ERR_UNKNOWN |
ERR_INTERN | int ERR_INTERN |
ERR_INFO | int ERR_INFO |
ERR_FILE | int ERR_FILE |
ERR_WIN | int ERR_WIN |
ERR_KEYVAL | int ERR_KEYVAL |
ERR_INFO_KEY | int ERR_INFO_KEY |
ERR_INFO_VALUE | int ERR_INFO_VALUE |
ERR_INFO_NOKEY | int ERR_INFO_NOKEY |
ERR_ACCESS | int ERR_ACCESS |
ERR_AMODE | int ERR_AMODE |
ERR_BAD_FILE | int ERR_BAD_FILE |
ERR_FILE_EXISTS | int ERR_FILE_EXISTS |
ERR_FILE_IN_USE | int ERR_FILE_IN_USE |
ERR_NO_SPACE | int ERR_NO_SPACE |
ERR_NO_SUCH_FILE | int ERR_NO_SUCH_FILE |
ERR_IO | int ERR_IO |
ERR_READ_ONLY | int ERR_READ_ONLY |
ERR_CONVERSION | int ERR_CONVERSION |
ERR_DUP_DATAREP | int ERR_DUP_DATAREP |
ERR_UNSUPPORTED_DATAREP | int ERR_UNSUPPORTED_DATAREP |
ERR_UNSUPPORTED_OPERATION | int ERR_UNSUPPORTED_OPERATION |
ERR_NAME | int ERR_NAME |
ERR_NO_MEM | int ERR_NO_MEM |
ERR_NOT_SAME | int ERR_NOT_SAME |
ERR_PORT | int ERR_PORT |
ERR_QUOTA | int ERR_QUOTA |
ERR_SERVICE | int ERR_SERVICE |
ERR_SPAWN | int ERR_SPAWN |
ERR_BASE | int ERR_BASE |
ERR_SIZE | int ERR_SIZE |
ERR_DISP | int ERR_DISP |
ERR_ASSERT | int ERR_ASSERT |
ERR_LOCKTYPE | int ERR_LOCKTYPE |
ERR_RMA_CONFLICT | int ERR_RMA_CONFLICT |
ERR_RMA_SYNC | int ERR_RMA_SYNC |
ERR_RMA_RANGE | int ERR_RMA_RANGE |
ERR_RMA_ATTACH | int ERR_RMA_ATTACH |
ERR_RMA_SHARED | int ERR_RMA_SHARED |
ERR_RMA_FLAVOR | int ERR_RMA_FLAVOR |
ORDER_C | int ORDER_C |
ORDER_F | int ORDER_F |
ORDER_FORTRAN | int ORDER_FORTRAN |
TYPECLASS_INTEGER | int TYPECLASS_INTEGER |
TYPECLASS_REAL | int TYPECLASS_REAL |
TYPECLASS_COMPLEX | int TYPECLASS_COMPLEX |
DISTRIBUTE_NONE | int DISTRIBUTE_NONE |
DISTRIBUTE_BLOCK | int DISTRIBUTE_BLOCK |
DISTRIBUTE_CYCLIC | int DISTRIBUTE_CYCLIC |
DISTRIBUTE_DFLT_DARG | int DISTRIBUTE_DFLT_DARG |
COMBINER_NAMED | int COMBINER_NAMED |
COMBINER_DUP | int COMBINER_DUP |
COMBINER_CONTIGUOUS | int COMBINER_CONTIGUOUS |
COMBINER_VECTOR | int COMBINER_VECTOR |
COMBINER_HVECTOR | int COMBINER_HVECTOR |
COMBINER_INDEXED | int COMBINER_INDEXED |
COMBINER_HINDEXED | int COMBINER_HINDEXED |
COMBINER_INDEXED_BLOCK | int COMBINER_INDEXED_BLOCK |
COMBINER_HINDEXED_BLOCK | int COMBINER_HINDEXED_BLOCK |
COMBINER_STRUCT | int COMBINER_STRUCT |
COMBINER_SUBARRAY | int COMBINER_SUBARRAY |
COMBINER_DARRAY | int COMBINER_DARRAY |
COMBINER_RESIZED | int COMBINER_RESIZED |
COMBINER_F90_REAL | int COMBINER_F90_REAL |
COMBINER_F90_COMPLEX | int COMBINER_F90_COMPLEX |
COMBINER_F90_INTEGER | int COMBINER_F90_INTEGER |
IDENT | int IDENT |
CONGRUENT | int CONGRUENT |
SIMILAR | int SIMILAR |
UNEQUAL | int UNEQUAL |
CART | int CART |
GRAPH | int GRAPH |
DIST_GRAPH | int DIST_GRAPH |
UNWEIGHTED | int UNWEIGHTED |
WEIGHTS_EMPTY | int WEIGHTS_EMPTY |
COMM_TYPE_SHARED | int COMM_TYPE_SHARED |
BSEND_OVERHEAD | int BSEND_OVERHEAD |
WIN_FLAVOR_CREATE | int WIN_FLAVOR_CREATE |
WIN_FLAVOR_ALLOCATE | int WIN_FLAVOR_ALLOCATE |
WIN_FLAVOR_DYNAMIC | int WIN_FLAVOR_DYNAMIC |
WIN_FLAVOR_SHARED | int WIN_FLAVOR_SHARED |
WIN_SEPARATE | int WIN_SEPARATE |
WIN_UNIFIED | int WIN_UNIFIED |
MODE_NOCHECK | int MODE_NOCHECK |
MODE_NOSTORE | int MODE_NOSTORE |
MODE_NOPUT | int MODE_NOPUT |
MODE_NOPRECEDE | int MODE_NOPRECEDE |
MODE_NOSUCCEED | int MODE_NOSUCCEED |
LOCK_EXCLUSIVE | int LOCK_EXCLUSIVE |
LOCK_SHARED | int LOCK_SHARED |
MODE_RDONLY | int MODE_RDONLY |
MODE_WRONLY | int MODE_WRONLY |
MODE_RDWR | int MODE_RDWR |
MODE_CREATE | int MODE_CREATE |
MODE_EXCL | int MODE_EXCL |
MODE_DELETE_ON_CLOSE | int MODE_DELETE_ON_CLOSE |
MODE_UNIQUE_OPEN | int MODE_UNIQUE_OPEN |
MODE_SEQUENTIAL | int MODE_SEQUENTIAL |
MODE_APPEND | int MODE_APPEND |
SEEK_SET | int SEEK_SET |
SEEK_CUR | int SEEK_CUR |
SEEK_END | int SEEK_END |
DISPLACEMENT_CURRENT | int DISPLACEMENT_CURRENT |
DISP_CUR | int DISP_CUR |
THREAD_SINGLE | int THREAD_SINGLE |
THREAD_FUNNELED | int THREAD_FUNNELED |
THREAD_SERIALIZED | int THREAD_SERIALIZED |
THREAD_MULTIPLE | int THREAD_MULTIPLE |
VERSION | int VERSION |
SUBVERSION | int SUBVERSION |
MAX_PROCESSOR_NAME | int MAX_PROCESSOR_NAME |
MAX_ERROR_STRING | int MAX_ERROR_STRING |
MAX_PORT_NAME | int MAX_PORT_NAME |
MAX_INFO_KEY | int MAX_INFO_KEY |
MAX_INFO_VAL | int MAX_INFO_VAL |
MAX_OBJECT_NAME | int MAX_OBJECT_NAME |
MAX_DATAREP_STRING | int MAX_DATAREP_STRING |
MAX_LIBRARY_VERSION_STRING | int MAX_LIBRARY_VERSION_STRING |
DATATYPE_NULL | Datatype DATATYPE_NULL |
UB | Datatype UB |
LB | Datatype LB |
PACKED | Datatype PACKED |
BYTE | Datatype BYTE |
AINT | Datatype AINT |
OFFSET | Datatype OFFSET |
COUNT | Datatype COUNT |
CHAR | Datatype CHAR |
WCHAR | Datatype WCHAR |
SIGNED_CHAR | Datatype SIGNED_CHAR |
SHORT | Datatype SHORT |
INT | Datatype INT |
LONG | Datatype LONG |
LONG_LONG | Datatype LONG_LONG |
UNSIGNED_CHAR | Datatype UNSIGNED_CHAR |
UNSIGNED_SHORT | Datatype UNSIGNED_SHORT |
UNSIGNED | Datatype UNSIGNED |
UNSIGNED_LONG | Datatype UNSIGNED_LONG |
UNSIGNED_LONG_LONG | Datatype UNSIGNED_LONG_LONG |
FLOAT | Datatype FLOAT |
DOUBLE | Datatype DOUBLE |
LONG_DOUBLE | Datatype LONG_DOUBLE |
C_BOOL | Datatype C_BOOL |
INT8_T | Datatype INT8_T |
INT16_T | Datatype INT16_T |
INT32_T | Datatype INT32_T |
INT64_T | Datatype INT64_T |
UINT8_T | Datatype UINT8_T |
UINT16_T | Datatype UINT16_T |
UINT32_T | Datatype UINT32_T |
UINT64_T | Datatype UINT64_T |
C_COMPLEX | Datatype C_COMPLEX |
C_FLOAT_COMPLEX | Datatype C_FLOAT_COMPLEX |
C_DOUBLE_COMPLEX | Datatype C_DOUBLE_COMPLEX |
C_LONG_DOUBLE_COMPLEX | Datatype C_LONG_DOUBLE_COMPLEX |
CXX_BOOL | Datatype CXX_BOOL |
CXX_FLOAT_COMPLEX | Datatype CXX_FLOAT_COMPLEX |
CXX_DOUBLE_COMPLEX | Datatype CXX_DOUBLE_COMPLEX |
CXX_LONG_DOUBLE_COMPLEX | Datatype CXX_LONG_DOUBLE_COMPLEX |
SHORT_INT | Datatype SHORT_INT |
INT_INT | Datatype INT_INT |
TWOINT | Datatype TWOINT |
LONG_INT | Datatype LONG_INT |
FLOAT_INT | Datatype FLOAT_INT |
DOUBLE_INT | Datatype DOUBLE_INT |
LONG_DOUBLE_INT | Datatype LONG_DOUBLE_INT |
CHARACTER | Datatype CHARACTER |
LOGICAL | Datatype LOGICAL |
INTEGER | Datatype INTEGER |
REAL | Datatype REAL |
DOUBLE_PRECISION | Datatype DOUBLE_PRECISION |
COMPLEX | Datatype COMPLEX |
DOUBLE_COMPLEX | Datatype DOUBLE_COMPLEX |
LOGICAL1 | Datatype LOGICAL1 |
LOGICAL2 | Datatype LOGICAL2 |
LOGICAL4 | Datatype LOGICAL4 |
LOGICAL8 | Datatype LOGICAL8 |
INTEGER1 | Datatype INTEGER1 |
INTEGER2 | Datatype INTEGER2 |
INTEGER4 | Datatype INTEGER4 |
INTEGER8 | Datatype INTEGER8 |
INTEGER16 | Datatype INTEGER16 |
REAL2 | Datatype REAL2 |
REAL4 | Datatype REAL4 |
REAL8 | Datatype REAL8 |
REAL16 | Datatype REAL16 |
COMPLEX4 | Datatype COMPLEX4 |
COMPLEX8 | Datatype COMPLEX8 |
COMPLEX16 | Datatype COMPLEX16 |
COMPLEX32 | Datatype COMPLEX32 |
UNSIGNED_INT | Datatype UNSIGNED_INT |
SIGNED_SHORT | Datatype SIGNED_SHORT |
SIGNED_INT | Datatype SIGNED_INT |
SIGNED_LONG | Datatype SIGNED_LONG |
SIGNED_LONG_LONG | Datatype SIGNED_LONG_LONG |
BOOL | Datatype BOOL |
SINT8_T | Datatype SINT8_T |
SINT16_T | Datatype SINT16_T |
SINT32_T | Datatype SINT32_T |
SINT64_T | Datatype SINT64_T |
F_BOOL | Datatype F_BOOL |
F_INT | Datatype F_INT |
F_FLOAT | Datatype F_FLOAT |
F_DOUBLE | Datatype F_DOUBLE |
F_COMPLEX | Datatype F_COMPLEX |
F_FLOAT_COMPLEX | Datatype F_FLOAT_COMPLEX |
F_DOUBLE_COMPLEX | Datatype F_DOUBLE_COMPLEX |
REQUEST_NULL | Request REQUEST_NULL |
MESSAGE_NULL | Message MESSAGE_NULL |
MESSAGE_NO_PROC | Message MESSAGE_NO_PROC |
OP_NULL | Op OP_NULL |
MAX | Op MAX |
MIN | Op MIN |
SUM | Op SUM |
PROD | Op PROD |
LAND | Op LAND |
BAND | Op BAND |
LOR | Op LOR |
BOR | Op BOR |
LXOR | Op LXOR |
BXOR | Op BXOR |
MAXLOC | Op MAXLOC |
MINLOC | Op MINLOC |
REPLACE | Op REPLACE |
NO_OP | Op NO_OP |
GROUP_NULL | Group GROUP_NULL |
GROUP_EMPTY | Group GROUP_EMPTY |
INFO_NULL | Info INFO_NULL |
INFO_ENV | Info INFO_ENV |
ERRHANDLER_NULL | Errhandler ERRHANDLER_NULL |
ERRORS_RETURN | Errhandler ERRORS_RETURN |
ERRORS_ARE_FATAL | Errhandler ERRORS_ARE_FATAL |
COMM_NULL | Comm COMM_NULL |
COMM_SELF | Intracomm COMM_SELF |
COMM_WORLD | Intracomm COMM_WORLD |
WIN_NULL | Win WIN_NULL |
FILE_NULL | File FILE_NULL |
pickle | Pickle pickle |
New in version 3.0.0.
This package provides a high-level interface for asynchronously executing callables on a pool of worker processes using MPI for inter-process communication.
The mpi4py.futures package is based on concurrent.futures from the Python standard library. More precisely, mpi4py.futures provides the MPIPoolExecutor class as a concrete implementation of the abstract class Executor. The submit() interface schedules a callable to be executed asynchronously and returns a Future object representing the execution of the callable. Future instances can be queried for the call result or exception. Sets of Future instances can be passed to the wait() and as_completed() functions.
The MPIPoolExecutor class uses a pool of MPI processes to execute calls asynchronously. By performing computations in separate processes, it allows side-stepping the global interpreter lock, but also means that only picklable objects can be executed and returned. The __main__ module must be importable by worker processes, thus MPIPoolExecutor instances may not work in the interactive interpreter.
MPIPoolExecutor takes advantage of the dynamic process management features introduced in the MPI-2 standard. In particular, the MPI.Intracomm.Spawn method of MPI.COMM_SELF is used in the master (or parent) process to spawn new worker (or child) processes running a Python interpreter. The master process uses a separate thread (one for each MPIPoolExecutor instance) to communicate back and forth with the workers. The worker processes serve the execution of tasks in the main (and only) thread until they are signaled for completion.
initializer is an optional callable that is called at the start of each worker process before executing any tasks; initargs is a tuple of arguments passed to the initializer. If initializer raises an exception, all pending tasks and any attempt to submit new tasks to the pool will raise a BrokenExecutor exception.
executor = MPIPoolExecutor(max_workers=1)
future = executor.submit(pow, 321, 1234)
print(future.result())
executor = MPIPoolExecutor(max_workers=3)
for result in executor.map(pow, [2]*32, range(32)):
    print(result)
executor = MPIPoolExecutor(max_workers=3)
iterable = ((2, n) for n in range(32))
for result in executor.starmap(pow, iterable):
    print(result)
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won’t be cancelled, regardless of the value of cancel_futures.
You can avoid having to call this method explicitly if you use the with statement, which will shut down the executor instance (waiting as if shutdown() were called with wait set to True).
import time
with MPIPoolExecutor(max_workers=1) as executor:
    future = executor.submit(time.sleep, 2)
assert future.done()
Legacy MPI-1 implementations (as well as some vendor MPI-2 implementations) do not support the dynamic process management features introduced in the MPI-2 standard. Additionally, job schedulers and batch systems in supercomputing facilities may pose additional complications to applications using the MPI_Comm_spawn() routine.
With these issues in mind, mpi4py.futures supports an additional, more traditional, SPMD-like usage pattern requiring MPI-1 calls only. Python applications are started the usual way, e.g., using the mpiexec command. Python code should make a collective call to the MPICommExecutor context manager to partition the set of MPI processes within an MPI communicator into one master process and many worker processes. The master process gets access to an MPIPoolExecutor instance to submit tasks. Meanwhile, the worker processes follow a different execution path and team up to execute the tasks submitted from the master.
Besides alleviating the lack of dynamic process management features in legacy MPI-1 or partial MPI-2 implementations, the MPICommExecutor context manager may be useful in classic MPI-based Python applications that want to take advantage of the simple, task-based, master/worker approach available in the mpi4py.futures package.
from mpi4py import MPI
from mpi4py.futures import MPICommExecutor

with MPICommExecutor(MPI.COMM_WORLD, root=0) as executor:
    if executor is not None:
        future = executor.submit(abs, -42)
        assert future.result() == 42
        answer = set(executor.map(abs, [-42, 42]))
        assert answer == {42}
Recalling the issues related to the lack of support for dynamic process management features in MPI implementations, mpi4py.futures supports an alternative usage pattern where Python code (either from scripts, modules, or zip files) is run under command line control of the mpi4py.futures package by passing -m mpi4py.futures to the python executable. The mpi4py.futures invocation should be passed a pyfile path to a script (or a zipfile/directory containing a __main__.py file). Additionally, mpi4py.futures accepts -m mod to execute a module named mod, -c cmd to execute a command string cmd, or even - to read commands from standard input (sys.stdin). Summarizing, mpi4py.futures can be invoked in the following ways:
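$ mpiexec -n numprocs python -m mpi4py.futures pyfile [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures -m mod [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures -c cmd [arg] ...
$ mpiexec -n numprocs python -m mpi4py.futures - [arg] ...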
Before starting the main script execution, mpi4py.futures splits MPI.COMM_WORLD into one master (the process with rank 0 in MPI.COMM_WORLD) and numprocs - 1 workers and connects them through an MPI intercommunicator. Afterwards, the master process proceeds with the execution of the user script code, which eventually creates MPIPoolExecutor instances to submit tasks. Meanwhile, the worker processes follow a different execution path to serve the master. Upon successful termination of the main script at the master, the entire MPI execution environment exits gracefully. In case of any unhandled exception in the main script, the master process calls MPI.COMM_WORLD.Abort(1) to prevent deadlocks and force termination of the entire MPI execution environment.
The following julia.py script computes the Julia set and dumps an image to disk in binary PGM format. The code starts by importing MPIPoolExecutor from the mpi4py.futures package. Next, some global constants and functions implement the computation of the Julia set. The computations are protected with the standard if __name__ == '__main__': ... idiom. The image is computed by whole scanlines, submitting all these tasks at once using the map method. The result iterator yields scanlines in-order as the tasks complete. Finally, each scanline is dumped to disk.
julia.py
from mpi4py.futures import MPIPoolExecutor

x0, x1, w = -2.0, +2.0, 640*2
y0, y1, h = -1.5, +1.5, 480*2
dx = (x1 - x0) / w
dy = (y1 - y0) / h

c = complex(0, 0.65)

def julia(x, y):
    z = complex(x, y)
    n = 255
    while abs(z) < 3 and n > 1:
        z = z**2 + c
        n -= 1
    return n

def julia_line(k):
    line = bytearray(w)
    y = y1 - k * dy
    for j in range(w):
        x = x0 + j * dx
        line[j] = julia(x, y)
    return line

if __name__ == '__main__':
    with MPIPoolExecutor() as executor:
        image = executor.map(julia_line, range(h))
        with open('julia.pgm', 'wb') as f:
            f.write(b'P5 %d %d %d\n' % (w, h, 255))
            for line in image:
                f.write(line)
The recommended way to execute the script is by using the mpiexec command specifying one MPI process (master) and (optional but recommended) the desired MPI universe size, which determines the number of additional dynamically spawned processes (workers). The MPI universe size is provided either by a batch system or set by the user via command-line arguments to mpiexec or environment variables. Below we provide examples for MPICH and Open MPI implementations [1]. In all of these examples, the mpiexec command launches a single master process running the Python interpreter and executing the main script. When required, mpi4py.futures spawns the pool of 16 worker processes. The master submits tasks to the workers and waits for the results. The workers receive incoming tasks, execute them, and send back the results to the master.
When using the MPICH implementation or its derivatives based on the Hydra process manager, users can set the MPI universe size via the -usize argument to mpiexec:
$ mpiexec -n 1 -usize 17 python julia.py
or, alternatively, by setting the MPIEXEC_UNIVERSE_SIZE environment variable:
$ MPIEXEC_UNIVERSE_SIZE=17 mpiexec -n 1 python julia.py
In the Open MPI implementation, the MPI universe size can be set via the -host argument to mpiexec:
$ mpiexec -n 1 -host <hostname>:17 python julia.py
Another way to specify the number of workers is to use the mpi4py.futures-specific environment variable MPI4PY_FUTURES_MAX_WORKERS:
$ MPI4PY_FUTURES_MAX_WORKERS=16 mpiexec -n 1 python julia.py
Note that in this case, the MPI universe size is ignored.
Alternatively, users may decide to execute the script in a more traditional way, that is, all the MPI processes are started at once. The user script is run under command-line control of mpi4py.futures passing the -m flag to the python executable:
$ mpiexec -n 17 python -m mpi4py.futures julia.py
As explained previously, the 17 processes are partitioned in one master and 16 workers. The master process executes the main script while the workers execute the tasks submitted by the master.
New in version 3.1.0.
The mpi4py.util package collects miscellaneous utilities within the intersection of Python and MPI.
New in version 3.1.0.
Pickle protocol 5 (see PEP 574) introduced support for out-of-band buffers, allowing for more efficient handling of certain object types with large memory footprints.
MPI for Python uses the traditional in-band handling of buffers. This approach is appropriate for communicating non-buffer Python objects, or buffer-like objects with small memory footprints. For point-to-point communication, in-band buffer handling allows for the communication of a pickled stream with a single MPI message, at the expense of additional CPU and memory overhead in the pickling and unpickling steps.
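As a plain-Python illustration (independent of mpi4py), the snippet below shows what out-of-band handling means under pickle protocol 5: the pickle stream stays small while the large payload travels separately as buffers. All names here are only for this example:

import pickle
import numpy as np

obj = np.zeros(1024, dtype='i4')
buffers = []  # receives out-of-band pickle.PickleBuffer objects
data = pickle.dumps(obj, protocol=5, buffer_callback=buffers.append)
# 'data' is a small in-band stream; the array payload lives in 'buffers'
clone = pickle.loads(data, buffers=buffers)
assert (clone == obj).all()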
The mpi4py.util.pkl5 module provides communicator wrapper classes reimplementing pickle-based point-to-point communication methods using pickle protocol 5. Handling out-of-band buffers necessarily involves multiple MPI messages, thus increasing latency and hurting performance for small-size data. However, for large-size data, the zero-copy savings of out-of-band buffer handling more than offset the extra latency costs. Additionally, these wrapper methods overcome the infamous 2 GiB message count limit (MPI-1 to MPI-3).
NOTE:
Python 3.8 and above support pickle protocol 5 natively. On earlier Python versions, the pickle5 backport package available on PyPI can be used:

$ python -m pip install pickle5
Custom request class for nonblocking communications.
Custom message class for matching probes.
Base communicator wrapper class.
Intracommunicator wrapper class.
Intercommunicator wrapper class.
test-pkl5-1.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)
robj = comm.recv(None, src, tag=42)
sreq.Free()

assert np.min(robj) == src
assert np.max(robj) == src
test-pkl5-2.py
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)

status = MPI.Status()
rmsg = comm.mprobe(status=status)
assert status.Get_source() == src
assert status.Get_tag() == 42
rreq = rmsg.irecv()
robj = rreq.wait()
sreq.Free()

assert np.max(robj) == src
assert np.min(robj) == src
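Both test scripts require NumPy and substantial memory per process (the payload is 4 GiB per rank, plus the receive copy). Assuming the resources are available, they can be exercised with any small number of processes, for example:

$ mpiexec -n 3 python test-pkl5-1.py
$ mpiexec -n 3 python test-pkl5-2.py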
New in version 3.1.0.
The mpi4py.util.dtlib module provides converter routines between NumPy and MPI datatypes.
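A minimal sketch of the converter routines, assuming NumPy is available (from_numpy_dtype and to_numpy_dtype are the module's two entry points):

from mpi4py import MPI
from mpi4py.util import dtlib
import numpy as np

# NumPy dtype -> MPI datatype, and back again
datatype = dtlib.from_numpy_dtype(np.dtype('i4'))
assert dtlib.to_numpy_dtype(datatype) == np.dtype('i4')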
New in version 3.0.0.
At import time, mpi4py initializes the MPI execution environment calling MPI_Init_thread() and installs an exit hook to automatically call MPI_Finalize() just before the Python process terminates. Additionally, mpi4py overrides the default ERRORS_ARE_FATAL error handler in favor of ERRORS_RETURN, which allows translating MPI errors into Python exceptions. These departures from standard MPI behavior may be controversial, but are quite convenient within the highly dynamic Python programming environment. Third-party code using mpi4py can just from mpi4py import MPI and perform MPI calls without the tedious initialization/finalization handling. MPI errors, once translated automatically to Python exceptions, can be handled with the common try…except…finally clauses; unhandled MPI exceptions will print a traceback, which helps in locating problems in source code.
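A minimal sketch of this behavior, assuming the script runs under mpiexec and that the MPI implementation reports the deliberately invalid destination rank:

from mpi4py import MPI  # importing initializes MPI and installs the exit hook
assert MPI.Is_initialized() and not MPI.Is_finalized()

comm = MPI.COMM_WORLD
try:
    comm.send(None, dest=comm.Get_size())  # ranks run 0..size-1, so this is invalid
except MPI.Exception as exc:               # the MPI error surfaces as a Python exception
    assert exc.Get_error_class() == MPI.ERR_RANK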
Unfortunately, the interplay of automatic MPI finalization and unhandled exceptions may lead to deadlocks. In unattended runs, these deadlocks will drain the battery of your laptop, or burn precious allocation hours in your supercomputing facility.
Consider the following snippet of Python code. Assume this code is stored in a standard Python script file and run with mpiexec in two or more processes.
from mpi4py import MPI

assert MPI.COMM_WORLD.Get_size() > 1
rank = MPI.COMM_WORLD.Get_rank()
if rank == 0:
    1/0
    MPI.COMM_WORLD.send(None, dest=1, tag=42)
elif rank == 1:
    MPI.COMM_WORLD.recv(source=0, tag=42)
Process 0 raises a ZeroDivisionError exception before performing a send call to process 1. As the exception is not handled, the Python interpreter running in process 0 will proceed to exit with non-zero status. However, as mpi4py installed a finalizer hook to call MPI_Finalize() before exit, process 0 will block waiting for other processes to also enter the MPI_Finalize() call. Meanwhile, process 1 will block waiting for a message to arrive from process 0, thus never reaching MPI_Finalize(). The whole MPI execution environment is irremediably in a deadlocked state.
To alleviate this issue, mpi4py offers a simple, alternative command-line execution mechanism based on using the -m flag and implemented with the runpy module. To use this feature, Python code should be run passing -m mpi4py in the command line invoking the Python interpreter. In case of unhandled exceptions, the finalizer hook will call MPI_Abort() on the MPI_COMM_WORLD communicator, thus effectively aborting the MPI execution environment.
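For example, saving the previous snippet as deadlock.py (a hypothetical file name), the run below aborts promptly instead of hanging:

$ mpiexec -n 2 python -m mpi4py deadlock.py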
The use of -m mpi4py to execute Python code on the command line resembles that of the Python interpreter: it accepts a path to a script file, -m mod to run a module, -c cmd to run a command string, or - to read commands from standard input.
mpi4py.MPI | Message Passing Interface. |
Classes
Cartcomm([comm]) | Cartesian topology intracommunicator |
Comm([comm]) | Communicator |
Datatype([datatype]) | Datatype object |
Distgraphcomm([comm]) | Distributed graph topology intracommunicator |
Errhandler([errhandler]) | Error handler |
File([file]) | File handle |
Graphcomm([comm]) | General graph topology intracommunicator |
Grequest([request]) | Generalized request handle |
Group([group]) | Group of processes |
Info([info]) | Info object |
Intercomm([comm]) | Intercommunicator |
Intracomm([comm]) | Intracommunicator |
Message([message]) | Matched message handle |
Op([op]) | Operation object |
Pickle([dumps, loads, protocol]) | Pickle/unpickle Python objects |
Prequest([request]) | Persistent request handle |
Request([request]) | Request handle |
Status([status]) | Status object |
Topocomm([comm]) | Topology intracommunicator |
Win([win]) | Window handle |
memory(buf) | Memory buffer |
Cartesian topology intracommunicator
Methods Summary
Get_cart_rank(coords) | Translate logical coordinates to ranks |
Get_coords(rank) | Translate ranks to logical coordinates |
Get_dim() | Return number of dimensions |
Get_topo() | Return information on the cartesian topology |
Shift(direction, disp) | Return a tuple (source, dest) of process ranks for data shifting with Comm.Sendrecv() |
Sub(remain_dims) | Return cartesian communicators that form lower-dimensional subgrids |
Attributes Summary
coords | coordinates |
dim | number of dimensions |
dims | dimensions |
ndim | number of dimensions |
periods | periodicity |
topo | topology information |
Methods Documentation
Attributes Documentation
Communicator
Methods Summary
Abort([errorcode]) | Terminate MPI execution environment |
Allgather(sendbuf, recvbuf) | Gather to All, gather data from all processes and distribute it to all other processes in a group |
Allgatherv(sendbuf, recvbuf) | Gather to All Vector, gather data from all processes and distribute it to all other processes in a group providing different amounts of data and displacements |
Allreduce(sendbuf, recvbuf[, op]) | Reduce to All |
Alltoall(sendbuf, recvbuf) | All to All Scatter/Gather, send data from all to all processes in a group |
Alltoallv(sendbuf, recvbuf) | All to All Scatter/Gather Vector, send data from all to all processes in a group providing different amounts of data and displacements |
Alltoallw(sendbuf, recvbuf) | Generalized All-to-All communication allowing different counts, displacements and datatypes for each partner |
Barrier() | Barrier synchronization |
Bcast(buf[, root]) | Broadcast a message from one process to all other processes in a group |
Bsend(buf, dest[, tag]) | Blocking send in buffered mode |
Bsend_init(buf, dest[, tag]) | Persistent request for a send in buffered mode |
Call_errhandler(errorcode) | Call the error handler installed on a communicator |
Clone() | Clone an existing communicator |
Compare(comm1, comm2) | Compare two communicators |
Create(group) | Create communicator from group |
Create_group(group[, tag]) | Create communicator from group |
Create_keyval([copy_fn, delete_fn, nopython]) | Create a new attribute key for communicators |
Delete_attr(keyval) | Delete attribute value associated with a key |
Disconnect() | Disconnect from a communicator |
Dup([info]) | Duplicate an existing communicator |
Dup_with_info(info) | Duplicate an existing communicator |
Free() | Free a communicator |
Free_keyval(keyval) | Free an attribute key for communicators |
Gather(sendbuf, recvbuf[, root]) | Gather together values from a group of processes |
Gatherv(sendbuf, recvbuf[, root]) | Gather Vector, gather data to one process from all other processes in a group providing different amounts of data and displacements at the receiving side |
Get_attr(keyval) | Retrieve attribute value by key |
Get_errhandler() | Get the error handler for a communicator |
Get_group() | Access the group associated with a communicator |
Get_info() | Return the hints for a communicator that are currently in use |
Get_name() | Get the print name for this communicator |
Get_parent() | Return the parent intercommunicator for this process |
Get_rank() | Return the rank of this process in a communicator |
Get_size() | Return the number of processes in a communicator |
Get_topology() | Determine the type of topology (if any) associated with a communicator |
Iallgather(sendbuf, recvbuf) | Nonblocking Gather to All |
Iallgatherv(sendbuf, recvbuf) | Nonblocking Gather to All Vector |
Iallreduce(sendbuf, recvbuf[, op]) | Nonblocking Reduce to All |
Ialltoall(sendbuf, recvbuf) | Nonblocking All to All Scatter/Gather |
Ialltoallv(sendbuf, recvbuf) | Nonblocking All to All Scatter/Gather Vector |
Ialltoallw(sendbuf, recvbuf) | Nonblocking Generalized All-to-All |
Ibarrier() | Nonblocking Barrier |
Ibcast(buf[, root]) | Nonblocking Broadcast |
Ibsend(buf, dest[, tag]) | Nonblocking send in buffered mode |
Idup() | Nonblocking duplication of an existing communicator |
Igather(sendbuf, recvbuf[, root]) | Nonblocking Gather |
Igatherv(sendbuf, recvbuf[, root]) | Nonblocking Gather Vector |
Improbe([source, tag, status]) | Nonblocking test for a matched message |
Iprobe([source, tag, status]) | Nonblocking test for a message |
Irecv(buf[, source, tag]) | Nonblocking receive |
Ireduce(sendbuf, recvbuf[, op, root]) | Nonblocking Reduce to Root |
Ireduce_scatter(sendbuf, recvbuf[, ...]) | Nonblocking Reduce-Scatter (vector version) |
Ireduce_scatter_block(sendbuf, recvbuf[, op]) | Nonblocking Reduce-Scatter Block (regular, non-vector version) |
Irsend(buf, dest[, tag]) | Nonblocking send in ready mode |
Is_inter() | Test to see if a comm is an intercommunicator |
Is_intra() | Test to see if a comm is an intracommunicator |
Iscatter(sendbuf, recvbuf[, root]) | Nonblocking Scatter |
Iscatterv(sendbuf, recvbuf[, root]) | Nonblocking Scatter Vector |
Isend(buf, dest[, tag]) | Nonblocking send |
Issend(buf, dest[, tag]) | Nonblocking send in synchronous mode |
Join(fd) | Create an intercommunicator by joining two processes connected by a socket |
Mprobe([source, tag, status]) | Blocking test for a matched message |
Probe([source, tag, status]) | Blocking test for a message |
Recv(buf[, source, tag, status]) | Blocking receive |
Recv_init(buf[, source, tag]) | Create a persistent request for a receive |
Reduce(sendbuf, recvbuf[, op, root]) | Reduce to Root |
Reduce_scatter(sendbuf, recvbuf[, ...]) | Reduce-Scatter (vector version) |
Reduce_scatter_block(sendbuf, recvbuf[, op]) | Reduce-Scatter Block (regular, non-vector version) |
Rsend(buf, dest[, tag]) | Blocking send in ready mode |
Rsend_init(buf, dest[, tag]) | Persistent request for a send in ready mode |
Scatter(sendbuf, recvbuf[, root]) | Scatter data from one process to all other processes in a group |
Scatterv(sendbuf, recvbuf[, root]) | Scatter Vector, scatter data from one process to all other processes in a group providing different amounts of data and displacements at the sending side |
Send(buf, dest[, tag]) | Blocking send |
Send_init(buf, dest[, tag]) | Create a persistent request for a standard send |
Sendrecv(sendbuf, dest[, sendtag, recvbuf, ...]) | Send and receive a message |
Sendrecv_replace(buf, dest[, sendtag, ...]) | Send and receive a message |
Set_attr(keyval, attrval) | Store attribute value associated with a key |
Set_errhandler(errhandler) | Set the error handler for a communicator |
Set_info(info) | Set new values for the hints associated with a communicator |
Set_name(name) | Set the print name for this communicator |
Split([color, key]) | Split communicator by color and key |
Split_type(split_type[, key, info]) | Split communicator by split type |
Ssend(buf, dest[, tag]) | Blocking send in synchronous mode |
Ssend_init(buf, dest[, tag]) | Persistent request for a send in synchronous mode |
allgather(sendobj) | Gather to All |
allreduce(sendobj[, op]) | Reduce to All |
alltoall(sendobj) | All to All Scatter/Gather |
barrier() | Barrier |
bcast(obj[, root]) | Broadcast |
bsend(obj, dest[, tag]) | Send in buffered mode |
f2py(arg) | |
gather(sendobj[, root]) | Gather |
ibsend(obj, dest[, tag]) | Nonblocking send in buffered mode |
improbe([source, tag, status]) | Nonblocking test for a matched message |
iprobe([source, tag, status]) | Nonblocking test for a message |
irecv([buf, source, tag]) | Nonblocking receive |
isend(obj, dest[, tag]) | Nonblocking send |
issend(obj, dest[, tag]) | Nonblocking send in synchronous mode |
mprobe([source, tag, status]) | Blocking test for a matched message |
probe([source, tag, status]) | Blocking test for a message |
py2f() | |
recv([buf, source, tag, status]) | Receive |
reduce(sendobj[, op, root]) | Reduce to Root |
scatter(sendobj[, root]) | Scatter |
send(obj, dest[, tag]) | Send |
sendrecv(sendobj, dest[, sendtag, recvbuf, ...]) | Send and Receive |
ssend(obj, dest[, tag]) | Send in synchronous mode |
Attributes Summary
group | communicator group |
info | communicator info |
is_inter | is intercommunicator |
is_intra | is intracommunicator |
is_topo | is a topology communicator |
name | communicator name |
rank | rank of this process in communicator |
size | number of processes in communicator |
topology | communicator topology type |
Methods Documentation
WARNING:
NOTE:
NOTE:
NOTE:
NOTE:
CAUTION:
NOTE:
CAUTION:
Attributes Documentation
Datatype object
Methods Summary
Commit() | Commit the datatype |
Create_contiguous(count) | Create a contiguous datatype |
Create_darray(size, rank, gsizes, distribs, ...) | Create a datatype representing an HPF-like distributed array on Cartesian process grids |
Create_f90_complex(p, r) | Return a bounded complex datatype |
Create_f90_integer(r) | Return a bounded integer datatype |
Create_f90_real(p, r) | Return a bounded real datatype |
Create_hindexed(blocklengths, displacements) | Create an indexed datatype with displacements in bytes |
Create_hindexed_block(blocklength, displacements) | Create an indexed datatype with constant-sized blocks and displacements in bytes |
Create_hvector(count, blocklength, stride) | Create a vector (strided) datatype |
Create_indexed(blocklengths, displacements) | Create an indexed datatype |
Create_indexed_block(blocklength, displacements) | Create an indexed datatype with constant-sized blocks |
Create_keyval([copy_fn, delete_fn, nopython]) | Create a new attribute key for datatypes |
Create_resized(lb, extent) | Create a datatype with a new lower bound and extent |
Create_struct(blocklengths, displacements, ...) | Create a datatype from a general set of block sizes, displacements and datatypes |
Create_subarray(sizes, subsizes, starts[, order]) | Create a datatype for a subarray of a regular, multidimensional array |
Create_vector(count, blocklength, stride) | Create a vector (strided) datatype |
Delete_attr(keyval) | Delete attribute value associated with a key |
Dup() | Duplicate a datatype |
Free() | Free the datatype |
Free_keyval(keyval) | Free an attribute key for datatypes |
Get_attr(keyval) | Retrieve attribute value by key |
Get_contents() | Retrieve the actual arguments used in the call that created a datatype |
Get_envelope() | Return information on the number and type of input arguments used in the call that created a datatype |
Get_extent() | Return lower bound and extent of datatype |
Get_name() | Get the print name for this datatype |
Get_size() | Return the number of bytes occupied by entries in the datatype |
Get_true_extent() | Return the true lower bound and extent of a datatype |
Match_size(typeclass, size) | Find a datatype matching a specified size in bytes |
Pack(inbuf, outbuf, position, comm) | Pack into contiguous memory according to datatype. |
Pack_external(datarep, inbuf, outbuf, position) | Pack into contiguous memory according to datatype, using a portable data representation (external32). |
Pack_external_size(datarep, count) | Return the upper bound on the amount of space (in bytes) needed to pack a message according to datatype, using a portable data representation (external32). |
Pack_size(count, comm) | Return the upper bound on the amount of space (in bytes) needed to pack a message according to datatype. |
Set_attr(keyval, attrval) | Store attribute value associated with a key |
Set_name(name) | Set the print name for this datatype |
Unpack(inbuf, position, outbuf, comm) | Unpack from contiguous memory according to datatype. |
Unpack_external(datarep, inbuf, position, outbuf) | Unpack from contiguous memory according to datatype, using a portable data representation (external32). |
decode() | Convenience method for decoding a datatype |
f2py(arg) | |
py2f() |
Attributes Summary
combiner | datatype combiner |
contents | datatype contents |
envelope | datatype envelope |
extent | |
is_named | is a named datatype |
is_predefined | is a predefined datatype |
lb | lower bound |
name | datatype name |
size | |
true_extent | true extent |
true_lb | true lower bound |
true_ub | true upper bound |
ub | upper bound |
Methods Documentation
Attributes Documentation
Distributed graph topology intracommunicator
Methods Summary
Get_dist_neighbors() | Return adjacency information for a distributed graph topology |
Get_dist_neighbors_count() | Return adjacency information for a distributed graph topology |
Methods Documentation
Error handler
Methods Summary
Free() | Free an error handler |
f2py(arg) | |
py2f() |
Methods Documentation
File handle
Methods Summary
Call_errhandler(errorcode) | Call the error handler installed on a file |
Close() | Close a file |
Delete(filename[, info]) | Delete a file |
Get_amode() | Return the file access mode |
Get_atomicity() | Return the atomicity mode |
Get_byte_offset(offset) | Return the absolute byte position in the file corresponding to 'offset' etypes relative to the current view |
Get_errhandler() | Get the error handler for a file |
Get_group() | Return the group of processes that opened the file |
Get_info() | Return the hints for a file that are currently in use |
Get_position() | Return the current position of the individual file pointer in etype units relative to the current view |
Get_position_shared() | Return the current position of the shared file pointer in etype units relative to the current view |
Get_size() | Return the file size |
Get_type_extent(datatype) | Return the extent of datatype in the file |
Get_view() | Return the file view |
Iread(buf) | Nonblocking read using individual file pointer |
Iread_all(buf) | Nonblocking collective read using individual file pointer |
Iread_at(offset, buf) | Nonblocking read using explicit offset |
Iread_at_all(offset, buf) | Nonblocking collective read using explicit offset |
Iread_shared(buf) | Nonblocking read using shared file pointer |
Iwrite(buf) | Nonblocking write using individual file pointer |
Iwrite_all(buf) | Nonblocking collective write using individual file pointer |
Iwrite_at(offset, buf) | Nonblocking write using explicit offset |
Iwrite_at_all(offset, buf) | Nonblocking collective write using explicit offset |
Iwrite_shared(buf) | Nonblocking write using shared file pointer |
Open(comm, filename[, amode, info]) | Open a file |
Preallocate(size) | Preallocate storage space for a file |
Read(buf[, status]) | Read using individual file pointer |
Read_all(buf[, status]) | Collective read using individual file pointer |
Read_all_begin(buf) | Start a split collective read using individual file pointer |
Read_all_end(buf[, status]) | Complete a split collective read using individual file pointer |
Read_at(offset, buf[, status]) | Read using explicit offset |
Read_at_all(offset, buf[, status]) | Collective read using explicit offset |
Read_at_all_begin(offset, buf) | Start a split collective read using explicit offset |
Read_at_all_end(buf[, status]) | Complete a split collective read using explicit offset |
Read_ordered(buf[, status]) | Collective read using shared file pointer |
Read_ordered_begin(buf) | Start a split collective read using shared file pointer |
Read_ordered_end(buf[, status]) | Complete a split collective read using shared file pointer |
Read_shared(buf[, status]) | Read using shared file pointer |
Seek(offset[, whence]) | Update the individual file pointer |
Seek_shared(offset[, whence]) | Update the shared file pointer |
Set_atomicity(flag) | Set the atomicity mode |
Set_errhandler(errhandler) | Set the error handler for a file |
Set_info(info) | Set new values for the hints associated with a file |
Set_size(size) | Set the file size |
Set_view([disp, etype, filetype, datarep, info]) | Set the file view |
Sync() | Causes all previous writes to be transferred to the storage device |
Write(buf[, status]) | Write using individual file pointer |
Write_all(buf[, status]) | Collective write using individual file pointer |
Write_all_begin(buf) | Start a split collective write using individual file pointer |
Write_all_end(buf[, status]) | Complete a split collective write using individual file pointer |
Write_at(offset, buf[, status]) | Write using explicit offset |
Write_at_all(offset, buf[, status]) | Collective write using explicit offset |
Write_at_all_begin(offset, buf) | Start a split collective write using explicit offset |
Write_at_all_end(buf[, status]) | Complete a split collective write using explicit offset |
Write_ordered(buf[, status]) | Collective write using shared file pointer |
Write_ordered_begin(buf) | Start a split collective write using shared file pointer |
Write_ordered_end(buf[, status]) | Complete a split collective write using shared file pointer |
Write_shared(buf[, status]) | Write using shared file pointer |
f2py(arg) | |
py2f() |
Attributes Summary
amode | file access mode |
atomicity | |
group | file group |
info | file info |
size | file size |
Methods Documentation
Attributes Documentation
General graph topology intracommunicator
Methods Summary
Get_dims() | Return the number of nodes and edges |
Get_neighbors(rank) | Return list of neighbors of a process |
Get_neighbors_count(rank) | Return number of neighbors of a process |
Get_topo() | Return index and edges |
Attributes Summary
dims | number of nodes and edges |
edges | |
index | |
nedges | number of edges |
neighbors | |
nneighbors | number of neighbors |
nnodes | number of nodes |
topo | topology information |
Methods Documentation
Attributes Documentation
Generalized request handle
Methods Summary
Complete() | Notify that a user-defined request is complete |
Start(query_fn, free_fn, cancel_fn[, args, ...]) | Create and return a user-defined request |
Methods Documentation
Group of processes
Methods Summary
Compare(group1, group2) | Compare two groups |
Difference(group1, group2) | Produce a group from the difference of two existing groups |
Dup() | Duplicate a group |
Excl(ranks) | Produce a group by reordering an existing group and taking only unlisted members |
Free() | Free a group |
Get_rank() | Return the rank of this process in a group |
Get_size() | Return the size of a group |
Incl(ranks) | Produce a group by reordering an existing group and taking only listed members |
Intersection(group1, group2) | Produce a group as the intersection of two existing groups |
Range_excl(ranks) | Create a new group by excluding ranges of processes from an existing group |
Range_incl(ranks) | Create a new group from ranges of ranks in an existing group |
Translate_ranks(group1, ranks1[, group2]) | Translate the ranks of processes in one group to those in another group |
Union(group1, group2) | Produce a group by combining two existing groups |
f2py(arg) | |
py2f() |
Attributes Summary
rank | rank of this process in group |
size | number of processes in group |
Methods Documentation
Attributes Documentation
Info object
Methods Summary
Create() | Create a new, empty info object |
Delete(key) | Remove a (key, value) pair from info |
Dup() | Duplicate an existing info object, creating a new object, with the same (key, value) pairs and the same ordering of keys |
Free() | Free an info object |
Get(key[, maxlen]) | Retrieve the value associated with a key |
Get_nkeys() | Return the number of currently defined keys in info |
Get_nthkey(n) | Return the nth defined key in info. |
Set(key, value) | Add the (key, value) pair to info, and override the value if a value for the same key was previously set |
clear() | info clear |
copy() | info copy |
f2py(arg) | |
get(key[, default]) | info get |
items() | info items |
keys() | info keys |
pop(key, *default) | info pop |
popitem() | info popitem |
py2f() | |
update([other]) | info update |
values() | info values |
Methods Documentation
Intercommunicator
Methods Summary
Get_remote_group() | Access the remote group associated with the inter-communicator |
Get_remote_size() | Intercommunicator remote size |
Merge([high]) | Merge intercommunicator |
Attributes Summary
remote_group | remote group |
remote_size | number of remote processes |
Methods Documentation
Attributes Documentation
Intracommunicator
Methods Summary
Accept(port_name[, info, root]) | Accept a request to form a new intercommunicator |
Cart_map(dims[, periods]) | Return an optimal placement for the calling process on the physical machine |
Connect(port_name[, info, root]) | Make a request to form a new intercommunicator |
Create_cart(dims[, periods, reorder]) | Create cartesian communicator |
Create_dist_graph(sources, degrees, destinations) | Create distributed graph communicator |
Create_dist_graph_adjacent(sources, destinations) | Create distributed graph communicator |
Create_graph(index, edges[, reorder]) | Create graph communicator |
Create_intercomm(local_leader, peer_comm, ...) | Create intercommunicator |
Exscan(sendbuf, recvbuf[, op]) | Exclusive Scan |
Graph_map(index, edges) | Return an optimal placement for the calling process on the physical machine |
Iexscan(sendbuf, recvbuf[, op]) | Nonblocking Exclusive Scan |
Iscan(sendbuf, recvbuf[, op]) | Nonblocking Inclusive Scan |
Scan(sendbuf, recvbuf[, op]) | Inclusive Scan |
Spawn(command[, args, maxprocs, info, root, ...]) | Spawn instances of a single MPI application |
Spawn_multiple(command[, args, maxprocs, ...]) | Spawn instances of multiple MPI applications |
exscan(sendobj[, op]) | Exclusive Scan |
scan(sendobj[, op]) | Inclusive Scan |
Methods Documentation
Matched message handle
Methods Summary
Iprobe(comm[, source, tag, status]) | Nonblocking test for a matched message |
Irecv(buf) | Nonblocking receive of matched message |
Probe(comm[, source, tag, status]) | Blocking test for a matched message |
Recv(buf[, status]) | Blocking receive of matched message |
f2py(arg) | |
iprobe(comm[, source, tag, status]) | Nonblocking test for a matched message |
irecv() | Nonblocking receive of matched message |
probe(comm[, source, tag, status]) | Blocking test for a matched message |
py2f() | |
recv([status]) | Blocking receive of matched message |
Methods Documentation
Operation object
Methods Summary
Create(function[, commute]) | Create a user-defined operation |
Free() | Free the operation |
Is_commutative() | Query reduction operations for their commutativity |
Reduce_local(inbuf, inoutbuf) | Apply a reduction operator to local data |
f2py(arg) | |
py2f() |
Attributes Summary
is_commutative | is commutative |
is_predefined | is a predefined operation |
Methods Documentation
Attributes Documentation
Pickle/unpickle Python objects
Methods Summary
dumps(obj[, buffer_callback]) | Serialize object to pickle data stream. |
loads(data[, buffers]) | Deserialize object from pickle data stream. |
Attributes Summary
PROTOCOL | pickle protocol |
Methods Documentation
Attributes Documentation
Persistent request handle
Methods Summary
Start() | Initiate a communication with a persistent request |
Startall(requests) | Start a collection of persistent requests |
Methods Documentation
Request handle
Methods Summary
Cancel() | Cancel a communication request |
Free() | Free a communication request |
Get_status([status]) | Non-destructive test for the completion of a request |
Test([status]) | Test for the completion of a send or receive |
Testall(requests[, statuses]) | Test for completion of all previously initiated requests |
Testany(requests[, status]) | Test for completion of any previously initiated request |
Testsome(requests[, statuses]) | Test for completion of some previously initiated requests |
Wait([status]) | Wait for a send or receive to complete |
Waitall(requests[, statuses]) | Wait for all previously initiated requests to complete |
Waitany(requests[, status]) | Wait for any previously initiated request to complete |
Waitsome(requests[, statuses]) | Wait for some previously initiated requests to complete |
cancel() | Cancel a communication request |
f2py(arg) | |
get_status([status]) | Non-destructive test for the completion of a request |
py2f() | |
test([status]) | Test for the completion of a send or receive |
testall(requests[, statuses]) | Test for completion of all previously initiated requests |
testany(requests[, status]) | Test for completion of any previously initiated request |
testsome(requests[, statuses]) | Test for completion of some previously initiated requests |
wait([status]) | Wait for a send or receive to complete |
waitall(requests[, statuses]) | Wait for all previously initiated requests to complete |
waitany(requests[, status]) | Wait for any previously initiated request to complete |
waitsome(requests[, statuses]) | Wait for some previously initiated requests to complete |
Methods Documentation
Status object
Methods Summary
Get_count([datatype]) | Get the number of top level elements |
Get_elements(datatype) | Get the number of basic elements in a datatype |
Get_error() | Get message error |
Get_source() | Get message source |
Get_tag() | Get message tag |
Is_cancelled() | Test to see if a request was cancelled |
Set_cancelled(flag) | Set the cancelled state associated with a status |
Set_elements(datatype, count) | Set the number of elements in a status |
Set_error(error) | Set message error |
Set_source(source) | Set message source |
Set_tag(tag) | Set message tag |
f2py(arg) | |
py2f() |
Attributes Summary
cancelled | cancelled state |
count | byte count |
error | |
source | |
tag |
Methods Documentation
NOTE:
NOTE:
Attributes Documentation
Topology intracommunicator
Methods Summary
Ineighbor_allgather(sendbuf, recvbuf) | Nonblocking Neighbor Gather to All |
Ineighbor_allgatherv(sendbuf, recvbuf) | Nonblocking Neighbor Gather to All Vector |
Ineighbor_alltoall(sendbuf, recvbuf) | Nonblocking Neighbor All-to-All |
Ineighbor_alltoallv(sendbuf, recvbuf) | Nonblocking Neighbor All-to-All Vector |
Ineighbor_alltoallw(sendbuf, recvbuf) | Nonblocking Neighbor All-to-All Generalized |
Neighbor_allgather(sendbuf, recvbuf) | Neighbor Gather to All |
Neighbor_allgatherv(sendbuf, recvbuf) | Neighbor Gather to All Vector |
Neighbor_alltoall(sendbuf, recvbuf) | Neighbor All-to-All |
Neighbor_alltoallv(sendbuf, recvbuf) | Neighbor All-to-All Vector |
Neighbor_alltoallw(sendbuf, recvbuf) | Neighbor All-to-All Generalized |
neighbor_allgather(sendobj) | Neighbor Gather to All |
neighbor_alltoall(sendobj) | Neighbor All to All Scatter/Gather |
Attributes Summary
degrees | number of incoming and outgoing neighbors |
indegree | number of incoming neighbors |
inedges | incoming neighbors |
inoutedges | incoming and outgoing neighbors |
outdegree | number of outgoing neighbors |
outedges | outgoing neighbors |
Methods Documentation
Attributes Documentation
Window handle
Methods Summary
Accumulate(origin, target_rank[, target, op]) | Accumulate data into the target process |
Allocate(size[, disp_unit, info, comm]) | Create a window object for one-sided communication |
Allocate_shared(size[, disp_unit, info, comm]) | Create a window object for one-sided communication |
Attach(memory) | Attach a local memory region |
Call_errhandler(errorcode) | Call the error handler installed on a window |
Compare_and_swap(origin, compare, result, ...) | Perform one-sided atomic compare-and-swap |
Complete() | Complete an RMA access epoch begun with Win.Start() |
Create(memory[, disp_unit, info, comm]) | Create a window object for one-sided communication |
Create_dynamic([info, comm]) | Create a window object for one-sided communication |
Create_keyval([copy_fn, delete_fn, nopython]) | Create a new attribute key for windows |
Delete_attr(keyval) | Delete attribute value associated with a key |
Detach(memory) | Detach a local memory region |
Fence([assertion]) | Perform an MPI fence synchronization on a window |
Fetch_and_op(origin, result, target_rank[, ...]) | Perform one-sided read-modify-write |
Flush(rank) | Complete all outstanding RMA operations at the given target |
Flush_all() | Complete all outstanding RMA operations at all targets |
Flush_local(rank) | Complete locally all outstanding RMA operations at the given target |
Flush_local_all() | Complete locally all outstanding RMA operations at all targets |
Free() | Free a window |
Free_keyval(keyval) | Free an attribute key for windows |
Get(origin, target_rank[, target]) | Get data from a memory window on a remote process. |
Get_accumulate(origin, result, target_rank) | Fetch-and-accumulate data into the target process |
Get_attr(keyval) | Retrieve attribute value by key |
Get_errhandler() | Get the error handler for a window |
Get_group() | Return a duplicate of the group of the communicator used to create the window |
Get_info() | Return the hints for a window that are currently in use |
Get_name() | Get the print name associated with the window |
Lock(rank[, lock_type, assertion]) | Begin an RMA access epoch at the target process |
Lock_all([assertion]) | Begin an RMA access epoch at all processes |
Post(group[, assertion]) | Start an RMA exposure epoch |
Put(origin, target_rank[, target]) | Put data into a memory window on a remote process. |
Raccumulate(origin, target_rank[, target, op]) | Accumulate data into the target process using remote memory access |
Rget(origin, target_rank[, target]) | Get data from a memory window on a remote process. |
Rget_accumulate(origin, result, target_rank) | Fetch-and-accumulate data into the target process using remote memory access. |
Rput(origin, target_rank[, target]) | Put data into a memory window on a remote process. |
Set_attr(keyval, attrval) | Store attribute value associated with a key |
Set_errhandler(errhandler) | Set the error handler for a window |
Set_info(info) | Set new values for the hints associated with a window |
Set_name(name) | Set the print name associated with the window |
Shared_query(rank) | Query the process-local address for remote memory segments created with Win.Allocate_shared() |
Start(group[, assertion]) | Start an RMA access epoch for MPI |
Sync() | Synchronize public and private copies of the given window |
Test() | Test whether an RMA exposure epoch has completed |
Unlock(rank) | Complete an RMA access epoch at the target process |
Unlock_all() | Complete an RMA access epoch at all processes |
Wait() | Complete an RMA exposure epoch begun with Win.Post() |
f2py(arg) | |
py2f() | |
tomemory() | Return window memory buffer |
Attributes Summary
attrs | window attributes |
flavor | window create flavor |
group | window group |
info | window info |
model | window memory model |
name | window name |
Methods Documentation
Attributes Documentation
Memory buffer
Methods Summary
allocate(nbytes[, clear]) | Memory allocation |
fromaddress(address, nbytes[, readonly]) | Memory from address and size in bytes |
frombuffer(obj[, readonly]) | Memory from buffer-like object |
release() | Release the underlying buffer exposed by the memory object |
tobytes([order]) | Return the data in the buffer as a byte string |
toreadonly() | Return a readonly version of the memory object |
Attributes Summary
address | Memory address |
format | A string with the format of each element |
itemsize | The size in bytes of each element |
nbytes | Memory size (in bytes) |
obj | The underlying object of the memory |
readonly | Boolean indicating whether the memory is read-only |
Methods Documentation
Attributes Documentation
Exceptions
Exception([ierr]) | Exception class |
Exception class
Methods Summary
Get_error_class() | Error class |
Get_error_code() | Error code |
Get_error_string() | Error string |
Attributes Summary
error_class | error class |
error_code | error code |
error_string | error string |
Methods Documentation
Attributes Documentation
Functions
Add_error_class() | Add an error class to the known error classes |
Add_error_code(errorclass) | Add an error code to an error class |
Add_error_string(errorcode, string) | Associate an error string with an error class or errorcode |
Aint_add(base, disp) | Return the sum of base address and displacement |
Aint_diff(addr1, addr2) | Return the difference between absolute addresses |
Alloc_mem(size[, info]) | Allocate memory for message passing and RMA |
Attach_buffer(buf) | Attach a user-provided buffer for sending in buffered mode |
Close_port(port_name) | Close a port |
Compute_dims(nnodes, dims) | Return a balanced distribution of processes per coordinate direction |
Detach_buffer() | Remove an existing attached buffer |
Finalize() | Terminate the MPI execution environment |
Free_mem(mem) | Free memory allocated with Alloc_mem() |
Get_address(location) | Get the address of a location in memory |
Get_error_class(errorcode) | Convert an error code into an error class |
Get_error_string(errorcode) | Return the error string for a given error class or error code |
Get_library_version() | Obtain the version string of the MPI library |
Get_processor_name() | Obtain the name of the calling processor |
Get_version() | Obtain the version number of the MPI standard supported by the implementation as a tuple (version, subversion) |
Init() | Initialize the MPI execution environment |
Init_thread([required]) | Initialize the MPI execution environment |
Is_finalized() | Indicates whether Finalize has completed |
Is_initialized() | Indicates whether Init has been called |
Is_thread_main() | Indicate whether this thread called Init or Init_thread |
Lookup_name(service_name[, info]) | Lookup a port name given a service name |
Open_port([info]) | Return an address that can be used to establish connections between groups of MPI processes |
Pcontrol(level) | Control profiling |
Publish_name(service_name, port_name[, info]) | Publish a service name |
Query_thread() | Return the level of thread support provided by the MPI library |
Register_datarep(datarep, read_fn, write_fn, ...) | Register user-defined data representations |
Unpublish_name(service_name, port_name[, info]) | Unpublish a service name |
Wtick() | Return the resolution of Wtime |
Wtime() | Return an elapsed time on the calling processor |
get_vendor() | Information about the underlying MPI implementation |
Attributes
UNDEFINED | int UNDEFINED |
ANY_SOURCE | int ANY_SOURCE |
ANY_TAG | int ANY_TAG |
PROC_NULL | int PROC_NULL |
ROOT | int ROOT |
BOTTOM | Bottom BOTTOM |
IN_PLACE | InPlace IN_PLACE |
KEYVAL_INVALID | int KEYVAL_INVALID |
TAG_UB | int TAG_UB |
HOST | int HOST |
IO | int IO |
WTIME_IS_GLOBAL | int WTIME_IS_GLOBAL |
UNIVERSE_SIZE | int UNIVERSE_SIZE |
APPNUM | int APPNUM |
LASTUSEDCODE | int LASTUSEDCODE |
WIN_BASE | int WIN_BASE |
WIN_SIZE | int WIN_SIZE |
WIN_DISP_UNIT | int WIN_DISP_UNIT |
WIN_CREATE_FLAVOR | int WIN_CREATE_FLAVOR |
WIN_FLAVOR | int WIN_FLAVOR |
WIN_MODEL | int WIN_MODEL |
SUCCESS | int SUCCESS |
ERR_LASTCODE | int ERR_LASTCODE |
ERR_COMM | int ERR_COMM |
ERR_GROUP | int ERR_GROUP |
ERR_TYPE | int ERR_TYPE |
ERR_REQUEST | int ERR_REQUEST |
ERR_OP | int ERR_OP |
ERR_BUFFER | int ERR_BUFFER |
ERR_COUNT | int ERR_COUNT |
ERR_TAG | int ERR_TAG |
ERR_RANK | int ERR_RANK |
ERR_ROOT | int ERR_ROOT |
ERR_TRUNCATE | int ERR_TRUNCATE |
ERR_IN_STATUS | int ERR_IN_STATUS |
ERR_PENDING | int ERR_PENDING |
ERR_TOPOLOGY | int ERR_TOPOLOGY |
ERR_DIMS | int ERR_DIMS |
ERR_ARG | int ERR_ARG |
ERR_OTHER | int ERR_OTHER |
ERR_UNKNOWN | int ERR_UNKNOWN |
ERR_INTERN | int ERR_INTERN |
ERR_INFO | int ERR_INFO |
ERR_FILE | int ERR_FILE |
ERR_WIN | int ERR_WIN |
ERR_KEYVAL | int ERR_KEYVAL |
ERR_INFO_KEY | int ERR_INFO_KEY |
ERR_INFO_VALUE | int ERR_INFO_VALUE |
ERR_INFO_NOKEY | int ERR_INFO_NOKEY |
ERR_ACCESS | int ERR_ACCESS |
ERR_AMODE | int ERR_AMODE |
ERR_BAD_FILE | int ERR_BAD_FILE |
ERR_FILE_EXISTS | int ERR_FILE_EXISTS |
ERR_FILE_IN_USE | int ERR_FILE_IN_USE |
ERR_NO_SPACE | int ERR_NO_SPACE |
ERR_NO_SUCH_FILE | int ERR_NO_SUCH_FILE |
ERR_IO | int ERR_IO |
ERR_READ_ONLY | int ERR_READ_ONLY |
ERR_CONVERSION | int ERR_CONVERSION |
ERR_DUP_DATAREP | int ERR_DUP_DATAREP |
ERR_UNSUPPORTED_DATAREP | int ERR_UNSUPPORTED_DATAREP |
ERR_UNSUPPORTED_OPERATION | int ERR_UNSUPPORTED_OPERATION |
ERR_NAME | int ERR_NAME |
ERR_NO_MEM | int ERR_NO_MEM |
ERR_NOT_SAME | int ERR_NOT_SAME |
ERR_PORT | int ERR_PORT |
ERR_QUOTA | int ERR_QUOTA |
ERR_SERVICE | int ERR_SERVICE |
ERR_SPAWN | int ERR_SPAWN |
ERR_BASE | int ERR_BASE |
ERR_SIZE | int ERR_SIZE |
ERR_DISP | int ERR_DISP |
ERR_ASSERT | int ERR_ASSERT |
ERR_LOCKTYPE | int ERR_LOCKTYPE |
ERR_RMA_CONFLICT | int ERR_RMA_CONFLICT |
ERR_RMA_SYNC | int ERR_RMA_SYNC |
ERR_RMA_RANGE | int ERR_RMA_RANGE |
ERR_RMA_ATTACH | int ERR_RMA_ATTACH |
ERR_RMA_SHARED | int ERR_RMA_SHARED |
ERR_RMA_FLAVOR | int ERR_RMA_FLAVOR |
ORDER_C | int ORDER_C |
ORDER_FORTRAN | int ORDER_FORTRAN |
ORDER_F | int ORDER_F |
TYPECLASS_INTEGER | int TYPECLASS_INTEGER |
TYPECLASS_REAL | int TYPECLASS_REAL |
TYPECLASS_COMPLEX | int TYPECLASS_COMPLEX |
DISTRIBUTE_NONE | int DISTRIBUTE_NONE |
DISTRIBUTE_BLOCK | int DISTRIBUTE_BLOCK |
DISTRIBUTE_CYCLIC | int DISTRIBUTE_CYCLIC |
DISTRIBUTE_DFLT_DARG | int DISTRIBUTE_DFLT_DARG |
COMBINER_NAMED | int COMBINER_NAMED |
COMBINER_DUP | int COMBINER_DUP |
COMBINER_CONTIGUOUS | int COMBINER_CONTIGUOUS |
COMBINER_VECTOR | int COMBINER_VECTOR |
COMBINER_HVECTOR | int COMBINER_HVECTOR |
COMBINER_INDEXED | int COMBINER_INDEXED |
COMBINER_HINDEXED | int COMBINER_HINDEXED |
COMBINER_INDEXED_BLOCK | int COMBINER_INDEXED_BLOCK |
COMBINER_HINDEXED_BLOCK | int COMBINER_HINDEXED_BLOCK |
COMBINER_STRUCT | int COMBINER_STRUCT |
COMBINER_SUBARRAY | int COMBINER_SUBARRAY |
COMBINER_DARRAY | int COMBINER_DARRAY |
COMBINER_RESIZED | int COMBINER_RESIZED |
COMBINER_F90_REAL | int COMBINER_F90_REAL |
COMBINER_F90_COMPLEX | int COMBINER_F90_COMPLEX |
COMBINER_F90_INTEGER | int COMBINER_F90_INTEGER |
IDENT | int IDENT |
CONGRUENT | int CONGRUENT |
SIMILAR | int SIMILAR |
UNEQUAL | int UNEQUAL |
CART | int CART |
GRAPH | int GRAPH |
DIST_GRAPH | int DIST_GRAPH |
UNWEIGHTED | int UNWEIGHTED |
WEIGHTS_EMPTY | int WEIGHTS_EMPTY |
COMM_TYPE_SHARED | int COMM_TYPE_SHARED |
BSEND_OVERHEAD | int BSEND_OVERHEAD |
WIN_FLAVOR_CREATE | int WIN_FLAVOR_CREATE |
WIN_FLAVOR_ALLOCATE | int WIN_FLAVOR_ALLOCATE |
WIN_FLAVOR_DYNAMIC | int WIN_FLAVOR_DYNAMIC |
WIN_FLAVOR_SHARED | int WIN_FLAVOR_SHARED |
WIN_SEPARATE | int WIN_SEPARATE |
WIN_UNIFIED | int WIN_UNIFIED |
MODE_NOCHECK | int MODE_NOCHECK |
MODE_NOSTORE | int MODE_NOSTORE |
MODE_NOPUT | int MODE_NOPUT |
MODE_NOPRECEDE | int MODE_NOPRECEDE |
MODE_NOSUCCEED | int MODE_NOSUCCEED |
LOCK_EXCLUSIVE | int LOCK_EXCLUSIVE |
LOCK_SHARED | int LOCK_SHARED |
MODE_RDONLY | int MODE_RDONLY |
MODE_WRONLY | int MODE_WRONLY |
MODE_RDWR | int MODE_RDWR |
MODE_CREATE | int MODE_CREATE |
MODE_EXCL | int MODE_EXCL |
MODE_DELETE_ON_CLOSE | int MODE_DELETE_ON_CLOSE |
MODE_UNIQUE_OPEN | int MODE_UNIQUE_OPEN |
MODE_SEQUENTIAL | int MODE_SEQUENTIAL |
MODE_APPEND | int MODE_APPEND |
SEEK_SET | int SEEK_SET |
SEEK_CUR | int SEEK_CUR |
SEEK_END | int SEEK_END |
DISPLACEMENT_CURRENT | int DISPLACEMENT_CURRENT |
DISP_CUR | int DISP_CUR |
THREAD_SINGLE | int THREAD_SINGLE |
THREAD_FUNNELED | int THREAD_FUNNELED |
THREAD_SERIALIZED | int THREAD_SERIALIZED |
THREAD_MULTIPLE | int THREAD_MULTIPLE |
VERSION | int VERSION |
SUBVERSION | int SUBVERSION |
MAX_PROCESSOR_NAME | int MAX_PROCESSOR_NAME |
MAX_ERROR_STRING | int MAX_ERROR_STRING |
MAX_PORT_NAME | int MAX_PORT_NAME |
MAX_INFO_KEY | int MAX_INFO_KEY |
MAX_INFO_VAL | int MAX_INFO_VAL |
MAX_OBJECT_NAME | int MAX_OBJECT_NAME |
MAX_DATAREP_STRING | int MAX_DATAREP_STRING |
MAX_LIBRARY_VERSION_STRING | int MAX_LIBRARY_VERSION_STRING |
DATATYPE_NULL | Datatype DATATYPE_NULL |
UB | Datatype UB |
LB | Datatype LB |
PACKED | Datatype PACKED |
BYTE | Datatype BYTE |
AINT | Datatype AINT |
OFFSET | Datatype OFFSET |
COUNT | Datatype COUNT |
CHAR | Datatype CHAR |
WCHAR | Datatype WCHAR |
SIGNED_CHAR | Datatype SIGNED_CHAR |
SHORT | Datatype SHORT |
INT | Datatype INT |
LONG | Datatype LONG |
LONG_LONG | Datatype LONG_LONG |
UNSIGNED_CHAR | Datatype UNSIGNED_CHAR |
UNSIGNED_SHORT | Datatype UNSIGNED_SHORT |
UNSIGNED | Datatype UNSIGNED |
UNSIGNED_LONG | Datatype UNSIGNED_LONG |
UNSIGNED_LONG_LONG | Datatype UNSIGNED_LONG_LONG |
FLOAT | Datatype FLOAT |
DOUBLE | Datatype DOUBLE |
LONG_DOUBLE | Datatype LONG_DOUBLE |
C_BOOL | Datatype C_BOOL |
INT8_T | Datatype INT8_T |
INT16_T | Datatype INT16_T |
INT32_T | Datatype INT32_T |
INT64_T | Datatype INT64_T |
UINT8_T | Datatype UINT8_T |
UINT16_T | Datatype UINT16_T |
UINT32_T | Datatype UINT32_T |
UINT64_T | Datatype UINT64_T |
C_COMPLEX | Datatype C_COMPLEX |
C_FLOAT_COMPLEX | Datatype C_FLOAT_COMPLEX |
C_DOUBLE_COMPLEX | Datatype C_DOUBLE_COMPLEX |
C_LONG_DOUBLE_COMPLEX | Datatype C_LONG_DOUBLE_COMPLEX |
CXX_BOOL | Datatype CXX_BOOL |
CXX_FLOAT_COMPLEX | Datatype CXX_FLOAT_COMPLEX |
CXX_DOUBLE_COMPLEX | Datatype CXX_DOUBLE_COMPLEX |
CXX_LONG_DOUBLE_COMPLEX | Datatype CXX_LONG_DOUBLE_COMPLEX |
SHORT_INT | Datatype SHORT_INT |
INT_INT | Datatype INT_INT |
TWOINT | Datatype TWOINT |
LONG_INT | Datatype LONG_INT |
FLOAT_INT | Datatype FLOAT_INT |
DOUBLE_INT | Datatype DOUBLE_INT |
LONG_DOUBLE_INT | Datatype LONG_DOUBLE_INT |
CHARACTER | Datatype CHARACTER |
LOGICAL | Datatype LOGICAL |
INTEGER | Datatype INTEGER |
REAL | Datatype REAL |
DOUBLE_PRECISION | Datatype DOUBLE_PRECISION |
COMPLEX | Datatype COMPLEX |
DOUBLE_COMPLEX | Datatype DOUBLE_COMPLEX |
LOGICAL1 | Datatype LOGICAL1 |
LOGICAL2 | Datatype LOGICAL2 |
LOGICAL4 | Datatype LOGICAL4 |
LOGICAL8 | Datatype LOGICAL8 |
INTEGER1 | Datatype INTEGER1 |
INTEGER2 | Datatype INTEGER2 |
INTEGER4 | Datatype INTEGER4 |
INTEGER8 | Datatype INTEGER8 |
INTEGER16 | Datatype INTEGER16 |
REAL2 | Datatype REAL2 |
REAL4 | Datatype REAL4 |
REAL8 | Datatype REAL8 |
REAL16 | Datatype REAL16 |
COMPLEX4 | Datatype COMPLEX4 |
COMPLEX8 | Datatype COMPLEX8 |
COMPLEX16 | Datatype COMPLEX16 |
COMPLEX32 | Datatype COMPLEX32 |
UNSIGNED_INT | Datatype UNSIGNED_INT |
SIGNED_SHORT | Datatype SIGNED_SHORT |
SIGNED_INT | Datatype SIGNED_INT |
SIGNED_LONG | Datatype SIGNED_LONG |
SIGNED_LONG_LONG | Datatype SIGNED_LONG_LONG |
BOOL | Datatype BOOL |
SINT8_T | Datatype SINT8_T |
SINT16_T | Datatype SINT16_T |
SINT32_T | Datatype SINT32_T |
SINT64_T | Datatype SINT64_T |
F_BOOL | Datatype F_BOOL |
F_INT | Datatype F_INT |
F_FLOAT | Datatype F_FLOAT |
F_DOUBLE | Datatype F_DOUBLE |
F_COMPLEX | Datatype F_COMPLEX |
F_FLOAT_COMPLEX | Datatype F_FLOAT_COMPLEX |
F_DOUBLE_COMPLEX | Datatype F_DOUBLE_COMPLEX |
REQUEST_NULL | Request REQUEST_NULL |
MESSAGE_NULL | Message MESSAGE_NULL |
MESSAGE_NO_PROC | Message MESSAGE_NO_PROC |
OP_NULL | Op OP_NULL |
MAX | Op MAX |
MIN | Op MIN |
SUM | Op SUM |
PROD | Op PROD |
LAND | Op LAND |
BAND | Op BAND |
LOR | Op LOR |
BOR | Op BOR |
LXOR | Op LXOR |
BXOR | Op BXOR |
MAXLOC | Op MAXLOC |
MINLOC | Op MINLOC |
REPLACE | Op REPLACE |
NO_OP | Op NO_OP |
GROUP_NULL | Group GROUP_NULL |
GROUP_EMPTY | Group GROUP_EMPTY |
INFO_NULL | Info INFO_NULL |
INFO_ENV | Info INFO_ENV |
ERRHANDLER_NULL | Errhandler ERRHANDLER_NULL |
ERRORS_RETURN | Errhandler ERRORS_RETURN |
ERRORS_ARE_FATAL | Errhandler ERRORS_ARE_FATAL |
COMM_NULL | Comm COMM_NULL |
COMM_SELF | Intracomm COMM_SELF |
COMM_WORLD | Intracomm COMM_WORLD |
WIN_NULL | Win WIN_NULL |
FILE_NULL | File FILE_NULL |
pickle | Pickle pickle |
If MPI for Python has been significant to a project that leads to an academic publication, please acknowledge that fact by citing the project.
You need to have the following software properly installed in order to build MPI for Python:

• A working MPI implementation, preferably supporting MPI-3 and built with shared/dynamic libraries.

• Python 2.7, 3.5 or above.
If you already have a working MPI (whether you installed it from sources or by using a pre-built package from your favourite GNU/Linux distribution) and the mpicc compiler wrapper is on your search path, you can use pip:
$ python -m pip install mpi4py
NOTE:
If the mpicc compiler wrapper is not on your search path, or you want to build against a specific MPI installation, set the MPICC environment variable to the full path of the compiler wrapper when invoking pip:
$ env MPICC=/path/to/mpicc python -m pip install mpi4py
WARNING:
pip keeps previously built wheel files in its cache. If you change or upgrade your MPI implementation, remove the cached mpi4py build so the package gets recompiled:
$ python -m pip cache remove mpi4py
or ask pip to disable the cache:
$ python -m pip install --no-cache-dir mpi4py
The MPI for Python package is available for download at the project website generously hosted by GitHub. You can use curl or wget to get a release tarball.
$ curl -O https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
$ wget https://github.com/mpi4py/mpi4py/releases/download/X.Y.Z/mpi4py-X.Y.Z.tar.gz
After unpacking the release tarball:
$ tar -zxf mpi4py-X.Y.Z.tar.gz
$ cd mpi4py-X.Y.Z
the package is ready for building.
MPI for Python uses a standard distutils-based build system. However, some distutils commands (like build) have additional options:
If you use an MPI implementation providing an mpicc compiler wrapper (e.g., MPICH, Open MPI), it will be used for compilation and linking. This is the preferred and easiest way to build MPI for Python.
If mpicc is located somewhere in your search path, simply run the build command:
$ python setup.py build
If mpicc is not in your search path or the compiler wrapper has a different name, you can run the build command specifying its location:
$ python setup.py build --mpicc=/where/you/have/mpicc
Alternatively, you can provide all the relevant information about your MPI implementation by editing the file called mpi.cfg. You can use the default section [mpi] or add a new, custom section, for example [other_mpi] (see the examples provided in the mpi.cfg file as a starting point to write your own section):
[mpi]
include_dirs         = /usr/local/mpi/include
libraries            = mpi
library_dirs         = /usr/local/mpi/lib
runtime_library_dirs = /usr/local/mpi/lib

[other_mpi]
include_dirs         = /opt/mpi/include ...
libraries            = mpi ...
library_dirs         = /opt/mpi/lib ...
runtime_library_dirs = /opt/mpi/lib ...
...
and then run the build command, perhaps specifying your custom configuration section:
$ python setup.py build --mpi=other_mpi
After building, the package is ready for install.
If you have root privileges (either by logging in as the root user or by using sudo) and you want to install MPI for Python in your system for all users, just do:
$ python setup.py install
The previous steps will install the mpi4py package at the standard location prefix/lib/pythonX.X/site-packages.
If you do not have root privileges or you want to install MPI for Python for your private use, just do:
$ python setup.py install --user
To quickly test the installation:
$ mpiexec -n 5 python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 5 on localhost.
Hello, World! I am process 1 of 5 on localhost.
Hello, World! I am process 2 of 5 on localhost.
Hello, World! I am process 3 of 5 on localhost.
Hello, World! I am process 4 of 5 on localhost.
If you installed from source, issuing at the command line:
$ mpiexec -n 5 python demo/helloworld.py
or (in the case of ancient MPI-1 implementations):
$ mpirun -np 5 python `pwd`/demo/helloworld.py
will launch a five-process run of the Python interpreter and run the test script demo/helloworld.py from the source distribution.
You can also run all the unittest scripts:
$ mpiexec -n 5 python test/runtests.py
or, if you have nose unit testing framework installed:
$ mpiexec -n 5 nosetests -w test
or, if you have py.test unit testing framework installed:
$ mpiexec -n 5 py.test test/
WARNING:
Some MPI-1 implementations (notably, MPICH 1) do require the actual command-line arguments to be passed at the time MPI_Init() is called. In this case, you will need to use a rebuilt, MPI-enabled Python interpreter executable. A basic implementation (targeting Python 2.X) of what is required is shown below:
#include <Python.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int status, flag;
  MPI_Init(&argc, &argv);
  status = Py_Main(argc, argv);
  MPI_Finalized(&flag);
  if (!flag) MPI_Finalize();
  return status;
}
The source code above is straightforward; compiling it should also be. However, the linking step is trickier: special flags have to be passed to the linker depending on your platform. To relieve you of such low-level details, MPI for Python provides some pure-distutils-based support to build and install an MPI-enabled Python interpreter executable:
$ cd mpi4py-X.X.X
$ python setup.py build_exe [--mpi=<name>|--mpicc=/path/to/mpicc]
$ [sudo] python setup.py install_exe [--install-dir=$HOME/bin]
After the above steps you should have the MPI-enabled interpreter installed as prefix/bin/pythonX.X-mpi (or $HOME/bin/pythonX.X-mpi). Assuming that prefix/bin (or $HOME/bin) is listed on your PATH, you should be able to enter your MPI-enabled Python interactively, for example:
$ python2.7-mpi
Python 2.7.8 (default, Nov 10 2014, 08:19:18)
[GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/usr/bin/python2.7-mpi'
>>>
The list below gives quick instructions for building some of the open-source MPI implementations with support for shared/dynamic libraries on POSIX environments.
MPICH:

$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-shared --prefix=/usr/local/mpich
$ make
$ make install

Open MPI:

$ tar -zxf openmpi-X.X.X.tar.gz
$ cd openmpi-X.X.X
$ ./configure --prefix=/usr/local/openmpi
$ make all
$ make install

MPICH 1:

$ tar -zxf mpich-X.X.X.tar.gz
$ cd mpich-X.X.X
$ ./configure --enable-sharedlib --prefix=/usr/local/mpich1
$ make
$ make install
Perhaps you will need to set the LD_LIBRARY_PATH environment variable (using export, setenv, or whatever applies to your system) to point to the directory containing the MPI libraries. If you get runtime linking errors when running MPI programs, the following lines can be added to your login shell script (.profile, .bashrc, etc.).
MPICH:

MPI_DIR=/usr/local/mpich
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

Open MPI:

MPI_DIR=/usr/local/openmpi
export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

MPICH 1:

MPI_DIR=/usr/local/mpich1
export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH
export MPICH_USE_SHLIB=yes
Lisandro Dalcin
2022, Lisandro Dalcin
November 7, 2022 | 3.1 |