MPI_File_read_at_all_end(3) | Open MPI
MPI_File_read_at_all_end - Reads a file at explicitly specified offsets; ending part of a split collective routine (blocking).
C Syntax
    #include <mpi.h>

    int MPI_File_read_at_all_end(MPI_File fh, void *buf,
        MPI_Status *status)
Fortran Syntax
    USE MPI
    ! or the older form: INCLUDE 'mpif.h'

    MPI_FILE_READ_AT_ALL_END(FH, BUF, STATUS, IERROR)
        <type>    BUF(*)
        INTEGER   FH, STATUS(MPI_STATUS_SIZE), IERROR
Fortran 2008 Syntax
    USE mpi_f08

    MPI_File_read_at_all_end(fh, buf, status, ierror)
        TYPE(MPI_File), INTENT(IN) :: fh
        TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
        TYPE(MPI_Status) :: status
        INTEGER, OPTIONAL, INTENT(OUT) :: ierror
C++ Syntax
    #include <mpi.h>

    void MPI::File::Read_at_all_end(void* buf, MPI::Status& status)

    void MPI::File::Read_at_all_end(void* buf)
MPI_File_read_at_all_end is the ending part of a split collective routine; it stores in status the number of elements actually read from the file associated with fh. MPI_File_read_at_all_end blocks until the operation initiated by MPI_File_read_at_all_begin completes. The data is read from those parts of the file specified by the current view. All other fields of status are undefined.
All the nonblocking collective routines for data access are "split" into two routines, each with _begin or _end as a suffix. These split collective routines are subject to the semantic rules described in Section 9.4.5 of the MPI-2 standard.
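As an illustrative sketch only (not part of this page), the begin/end pair is typically used as follows; the file name "datafile", the per-rank offsets, and the count of 100 MPI_INT elements are assumptions made for the example:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_File   fh;
        MPI_Status status;
        int        rank, count;
        int        buf[100];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* "datafile" is a placeholder name for this sketch. */
        MPI_File_open(MPI_COMM_WORLD, "datafile",
                      MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

        /* Start the split collective read: each rank targets its own
           block of the file at an explicit byte offset (default view). */
        MPI_File_read_at_all_begin(fh, (MPI_Offset)rank * sizeof(buf),
                                   buf, 100, MPI_INT);

        /* Unrelated computation may be overlapped with the pending read,
           but buf must not be accessed until the _end call returns. */

        /* Block until the operation completes; status reports the
           number of elements actually read. */
        MPI_File_read_at_all_end(fh, buf, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        printf("rank %d read %d elements\n", rank, count);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Per the split-collective semantics referenced above, only one split collective operation may be active on a given file handle at a time, and the buffer must not be accessed between the _begin and _end calls.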
Almost all MPI routines return an error value: C routines return it as the value of the function, and Fortran routines return it in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
Before the error value is returned, the current MPI error handler is called. For MPI I/O function errors, the default error handler is set to MPI_ERRORS_RETURN. The error handler may be changed with MPI_File_set_errhandler; the predefined error handler MPI_ERRORS_ARE_FATAL may be used to make I/O errors fatal. Note that MPI does not guarantee that an MPI program can continue past an error.
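For example (an illustrative fragment reusing fh, buf, and status from the sketch above), I/O errors on a particular file handle can be made fatal, or the returned code can be checked explicitly under the default MPI_ERRORS_RETURN handler:

    /* Make subsequent I/O errors on this file handle fatal. */
    MPI_File_set_errhandler(fh, MPI_ERRORS_ARE_FATAL);

    /* Or, with the default MPI_ERRORS_RETURN handler, test the
       return code directly. */
    int rc = MPI_File_read_at_all_end(fh, buf, &status);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int  len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_File_read_at_all_end failed: %s\n", msg);
    }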
May 26, 2022 | 4.1.4