pyfr - PyFR Documentation
PyFR is an open-source Python-based framework for solving advection-diffusion type problems on streaming architectures using the Flux Reconstruction approach of Huynh. The framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. It is also designed to target a range of hardware platforms via use of an in-built domain-specific language derived from the Mako templating engine. The current release (PyFR 1.5.0) has the following capabilities:
To cite PyFR, please reference the following paper:
Development of PyFR is supported by the Engineering and Physical Sciences Research Council, Innovate UK, the European Commission, BAE Systems, Airbus, and the Air Force Office of Scientific Research. We are also grateful for hardware donations from Nvidia, Intel, and AMD.
High-order numerical methods for unstructured grids combine the superior accuracy of high-order spectral or finite difference methods with the geometrical flexibility of low-order finite volume or finite element schemes. The Flux Reconstruction (FR) approach unifies various high-order schemes for unstructured grids within a single framework. Additionally, the FR approach exhibits a significant degree of element locality, and is thus able to run efficiently on modern streaming architectures, such as Graphical Processing Units (GPUs). The aforementioned properties of FR mean it offers a promising route to performing affordable, and hence industrially relevant, scale-resolving simulations of hitherto intractable unsteady flows (involving separation, acoustics etc.) within the vicinity of real-world engineering geometries. A detailed overview of the FR approach is given in:
The linear stability of an FR scheme depends on the form of the correction function. Linear stability issues are discussed in:
The non-linear stability of an FR scheme depends on the location of the solution points. Non-linear stability issues are discussed in:
PyFR can be obtained here.
PyFR 1.5.0 has a hard dependency on Python 3.3+ and the following Python packages:
Note that due to a bug in numpy, PyFR is not compatible with 32-bit Python distributions.
The CUDA backend targets NVIDIA GPUs with a compute capability of 2.0 or greater. The backend requires:
The MIC backend targets Intel Xeon Phi co-processors. The backend requires:
The OpenCL backend targets a range of accelerators including GPUs from AMD and NVIDIA. The backend requires:
The OpenMP backend targets multi-core CPUs. The backend requires:
To partition meshes for running in parallel it is also necessary to have one of the following partitioners installed:
To import CGNS meshes it is necessary to have the following installed:
Before running PyFR 1.5.0 it is first necessary to either install the software using the provided setup.py installer or add the root PyFR directory to PYTHONPATH using:
user@computer ~/PyFR$ export PYTHONPATH=.:$PYTHONPATH
To manage installation of Python dependencies we strongly recommend using pip and virtualenv.
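For example, one possible workflow (the environment name pyfr-venv below is arbitrary) is to create and activate a virtual environment and then install PyFR from PyPI:
user@computer ~/PyFR$ virtualenv --python=python3 pyfr-venv
user@computer ~/PyFR$ source pyfr-venv/bin/activate
(pyfr-venv) user@computer ~/PyFR$ pip install pyfr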
PyFR 1.5.0 uses three distinct file formats:
The following commands are available from the pyfr program:
Example:
pyfr import mesh.msh mesh.pyfrm
Example:
pyfr partition 2 mesh.pyfrm solution.pyfrs .
pyfr run mesh.pyfrm configuration.ini
pyfr restart mesh.pyfrm solution.pyfrs
pyfr export mesh.pyfrm solution.pyfrs solution.vtu
pyfr can be run in parallel. To do so prefix pyfr with mpirun -n <cores/devices>. Note that the mesh must be pre-partitioned, and the number of cores or devices must be equal to the number of partitions.
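For example, to run on a mesh that has been partitioned into two pieces:
mpirun -n 2 pyfr run mesh.pyfrm configuration.ini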
The .ini configuration file parameterises the simulation. It is written in the INI format. Parameters are grouped into sections. The roles of each section and their associated parameters are described below.
Parameterises the backend with
Example:
[backend]
precision = double
rank-allocator = linear
Parameterises the CUDA backend with
Example:
[backend-cuda]
device-id = round-robin
gimmik-max-nnz = 512
mpi-type = standard
block-1d = 64
block-2d = 128, 2
Parameterises the MIC backend with
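An illustrative example, assuming the backend selects a co-processor via a device-id option analogous to the CUDA and OpenCL backends:
[backend-mic]
device-id = local-rank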
Parameterises the OpenCL backend with
Example:
[backend-opencl]
platform-id = 0
device-type = gpu
device-id = local-rank
gimmik-max-nnz = 512
local-size-1d = 16
local-size-2d = 128, 1
Parameterises the OpenMP backend with
Example:
[backend-openmp]
cc = gcc
cblas = example/path/libBLAS.dylib
cblas-type = parallel
Sets constants used in the simulation with
Example:
[constants]
gamma = 1.4
mu = 0.001
Pr = 0.72
Parameterises the solver with
Example:
[solver]
system = navier-stokes
order = 3
anti-alias = flux
viscosity-correction = none
shock-capturing = artificial-viscosity
Parameterises the time-integration scheme used by the solver with
where
std requires
where
pi only works with rk34 and rk45 and requires
dual requires
where
none requires
Example:
[solver-time-integrator]
formulation = std
scheme = rk45
controller = pi
tstart = 0.0
tend = 10.0
dt = 0.001
atol = 0.00001
rtol = 0.00001
errest-norm = l2
safety-fact = 0.9
min-fact = 0.3
max-fact = 2.5
Parameterises the interfaces with
Example:
[solver-interfaces]
riemann-solver = rusanov
ldg-beta = 0.5
ldg-tau = 0.1
Parameterises the line interfaces with
Example:
[solver-interfaces-line]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
Parameterises the triangular interfaces with
Example:
[solver-interfaces-tri]
flux-pts = williams-shunn
quad-deg = 10
quad-pts = williams-shunn
Parameterises the quadrilateral interfaces with
Example:
[solver-interfaces-quad]
flux-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
Parameterises the triangular elements with
Example:
[solver-elements-tri]
soln-pts = williams-shunn
quad-deg = 10
quad-pts = williams-shunn
Parameterises the quadrilateral elements with
Example:
[solver-elements-quad]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
Parameterises the hexahedral elements with
Example:
[solver-elements-hex]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = gauss-legendre
Parameterises the tetrahedral elements with
Example:
[solver-elements-tet]
soln-pts = shunn-ham
quad-deg = 10
quad-pts = shunn-ham
Parameterises the prismatic elements with
Example:
[solver-elements-pri]
soln-pts = williams-shunn~gauss-legendre
quad-deg = 10
quad-pts = williams-shunn~gauss-legendre
Parameterises the pyramidal elements with
Example:
[solver-elements-pyr]
soln-pts = gauss-legendre
quad-deg = 10
quad-pts = witherden-vincent
Parameterises solution, space (x, y, [z]), and time (t) dependent source terms with
Example:
[solver-source-terms]
rho = t
rhou = x*y*sin(y)
rhov = z*rho
rhow = 1.0
E = 1.0/(1.0+x)
Parameterises artificial viscosity for shock capturing with
Example:
[solver-artificial-viscosity]
max-artvisc = 0.01
s0 = 0.01
kappa = 5.0
Parameterises an exponential solution filter with
Example:
[soln-filter]
nsteps = 10
alpha = 36.0
order = 16
cutoff = 1
Periodically write the solution to disk in the pyfrs format. Parameterised with
Example:
[soln-plugin-writer]
dt-out = 0.01
basedir = .
basename = files-{t:.2f}
post-action = echo "Wrote file {soln} at time {t} for mesh {mesh}."
post-action-mode = blocking
Periodically integrates the pressure and viscous stress on the boundary labelled name and writes out the resulting force vectors to a CSV file. Parameterised with
Example:
[soln-plugin-fluidforce-wing]
nsteps = 10
file = wing-forces.csv
header = true
Periodically checks the solution for NaN values. Parameterised with
Example:
[soln-plugin-nancheck]
nsteps = 10
Periodically calculates the residual and writes it out to a CSV file. Parameterised with
Example:
[soln-plugin-residual]
nsteps = 10
file = residual.csv
header = true
Write time-step statistics out to a CSV file. Parameterised with
Example:
[soln-plugin-dtstats]
flushsteps = 100
file = dtstats.csv
header = true
Periodically samples specific points in the volume and writes them out to a CSV file. The plugin actually samples the solution point closest to each sample point, hence a slight discrepancy in the output sampling locations is to be expected. A nearest-neighbour search is used to locate the closest solution point to the sample point. The location process automatically takes advantage of scipy.spatial.cKDTree where available. Parameterised with
Example:
[soln-plugin-sampler]
nsteps = 10
samp-pts = [(1.0, 0.7, 0.0), (1.0, 0.8, 0.0)]
format = primitive
file = point-data.csv
header = true
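The nearest-neighbour lookup described above can be illustrated with a short standalone Python sketch; the solution point coordinates below are randomly generated stand-ins, since in PyFR they come from the mesh and the chosen solution point set:
import numpy as np
from scipy.spatial import cKDTree

# Stand-in solution point coordinates (n points x 3 dimensions)
soln_pts = np.random.rand(1000, 3)

# Sample points as they would appear in samp-pts
samp_pts = np.array([(1.0, 0.7, 0.0), (1.0, 0.8, 0.0)])

# Build the tree once, then find the closest solution point to each sample
tree = cKDTree(soln_pts)
dists, idxs = tree.query(samp_pts)

# idxs[i] indexes the solution point nearest to samp_pts[i];
# dists[i] is the corresponding discrepancy in the sampling location
print(idxs, dists)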
Time average quantities. Parameterised with
Example:
[soln-plugin-tavg]
nsteps = 10
dt-out = 2.0
basedir = .
basename = files-{t:06.2f}
avg-p = p
avg-p2 = p*p
avg-vel = sqrt(u*u + v*v)
Parameterises constant, or if available space (x, y, [z]) and time (t) dependent, boundary condition labelled name in the .pyfrm file with
where
char-riem-inv requires
no-slp-isot-wall requires
sub-in-frv requires
sub-in-ftpttang requires
sub-out-fp requires
sup-in-fa requires
Example:
[soln-bcs-bcwallupper]
type = no-slp-isot-wall
cpTw = 10.0
u = 1.0
Parameterises space (x, y, [z]) dependent initial conditions with
Example:
[soln-ics]
rho = 1.0
u = x*y*sin(y)
v = z
w = 1.0
p = 1.0/(1.0+x)
Proceed with the following steps to run a serial 2D Couette flow simulation on a mixed unstructured mesh:
pyfr import couette_flow_2d.msh couette_flow_2d.pyfrm
pyfr run -b cuda -p couette_flow_2d.pyfrm couette_flow_2d.ini
pyfr export couette_flow_2d.pyfrm couette_flow_2d-040.pyfrs couette_flow_2d-040.vtu -d 4
Proceed with the following steps to run a parallel 2D Euler vortex simulation on a structured mesh:
pyfr import euler_vortex_2d.msh euler_vortex_2d.pyfrm
pyfr partition 2 euler_vortex_2d.pyfrm .
mpirun -n 2 pyfr run -b cuda -p euler_vortex_2d.pyfrm euler_vortex_2d.ini
pyfr export euler_vortex_2d.pyfrm euler_vortex_2d-100.0.pyfrs euler_vortex_2d-100.0.vtu -d 4
The symbolic link pyfr.scripts.pyfr points to the script pyfr.scripts.main, which is where it all starts! Specifically, the function process_run calls the function _process_common, which in turn calls the function get_solver, returning an Integrator -- a composite of a Controller and a Stepper. The Integrator has a method named run, which is then called to run the simulation.
A Controller acts to advance the simulation in time. Specifically, a Controller has a method named advance_to which advances a System to a specified time. There are three types of Controller available in PyFR 1.5.0:
Types of Controller are related via the following inheritance diagram:
A Stepper acts to advance the simulation by a single time-step. Specifically, a Stepper has a method named step which advances a System by a single time-step. There are 11 types of Stepper available in PyFR 1.5.0:
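The relationship between an Integrator, a Controller, and a Stepper can be illustrated with the following schematic Python sketch. The class names and bodies are simplified stand-ins for exposition and are not PyFR's actual implementation:
class ExampleStepper:
    # Advances the solution by a single time-step of size dt,
    # e.g. by looping over Runge-Kutta stages that invoke System.rhs
    def step(self, t, dt):
        pass

class ExampleController:
    # Advances the simulation to a specified time by deciding how many
    # (and how large) individual time-steps to take
    def advance_to(self, t):
        while self.tcurr < t:
            dt = min(self.dt, t - self.tcurr)
            self.step(self.tcurr, dt)
            self.tcurr += dt

class ExampleIntegrator(ExampleController, ExampleStepper):
    # A composite of a Controller and a Stepper
    def __init__(self, tstart, tend, dt):
        self.tcurr, self.tend, self.dt = tstart, tend, dt

    def run(self):
        self.advance_to(self.tend)

ExampleIntegrator(0.0, 10.0, 0.001).run()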
Types of Stepper are related via the following inheritance diagram:
A System holds information/data for the system, including Elements, Interfaces, and the Backend with which the simulation is to run. A System has a method named rhs, which obtains the divergence of the flux (the 'right-hand-side') at each solution point. The method rhs invokes various kernels which have been pre-generated and loaded into queues. A System also has a method named _gen_kernels which acts to generate all the kernels required by a particular System. A kernel is an instance of a 'one-off' class with a method named run that implements the required kernel functionality. Individual kernels are produced by a kernel provider. PyFR 1.5.0 has various types of kernel provider. A Pointwise Kernel Provider produces point-wise kernels such as Riemann solvers and flux functions etc. These point-wise kernels are specified using an in-built platform-independent templating language derived from Mako, henceforth referred to as PyFR-Mako. There are two types of System available in PyFR 1.5.0:
Types of System are related via the following inheritance diagram:
An Elements holds information/data for a group of elements. There are two types of Elements available in PyFR 1.5.0:
Types of Elements are related via the following inheritance diagram:
An Interfaces holds information/data for a group of interfaces. There are four types of (non-boundary) Interfaces available in PyFR 1.5.0:
Types of (non-boundary) Interfaces are related via the following inheritance diagram:
A Backend holds information/data for a backend. There are four types of Backend available in PyFR 1.5.0:
Types of Backend are related via the following inheritance diagram:
A Pointwise Kernel Provider produces point-wise kernels. Specifically, a Pointwise Kernel Provider has a method named register, which adds a new method to an instance of a Pointwise Kernel Provider. This new method, when called, returns a kernel. A kernel is an instance of a 'one-off' class with a method named run that implements the required kernel functionality. The kernel functionality itself is specified using PyFR-Mako. Hence, a Pointwise Kernel Provider also has a method named _render_kernel, which renders PyFR-Mako into low-level platform-specific code. The _render_kernel method first sets the context for Mako (i.e. details about the Backend etc.) and then uses Mako to begin rendering the PyFR-Mako specification. When Mako encounters a pyfr:kernel, an instance of a Kernel Generator is created, which is used to render the body of the pyfr:kernel. There are four types of Pointwise Kernel Provider available in PyFR 1.5.0:
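The pattern of a provider whose register method yields 'one-off' kernel objects with a run method can be sketched schematically in Python; the names below are invented for illustration and do not correspond to PyFR's actual classes:
class ExampleKernel:
    # A 'one-off' object wrapping a single piece of prepared functionality
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

    def run(self):
        return self.fn(*self.args)

class ExamplePointwiseKernelProvider:
    # register attaches a new kernel-producing method to the provider;
    # calling that method returns a kernel ready to be run
    def register(self, name, fn):
        setattr(self, name, lambda *args: ExampleKernel(fn, *args))

provider = ExamplePointwiseKernelProvider()
provider.register('scale', lambda x, fac: [fac*v for v in x])
kern = provider.scale([1.0, 2.0, 3.0], 0.5)
print(kern.run())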
Types of Pointwise Kernel Provider are related via the following inheritance diagram:
A Kernel Generator renders the PyFR-Mako in a pyfr:kernel into low-level platform-specific code. Specifically, a Kernel Generator has a method named render, which applies Backend-specific regexes and adds Backend-specific 'boilerplate' code to produce the low-level platform-specific source, which is then compiled, linked, and loaded. There are four types of Kernel Generator available in PyFR 1.5.0:
Types of Kernel Generator are related via the following inheritance diagram:
PyFR-Mako kernels are specifications of point-wise functionality that can be invoked directly from within PyFR. They are opened with a header of the form:
<%pyfr:kernel name='kernel-name' ndim='data-dimensionality' [argument-name='argument-intent argument-attribute argument-data-type' ...]>
where
and are closed with a footer of the form:
</%pyfr:kernel>
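As an illustration, a hypothetical kernel that scales each conserved variable by a constant factor might be written as follows (the kernel name, the argument names, and the assumption that nvars is available in the template context are all specific to this example):
<%pyfr:kernel name='scale' ndim='2'
              u='inout fpdtype_t[${str(nvars)}]'
              fac='scalar fpdtype_t'>
% for i in range(nvars):
    u[${i}] *= fac;
% endfor
</%pyfr:kernel>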
PyFR-Mako macros are specifications of point-wise functionality that cannot be invoked directly from within PyFR, but can be embedded into PyFR-Mako kernels. PyFR-Mako macros can be viewed as building blocks for PyFR-mako kernels. They are opened with a header of the form:
<%pyfr:macro name='macro-name' params='[parameter-name, ...]'>
where
and are closed with a footer of the form:
</%pyfr:macro>
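For example, a hypothetical macro computing a two-dimensional kinetic energy might read (the macro and parameter names are invented for this example):
<%pyfr:macro name='kinetic_energy' params='rho, rhou, rhov, ke'>
    ke = 0.5*(rhou*rhou + rhov*rhov)/rho;
</%pyfr:macro>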
PyFR-Mako macros are embedded within a kernel using an expression of the following form:
${pyfr.expand('macro-name', ['parameter-name', ...])};
where
Basic functionality can be expressed using a restricted subset of the C programming language. Specifically, use of the following is allowed:
However, conditional if statements, as well as for/while loops, are not allowed.
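As an illustration, a typical fragment of such point-wise functionality (with variable names invented for this example) might read:
fpdtype_t invrho = 1.0/rho;
fpdtype_t p = 0.4*(E - 0.5*invrho*(rhou*rhou + rhov*rhov));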
Mako expression substitution can be used to facilitate PyFR-Mako kernel specification. A Python expression prescribed thus, ${expression}, is substituted for its result when the PyFR-Mako kernel specification is interpreted at runtime.
Example:
E = s[${ndims - 1}]
Mako conditionals can be used to facilitate PyFR-Mako kernel specification. Conditionals are opened with % if condition: and closed with % endif. Note that such conditionals are evaluated when the PyFR-Mako kernel specification is interpreted at runtime; they are not embedded into the low-level kernel.
Example:
% if ndims == 2:
fout[0][1] += t_xx; fout[1][1] += t_xy;
fout[0][2] += t_xy; fout[1][2] += t_yy;
fout[0][3] += u*t_xx + v*t_xy + ${-c['mu']*c['gamma']/c['Pr']}*T_x;
fout[1][3] += u*t_xy + v*t_yy + ${-c['mu']*c['gamma']/c['Pr']}*T_y;
% endif
Mako loops can be used to facilitate PyFR-Mako kernel specification. Loops are opened with % for loop-expression: and closed with % endfor. Note that such loops are unrolled when the PyFR-Mako kernel specification is interpreted at runtime; they are not embedded into the low-level kernel.
Example:
% for i in range(ndims):
rhov[${i}] = s[${i + 1}];
v[${i}] = invrho*rhov[${i}];
% endfor
2013-2019, Imperial College London