Fortran Bindings¶
Main Functions¶
In general usage, the aim of the CPL library is to provide a minimal set of functions to facilitate coupling. These include an initialisation, a setup routine used on both the CFD and MD sides, and send and receive commands. There is also a helper function, CPL_get_arrays, to provide send and receive arrays of the required size.
CPL init¶
- subroutine cpl_init(callingrealm, returned_realm_comm, ierror)¶
(cfd+md) Splits MPI_COMM_WORLD in both the CFD and MD codes respectively and creates an intercommunicator between CFD and MD
Remarks
Assumes MPI has been initialised (MPI_init) and the communicator MPI_COMM_WORLD exists and contains all processors in both the CFD and MD regions
Synopsis
CPL_init(callingrealm, RETURNED_REALM_COMM, ierror)
Inputs
callingrealm
Should identify calling processor as either CFD_REALM (integer with value 1) or MD_REALM (integer with value 2).
Outputs
RETURNED_REALM_COMM
Communicator based on callingrealm value local to CFD or MD processor and resulting from the split of MPI_COMM_WORLD
ierror
Error flag
Example
program main_CFD
    use cpl, only : CPL_init, CPL_finalize
    use mpi
    implicit none

    integer :: rank, nprocs, ierr
    integer :: CFD_COMM
    integer, parameter :: CFD_realm=1

    !Initialise MPI
    call MPI_Init(ierr)

    !Create CFD Comm by splitting world
    call CPL_init(CFD_realm, CFD_COMM, ierr)

    !Get local processor rank and print
    call MPI_comm_size(CFD_COMM, nprocs, ierr)
    call MPI_comm_rank(CFD_COMM, rank, ierr)
    print*, "CFD code processor ", rank+1, " of ", nprocs

    !No need for separate CPL finalise as MPI finalise takes care of this
    call CPL_finalize(ierr)
    call MPI_comm_free(CFD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_CFD
program main_MD
    use cpl, only : CPL_init, CPL_finalize
    use mpi
    implicit none

    integer :: rank, nprocs, ierr
    integer :: MD_COMM
    integer, parameter :: MD_realm=2

    !Initialise MPI
    call MPI_Init(ierr)

    !Create MD Comm by splitting world
    call CPL_init(MD_realm, MD_COMM, ierr)

    call MPI_comm_size(MD_COMM, nprocs, ierr)
    call MPI_comm_rank(MD_COMM, rank, ierr)
    print*, "MD code processor ", rank+1, " of ", nprocs

    !No need for separate CPL finalise as MPI finalise takes care of this
    call CPL_finalize(ierr)
    call MPI_comm_free(MD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_MD
Errors
COUPLER_ERROR_REALM = 1     ! wrong realm value
COUPLER_ERROR_ONE_REALM = 2 ! one realm missing
COUPLER_ERROR_INIT = 3      ! initialisation error
- Parameters
callingrealm [integer,in] :: CFD_REALM=1 or MD_REALM=2
returned_realm_comm [integer,out]
ierror [integer,out]
- Use
mpi
CPL Setup MD¶
- subroutine cpl_setup_md(icomm_grid, xyzl, xyz_orig)¶
Initialisation routine for coupler module - every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Remarks
Assumes CPL has been initialised with CPL_init and the MD realm communicator exists
Synopsis
cpl_setup_md(icomm_grid, xyzL, xyz_orig)
Inputs
icomm_grid
Communicator based on MD processor topology returned from a call to MPI_CART_CREATE.
xyzL
MD domain size.
xyz_orig
MD origin.
- Parameters
icomm_grid [integer,in]
xyzl (3) [real,in]
xyz_orig (3) [real,in]
- Use
mpi
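As a minimal sketch of the expected MD-side call sequence (assuming a single MD process; the domain size, origin and topology values are illustrative, as in the CPL_get_arrays examples below):

program main_MD_setup
    use cpl, only : CPL_init, CPL_setup_md, CPL_finalize
    use mpi
    implicit none

    integer :: MD_COMM, CART_COMM, ierr
    integer, parameter :: MD_realm=2

    call MPI_Init(ierr)
    !Split MPI_COMM_WORLD into the MD realm communicator
    call CPL_init(MD_realm, MD_COMM, ierr)
    !Cartesian topology over the MD processors (1x1x1 here)
    call MPI_Cart_create(MD_COMM, 3, (/1, 1, 1/), &
                         (/.true.,.true.,.true./), &
                         .true., CART_COMM, ierr)
    !Register MD domain size and origin with the coupler
    call CPL_setup_md(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                      (/0.d0, 0.d0, 0.d0/))
    call CPL_finalize(ierr)
    call MPI_finalize(ierr)
end program main_MD_setup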
CPL Setup CFD¶
- subroutine cpl_setup_cfd(icomm_grid, xyzl, xyz_orig, ncxyz)¶
Initialisation routine for coupler module - every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Remarks
Assumes CPL has been initialised with CPL_init and the CFD realm communicator exists
Synopsis
CPL_setup_cfd(icomm_grid, xyzL, xyz_orig, ncxyz)
Inputs
icomm_grid
Communicator based on CFD processor topology returned from a call to MPI_CART_CREATE.
xyzL
CFD domain size.
xyz_orig
CFD origin.
ncxyz
Number of CFD cells in global domain.
- Parameters
icomm_grid [integer,in]
xyzl (3) [real,in]
xyz_orig (3) [real,in]
ncxyz (3) [integer,in]
- Use
mpi
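A corresponding minimal sketch for the CFD side; the extra ncxyz argument gives the global cell count (all values illustrative):

program main_CFD_setup
    use cpl, only : CPL_init, CPL_setup_cfd, CPL_finalize
    use mpi
    implicit none

    integer :: CFD_COMM, CART_COMM, ierr
    integer, parameter :: CFD_realm=1

    call MPI_Init(ierr)
    !Split MPI_COMM_WORLD into the CFD realm communicator
    call CPL_init(CFD_realm, CFD_COMM, ierr)
    call MPI_Cart_create(CFD_COMM, 3, (/1, 1, 1/), &
                         (/.true.,.true.,.true./), &
                         .true., CART_COMM, ierr)
    !Domain size, origin and global cell count (32x32x32 here)
    call CPL_setup_cfd(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                       (/0.d0, 0.d0, 0.d0/), &
                       (/32, 32, 32/))
    call CPL_finalize(ierr)
    call MPI_finalize(ierr)
end program main_CFD_setup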
CPL Recv¶
Note that two interfaces exist: CPL_recv_full and CPL_recv_min
- subroutine cpl_recv_full(arecv, limits[, recv_flag])¶
Receive data to the local grid from the associated ranks of the other realm
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_recv(arecv, limits, recv_flag)
Inputs
arecv
Array of data to recv. Should be a four dimensional array allocated using the number of cells on the current processor between the limits. Size should be obtained from CPL_my_proc_portion(limits, portion).
limits
Limits in global cell coordinates, must be the same as corresponding send
Outputs
recv_flag
Returned flag which indicates success or failure of recv process
Example
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Coupled Receive
allocate(A(3, Ncells(1), Ncells(2), Ncells(3)))
call CPL_recv(A, limits, recv_flag)
- Parameters
arecv (,,*,*) [real,inout] :: Pre-allocated array that receives data
limits (6) [integer,in] :: Global cell indices with minimum and maximum values to recv
recv_flag [logical,out,optional] :: Flag set if processor has received data
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,rank_graph()
,error_abort()
,cpl_graph_comm()
,myid_graph()
,olap_mask()
,rank_world()
,realm()
,realm_name()
,iblock_realm()
,jblock_realm()
,kblock_realm()
,void()
,ierr()
,rank_intersect()
,cpl_realm_intersection_comm()
)
- Call to
cpl_cart_coords()
,cpl_proc_portion()
,cpl_my_proc_portion()
CPL Send¶
Note that two interfaces exist: CPL_send_full and CPL_send_min
- subroutine cpl_send_full(asend, limits[, send_flag])¶
Send four dimensional array asend of data from all processors in the current realm with data between global cell array limits to the corresponding processors from the other realm.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_send(asend, limits, send_flag)
Inputs
asend
Array of data to send. Should be a four dimensional array allocated using the number of cells on the current processor between the limits. Size should be obtained from CPL_my_proc_portion(limits, portion).
limits
Limits in global cell coordinates, must be the same as corresponding receive
Outputs
send_flag
Returned flag which indicates success or failure of send process
Example
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Coupled Send array
allocate(A(3, Ncells(1), Ncells(2), Ncells(3)))
do i = portion(1),portion(2)
do j = portion(3),portion(4)
do k = portion(5),portion(6)
    ii = i-portion(1)+1; jj = j-portion(3)+1; kk = k-portion(5)+1
    A(1,ii,jj,kk) = dble(i)
    A(2,ii,jj,kk) = dble(j)
    A(3,ii,jj,kk) = dble(k)
enddo
enddo
enddo
call CPL_send(A, limits, send_flag)
- Parameters
asend (,,*,*) [real,in] :: Array containing data to send
limits (6) [integer,in] :: Global cell indices with minimum and maximum values to send
send_flag [logical,out,optional] :: Flag set if processor has passed data
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,error_abort()
,cpl_graph_comm()
,myid_graph()
,olap_mask()
,rank_world()
,realm()
,ierr()
,void()
,cpl_setup_complete()
,realm_name()
,myid_realm()
,rootid_realm()
,rank_intersect()
,cpl_realm_intersection_comm()
)
- Call to
cpl_my_proc_portion()
,cpl_cart_coords()
,cpl_proc_portion()
,cpl_get_no_cells()
CPL Get Arrays¶
- subroutine cpl_get_arrays(recv_array, recv_size, send_array, send_size)¶
A helper function to get arrays of the required size for cells local to the current processor
Example
The first example shows the CFD side of the exchange, with send/recv arrays obtained from CPL_get_arrays
program main_CFD
    use cpl, only : CPL_Init, CPL_setup_cfd, CPL_send, &
                    CPL_recv, CPL_get_arrays, CPL_finalize
    use mpi
    implicit none

    integer :: time, CFD_COMM, CART_COMM, ierr, CFD_realm=1
    double precision, dimension(:,:,:,:), &
        allocatable :: send_array, recv_array

    call MPI_Init(ierr)
    call CPL_init(CFD_realm, CFD_COMM, ierr)
    call MPI_Cart_create(CFD_COMM, 3, (/1, 1, 1/), &
                         (/.true.,.true.,.true./), &
                         .true., CART_COMM, ierr)
    call CPL_setup_cfd(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                       (/0.d0, 0.d0, 0.d0/), &
                       (/32, 32, 32/))
    call CPL_get_arrays(recv_array, 4, send_array, 1)

    do time=1,5
        call CPL_recv(recv_array)
        print*, "CFD", time, recv_array(1,1,1,1)
        send_array(1,:,:,:) = 2.*time
        call CPL_send(send_array)
    enddo

    call CPL_finalize(ierr)
    call MPI_comm_free(CFD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_CFD
The corresponding MD code which matches this; note that the CPL_get_arrays send and recv sizes are swapped over (data sent from the CFD side is received on the MD side and vice versa)
program main_MD
    use cpl, only : CPL_Init, CPL_setup_md, CPL_send, &
                    CPL_recv, CPL_get_arrays, CPL_finalize
    use mpi
    implicit none

    integer :: time, MD_COMM, CART_COMM, ierr, MD_realm=2
    double precision, dimension(:,:,:,:), &
        allocatable :: send_array, recv_array

    call MPI_Init(ierr)
    call CPL_init(MD_realm, MD_COMM, ierr)
    call MPI_Cart_create(MD_COMM, 3, (/1, 1, 1/), &
                         (/.true.,.true.,.true./), &
                         .true., CART_COMM, ierr)
    call CPL_setup_md(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                      (/0.d0, 0.d0, 0.d0/))
    call CPL_get_arrays(recv_array, 1, send_array, 4)

    do time=1,5
        send_array(1,:,:,:) = 5.*time
        call CPL_send(send_array)
        call CPL_recv(recv_array)
        print*, "MD", time, recv_array(1,1,1,1)
    enddo

    call CPL_finalize(ierr)
    call MPI_comm_free(MD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_MD
These are then both run together, using either MPMD or port-connect modes (e.g. mpiexec -n 1 ./cfd : -n 1 ./md for MPMD with most MPI implementations).
- Parameters
recv_array (,,*,*) [double precision,allocatable]
recv_size [integer]
send_array (,,*,*) [double precision,allocatable]
send_size [integer]
- Call to
cpl_get_olap_limits()
,cpl_my_proc_portion()
,cpl_get_no_cells()
Full Listing¶
The complete list of module functions is included here.
Coupler Module¶
Contains all the setup subroutines and variables for a coupled run
Description
The COUPLER MODULE: a single coupler module for both codes - this contains the same information on both the MD and CFD sides and all variables. A fundamental design philosophy of the CPL library is to ensure both codes have access to all topology and processor information (without needing to call costly global routines). This module contains all the variables which define the mappings between processors, a hierarchy of MPI communicators and all the subroutines to set them up. These variables include:
MPI communicators
Simulation realms
MPI processor IDs
Processor topologies
Processor cartesian coords
Global cell grid parameters
Processor cell ranges
Domain and cell dimensions
Positions of CFD grid lines
CFD to MD processor mapping
Simulation parameters
Error handling
The data in this module is protected so only setup routines in this module can change it
________/\\\\\\\\\__/\\\\\\\\\\\\\____/\\\_____________
_____/\\\////////__\/\\\/////////\\\_\/\\\_____________
___/\\\/___________\/\\\_______\/\\\_\/\\\_____________
__/\\\_____________\/\\\\\\\\\\\\\/__\/\\\_____________
_\/\\\_____________\/\\\/////////____\/\\\_____________
_\//\\\____________\/\\\_____________\/\\\_____________
__\///\\\__________\/\\\_____________\/\\\_____________
____\////\\\\\\\\\_\/\\\_____________\/\\\\\\\\\\\\\\\_
_______\/////////__\///______________\///////////////__
C P L - L I B R A R Y
Copyright (C) 2012-2022 Edward Smith
Author(s)
Edward Smith November 2011 to present
Eduardo Ramos Fernandez 2015 to 2018
David Trevelyan September 2012 to December 2015
Lucian Anton, November 2011
Setup routines which have access to coupler parameters
- CPL_create_comm (cfd+md) splits MPI_COMM_WORLD and creates an intercommunicator between CFD and MD
- CPL_create_map (cfd+md) creates correspondence maps between the CFD grid and MD domains
- CPL_cfd_adjust_domain (cfd) adjusts the CFD domain to an integer number of FCC units or similar MD initial layout
Also see coupler which contains all routines to exchange information
License
This file is part of CPL-Library.
CPL-Library is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
CPL-Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with CPL-Library. If not, see <http://www.gnu.org/licenses/>.
Quick access
- Variables
average_period
,boundary_algo
,cfd_icoord2olap_md_icoords
,cfd_jcoord2olap_md_jcoords
,cfd_kcoord2olap_md_kcoords
,cfd_realm
,cfdid_olap
,comm_style
,comm_style_gath_scat
,comm_style_send_recv
,constraint_algo
,constraint_cv
,constraint_cvflag
,constraint_flekkoy
,constraint_ncer
,constraint_off
,constraint_ot
,coord2rank_cfd
,coord2rank_md
,coupler_abort_on_request
,coupler_abort_send_cfd
,coupler_error_cart_comm
,coupler_error_continuum_force
,coupler_error_init
,coupler_error_input_file
,coupler_error_one_realm
,coupler_error_read_input
,coupler_error_realm
,coupler_error_setup_incomplete
,cpl_cart_comm
,cpl_cfd_bc_slice
,cpl_cfd_bc_x
,cpl_cfd_bc_y
,cpl_cfd_bc_z
,cpl_full_overlap
,cpl_graph_comm
,cpl_initialised
,cpl_inter_comm
,cpl_md_bc_slice
,cpl_olap_comm
,cpl_realm_comm
,cpl_realm_intersection_comm
,cpl_realm_olap_comm
,cpl_setup_complete
,cpl_world_comm
,density_cfd
,density_md
,dt_cfd
,dt_md
,dx
,dy
,dymax
,dymin
,dz
,iblock_realm
,icmax
,icmax_bnry
,icmax_cnst
,icmax_olap
,icmin
,icmin_bnry
,icmin_cnst
,icmin_olap
,icpmax_cfd
,icpmax_md
,icpmin_cfd
,icpmin_md
,jblock_realm
,jcmax
,jcmax_bnry
,jcmax_cnst
,jcmax_olap
,jcmin
,jcmin_bnry
,jcmin_cnst
,jcmin_olap
,jcpmax_cfd
,jcpmax_md
,jcpmin_cfd
,jcpmin_md
,kblock_realm
,kcmax
,kcmax_bnry
,kcmax_cnst
,kcmax_olap
,kcmin
,kcmin_bnry
,kcmin_cnst
,kcmin_olap
,kcpmax_cfd
,kcpmax_md
,kcpmin_cfd
,kcpmin_md
,maxgridsize
,md_cfd_match_cellsize
,md_realm
,myid_cart
,myid_graph
,myid_olap
,myid_realm
,myid_world
,ncx
,ncx_olap
,ncy
,ncy_olap
,ncz
,ncz_olap
,normal
,nproc_cfd
,nproc_md
,nproc_olap
,nproc_world
,npx_cfd
,npx_md
,npy_cfd
,npy_md
,npz_cfd
,npz_md
,nsteps_cfd
,nsteps_coupled
,nsteps_md
,olap_mask
,output_mode
,quiet
,rank2coord_cfd
,rank2coord_md
,rank_cart
,rank_cfdcart2rank_world
,rank_cfdrealm2rank_world
,rank_graph
,rank_graph2rank_world
,rank_inter2rank_world
,rank_intersect
,rank_mdcart2rank_world
,rank_mdrealm2rank_world
,rank_olap
,rank_olap2rank_realm
,rank_olap2rank_world
,rank_realm
,rank_world
,rank_world2rank_cfdcart
,rank_world2rank_cfdrealm
,rank_world2rank_graph
,rank_world2rank_inter
,rank_world2rank_mdcart
,rank_world2rank_mdrealm
,rank_world2rank_olap
,realm
,realm_name
,rootid_cart
,rootid_realm
,rootid_world
,save_period
,sendtype_cfd_to_md
,sendtype_md_to_cfd
,staggered_averages
,testval
,timestep_ratio
,void_bn
,x_orig_cfd
,x_orig_md
,xg
,xl_cfd
,xl_md
,xl_olap
,xll
,y_orig_cfd
,y_orig_md
,yg
,yl_cfd
,yl_md
,yl_olap
,yll
,z_orig_cfd
,z_orig_md
,zg
,zl_cfd
,zl_md
,zl_olap
,zll
- Routines
coupler_cfd_init()
,coupler_md_init()
,cpl_create_map()
,cpl_finalize()
,cpl_init()
,cpl_new_fileunit()
,cpl_rank_map()
,cpl_set_timing()
,cpl_setup_cfd()
,cpl_setup_md()
,cpl_write_header()
,error_abort_s()
,error_abort_si()
,get_new_fileunit()
,locate()
,messenger_lasterrorcheck()
,printf()
,read_coupler_input()
,set_output_mode()
,write_matrix()
,write_matrix_int()
Variables
- coupler_module/average_period [integer,protected/optional/default=1]¶
average period for averages (it must come from CFD)
- coupler_module/boundary_algo [integer,protected]¶
- coupler_module/cfd_icoord2olap_md_icoords (*,*) [integer,allocatable/protected]¶
- coupler_module/cfd_jcoord2olap_md_jcoords (*,*) [integer,allocatable/protected]¶
- coupler_module/cfd_kcoord2olap_md_kcoords (*,*) [integer,allocatable/protected]¶
- coupler_module/cfd_realm [integer,parameter=1]¶
CFD realm identifier
- coupler_module/cfdid_olap [integer,protected]¶
Connected to CFD processor
- coupler_module/comm_style [integer,protected]¶
- coupler_module/comm_style_gath_scat [integer,parameter=1]¶
- coupler_module/comm_style_send_recv [integer,parameter=0]¶
- coupler_module/constraint_algo [integer,protected]¶
- coupler_module/constraint_cv [integer,parameter=4]¶
- coupler_module/constraint_cvflag [integer,protected]¶
- coupler_module/constraint_flekkoy [integer,parameter=3]¶
- coupler_module/constraint_ncer [integer,parameter=2]¶
- coupler_module/constraint_off [integer,parameter=0]¶
- coupler_module/constraint_ot [integer,parameter=1]¶
- coupler_module/coord2rank_cfd (*,*,*) [integer,allocatable/protected]¶
- coupler_module/coord2rank_md (*,*,*) [integer,allocatable/protected]¶
- coupler_module/coupler_abort_on_request [integer,parameter=7]¶
used in request_abort
- coupler_module/coupler_abort_send_cfd [integer,parameter=8]¶
error in coupler_cfd_send
- coupler_module/coupler_error_cart_comm [integer,parameter=9]¶
Wrong comm value in CPL_Cart_coords
- coupler_module/coupler_error_continuum_force [integer,parameter=6]¶
the region in which the continuum constraint force is applied spans two MD domains
- coupler_module/coupler_error_init [integer,parameter=3]¶
initialisation error
- coupler_module/coupler_error_input_file [integer,parameter=4]¶
wrong value in input file
- coupler_module/coupler_error_one_realm [integer,parameter=2]¶
one realm missing
- coupler_module/coupler_error_read_input [integer,parameter=5]¶
error in processing input file or data transfers
- coupler_module/coupler_error_realm [integer,parameter=1]¶
wrong realm value
- coupler_module/coupler_error_setup_incomplete [integer,parameter=10]¶
CPL_setup_md or CPL_setup_cfd has not been called
- coupler_module/cpl_cart_comm [integer,protected]¶
Comm w/cartesian topology for each realm;
- coupler_module/cpl_cfd_bc_slice [integer,protected]¶
(average MD values, not CFD)
- coupler_module/cpl_cfd_bc_x [integer,protected]¶
- coupler_module/cpl_cfd_bc_y [integer,protected]¶
- coupler_module/cpl_cfd_bc_z [integer,protected]¶
- coupler_module/cpl_full_overlap [logical,protected]¶
- coupler_module/cpl_graph_comm [integer,protected]¶
Comm w/ graph topology between locally overlapping procs;
- coupler_module/cpl_initialised [logical,optional/default=.false.]¶
- coupler_module/cpl_inter_comm [integer,protected]¶
CFD/MD INTER communicator between realm comms;
- coupler_module/cpl_md_bc_slice [integer,protected]¶
- coupler_module/cpl_olap_comm [integer,protected]¶
Local comm between only overlapping MD/CFD procs;
- coupler_module/cpl_realm_comm [integer,protected]¶
INTRA communicators within MD/CFD realms;
- coupler_module/cpl_realm_intersection_comm [integer,protected]¶
Intersecting MD/CFD procs in world;
- coupler_module/cpl_realm_olap_comm [integer,protected]¶
Local CFD/MD in overlapping region;
- coupler_module/cpl_setup_complete [integer,optional/default=0]¶
- coupler_module/cpl_world_comm [integer,protected]¶
Both CFD and MD realms;
- coupler_module/density_cfd [real,protected]¶
- coupler_module/density_md [real,protected]¶
- coupler_module/dt_cfd [real,protected]¶
- coupler_module/dt_md [real,protected]¶
- coupler_module/dx [real,protected]¶
Cell size in x
- coupler_module/dy [real,protected]¶
Cell size in y
- coupler_module/dymax [real,protected]¶
- coupler_module/dymin [real,protected]¶
- coupler_module/dz [real,protected]¶
Cell size in z
- coupler_module/iblock_realm [integer,protected]¶
x Processor ID in Cartesian Coordinates
- coupler_module/icmax [integer,protected]¶
Whole simulation upper cell extents in x
- coupler_module/icmax_bnry [integer,protected]¶
Region used to get boundary condition upper cell extents in x
- coupler_module/icmax_cnst [integer,protected]¶
Constrained dynamics region upper cell extents in x
- coupler_module/icmax_olap [integer,protected]¶
Overlap region upper cell extents in x
- coupler_module/icmin [integer,protected]¶
Whole simulation lower cell extents in x
- coupler_module/icmin_bnry [integer,protected]¶
Region used to get boundary condition lower cell extents in x
- coupler_module/icmin_cnst [integer,protected]¶
Constrained dynamics region lower cell extents in x
- coupler_module/icmin_olap [integer,protected]¶
Overlap region lower cell extents in x
- coupler_module/icpmax_cfd (*) [integer,allocatable/protected]¶
- coupler_module/icpmax_md (*) [integer,allocatable/protected]¶
- coupler_module/icpmin_cfd (*) [integer,allocatable/protected]¶
- coupler_module/icpmin_md (*) [integer,allocatable/protected]¶
- coupler_module/ierr [integer]¶
- coupler_module/jblock_realm [integer,protected]¶
y Processor ID in Cartesian Coordinates
- coupler_module/jcmax [integer,protected]¶
Whole simulation upper cell extents in y
- coupler_module/jcmax_bnry [integer,protected]¶
Region used to get boundary condition upper cell extents in y
- coupler_module/jcmax_cnst [integer,protected]¶
Constrained dynamics region upper cell extents in y
- coupler_module/jcmax_olap [integer,protected]¶
Overlap region upper cell extents in y
- coupler_module/jcmin [integer,protected]¶
Whole simulation lower cell extents in y
- coupler_module/jcmin_bnry [integer,protected]¶
Region used to get boundary condition lower cell extents in y
- coupler_module/jcmin_cnst [integer,protected]¶
Constrained dynamics region lower cell extents in y
- coupler_module/jcmin_olap [integer,protected]¶
Overlap region lower cell extents in y
- coupler_module/jcpmax_cfd (*) [integer,allocatable/protected]¶
- coupler_module/jcpmax_md (*) [integer,allocatable/protected]¶
- coupler_module/jcpmin_cfd (*) [integer,allocatable/protected]¶
- coupler_module/jcpmin_md (*) [integer,allocatable/protected]¶
- coupler_module/kblock_realm [integer,protected]¶
z Processor ID in Cartesian Coordinates
- coupler_module/kcmax [integer,protected]¶
Whole simulation upper cell extents in z
- coupler_module/kcmax_bnry [integer,protected]¶
Region used to get boundary condition upper cell extents in z
- coupler_module/kcmax_cnst [integer,protected]¶
Constrained dynamics region upper cell extents in z
- coupler_module/kcmax_olap [integer,protected]¶
Overlap region upper cell extents in z
- coupler_module/kcmin [integer,protected]¶
Whole simulation lower cell extents in z
- coupler_module/kcmin_bnry [integer,protected]¶
Region used to get boundary condition lower cell extents in z
- coupler_module/kcmin_cnst [integer,protected]¶
Constrained dynamics region lower cell extents in z
- coupler_module/kcmin_olap [integer,protected]¶
Overlap region lower cell extents in z
- coupler_module/kcpmax_cfd (*) [integer,allocatable/protected]¶
- coupler_module/kcpmax_md (*) [integer,allocatable/protected]¶
- coupler_module/kcpmin_cfd (*) [integer,allocatable/protected]¶
- coupler_module/kcpmin_md (*) [integer,allocatable/protected]¶
- coupler_module/maxgridsize [integer,parameter=2097152]¶
Maximum size to store global grid
- coupler_module/md_cfd_match_cellsize [integer,protected]¶
- coupler_module/md_realm [integer,parameter=2]¶
MD realm identifier
- coupler_module/myid_cart [integer,protected]¶
Processor ID from 0 to nproc_cart-1;
- coupler_module/myid_graph [integer,protected]¶
Processor ID from 0 to nproc_graph-1;
- coupler_module/myid_olap [integer,protected]¶
Processor ID from 0 to nproc_olap-1;
- coupler_module/myid_realm [integer,protected]¶
Processor ID from 0 to nproc_realm-1;
- coupler_module/myid_world [integer,protected]¶
Processor ID from 0 to nproc_world-1;
- coupler_module/ncx [integer,protected]¶
Number of x cells in domain
- coupler_module/ncx_olap [integer,protected]¶
Number of cells in x in overlap region
- coupler_module/ncy [integer,protected]¶
Number of y cells in domain
- coupler_module/ncy_olap [integer,protected]¶
Number of cells in y in overlap region
- coupler_module/ncz [integer,protected]¶
Number of z cells in domain
- coupler_module/ncz_olap [integer,protected]¶
Number of cells in z in overlap region
- coupler_module/normal [integer,parameter=1]¶
- coupler_module/nproc_cfd [integer,protected]¶
Total number of processors in cfd
- coupler_module/nproc_md [integer,protected]¶
Total number of processors in md
- coupler_module/nproc_olap [integer,protected]¶
Total number of processors in overlap region
- coupler_module/nproc_world [integer,protected]¶
Total number of processors in world
- coupler_module/npx_cfd [integer,protected]¶
Number of processors in x in the cfd
- coupler_module/npx_md [integer,protected]¶
Number of processors in x in the md
- coupler_module/npy_cfd [integer,protected]¶
Number of processors in y in the cfd
- coupler_module/npy_md [integer,protected]¶
Number of processors in y in the md
- coupler_module/npz_cfd [integer,protected]¶
Number of processors in z in the cfd
- coupler_module/npz_md [integer,protected]¶
Number of processors in z in the md
- coupler_module/nsteps_cfd [integer,protected]¶
CFD input steps
- coupler_module/nsteps_coupled [integer,protected]¶
Total number of steps for coupled simulation
- coupler_module/nsteps_md [integer,protected]¶
MD input steps
- coupler_module/olap_mask (*) [logical,allocatable/protected]¶
Overlap mask specifying which processors overlap using world ranks
- coupler_module/output_mode [integer,optional/default=1]¶
- coupler_module/quiet [integer,parameter=0]¶
- coupler_module/rank2coord_cfd (*,*) [integer,allocatable/protected]¶
Array containing coordinates for each cartesian rank
- coupler_module/rank2coord_md (*,*) [integer,allocatable/protected]¶
- coupler_module/rank_cart [integer,protected]¶
Processor rank from 1 to nproc_cart;
- coupler_module/rank_cfdcart2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_cfdrealm2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_graph [integer,protected]¶
Processor rank from 1 to nproc_graph;
- coupler_module/rank_graph2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_inter2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_intersect [integer,protected]¶
Processor rank in intersection of overlapping processes;
- coupler_module/rank_mdcart2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_mdrealm2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_olap [integer,protected]¶
Processor rank from 1 to nproc_olap;
- coupler_module/rank_olap2rank_realm (*) [integer,allocatable/protected]¶
- coupler_module/rank_olap2rank_world (*) [integer,allocatable/protected]¶
- coupler_module/rank_realm [integer,protected]¶
Processor rank from 1 to nproc_realm;
- coupler_module/rank_world [integer,protected]¶
Processor rank from 1 to nproc_world;
- coupler_module/rank_world2rank_cfdcart (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_cfdrealm (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_graph (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_inter (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_mdcart (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_mdrealm (*) [integer,allocatable/protected]¶
- coupler_module/rank_world2rank_olap (*) [integer,allocatable/protected]¶
- coupler_module/realm [integer,protected]¶
- coupler_module/realm_name (2) [character,parameter=(/"cfd","md "/)]¶
- coupler_module/rootid_cart [integer,protected]¶
Root processor in each cart topology;
- coupler_module/rootid_realm [integer,protected]¶
Root processor in each realm;
- coupler_module/rootid_world [integer,protected]¶
Root processor in world;
- coupler_module/save_period [integer,protected/optional/default=10]¶
save period (corresponds to tplot in CFD)
- coupler_module/sendtype_cfd_to_md [integer,protected]¶
- coupler_module/sendtype_md_to_cfd [integer,protected]¶
- coupler_module/staggered_averages (3) [logical,protected/optional/default=(/.false.,.false.,.false./)]¶
- coupler_module/testval [integer,protected]¶
- coupler_module/timestep_ratio [integer,protected]¶
- coupler_module/void_bn [integer,parameter=-666]¶
- coupler_module/x_orig_cfd [real,protected]¶
x origin of the cfd domain
- coupler_module/x_orig_md [real,protected]¶
x origin of the md domain
- coupler_module/xg (*,*,*) [real,target/allocatable/protected]¶
3D grid points in x
- coupler_module/xl_cfd [real,protected]¶
x domain size of the cfd domain
- coupler_module/xl_md [real,protected]¶
x domain size of the md domain
- coupler_module/xl_olap [real,protected]¶
x overlap region size
- coupler_module/xll [real,protected]¶
- coupler_module/y_orig_cfd [real,protected]¶
y origin of the cfd domain
- coupler_module/y_orig_md [real,protected]¶
y origin of the md domain
- coupler_module/yg (*,*,*) [real,target/allocatable/protected]¶
3D grid points in y
- coupler_module/yl_cfd [real,protected]¶
y domain size of the cfd domain
- coupler_module/yl_md [real,protected]¶
y domain size of the md domain
- coupler_module/yl_olap [real,protected]¶
y overlap region size
- coupler_module/yll [real,protected]¶
- coupler_module/z_orig_cfd [real,protected]¶
z origin of the cfd domain
- coupler_module/z_orig_md [real,protected]¶
z origin of the md domain
- coupler_module/zg (*,*,*) [real,target/allocatable/protected]¶
3D grid points in z
- coupler_module/zl_cfd [real,protected]¶
z domain size of the cfd domain
- coupler_module/zl_md [real,protected]¶
z domain size of the md domain
- coupler_module/zl_olap [real,protected]¶
z overlap region size
- coupler_module/zll [real,protected]¶
Subroutines and functions
- subroutine coupler_module/cpl_init(callingrealm, returned_realm_comm, ierror)¶
(cfd+md) Splits MPI_COMM_WORLD in both the CFD and MD codes respectively and creates an intercommunicator between CFD and MD
Remarks
Assumes MPI has been initialised (MPI_init) and the communicator MPI_COMM_WORLD exists and contains all processors in both the CFD and MD regions
Synopsis
CPL_init(callingrealm, RETURNED_REALM_COMM, ierror)
Inputs
callingrealm
Should identify calling processor as either CFD_REALM (integer with value 1) or MD_REALM (integer with value 2).
Outputs
RETURNED_REALM_COMM
Communicator based on callingrealm value local to CFD or MD processor and resulting from the split of MPI_COMM_WORLD
ierror
Error flag
Example
program main_CFD
    use cpl, only : CPL_init, CPL_finalize
    use mpi
    implicit none

    integer :: rank, nprocs, ierr
    integer :: CFD_COMM
    integer, parameter :: CFD_realm=1

    !Initialise MPI
    call MPI_Init(ierr)

    !Create CFD Comm by splitting world
    call CPL_init(CFD_realm, CFD_COMM, ierr)

    !Get local processor rank and print
    call MPI_comm_size(CFD_COMM, nprocs, ierr)
    call MPI_comm_rank(CFD_COMM, rank, ierr)
    print*, "CFD code processor ", rank+1, " of ", nprocs

    !No need for separate CPL finalise as MPI finalise takes care of this
    call CPL_finalize(ierr)
    call MPI_comm_free(CFD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_CFD
program main_MD
    use cpl, only : CPL_init, CPL_finalize
    use mpi
    implicit none

    integer :: rank, nprocs, ierr
    integer :: MD_COMM
    integer, parameter :: MD_realm=2

    !Initialise MPI
    call MPI_Init(ierr)

    !Create MD Comm by splitting world
    call CPL_init(MD_realm, MD_COMM, ierr)

    call MPI_comm_size(MD_COMM, nprocs, ierr)
    call MPI_comm_rank(MD_COMM, rank, ierr)
    print*, "MD code processor ", rank+1, " of ", nprocs

    !No need for separate CPL finalise as MPI finalise takes care of this
    call CPL_finalize(ierr)
    call MPI_comm_free(MD_COMM, ierr)
    call MPI_finalize(ierr)

end program main_MD
Errors
COUPLER_ERROR_REALM = 1     ! wrong realm value
COUPLER_ERROR_ONE_REALM = 2 ! one realm missing
COUPLER_ERROR_INIT = 3      ! initialisation error
- Parameters
callingrealm [integer,in] :: CFD_REALM=1 or MD_REALM=2
returned_realm_comm [integer,out]
ierror [integer,out]
- Use
mpi
- subroutine coupler_module/cpl_finalize(ierr)¶
Free all coupling COMMS, deallocate all arrays and set CPL_initialised flag to zero
- Parameters
ierr [integer,out] :: Error flag
- Use
mpi
(mpi_comm_null()
,mpi_barrier()
,mpi_comm_world()
)
- subroutine coupler_module/read_coupler_input()¶
Read COUPLER.in with keyword inputs: the routine looks for a keyword and then reads an expected set of arguments on the following lines, e.g. the keyword KEYWORD followed by 2 integer arguments would be in the form

KEYWORD
1
2

This routine is called behind the scenes and attempts to read the file located under cpl/COUPLER.in
- subroutine coupler_module/cpl_write_header(header_filename)¶
Writes header information to specified filename in the format Variable description ; variable name ; variable
Synopsis
CPL_write_header (header_filename)
header_filename
File name to write header to
Input/Output
NONE
Output
NONE
- Parameters
header_filename [character,in]
- subroutine coupler_module/cpl_setup_cfd(icomm_grid, xyzl, xyz_orig, ncxyz)¶
Initialisation routine for coupler module - every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Remarks
Assumes CPL has been initialised with CPL_init and the CFD realm communicator exists
Synopsis
CPL_setup_cfd(icomm_grid, xyzL, xyz_orig, ncxyz)
Inputs
icomm_grid
Communicator based on CFD processor topology returned from a call to MPI_CART_CREATE.
xyzL
CFD domain size.
xyz_orig
CFD origin.
ncxyz
Number of CFD cells in global domain.
- Parameters
icomm_grid [integer,in]
xyzl (3) [real,in]
xyz_orig (3) [real,in]
ncxyz (3) [integer,in]
- Use
mpi
- subroutine coupler_module/coupler_cfd_init(icomm_grid, icoord, npxyz_cfd, xyzl, xyz_orig, ncxyz, ijkcmax, ijkcmin, itmin, itmax, jtmin, jtmax, ktmin, ktmax, xgrid, ygrid, zgrid)¶
Initialisation routine for coupler module to be called from the CFD side. This is the full interface, called behind the scenes by CPL_setup_cfd, so every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Synopsis
coupler_cfd_init(icomm_grid, icoord, npxyz_cfd, xyzL, xyz_orig, ncxyz, ijkcmax, ijkcmin, iTmin, iTmax, jTmin, jTmax, kTmin, kTmax, xgrid, ygrid, zgrid)
Input
icomm_grid
The MPI communicator setup by the MPI_CART_CREATE command in the CFD region (integer)
icoord
The three coordinates for each rank in the domain (integer array nproc by 3)
npxyz_cfd
Number of processors in each cartesian dimension (integer array 3)
xyzL
Size of domain in each cartesian dimension (dp real array 3)
ncxyz
Global number of cells in each cartesian dimension (integer array 3)
ijkcmax
Global maximum cell in each cartesian dimension (integer array 3)
ijkcmin
Global minimum cell in each cartesian dimension (integer array 3)
iTmin
Local minimum cell for each rank (integer array no. procs in x)
iTmax
Local maximum cell for each rank (integer array no. procs in x)
jTmin
Local minimum cell for each rank (integer array no. procs in y)
jTmax
Local maximum cell for each rank (integer array no. procs in y)
kTmin
Local minimum cell for each rank (integer array no. procs in z)
kTmax
Local maximum cell for each rank (integer array no. procs in z)
xgrid
Array of cell vertices in the x direction (no. cells in x + 1 by no. cells in y + 1, no. cells in z+1)
ygrid
Array of cell vertices in the y direction (no. cells in x + 1 by no. cells in y + 1, no. cells in z+1)
zgrid
Array of cell vertices in the z direction (no. cells in x + 1 by no. cells in y + 1, no. cells in z+1)
Input/Output
NONE
Output
NONE
- Parameters
icomm_grid [integer,in]
icoord (,) [integer,in]
npxyz_cfd (3) [integer,in]
xyzl (3) [real,in]
xyz_orig (3) [real,in]
ncxyz (3) [integer,in]
ijkcmax (3) [integer,in]
ijkcmin (3) [integer,in]
itmin (*) [integer,in]
itmax (*) [integer,in]
jtmin (*) [integer,in]
jtmax (*) [integer,in]
ktmin (*) [integer,in]
ktmax (*) [integer,in]
xgrid (,,*) [real,in]
ygrid (,,*) [real,in]
zgrid (,,*) [real,in]
- Use
mpi
- subroutine coupler_module/cpl_setup_md(icomm_grid, xyzl, xyz_orig)¶
Initialisation routine for coupler module - every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Remarks
Assumes CPL has been initialised with CPL_init and the MD realm communicator exists
Synopsis
cpl_setup_md(icomm_grid, xyzL, xyz_orig)
Inputs
icomm_grid
Communicator based on MD processor topology returned from a call to MPI_CART_CREATE.
xyzL
MD domain size.
xyz_orig
MD origin.
- Parameters
icomm_grid [integer,in]
xyzl (3) [real,in]
xyz_orig (3) [real,in]
- Use
mpi
- subroutine coupler_module/coupler_md_init(icomm_grid, icoord, npxyz_md, globaldomain, xyz_orig)¶
Initialisation routine for coupler module to be called from the MD side. This is the full interface, called behind the scenes by CPL_setup_md, so every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters
Synopsis
coupler_md_init(icomm_grid, icoord, npxyz_md, globaldomain, xyz_orig)
Input
icomm_grid
The MPI communicator setup by the MPI_CART_CREATE command in the CFD region (integer)
icoord
The three coordinates for each rank in the domain (integer array nproc by 3)
npxyz_md
Number of processors in each cartesian dimension (integer array 3)
globaldomain
Size of domain in each cartesian dimension (dp real array 3)
density
Density of the CFD simulation (dp_real)
Input/Output
NONE
Output
NONE
- Parameters
icomm_grid [integer,in]
icoord (,) [integer,in]
npxyz_md (3) [integer,in]
globaldomain (3) [real,in]
xyz_orig (3) [real,in]
- Use
mpi
- subroutine coupler_module/cpl_set_timing(initialstep, nsteps, dt)¶
- Parameters
nsteps [integer,in] ::
Number of steps in MD simulation.
initialstep [integer,in] ::
Initial steps in MD simulation.
dt [real,in] ::
Timestep in MD simulation.
- Use
mpi
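A hedged one-line usage sketch with illustrative values (initial step, number of MD steps, MD timestep):

call CPL_set_timing(0, 1000, 0.005d0)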
- subroutine coupler_module/cpl_create_map()¶
Establish for all MD processors the mapping (if any) to coupled CFD processors
Steps include:
check_config_feasibility()
get_md_cell_ranges()
get_overlap_blocks()
set_overlap_topology()
- Use
mpi
- subroutine coupler_module/cpl_rank_map(comm, rank_bn, nproc, comm2world, world2comm, ierr)¶
Get COMM map for current communicator and relationship to world rank used to link to others in the coupler hierarchy
Synopsis
CPL_rank_map(COMM, rank, comm2world, world2comm, ierr)
Input
comm
communicator with cartesian structure (handle)
Output
rank
rank of a process within group of comm (integer) NOTE - fortran convention rank=1 to nproc
nproc
number of processes within group of comm (integer)
comm2world
Array of size nproc_world which for element at world_rank has local rank in COMM
world2comm
Array of size nproc_COMM which for element at local ranks in COMM has world rank
ierr
error flag
- Parameters
comm [integer,in]
rank_bn [integer,out]
nproc [integer,out]
comm2world (*) [integer,out,allocatable]
world2comm (*) [integer,out,allocatable]
ierr [integer,out]
- Use
mpi
- subroutine coupler_module/locate(fileid, keyword, have_data)¶
Locate keyword argument in input
- Parameters
fileid [integer,in] :: File unit number
keyword [character,in] :: Input keyword
have_data [logical,out] :: Flag: input found
- subroutine coupler_module/error_abort_s([msg])¶
Error handling subroutines calling MPI ABORT
- Parameters
msg [character,in,optional]
- Use
mpi
,iso_fortran_env
(error_unit()
)
- subroutine coupler_module/error_abort_si(msg, i)¶
Error handling subroutines calling MPI ABORT
- Parameters
msg [character,in]
i [integer,in]
- Use
mpi
,iso_fortran_env
(error_unit()
)
- subroutine coupler_module/messenger_lasterrorcheck()¶
Use MPI last Error system
- Use
mpi
,iso_fortran_env
(error_unit()
)
- subroutine coupler_module/printf(buf[, dplaces_in])¶
- Parameters
buf (*) [real,in]
dplaces_in [integer,in,optional]
- subroutine coupler_module/write_matrix_int(a, varname, fh)¶
- Parameters
a (,) [integer]
varname [character]
fh [integer]
- subroutine coupler_module/write_matrix(a, varname, fh)¶
- Parameters
a (,) [real]
varname [character]
fh [integer]
- function coupler_module/cpl_new_fileunit()¶
- Return
f [integer]
- subroutine coupler_module/set_output_mode(mode)¶
- Parameters
mode [integer,in]
- function coupler_module/get_new_fileunit()¶
- Return
f [integer]
Coupler Methods Module¶
All the subroutines and functions which would be called during a run, such as CPL_Send and CPL_recv
Description
Routines accessible from the application (molecular or continuum); after the name, in parentheses, is the realm in which each routine must be called
CPL_send (cfd+md) sends grid data exchanged between realms (generic interface)
CPL_recv (cfd+md) receives data exchanged between realms (generic interface)
CPL_cfd_get (cfd) returns coupler internal parameters for CFD realm
CPL_get (md) returns coupler internal parameters
CPL_md_get_save_period (md) auxiliary used for testing [deprecated]
CPL_md_get_average_period (md) returns average period of BC [deprecated]
CPL_md_get_md_per_cfd_dt (md) returns the number of steps MD does for each CFD step [deprecated]
Also see coupler module which contains all routines to setup the simulation and all the variables defining the setup topology.
________/\\\\\\\\\__/\\\\\\\\\\\\\____/\\\_____________
_____/\\\////////__\/\\\/////////\\\_\/\\\_____________
___/\\\/___________\/\\\_______\/\\\_\/\\\_____________
__/\\\_____________\/\\\\\\\\\\\\\/__\/\\\_____________
_\/\\\_____________\/\\\/////////____\/\\\_____________
_\//\\\____________\/\\\_____________\/\\\_____________
__\///\\\__________\/\\\_____________\/\\\_____________
____\////\\\\\\\\\_\/\\\_____________\/\\\\\\\\\\\\\\\_
_______\/////////__\///______________\///////////////__
C P L - L I B R A R Y
Copyright (C) 2012-2022 Edward Smith
Author(s)
Edward Smith November 2011 to present
Eduardo Ramos Fernandez 2015 to 2018
David Trevelyan September 2012 to December 2015
Lucian Anton, November 2011
License
This file is part of CPL-Library.
CPL-Library is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
CPL-Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with CPL-Library. If not, see <http://www.gnu.org/licenses/>.
Quick access
- Routines
coupler_md_get_average_period()
,coupler_md_get_md_steps_per_cfd_dt()
,coupler_md_get_save_period()
,cpl_cart_coords()
,cpl_cfd2md()
,cpl_cfd_dt()
,cpl_comm_style()
,cpl_gather()
,cpl_get()
,cpl_get_arrays()
,cpl_get_bnry_limits()
,cpl_get_cnst_limits()
,cpl_get_no_cells()
,cpl_get_olap_limits()
,cpl_get_rank()
,cpl_is_proc_inside()
,cpl_map_cell2coord()
,cpl_map_cfd2md_coord()
,cpl_map_coord2cell()
,cpl_map_glob2loc_cell()
,cpl_map_md2cfd_coord()
,cpl_md2cfd()
,cpl_meshgrid()
,cpl_my_proc_extents()
,cpl_my_proc_portion()
,cpl_olap_extents()
,cpl_overlap()
,cpl_proc_extents()
,cpl_proc_portion()
,cpl_realm()
,cpl_recv_full()
,cpl_recv_min()
,cpl_scatter()
,cpl_send_full()
,cpl_send_min()
,cpl_swaphalos()
,establish_halo_cells()
,establish_surface_cells()
,heaviside()
,is_cell_inside()
,is_coord_inside()
,mpi_errorcheck()
,test_python()
,updatefaces()
Variables
Subroutines and functions
- subroutine coupler/cpl_gather(gatherarray, npercell, limits, recvarray)¶
Perform gather operation on CPL_OLAP_COMM communicator. The CFD processor is the root process. The gathered data is effectively “slotted” into the correct part of the recvarray, and is intended for use in providing the CFD simulation boundary conditions with data obtained from the MD simulation.
Synopsis
CPL_gather(gatherarray,npercell,limits,recvarray)
Inputs
gatherarray
Assumed shape array of data to be gathered from each MD processor in the overlap communicator. Must be size 0 on the CFD processor.
limits
Integer array of length 6, specifying the global cell extents of the region to be gathered; the same on ALL processors.
npercell
Number of data points per cell to be gathered (integer). Note: should be the same as size(gatherarray(1)) for MD processors, e.g. npercell = 3 for gathering 3D velocities.
Inputs/Output
recvarray
The array in which the gathered values are to be stored on the CFD processor. The only values to be changed in recvarray are recvarray(:, limits(1):limits(2), limits(3):limits(4), limits(5):limits(6)).
- Parameters
recvarray (,,*,*) [real,inout]
- Output
NONE
- Parameters
gatherarray (,,*,*) [real,in]
npercell [integer,in]
limits (6) [integer,in]
- Use
mpi
,coupler_module
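A hedged sketch of the MD-side call (names illustrative; limits, portion and Ncells obtained as in the send/recv examples):

integer :: limits(6), portion(6), Ncells(3)
double precision, dimension(:,:,:,:), allocatable :: gatherarray, recvarray

call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Each MD processor contributes 3 values per cell of its portion
allocate(gatherarray(3, Ncells(1), Ncells(2), Ncells(3)))
gatherarray = 1.d0
!On the CFD processor gatherarray must instead be size 0 and
!recvarray must span the full 'limits' extents
call CPL_gather(gatherarray, 3, limits, recvarray)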
- subroutine coupler/cpl_scatter(scatterarray, npercell, limits, recvarray)¶
Scatter cell-wise data from CFD processor to corresponding MD processors on the overlap communicator CPL_OLAP_COMM.
Synopsis
CPL_scatter(scatterarray,npercell,limits,recvarray)
Inputs
scatterarray
assumed shape array of data to be scattered (real(kind(0.d0)))
limits
integer array of length 6, specifying the global cell extents of the region to be scattered, is the same on all processors.
npercell
number of data points per cell to be scattered (integer). Note: should be the same as size(scatterarray(1)) for CFD proc
- Inputs/Output
recvarray
the array in which the scattered values are stored on the MD processors.
- Output
NONE
- Parameters
scatterarray (,,*,*) [real,in]
npercell [integer,in]
limits (6) [integer,in]
recvarray (,,*,*) [real,inout]
- Use
coupler_module
,mpi
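A corresponding hedged sketch of the MD-side receive of scattered CFD data (names illustrative; scatterarray is filled over 'limits' on the CFD processor):

call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!On MD processors recvarray holds this processor's portion
allocate(recvarray(3, Ncells(1), Ncells(2), Ncells(3)))
call CPL_scatter(scatterarray, 3, limits, recvarray)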
- subroutine coupler/cpl_send_full(asend, limits, send_flag)¶
Send four dimensional array asend of data from all processors in the current realm with data between global cell array limits to the corresponding processors from the other realm.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_send(asend, limits, send_flag)
Inputs
asend
Array of data to send. Should be a four dimensional array allocated using the number of cells on the current processor between the limits. Size should be obtained from CPL_my_proc_portion(limits, portion).
limits
Limits in global cell coordinates, must be the same as corresponding receive
Outputs
send_flag
Returned flag which indicates success or failure of send process
Example
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Coupled Send array
allocate(A(3, Ncells(1), Ncells(2), Ncells(3)))
do i = portion(1),portion(2)
do j = portion(3),portion(4)
do k = portion(5),portion(6)
    ii = i-portion(1)+1; jj = j-portion(3)+1; kk = k-portion(5)+1
    A(1,ii,jj,kk) = dble(i)
    A(2,ii,jj,kk) = dble(j)
    A(3,ii,jj,kk) = dble(k)
enddo
enddo
enddo
call CPL_send(A, limits, send_flag)
- Parameters
asend (,,*,*) [real,in] :: Array containing data to send
limits (6) [integer,in] :: Global cell indices with minimum and maximum values to send
send_flag [logical,out] :: Flag set if processor has passed data
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,error_abort()
,cpl_graph_comm()
,myid_graph()
,olap_mask()
,rank_world()
,realm()
,ierr()
,void()
,cpl_setup_complete()
,realm_name()
,myid_realm()
,rootid_realm()
,rank_intersect()
,cpl_realm_intersection_comm()
)
- Call to
cpl_my_proc_portion()
,cpl_cart_coords()
,cpl_proc_portion()
,cpl_get_no_cells()
- subroutine coupler/cpl_send_min(asend[, send_flag])¶
- Parameters
asend (,,*,*) [real,in] :: Array containing data to send
send_flag [logical,out,optional] :: Flag set if processor has passed data
- Use
- Call to
cpl_get_cnst_limits()
,cpl_get_bnry_limits()
,cpl_send_full()
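As a hedged usage sketch: through the generic CPL_send interface the minimal form passes only the array (the cell limits are obtained internally from the boundary/constraint regions listed above), as in the CPL_get_arrays examples:

call CPL_send(send_array)             !send_flag optional
call CPL_send(send_array, send_flag)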
- subroutine coupler/cpl_recv_full(arecv, limits, recv_flag)¶
Receive data to the local grid from the associated ranks of the other realm
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_recv(arecv, limits, recv_flag)
Inputs
arecv
Array of data to recv. Should be a four dimensional array allocated using the number of cells on the current processor between the limits. Size should be obtained from CPL_my_proc_portion(limits, portion).
limits
Limits in global cell coordinates, must be the same as corresponding send
Outputs
recv_flag
Returned flag which indicates success or failure of recv process
Example
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Coupled Receive
allocate(A(3, Ncells(1), Ncells(2), Ncells(3)))
call CPL_recv(A, limits, recv_flag)
- Parameters
arecv (,,*,*) [real,inout] :: Pre-allocated array that receives data
limits (6) [integer,in] :: Global cell indices with minimum and maximum values to recv
recv_flag [logical,out] :: Flag set if processor has received data
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,rank_graph()
,error_abort()
,cpl_graph_comm()
,myid_graph()
,olap_mask()
,rank_world()
,realm()
,realm_name()
,iblock_realm()
,jblock_realm()
,kblock_realm()
,void()
,ierr()
,rank_intersect()
,cpl_realm_intersection_comm()
)
- Call to
cpl_cart_coords()
,cpl_proc_portion()
,cpl_my_proc_portion()
- subroutine coupler/cpl_recv_min(arecv[, recv_flag])¶
- Parameters
arecv (,,*,*) [real,out]
recv_flag [logical,out,optional] :: Flag set if processor has received data
- Use
- Call to
cpl_get_bnry_limits()
,cpl_get_cnst_limits()
,cpl_recv_full()
- subroutine coupler/cpl_swaphalos(a)¶
Swap halos with adjacent processors on current realm cart comm
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_swaphalos(A)
Inputs
A
Array of data to swaphalo. Should be a four dimensional array allocated using the number of cells, including one halo on each side, on the current processor. Size should be obtained from CPL_my_proc_portion(limits, portion) with an extra halo on each side. This halo should have been used to collect overflow on the current processor.
Outputs
A
Array of data with outer halo obtained from adjacent processes.
Example
call CPL_get(icmax_olap=icmax_olap)
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, Ncells)

!Setup an increasing number in each cell in 2D
allocate(A(1, Ncells(1)+2, Ncells(2)+2, Ncells(3)+2))
A = 0.0d0
do i=2,Ncells(1)+1
do j=2,Ncells(2)+1
do k=2,Ncells(3)+1
    ii = i + portion(1)
    jj = j + portion(3)
    A(1,i,j,k) = ii + jj*icmax_olap
enddo
enddo
enddo
call CPL_swaphalos(A)
- Parameters
a (,,*,*) [real,inout] :: 4D Array of data with empty outer halo.
- subroutine coupler/updatefaces(a, n1, n2, n3, nresults, ixyz, icomm_grid)¶
updatefaces – facilitates the MPI based exchange of data, updating face halo cells by passing them to neighbours
- Parameters
a (,,*,*) [real,inout]
n1 [integer,in]
n2 [integer,in]
n3 [integer,in]
nresults [integer,in]
ixyz [integer]
icomm_grid [integer,in]
- Use
mpi
- subroutine coupler/establish_surface_cells(ncells, nhb, surfacecells)¶
Establish and store indices of cells which are on the outer domain but not in the halo
- Parameters
ncells (3) [integer,in] :: Number of cells in x,y and z
nhb (3) [integer,in] :: Number of halos in x,y and z
surfacecells (,) [integer,out,allocatable] :: N by 3 array of Surface cells
- subroutine coupler/establish_halo_cells(ncells, halocells)¶
Establish and store indices of cells which are in the halo
- Parameters
ncells (3) [integer,in] :: Number of cells in x,y and z
halocells (,) [integer,out,allocatable] :: N by 3 array of halo cells
- function coupler/heaviside(x)¶
- Parameters
x [integer,in]
- Return
heaviside [real]
- subroutine coupler/cpl_proc_extents(coord, realm, extents[, ncells])¶
Gets maximum and minimum cells for processor coordinates
Synopsis
CPL_proc_extents(coord,realm,extents,ncells)
Inputs
coord
processor cartesian coordinate (3 x integer)
realm
cfd_realm (1) or md_realm (2) (integer)
- Inputs/Output
NONE
Output
extents - Six component array which defines processor extents: xmin,xmax,ymin,ymax,zmin,zmax (6 x integer)
- Parameters
ncells [integer,out,optional] :: number of cells on processor (integer)
coord (3) [integer,in]
realm [integer,in]
extents (6) [integer,out]
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,icpmin_md()
,icpmax_md()
,jcpmin_md()
,jcpmax_md()
,kcpmin_md()
,kcpmax_md()
,icpmin_cfd()
,icpmax_cfd()
,jcpmin_cfd()
,jcpmax_cfd()
,kcpmin_cfd()
,kcpmax_cfd()
,error_abort()
)
- subroutine coupler/cpl_olap_extents(coord, realm, extents[, ncells])¶
Get maximum and minimum cells for current communicator within the overlapping region only
Synopsis
CPL_olap_extents(coord,realm,extents,ncells)
Inputs
coord
processor cartesian coordinate (3 x integer)
realm
cfd_realm (1) or md_realm (2) (integer)
- Inputs/Output
NONE
Output
extents
Six components array which defines processor extents within the overlap region only: xmin,xmax,ymin,ymax,zmin,zmax (6 x integer)
- Parameters
ncells [integer,out,optional] :: number of cells on processor (integer)
coord (3) [integer,in]
realm [integer,in]
extents (6) [integer,out]
- Use
mpi
,coupler_module
(md_realm()
,cfd_realm()
,icpmin_md()
,icpmax_md()
,jcpmin_md()
,jcpmax_md()
,kcpmin_md()
,kcpmax_md()
,icpmin_cfd()
,icpmax_cfd()
,jcpmin_cfd()
,jcpmax_cfd()
,kcpmin_cfd()
,kcpmax_cfd()
,icmin_olap()
,icmax_olap()
,jcmin_olap()
,jcmax_olap()
,kcmin_olap()
,kcmax_olap()
,error_abort()
)
- subroutine coupler/cpl_proc_portion(coord, realm, limits, portion[, ncells])¶
Get maximum and minimum cell indices, i.e. the ‘portion’, of the input cell extents ‘limits’ that is contributed by the processor specified by processor coord.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Note: limits(6) and portion(6) are of the form: (xmin,xmax,ymin,ymax,zmin,zmax)
Synopsis
CPL_proc_portion(coord, realm, limits, portion, ncells)
Inputs
coord
processor cartesian coordinate (3 x integer)
realm
cfd_realm (1) or md_realm (2) (integer)
limits
Array of cell extents that specify the input region.
Outputs
portion - Array of cell extents that define the local processor’s contribution to the input region ‘limits’.
- Parameters
ncells [integer,out,optional] :: number of cells in portion (integer)
coord (3) [integer,in]
realm [integer,in]
limits (6) [integer,in]
portion (6) [integer,out]
- Use
mpi
,coupler_module
(void()
)
- Called from
cpl_scatter()
,cpl_send_full()
,cpl_recv_full()
,cpl_my_proc_portion()
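A hedged sketch (names and coordinate values illustrative), using the overlap limits as the input region:

integer :: coord(3), limits(6), portion(6), ncells

call CPL_get_olap_limits(limits)
coord = (/1, 1, 1/)
!Portion of the overlap region held by the processor at 'coord'
!in the MD realm (md_realm = 2); the cell count ncells is optional
call CPL_proc_portion(coord, 2, limits, portion, ncells)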
- subroutine coupler/cpl_my_proc_portion(limits, portion)¶
Get maximum and minimum cell indices, i.e. the ‘portion’, of the input cell extents ‘limits’ that is contributed by calling processor.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Note: limits(6) and portion(6) are of the form: (xmin,xmax,ymin,ymax,zmin,zmax)
Synopsis
CPL_my_proc_portion(limits, portion)
Inputs
limits
Array of cell extents that specify the input region.
Outputs
portion - Array of cell extents that define the local processor’s part of the input region ‘limits’.
- Parameters
limits (6) [integer,in]
portion (6) [integer,out]
- Use
coupler_module
(rank_cart()
,realm()
,md_realm()
,cfd_realm()
,rank2coord_cfd()
,rank2coord_md()
)
- Called from
cpl_send_full()
,cpl_recv_full()
,cpl_get_arrays()
,rwrite_arrays()
- subroutine coupler/cpl_my_proc_extents(extents)¶
- Parameters
extents (6) [integer,out]
- Use
coupler_module
(rank_cart()
,realm()
,md_realm()
,cfd_realm()
,rank2coord_cfd()
,rank2coord_md()
)
- subroutine coupler/cpl_cart_coords(comm, rank_bn, realm, maxdims, coords, ierr)¶
Determines process coords in appropriate realm’s cartesian topology given a rank in any communicator
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_Cart_coords(COMM, rank, realm, maxdims, coords, ierr)
Inputs
COMM
communicator with cartesian structure (handle)
rank
rank of a process within group of comm (integer) NOTE fortran convention rank=1 to nproc
realm
cfd_realm (1) or md_realm (2) (integer)
maxdims
length of vector coords in the calling program (integer)
Outputs
coords
integer array (of size ndims) containing the Cartesian coordinates of specified process (integer)
ierr
error flag
- Parameters
comm [integer,in]
rank_bn [integer,in]
realm [integer,in]
maxdims [integer,in]
coords (maxdims) [integer,out]
ierr [integer,out]
- Use
coupler_module
(cpl_world_comm()
,cpl_realm_comm()
,cpl_inter_comm()
,cpl_cart_comm()
,cpl_olap_comm()
,cpl_graph_comm()
,cpl_realm_intersection_comm()
,md_realm()
,cfd_realm()
,rank_world2rank_mdcart()
,rank_world2rank_cfdcart()
,rank_mdrealm2rank_world()
,rank_cfdrealm2rank_world()
,rank_olap2rank_world()
,rank_graph2rank_world()
,rank2coord_cfd()
,rank2coord_md()
,coupler_error_cart_comm()
,void()
,nproc_cfd()
,nproc_md()
,error_abort()
)
- Called from
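A short sketch; CART_COMM is assumed to be the Cartesian communicator created in the setup examples:
use cpl, only : CPL_Cart_coords
integer :: coords(3), ierr
! Coordinates of rank 1 (Fortran convention) in the MD realm's topology
call CPL_Cart_coords(CART_COMM, 1, 2, 3, coords, ierr)   ! realm 2 = md_realm, maxdims = 3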
- subroutine coupler/cpl_get_rank(comm, rank_bn)¶
Return rank of current processor in specified COMM
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Synopsis
CPL_get_rank(COMM, rank)
Inputs
COMM
communicator with cartesian structure (handle)
Outputs
rank
rank of a process within the group of comm (integer). Note the Fortran convention: rank runs from 1 to nproc.
- Parameters
comm [integer,in]
rank_bn [integer,out]
- Use
coupler_module
(cpl_world_comm()
,cpl_realm_comm()
,cpl_inter_comm()
,cpl_cart_comm()
,cpl_olap_comm()
,cpl_graph_comm()
,cpl_realm_intersection_comm()
,rank_world()
,rank_realm()
,rank_cart()
,rank_olap()
,rank_graph()
,error_abort()
)
- function coupler/cpl_overlap()¶
Check if the current processor is in the overlap region.
Synopsis
CPL_overlap()
Inputs
NONE
Returns
CPL_overlap
True if the calling processor is in the overlap region and false otherwise.
- Return
p [logical]
- Use
- Called from
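This is typically used to guard exchange calls so that only overlapping ranks participate; a minimal sketch (send_array as in the CPL_get_arrays examples below):
use cpl, only : CPL_overlap, CPL_send
if (CPL_overlap()) then
    call CPL_send(send_array)   ! only ranks inside the overlap region take part
endif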
- subroutine coupler/cpl_get([icmax_olap[, icmin_olap[, jcmax_olap[, jcmin_olap[, kcmax_olap[, kcmin_olap[, density_cfd[, density_md[, dt_cfd[, dt_md[, dx[, dy[, dz[, ncx[, ncy[, ncz[, xg[, yg[, zg[, xl_md[, xl_cfd[, yl_md[, yl_cfd[, zl_md[, zl_cfd[, npx_md[, npy_md[, npz_md[, npx_cfd[, npy_cfd[, npz_cfd[, constraint_algo[, constraint_cvflag[, constraint_ot[, constraint_ncer[, constraint_flekkoy[, constraint_off[, constraint_cv[, icmin_cnst[, icmax_cnst[, jcmin_cnst[, jcmax_cnst[, kcmin_cnst[, kcmax_cnst[, md_cfd_match_cellsize[, staggered_averages[, cpl_cfd_bc_slice[, cpl_md_bc_slice[, nsteps_md[, nsteps_cfd[, nsteps_coupled[, cpl_cfd_bc_x[, cpl_cfd_bc_y[, cpl_cfd_bc_z[, timestep_ratio[, comm_style[, sendtype_cfd_to_md[, sendtype_md_to_cfd]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]])¶
Wrapper to retrieve (read-only) parameters from the coupler_module. Note: this ensures all variables in the coupler are protected from corruption by either the CFD or MD codes. As every argument is optional, values are requested by keyword (see the sketch after this entry).
Synopsis
CPL_get([see coupler_module])
Input
NONE
Output
See below
- Parameters
icmax_olap [integer,out,]
icmin_olap [integer,out,]
jcmax_olap [integer,out,]
jcmin_olap [integer,out,]
kcmax_olap [integer,out,]
kcmin_olap [integer,out,]
density_cfd [real,out,]
density_md [real,out,]
dt_cfd [real,out,]
dt_md [real,out,]
dx [real,out,]
dy [real,out,]
dz [real,out,]
ncx [integer,out,]
ncy [integer,out,]
ncz [integer,out,]
xg (,,*) [real,out,allocatable]
yg (,,*) [real,out,allocatable]
zg (,,*) [real,out,allocatable]
xl_md [real,out,]
xl_cfd [real,out,]
yl_md [real,out,]
yl_cfd [real,out,]
zl_md [real,out,]
zl_cfd [real,out,]
npx_md [integer,out,]
npy_md [integer,out,]
npz_md [integer,out,]
npx_cfd [integer,out,]
npy_cfd [integer,out,]
npz_cfd [integer,out,]
constraint_algo [integer,out,]
constraint_cvflag [integer,out,]
constraint_ot [integer,out,]
constraint_ncer [integer,out,]
constraint_flekkoy [integer,out,]
constraint_off [integer,out,]
constraint_cv [integer,out,]
icmin_cnst [integer,out,]
icmax_cnst [integer,out,]
jcmin_cnst [integer,out,]
jcmax_cnst [integer,out,]
kcmin_cnst [integer,out,]
kcmax_cnst [integer,out,]
md_cfd_match_cellsize [integer,out,]
staggered_averages (3) [logical,out,]
cpl_cfd_bc_slice [integer,out,]
cpl_md_bc_slice [integer,out,]
nsteps_md [integer,out,]
nsteps_cfd [integer,out,]
nsteps_coupled [integer,out,]
cpl_cfd_bc_x [integer,out,]
cpl_cfd_bc_y [integer,out,]
cpl_cfd_bc_z [integer,out,]
timestep_ratio [integer,out,]
comm_style [integer,out,]
sendtype_cfd_to_md [integer,out,]
sendtype_md_to_cfd [integer,out,]
- Use
coupler_module
(icmax_olap_ => icmax_olap
,icmin_olap_ => icmin_olap
,jcmax_olap_ => jcmax_olap
,jcmin_olap_ => jcmin_olap
,kcmax_olap_ => kcmax_olap
,kcmin_olap_ => kcmin_olap
,density_cfd_ => density_cfd
,density_md_ => density_md
,dt_cfd_ => dt_cfd
,dt_md_ => dt_md
,dx_ => dx
,dy_ => dy
,dz_ => dz
,ncx_ => ncx
,ncy_ => ncy
,ncz_ => ncz
,xl_md_ => xl_md
,yl_md_ => yl_md
,zl_md_ => zl_md
,xl_cfd_ => xl_cfd
,yl_cfd_ => yl_cfd
,zl_cfd_ => zl_cfd
,npx_md_ => npx_md
,npy_md_ => npy_md
,npz_md_ => npz_md
,npx_cfd_ => npx_cfd
,npy_cfd_ => npy_cfd
,npz_cfd_ => npz_cfd
,xg_ => xg
,yg_ => yg
,zg_ => zg
,timestep_ratio_ => timestep_ratio
,md_cfd_match_cellsize_ => md_cfd_match_cellsize
,staggered_averages_ => staggered_averages
,constraint_algo_ => constraint_algo
,constraint_cvflag_ => constraint_cvflag
,constraint_ot_ => constraint_ot
,constraint_ncer_ => constraint_ncer
,constraint_flekkoy_ => constraint_flekkoy
,constraint_cv_ => constraint_cv
,constraint_off_ => constraint_off
,icmin_cnst_ => icmin_cnst
,icmax_cnst_ => icmax_cnst
,jcmin_cnst_ => jcmin_cnst
,jcmax_cnst_ => jcmax_cnst
,kcmin_cnst_ => kcmin_cnst
,kcmax_cnst_ => kcmax_cnst
,cpl_cfd_bc_slice_ => cpl_cfd_bc_slice
,cpl_md_bc_slice_ => cpl_md_bc_slice
,nsteps_md_ => nsteps_md
,nsteps_cfd_ => nsteps_cfd
,nsteps_coupled_ => nsteps_coupled
,cpl_cfd_bc_x_ => cpl_cfd_bc_x
,cpl_cfd_bc_y_ => cpl_cfd_bc_y
,cpl_cfd_bc_z_ => cpl_cfd_bc_z
,comm_style_ => comm_style
,comm_style_send_recv_ => comm_style_send_recv
,comm_style_gath_scat_ => comm_style_gath_scat
,sendtype_cfd_to_md_ => sendtype_cfd_to_md
,sendtype_md_to_cfd_ => sendtype_md_to_cfd
)
- Called from
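A minimal sketch of retrieving a few values by keyword; the double precision kind is an assumption following the CPL_get_arrays examples below:
use cpl, only : CPL_get
double precision :: dt_cfd, dt_md, xl_cfd
integer :: ncx, ncy, ncz
! only the requested optional arguments are filled in
call CPL_get(dt_cfd=dt_cfd, dt_md=dt_md, ncx=ncx, ncy=ncy, ncz=ncz, xl_cfd=xl_cfd)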
- function coupler/cpl_comm_style()¶
- Return
cpl_comm_style [integer]
- Use
- function coupler/cpl_realm()¶
- Return
cpl_realm [integer]
- Use
- subroutine coupler/cpl_meshgrid(xg, yg, zg)¶
- Parameters
xg (,,*) [real,out,allocatable]
yg (,,*) [real,out,allocatable]
zg (,,*) [real,out,allocatable]
- Call to
- function coupler/cpl_map_md2cfd_coord(coord_md, coord_cfd)¶
Map global MD position to global CFD coordinate frame
- Parameters
coord_md (3) [real,in]
coord_cfd (3) [real,out]
- Return
valid_coord [logical]
- Use
coupler_module
(xl_md()
,xg()
,icmin_olap()
,icmax_olap()
,yl_md()
,yg()
,jcmin_olap()
,jcmax_olap()
,zl_md()
,zg()
,kcmin_olap()
,kcmax_olap()
,x_orig_md()
,y_orig_md()
,z_orig_md()
,x_orig_cfd()
,y_orig_cfd()
,z_orig_cfd()
,xl_cfd()
,yl_cfd()
,zl_cfd()
)
- Called from
- Call to
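A sketch of checking and applying the mapping; the coordinate values here are illustrative:
use cpl, only : CPL_map_md2cfd_coord
double precision :: r_md(3), r_cfd(3)
logical :: ok
r_md = (/0.5d0, 0.1d0, 0.5d0/)
ok = CPL_map_md2cfd_coord(r_md, r_cfd)   ! .false. if r_md cannot be mapped
if (ok) print*, "position in CFD frame:", r_cfd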
- function coupler/cpl_map_cfd2md_coord(coord_cfd, coord_md)¶
Map a global CFD position to the global MD coordinate frame.
- Parameters
coord_cfd (3) [real,in]
coord_md (3) [real,out]
- Return
valid_coord [logical]
- Use
coupler_module
(xl_md()
,xg()
,icmin_olap()
,icmax_olap()
,yl_md()
,yg()
,jcmin_olap()
,jcmax_olap()
,zl_md()
,zg()
,kcmin_olap()
,kcmax_olap()
,x_orig_md()
,y_orig_md()
,z_orig_md()
,x_orig_cfd()
,y_orig_cfd()
,z_orig_cfd()
,xl_cfd()
,yl_cfd()
,zl_cfd()
)
- Called from
- Call to
- function coupler/cpl_md2cfd(coord_md)¶
- Parameters
coord_md (3) [real,in]
- Return
coord_cfd (3) [real]
- Use
coupler_module
(error_abort()
)
- Call to
- function coupler/cpl_cfd2md(coord_cfd)¶
- Parameters
coord_cfd (3) [real,in]
- Return
coord_md (3) [real]
- Use
coupler_module
(error_abort()
)
- Call to
- subroutine coupler/cpl_map_cell2coord(i, j, k, coord_xyz)¶
- Parameters
i [integer,in]
j [integer,in]
k [integer,in]
coord_xyz (3) [real,out]
- Use
coupler_module
(xg()
,yg()
,zg()
,realm()
,ncx()
,ncy()
,ncz()
,md_realm()
,cfd_realm()
,maxgridsize()
,error_abort()
)
- Called from
- Call to
cpl_get_olap_limits()
,is_cell_inside()
,cpl_map_cfd2md_coord()
- function coupler/cpl_map_coord2cell(x, y, z, cell_ijk)¶
- Parameters
x [real,in]
y [real,in]
z [real,in]
cell_ijk (3) [integer,out]
- Return
ret [logical]
- Use
coupler_module
(dx()
,dy()
,dz()
,icmin_olap()
,jcmin_olap()
,kcmin_olap()
,icmax_olap()
,jcmax_olap()
,kcmax_olap()
)
- Call to
cpl_map_cell2coord()
,cpl_get_olap_limits()
,is_cell_inside()
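cpl_map_coord2cell is roughly the inverse of cpl_map_cell2coord, locating the overlap cell that contains a physical position; a sketch with illustrative values:
use cpl, only : CPL_map_coord2cell
double precision :: x, y, z
integer :: cell_ijk(3)
logical :: inside
x = 0.5d0; y = 0.1d0; z = 0.5d0
inside = CPL_map_coord2cell(x, y, z, cell_ijk)   ! .false. if (x,y,z) is outside the overlap cells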
- subroutine coupler/cpl_get_no_cells(limits, no_cells)¶
- Parameters
limits (6) [integer,in]
no_cells (3) [integer,out]
- Called from
cpl_send_full()
,cpl_get_arrays()
,rwrite_arrays()
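cpl_get_no_cells converts the six cell extents into per-direction cell counts, which is typically used to size send/recv buffers; a sketch (the single-component 4D array shape follows the CPL_get_arrays examples below):
use cpl, only : CPL_get_olap_limits, CPL_my_proc_portion, CPL_get_no_cells
integer :: limits(6), portion(6), no_cells(3)
double precision, allocatable :: buf(:,:,:,:)
call CPL_get_olap_limits(limits)
call CPL_my_proc_portion(limits, portion)
call CPL_get_no_cells(portion, no_cells)         ! cells per direction in the local portion
allocate(buf(1, no_cells(1), no_cells(2), no_cells(3)))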
- function coupler/cpl_map_glob2loc_cell(limits, glob_cell, loc_cell)¶
- Parameters
limits (6) [integer,in]
glob_cell (3) [integer,in]
loc_cell (3) [integer,out]
- Return
ret [logical]
- Use
coupler_module
(void()
,error_abort()
)
- Call to
- subroutine coupler/cpl_get_olap_limits(limits)¶
Get the limits of the overlap region as cell indices in the global coordinate system; these are as specified in the input file (COUPLER.in).
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Note: limits(6) are of the form: (xmin,xmax,ymin,ymax,zmin,zmax)
Synopsis
CPL_get_olap_limits(limits)
Outputs
limits
Array of cell extents that specify the overlap region.
- Parameters
limits (6) [integer,out]
- Use
coupler_module
(icmin_olap()
,icmax_olap()
,jcmin_olap()
,jcmax_olap()
,kcmin_olap()
,kcmax_olap()
)
- Called from
cpl_map_cell2coord()
,cpl_map_coord2cell()
,cpl_get_arrays()
- subroutine coupler/cpl_get_cnst_limits(limits)¶
Get the limits of the constraint region as cell indices in the global coordinate system; these are as specified in the input file (COUPLER.in) and must lie within the overlap region.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Note: limits(6) are of the form: (xmin,xmax,ymin,ymax,zmin,zmax)
Synopsis
CPL_get_cnst_limits(limits)
Outputs
limits
Array of cell extents that specify the constrained region.
- Parameters
limits (6) [integer,out]
- Use
coupler_module
(icmin_cnst()
,icmax_cnst()
,jcmin_cnst()
,jcmax_cnst()
,kcmin_cnst()
,kcmax_cnst()
)
- Called from
- subroutine coupler/cpl_get_bnry_limits(limits)¶
Get the limits of the boundary region as cell indices in the global coordinate system; these are as specified in the input file (COUPLER.in) and must lie within the overlap region.
Remarks
Assumes the coupler has been initialised with CPL_init and topological mapping has been setup using either CPL_setup_md or CPL_setup_cfd as appropriate.
Note: limits(6) are of the form: (xmin,xmax,ymin,ymax,zmin,zmax)
Synopsis
CPL_get_bnry_limits(limits)
Outputs
limits
Array of cell extents that specify the boundary region.
- Parameters
limits (6) [integer,out]
- Use
coupler_module
(icmin_bnry()
,icmax_bnry()
,jcmin_bnry()
,jcmax_bnry()
,kcmin_bnry()
,kcmax_bnry()
)
- Called from
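All three limit getters (overlap, constraint, boundary) are used in the same way; for example, to find the local part of the constrained region:
use cpl, only : CPL_get_cnst_limits, CPL_my_proc_portion
integer :: cnst_limits(6), portion(6)
call CPL_get_cnst_limits(cnst_limits)
call CPL_my_proc_portion(cnst_limits, portion)   ! local part of the constrained region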
- subroutine coupler/cpl_get_arrays(recv_array, recv_size, send_array, send_size)¶
A helper function to get send and receive arrays of the required size for the cells local to the current processor.
Example
The first example shows the CFD side of the exchange, with send/recv arrays obtained from CPL_get_arrays
program main_CFD
   use cpl, only : CPL_Init, CPL_setup_cfd, CPL_send, &
                   CPL_recv, CPL_get_arrays, CPL_finalize
   use mpi
   implicit none
   integer :: time, CFD_COMM, CART_COMM, ierr, CFD_realm=1
   double precision, dimension(:,:,:,:), &
       allocatable :: send_array, recv_array
   call MPI_Init(ierr)
   call CPL_init(CFD_realm, CFD_COMM, ierr)
   call MPI_Cart_create(CFD_COMM, 3, (/1, 1, 1/), &
                        (/.true.,.true.,.true./), &
                        .true., CART_COMM, ierr)
   call CPL_setup_cfd(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                      (/0.d0, 0.d0, 0.d0/), &
                      (/32, 32, 32/))
   call CPL_get_arrays(recv_array, 4, send_array, 1)
   do time=1,5
       call CPL_recv(recv_array)
       print*, "CFD", time, recv_array(1,1,1,1)
       send_array(1,:,:,:) = 2.*time
       call CPL_send(send_array)
   enddo
   call CPL_finalize(ierr)
   call MPI_comm_free(CFD_COMM,ierr)
   call MPI_finalize(ierr)
end program main_CFD
The corresponding MD code which matches this; note that the CPL_get_arrays send and recv sizes are swapped (what is sent from the CFD side is received on the MD side, and vice versa).
program main_MD
   use cpl, only : CPL_Init, CPL_setup_md, CPL_send, &
                   CPL_recv, CPL_get_arrays, CPL_finalize
   use mpi
   implicit none
   integer :: time, MD_COMM, CART_COMM, ierr, MD_realm=2
   double precision, dimension(:,:,:,:), &
       allocatable :: send_array, recv_array
   call MPI_Init(ierr)
   call CPL_init(MD_realm, MD_COMM, ierr)
   call MPI_Cart_create(MD_COMM, 3, (/1, 1, 1/), &
                        (/.true.,.true.,.true./), &
                        .true., CART_COMM, ierr)
   call CPL_setup_md(CART_COMM, (/1.d0, 1.d0, 1.d0/), &
                     (/0.d0, 0.d0, 0.d0/))
   call CPL_get_arrays(recv_array, 1, send_array, 4)
   do time=1,5
       send_array(1,:,:,:) = 5.*time
       call CPL_send(send_array)
       call CPL_recv(recv_array)
       print*, "MD", time, recv_array(1,1,1,1)
   enddo
   call CPL_finalize(ierr)
   call MPI_comm_free(MD_COMM,ierr)
   call MPI_finalize(ierr)
end program main_MD
These two programs are then run together, in either MPMD or port-connect mode.
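In MPMD mode the pair can be launched with the standard mpiexec colon syntax, for example mpiexec -n 1 ./cfd : -n 1 ./md, where the executable names and processor counts are illustrative and must match the topologies passed to MPI_Cart_create.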
- Parameters
recv_array (,,,*) [real,out,allocatable]
recv_size [integer,in]
send_array (,,,*) [real,out,allocatable]
send_size [integer,in]
- Call to
cpl_get_olap_limits()
,cpl_my_proc_portion()
,cpl_get_no_cells()
- function coupler/cpl_is_proc_inside(region)¶
- Parameters
region (6) [integer,in]
- Return
res [logical]
- Use
coupler_module
(void()
)
- function coupler/is_cell_inside(cell, limits)¶
- Parameters
cell (3) [integer,in]
limits (6) [integer,in]
- Return
res [logical]
- Use
coupler_module
(void()
)
- Called from
cpl_map_cell2coord()
,cpl_map_coord2cell()
,cpl_map_glob2loc_cell()
- function coupler/is_coord_inside(coord, coord_limits)¶
- Parameters
coord (3) [real,in]
coord_limits (6) [real,in]
- Return
res [logical]
- Called from
- function coupler/coupler_md_get_save_period()¶
- Return
p [integer]
- Use
- function coupler/coupler_md_get_average_period()¶
- Return
p [integer]
- Use
- function coupler/coupler_md_get_md_steps_per_cfd_dt()¶
- Return
p [integer]
- Use
- function coupler/cpl_cfd_dt()¶
- Return
p [real]
- Use
- subroutine coupler/test_python(integer_p, double_p, bool_p, integer_pptr, double_pptr)¶
- Parameters
integer_p [integer,in]
double_p [real,in]
bool_p [logical,in]
integer_pptr (*) [integer]
double_pptr (*) [real]
- subroutine coupler/mpi_errorcheck(ierr)¶
- Parameters
ierr [integer,in]
- Use
mpi