Python Bindings
- class cplpy.CPL
- init(calling_realm)
(cfd+md) Splits MPI_COMM_WORLD in both the CFD and MD codes and creates an intercommunicator between CFD and MD.
Remarks
Assumes MPI has been initialised (MPI_Init) and that communicator MPI_COMM_WORLD exists and contains all processors in both the CFD and MD regions.
Synopsis
CPL.init(calling_realm)
Inputs
calling_realm
Identifies the calling processor as either CFD_REALM (integer with value 1) or MD_REALM (integer with value 2).
Outputs
RETURNED_REALM_COMM
Communicator local to the CFD or MD processors (depending on calling_realm), resulting from the split of MPI_COMM_WORLD.
Example
from mpi4py import MPI
from cplpy import CPL

comm = MPI.COMM_WORLD
CPL = CPL()
MD_COMM = CPL.init(CPL.MD_REALM)
nprocs = MD_COMM.Get_size()
rank = MD_COMM.Get_rank()
print(("MD code processor " + str(rank+1) + " of " + str(nprocs)))

CPL.finalize()
MPI.Finalize()
Errors
COUPLER_ERROR_REALM = 1: wrong realm value
COUPLER_ERROR_ONE_REALM = 2: one realm missing
COUPLER_ERROR_INIT = 3: initialisation error
- setup_cfd(icomm_grid, xyzL, xyz_orig, ncxyz)
Initialisation routine for the coupler module. Every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters.
Remarks
Assumes CPL has been initialised with CPL.init and that the realm communicator returned by it exists.
Synopsis
CPL.setup_cfd(icomm_grid, xyzL, xyz_orig, ncxyz)
Inputs
icomm_grid
Communicator based on CFD processor topology returned from a call to MPI_CART_CREATE.
xyzL
CFD domain size.
xyz_orig
CFD origin.
ncxyz
Number of CFD cells in global domain.
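Example
A minimal CFD-side sketch (illustrative, not taken from the library's own examples), assuming a single processor with a 1x1x1 Cartesian topology and an arbitrary 32x32x32 cell grid; like all coupled runs, it must be launched alongside a matching MD-side program (see CPL.setup_md below).

#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL
import numpy as np

comm = MPI.COMM_WORLD
CPL = CPL()

# Initialise the CFD realm and create a 1x1x1 Cartesian topology
CFD_COMM = CPL.init(CPL.CFD_REALM)
cart_comm = CFD_COMM.Create_cart([1, 1, 1])

# Domain size, origin and global number of CFD cells (illustrative values)
xyzL = np.array([1.0, 1.0, 1.0], order='F', dtype=np.float64)
xyz_orig = np.array([0.0, 0.0, 0.0], order='F', dtype=np.float64)
ncxyz = np.array([32, 32, 32], order='F', dtype=np.int32)

CPL.setup_cfd(cart_comm, xyzL, xyz_orig, ncxyz)

CPL.finalize()
MPI.Finalize()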
- setup_md(icomm_grid, xyzL, xyz_orig)
Initialisation routine for the coupler module. Every variable is sent and stored to ensure both the MD and CFD regions have an identical list of parameters.
Remarks
Assumes CPL has been initialised with CPL.init and that the MD realm communicator returned by it exists.
Synopsis
CPL.setup_md(icomm_grid, xyzL, xyz_orig)
Inputs
icomm_grid
Communicator based on MD processor topology returned from a call to MPI_CART_CREATE.
xyzL
MD domain size.
xyz_orig
MD origin.
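Example
A minimal MD-side sketch (illustrative, not taken from the library's own examples), assuming a single processor with a 1x1x1 Cartesian topology; it is the counterpart of the CPL.setup_cfd sketch above and must be launched alongside a CFD-side program.

#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL
import numpy as np

comm = MPI.COMM_WORLD
CPL = CPL()

# Initialise the MD realm and create a 1x1x1 Cartesian topology
MD_COMM = CPL.init(CPL.MD_REALM)
cart_comm = MD_COMM.Create_cart([1, 1, 1])

# MD domain size and origin (illustrative values; no cell count is passed on the MD side)
xyzL = np.array([1.0, 1.0, 1.0], order='F', dtype=np.float64)
xyz_orig = np.array([0.0, 0.0, 0.0], order='F', dtype=np.float64)

CPL.setup_md(cart_comm, xyzL, xyz_orig)

CPL.finalize()
MPI.Finalize()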
- proc_portion(coord, realm, limits)
Get the maximum and minimum cell indices, i.e. the 'portion' of the input cell extents 'limits', that is contributed by the processor specified by coord.
Remarks
Assumes the coupler has been initialised with CPL.init and the topological mapping has been set up using either CPL.setup_md or CPL.setup_cfd as appropriate. Note: limits and portion are 6-element arrays of the form (xmin, xmax, ymin, ymax, zmin, zmax).
Synopsis
CPL.proc_portion(coord, realm, limits)
Inputs
coord
Processor Cartesian coordinate; list or numpy array of 3 integers.
realm
CFD_REALM (integer with value 1) or MD_REALM (integer with value 2).
limits
Array of cell extents that specify the input region, list or numpy array of 6 integers
Outputs
portion
Array of cell extents that define the local processor’s contribution to the input region ‘limits’, numpy array of 6 integers
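Example
A short MD-side sketch (illustrative, assuming the single-process setup shown for CPL.setup_md above and a matching CFD-side program) that queries the portion of the overlap region belonging to the processor at Cartesian coordinate (0, 0, 0):

#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL
import numpy as np

comm = MPI.COMM_WORLD
CPL = CPL()
MD_COMM = CPL.init(CPL.MD_REALM)
CPL.setup_md(MD_COMM.Create_cart([1, 1, 1]), xyzL=[1.0, 1.0, 1.0],
             xyz_orig=[0.0, 0.0, 0.0])

# Portion of the overlap region held by the processor at coordinate (0, 0, 0)
olap_limits = CPL.get_olap_limits()
coord = np.array([0, 0, 0], order='F', dtype=np.int32)
portion = CPL.proc_portion(coord, CPL.MD_REALM, olap_limits)
print(("portion (xmin, xmax, ymin, ymax, zmin, zmax) = ", portion))

CPL.finalize()
MPI.Finalize()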
- my_proc_portion(limits)
Get the maximum and minimum cell indices, i.e. the 'portion' of the input cell extents 'limits', on the calling process.
Remarks
Assumes the coupler has been initialised with CPL.init and the topological mapping has been set up using either CPL.setup_md or CPL.setup_cfd as appropriate. Note: limits and portion are 6-element arrays of the form (xmin, xmax, ymin, ymax, zmin, zmax).
Synopsis
CPL.my_proc_portion(limits)
Inputs
limits
Array of cell extents that specify the input region, list or numpy array of 6 integers
Outputs
portion
Array of cell extents that define the local processor’s contribution to the input region ‘limits’, numpy array of 6 integers
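Example
A short MD-side sketch (illustrative, assuming the same single-process setup and a matching CFD-side program), combining my_proc_portion with CPL.get_no_cells to allocate a buffer for the local portion, as done in the CPL.send example below:

#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL
import numpy as np

comm = MPI.COMM_WORLD
CPL = CPL()
MD_COMM = CPL.init(CPL.MD_REALM)
CPL.setup_md(MD_COMM.Create_cart([1, 1, 1]), xyzL=[1.0, 1.0, 1.0],
             xyz_orig=[0.0, 0.0, 0.0])

# Portion of the overlap region held by this process and its local cell counts
olap_limits = CPL.get_olap_limits()
portion = CPL.my_proc_portion(olap_limits)
[ncxl, ncyl, nczl] = CPL.get_no_cells(portion)

# Allocate a four-dimensional, Fortran-ordered buffer matching the local portion
send_array = np.zeros((3, ncxl, ncyl, nczl), order='F', dtype=np.float64)

CPL.finalize()
MPI.Finalize()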
- send(asend, limits=None)
Send the four-dimensional array asend of data from all processors in the current realm, with data between the global cell array limits, to the corresponding processors of the other realm.
Remarks
Assumes the coupler has been initialised with CPL.init and the topological mapping has been set up using either CPL.setup_md or CPL.setup_cfd as appropriate.
Synopsis
CPL.send(asend, limits=None)
Inputs
asend
Array of data to send. Should be a four-dimensional Numpy array allocated using the number of cells on the current processor between the limits. For example, if the overlap limits span 8 cells, between cells 0 and 7, split over 2 processors, the first processor will have cells 0 to 3 and the second cells 4 to 7. The local portion should be obtained from CPL.my_proc_portion(limits) and used to allocate a Numpy array, or the array can be allocated using the helper function CPL.get_arrays with appropriate sizes.
limits [Optional]
Optional argument giving the cell limits of the region to send; if not specified, the global limits of the overlap region are used. These are in global cell coordinates and must match the corresponding receive.
Outputs
send_flag
Returned flag which indicates success or failure of the send process.
Example
This example links with the CPL.recv example below.
from mpi4py import MPI
from cplpy import CPL
import numpy as np

comm = MPI.COMM_WORLD
CPL = CPL()
nsteps = 1
dt = 0.2
density = 0.8

# Parameters of the cpu topology (cartesian grid)
NPx = 2
NPy = 2
NPz = 1
NProcs = NPx*NPy*NPz

# Parameters of the mesh topology (cartesian grid)
ncxyz = np.array([64, 18, 64], order='F', dtype=np.int32)
xyzL = np.array([10.0, 10.0, 10.0], order='F', dtype=np.float64)
xyz_orig = np.array([0.0, 0.0, 0.0], order='F', dtype=np.float64)

# Create communicators and check that number of processors is consistent
CFD_COMM = CPL.init(CPL.CFD_REALM)
nprocs_realm = CFD_COMM.Get_size()

if (nprocs_realm != NProcs):
    print("ERROR: Non-coherent number of processors.")
    comm.Abort(errorcode=1)

cart_comm = CFD_COMM.Create_cart([NPx, NPy, NPz])
CPL.setup_cfd(cart_comm, xyzL, xyz_orig, ncxyz)

cart_rank = cart_comm.Get_rank()
olap_limits = CPL.get_olap_limits()
portion = CPL.my_proc_portion(olap_limits)
[ncxl, ncyl, nczl] = CPL.get_no_cells(portion)

send_array = np.zeros((3, ncxl, ncyl, nczl), order='F', dtype=np.float64)
for i in range(0, ncxl):
    for j in range(0, ncyl):
        for k in range(0, nczl):
            ii = i + portion[0]
            jj = j + portion[2]
            kk = k + portion[4]

            send_array[0, i, j, k] = ii
            send_array[1, i, j, k] = jj
            send_array[2, i, j, k] = kk

ierr = CPL.send(send_array, olap_limits)

MPI.COMM_WORLD.Barrier()
CFD_COMM.Free()
cart_comm.Free()

CPL.finalize()
MPI.Finalize()
- recv(arecv, limits=None)
Receive data into the local grid from the associated ranks of the other realm.
Remarks
Assumes the coupler has been initialised with CPL.init and the topological mapping has been set up using either CPL.setup_md or CPL.setup_cfd as appropriate.
Synopsis
CPL.recv(arecv, limits=None)
Inputs
arecv
Array of data to recv. Should be a four-dimensional Numpy array allocated using the number of cells on the current processor between the limits. For example, if the overlap limits span 8 cells, between cells 0 and 7, split over 2 processors, the first processor will have cells 0 to 3 and the second cells 4 to 7. The local portion should be obtained from CPL.my_proc_portion(limits) and used to allocate a Numpy array, or the array can be allocated using the helper function CPL.get_arrays with appropriate sizes.
limits [Optional]
Limits in global cell coordinates; must be the same as the corresponding send command.
Outputs
recv_flag
Returned flag which indicates success or failure of the recv process.
Example
This example links with the CPL.send example above.
from mpi4py import MPI
from cplpy import CPL
import numpy as np


def read_input(filename):
    with open(filename, 'r') as f:
        content = f.read()

    dic = {}
    for i in content.split('\n'):
        if i.find("!") != -1:
            name = i.split("!")[1]
            value = i.split('!')[0].replace(' ', '')
            dic[name] = value
    return dic

comm = MPI.COMM_WORLD
#comm.Barrier()
CPL = CPL()

# Parameters of the cpu topology (cartesian grid)
dt = 0.1
NPx = 4
NPy = 2
NPz = 2
NProcs = NPx*NPy*NPz
npxyz = np.array([NPx, NPy, NPz], order='F', dtype=np.int32)

# Domain topology
xyzL = np.array([10.0, 10.0, 10.0], order='F', dtype=np.float64)
xyz_orig = np.array([0.0, 0.0, 0.0], order='F', dtype=np.float64)

# Create communicators and check that number of processors is consistent
MD_COMM = CPL.init(CPL.MD_REALM)
nprocs_realm = MD_COMM.Get_size()

if (nprocs_realm != NProcs):
    print("Non-coherent number of processes")
    comm.Abort(errorcode=1)

cart_comm = MD_COMM.Create_cart([NPx, NPy, NPz])
CPL.setup_md(cart_comm, xyzL, xyz_orig)

# recv test
olap_limits = CPL.get_olap_limits()
portion = CPL.my_proc_portion(olap_limits)
[ncxl, ncyl, nczl] = CPL.get_no_cells(portion)

recv_array = np.zeros((3, ncxl, ncyl, nczl), order='F', dtype=np.float64)
recv_array, ierr = CPL.recv(recv_array, olap_limits)

no_error = True
if CPL.overlap():
    rank = MD_COMM.Get_rank()
    for i in range(0, ncxl):
        for j in range(0, ncyl):
            for k in range(0, nczl):
                ii = i + portion[0]
                jj = j + portion[2]
                kk = k + portion[4]

                if (float(ii) - recv_array[0, i, j, k]) > 1e-8:
                    print("ERROR -- portion in x: %d %d " % (portion[0], portion[1]) +
                          " MD rank: %d " % rank +
                          " cell id: %d " % ii +
                          " recv_array: %f" % recv_array[0, i, j, k])
                    no_error = False

                if (float(jj) - recv_array[1, i, j, k]) > 1e-8:
                    print("ERROR -- portion in y: %d %d " % (portion[2], portion[3]) +
                          " MD rank: %d " % rank +
                          " cell id: %d " % jj +
                          " recv_array: %f" % recv_array[1, i, j, k])
                    no_error = False

                if (float(kk) - recv_array[2, i, j, k]) > 1e-8:
                    print("ERROR -- portion in z: %d %d " % (portion[4], portion[5]) +
                          " MD rank: %d " % rank +
                          " cell id: %d " % kk +
                          " recv_array: %f" % recv_array[2, i, j, k])
                    no_error = False

MD_COMM.Barrier()
if CPL.overlap() and no_error:
    print("MD -- " + "(rank={:2d}".format(rank) +
          ") CELLS HAVE BEEN RECEIVED CORRECTLY.\n")
MPI.COMM_WORLD.Barrier()

# Free comms and finalise
MD_COMM.Free()
cart_comm.Free()
CPL.finalize()
MPI.Finalize()
- get_arrays(recv_size, send_size)
Return the recv and send arrays, allocated based on the constraint/boundary sizes.
Example
A minimal example is possible using CPL.get_arrays; the paired MD and CFD scripts below show matching send and recv commands.
#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL

comm = MPI.COMM_WORLD
CPL = CPL()
MD_COMM = CPL.init(CPL.MD_REALM)
CPL.setup_md(MD_COMM.Create_cart([1, 1, 1]), xyzL=[1.0, 1.0, 1.0],
             xyz_orig=[0.0, 0.0, 0.0])
recv_array, send_array = CPL.get_arrays(recv_size=1, send_size=4)

for time in range(5):

    send_array[0,:,:,:] = 5.*time
    CPL.send(send_array)

    recv_array, ierr = CPL.recv(recv_array)
    print(("MD", time, recv_array[0,0,0,0]))

CPL.finalize()
MPI.Finalize()
#!/usr/bin/env python
from mpi4py import MPI
from cplpy import CPL

comm = MPI.COMM_WORLD
CPL = CPL()
CFD_COMM = CPL.init(CPL.CFD_REALM)
CPL.setup_cfd(CFD_COMM.Create_cart([1, 1, 1]), xyzL=[1.0, 1.0, 1.0],
              xyz_orig=[0.0, 0.0, 0.0], ncxyz=[32, 32, 32])
recv_array, send_array = CPL.get_arrays(recv_size=4, send_size=1)

for time in range(5):

    recv_array, ierr = CPL.recv(recv_array)
    print(("CFD", time, recv_array[0,0,0,0]))

    send_array[0,:,:,:] = 2.*time
    CPL.send(send_array)

CPL.finalize()
MPI.Finalize()