There are two memory models for computing: distributed-memory and shared-memory. In the former, the message passing interface (%%MPI%%) is employed in programs to communicate between processors that use their own memory address space. In the latter, open multiprocessing (OpenMP) programming techniques are employed for multiple threads (lightweight processes) to access memory in a common address space. When your job spans several compute nodes, you must use an MPI model.
  
Distributed memory systems use single-program multiple-data (SPMD) and multiple-program multiple-data (MPMD) programming paradigms. In the SPMD paradigm, each processor loads the same program image and executes it, operating on data in its own address space (different data). It is the usual mechanism for MPI code: a single executable is available on each node (through a globally accessible file system such as $WORKDIR) and launched on each node (through the MPI wrapper command, **mpirun**).
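As an illustrative sketch of the SPMD model (the file and program names here are hypothetical, not taken from the DARWIN documentation), the following C program is the same executable on every rank; each rank differs only in the rank number MPI assigns it:

<code c>
/* hello_mpi.c -- minimal SPMD sketch: every rank runs this same image,
 * each in its own address space, and they coordinate via MPI calls.    */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI processes   */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime       */
    return 0;
}
</code>

Assuming your chosen compiler and MPI modules provide a wrapper such as ''mpicc'', this would typically be built with ''mpicc hello_mpi.c -o hello_mpi'' and launched across the allocated nodes with **mpirun**.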
  
The shared-memory programming model is used on Symmetric Multi-Processor (SMP) nodes such as a single typical compute node (20 or 24 cores, 64 GB memory). The programming paradigm for this memory model is called Parallel Vector Processing (PVP) or Shared-Memory Parallel Programming (SMPP). The former name is derived from the fact that vectorizable loops are often employed as the primary structure for parallelization. The main point of SMPP computing is that all of the processors in the same node share data in a single memory subsystem. There is no need for explicit messaging between processors as with MPI coding.
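As a minimal shared-memory sketch (assuming a compiler with OpenMP support, enabled with a flag such as ''-fopenmp''), the threads below divide the loop iterations among themselves and update the same array directly, with no message passing:

<code c>
/* omp_loop.c -- minimal shared-memory sketch: OpenMP threads within one
 * process share a single address space, so they all see the same array. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];

    /* Iterations are divided among the threads; no explicit messaging. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * i;
    }

    printf("up to %d threads, a[%d] = %.1f\n",
           omp_get_max_threads(), N - 1, a[N - 1]);
    return 0;
}
</code>

The thread count is normally controlled through the ''OMP_NUM_THREADS'' environment variable and should not exceed the cores allocated to your job.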
===== Compiling code =====
<note important>
Fortran, C, C++, Java and Matlab programs should be compiled on the login node; however, if lengthy compiles or extensive resources are needed, you may need to schedule a job for compilation using ''salloc'' or ''sbatch'', which will be [[abstract:darwin:runjobs:accounting|billed]] to your allocation. **//All resulting executables should only be run on the compute nodes.//**
</note>
  
== Commercial libraries ==
  
  * [[https://developer.amd.com/amd-aocl/|AOCL]]: AMD Optimizing CPU Libraries (See [[https://developer.amd.com/wp-content/resources/57404_User_Guide_AMD_AOCL_v3.2_GA.pdf|AMD's AOCL User Guide]].) AOCL is the successor to ACML.
  * [[http://www.roguewave.com/products/imsl|IMSL]]: RogueWave's mathematical and statistical libraries
  * [[http://software.intel.com/en-us/articles/intel-mkl/?utm_source=google&utm_medium=cpc&utm_term=intel_mkl&utm_content=dpd_us_hpc_mkl& utm_campaign=DIV_US_DPD_%28S%29|MKL]]: Intel's Math Kernel Library