Training

The training materials are organized by topic and application, listed in alphabetical order.

Topics

A three-part lecture series on how to compile and run serial, array, OpenMP, and MPI jobs on an HPC cluster; a minimal compile-and-run sketch follows the list below.

How to compile and run serial and array jobs (slides) (video)

How to compile and run OpenMP jobs (slides) (video)

How to compile and run MPI jobs (slides) (video)

  • Makefiles: Getting it right on Mills (slides) (video)
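
As a rough companion to the compile-and-run lectures above, the sketch below is a minimal MPI "hello world" in C. The file names, compilers (gcc, mpicc), and launch command shown in the comments are generic assumptions; the exact modules, compiler wrappers, and job-scheduler syntax for the cluster are covered in the slides and videos.

  /* hello_mpi.c - minimal MPI example (illustrative only).
   *
   * Typical compile commands (details vary by site):
   *   serial:  gcc hello.c -o hello
   *   OpenMP:  gcc -fopenmp hello_omp.c -o hello_omp
   *   MPI:     mpicc hello_mpi.c -o hello_mpi
   * A typical interactive launch is "mpirun -np 4 ./hello_mpi"; on a
   * cluster this line normally goes in a batch script submitted to the
   * job scheduler rather than being run on the login node.
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);                /* start the MPI runtime     */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank       */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

      printf("Hello from rank %d of %d\n", rank, size);

      MPI_Finalize();                        /* shut down the MPI runtime */
      return 0;
  }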

The National Institute for Computational Sciences (NICS) offers a High-Performance Computing (HPC) Seminar Series every Tuesday and Thursday. It is a joint effort between several leadership organizations (NICS, JICS, OLCF, XSEDE) to increase HPC awareness in the academic community. Topics progress from the most basic material to more advanced aspects of HPC. All sessions are recorded and made available for download, and reviewing past sessions provides good background for future ones. The calendar of topics and download links, along with instructions on how to join the sessions online, is available on the NICS HPC Seminar Series page.

Resources explaining how to use, develop, and optimize OpenMP applications for clusters.
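
For orientation, here is a minimal OpenMP example in C; it is a sketch that assumes a GCC-style compiler with the -fopenmp flag, and the linked resources cover the cluster-specific details.

  /* hello_omp.c - minimal OpenMP example (illustrative only).
   * Compile with, e.g.:    gcc -fopenmp hello_omp.c -o hello_omp
   * Set the thread count:  export OMP_NUM_THREADS=4
   */
  #include <omp.h>
  #include <stdio.h>

  int main(void)
  {
      /* Each thread in the team executes the parallel region once. */
      #pragma omp parallel
      {
          printf("Hello from thread %d of %d\n",
                 omp_get_thread_num(), omp_get_num_threads());
      }
      return 0;
  }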

A three-part lecture series covering the basic concepts of parallelism and its use on HPC clusters.

Topics covered include thinking in parallel, Flynn's taxonomy, types of parallelism, parallelism basics, design patterns for parallel programs, and using GNU gprof; a short gprof sketch follows this series. (slides) (video - part 1) (video - part 2)

Topics covered include OpenMP and MPI programming models. (slides) (video)

Topics covered include more on MPI, vectorization, OpenACC and OpenCL. (slides) (video)
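
The first lecture in this series mentions GNU gprof. As a hedged illustration of the basic workflow, the toy program below (its function name is made up for the example) can be compiled with profiling instrumentation and then inspected with gprof.

  /* work.c - toy program for a gprof walk-through (illustrative only).
   * Typical workflow:
   *   gcc -pg work.c -o work    (build with profiling instrumentation)
   *   ./work                    (run the program; this writes gmon.out)
   *   gprof work gmon.out       (print the flat profile and call graph)
   */
  #include <stdio.h>

  /* A deliberately expensive function so it shows up in the profile. */
  double busy_sum(long n)
  {
      double s = 0.0;
      for (long i = 1; i <= n; i++)
          s += 1.0 / (double)i;
      return s;
  }

  int main(void)
  {
      printf("sum = %f\n", busy_sum(100000000L));
      return 0;
  }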

A three-part lecture series covering the basic concepts of profiling and tuning on HPC clusters.

Topics covered include the profiling tools Intel® ISAT and Intel® VTune™ Amplifier XE Evolution. (slides) (video)

Topics covered include performance counters; profiling tools such as PAPI, TAU, HPCToolkit, and PerfExpert; and high-performance parallel libraries such as BLAS, LAPACK, ATLAS, Intel MKL, and ACML (AMD Core Math Library); a small BLAS example follows this series. (slides) (video)

Topics covered include autotuning, dependence analysis, and loop transformation, including a demonstration of ISAT (Intel Software Autotuning Tool) visualized with Gnuplot. (slides) (video)
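
The second lecture in this series surveys high-performance libraries such as BLAS and LAPACK. As a small, hedged example of calling an optimized routine instead of writing loops by hand, the sketch below multiplies two matrices with the CBLAS routine cblas_dgemm; the header name and the link flag (-lopenblas here) are assumptions that depend on which BLAS implementation is installed.

  /* dgemm_example.c - call an optimized BLAS routine (illustrative only).
   * Compile/link with, e.g.:  gcc dgemm_example.c -o dgemm -lopenblas
   * (The exact header and link flags depend on the installed BLAS.)
   */
  #include <stdio.h>
  #include <cblas.h>

  int main(void)
  {
      /* Compute C = alpha*A*B + beta*C for 2x2 row-major matrices. */
      double A[4] = { 1.0, 2.0,
                      3.0, 4.0 };
      double B[4] = { 5.0, 6.0,
                      7.0, 8.0 };
      double C[4] = { 0.0, 0.0,
                      0.0, 0.0 };

      cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                  2, 2, 2,      /* M, N, K       */
                  1.0, A, 2,    /* alpha, A, lda */
                  B, 2,         /* B, ldb        */
                  0.0, C, 2);   /* beta, C, ldc  */

      printf("C = [ %g %g ; %g %g ]\n", C[0], C[1], C[2], C[3]);
      return 0;
  }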

The Virtual School of Computational Science and Engineering (VSCSE) continues to be a popular choice for graduate students, postdoctoral students, and professionals from academia, government, and industry to gain the skills they need to leverage the power of cutting-edge computational resources. Courses are delivered simultaneously at multiple locations across the country using high-definition videoconferencing technology. UD participates as a host site, along with other prominent universities, to offer these multi-day virtual courses organized by VSCSE at UD's videoconferencing studios.

  • XSEDE: Update from UD's Campus Champion (slides) (video)