====== Projects in Java on Caviness ======
  
Below is a basic Java example with steps you can follow and apply to your own ''filename.java''. Remember Unix is case sensitive, so it is very important that the filename ''HelloWorld.java'' matches the class name defined in the file, ''HelloWorld''.
  
<file java HelloWorld.java>
public class HelloWorld
{
  public static void main(String[] args)
  {
    System.out.println("Hello, World!");
  }
}
</file>
  
Check the versions of the Java compiler and Java runtime available on Caviness by using
  
<code bash>
$ javac -version
$ java -version
</code>
  
and determine if this is acceptable for compiling your Java application and creating the ''HelloWorld.class'' file. The following example is based on the user ''traine'' in workgroup ''it_css'' on Caviness, using the directory ''/work/it_css/traine/java'' to store all the files associated with this example, and compiling and testing ''HelloWorld'' on the login (head) node.
  
<code bash>
[traine@login00 ~]$ workgroup -g it_css
[(it_css:traine)@login00 ~]$ javac -version
javac 1.8.0_161
[(it_css:traine)@login00 ~]$ java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
[(it_css:traine)@login00 ~]$ cd $WORKDIR/traine/java
[(it_css:traine)@login00 java]$ cat HelloWorld.java
public class HelloWorld
{
  public static void main(String[] args)
  {
    System.out.println("Hello, World!");
  }
}
[(it_css:traine)@login00 java]$ javac HelloWorld.java
[(it_css:traine)@login00 java]$ ls
HelloWorld.class  HelloWorld.java
[(it_css:traine)@login00 java]$ java HelloWorld
Hello, World!
[(it_css:traine)@login00 java]$
</code>
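
If the default version is not what you need, remember to use ''VALET'' to select the appropriate version of Java. A minimal sketch, assuming an ''openjdk'' VALET package is available on Caviness:

<code bash>
# List the Java versions VALET provides (assumes an "openjdk" package exists)
$ vpkg_versions openjdk

# Add a specific version to your environment before compiling
$ vpkg_require openjdk/1.8.0
</code>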
  
-Once we are back on the head node, you will need a job submission script to run your java job.  For this simple example, copy ''serial.qs'' from ''/opt/templates/gridengine'' on Farber, name it ''submit.qs'', and modify it to run the ''HelloWorld'' executable.+If you want to compile on a compute node, use ''salloc --partition=devel'' to make sure you are allocated compute node with the development tools, libraries, etc. which are needed for compilers.  It is a good idea to use a compute node especially for lengthy compiles or those requiring multiple threads to reduce the compilation time.  The following example is the same as above except the compile and test ''HelloWorld'' is done on a compute node ''r00n56'' based on the job allocated from ''salloc --partition=devel''

<code bash>
[traine@login00 ~]$ workgroup -g it_css
[(it_css:traine)@login00 ~]$ salloc --partition=devel
salloc: Pending job allocation 7299417
salloc: job 7299417 queued and waiting for resources
salloc: job 7299417 has been allocated resources
salloc: Granted job allocation 7299417
salloc: Waiting for resource configuration
salloc: Nodes r00n56 are ready for job
[traine@r00n56 ~]$ javac -version
javac 1.8.0_161
[traine@r00n56 ~]$ java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
[traine@r00n56 ~]$ cd $WORKDIR/traine/java
[traine@r00n56 java]$ cat HelloWorld.java
public class HelloWorld
{
  public static void main(String[] args)
  {
    System.out.println("Hello, World!");
  }
}
[traine@r00n56 java]$ javac HelloWorld.java
[traine@r00n56 java]$ ls
HelloWorld.class  HelloWorld.java
[traine@r00n56 java]$ java HelloWorld
Hello, World!
[traine@r00n56 java]$ exit
exit
salloc: Relinquishing job allocation 7299417
[(it_css:traine)@login00 ~]$
</code>
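
For a project with more than one source file, it can be convenient to bundle the compiled classes into a runnable jar so only a single artifact needs to be managed. A minimal sketch using the standard JDK ''jar'' tool (the jar name and entry point below are illustrative, not part of this example's files):

<code bash>
# Compile all sources in the directory
$ javac *.java

# c = create, f = output file, e = main-class entry point
$ jar cfe HelloWorld.jar HelloWorld *.class

# Run the jar directly
$ java -jar HelloWorld.jar
</code>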

If you want to run your java job in batch, then you will need a job submission script. For this simple example, copy ''serial.qs'' from ''/opt/shared/templates/slurm/generic'' on Caviness, name it ''submit.qs'', and modify it to run the ''HelloWorld'' executable. Keep in mind the ''standard'' partition is available to everyone, which means your job could get preempted (killed) in order to make room for workgroup-specific resources. For testing purposes only, the partition has been changed from ''standard'' to ''devel'' in this example. Please read about all of the [[abstract:caviness:runjobs:queues|partitions]] to make sure your job is scheduled appropriately based on the limits defined for each partition. If you want to use the ''standard'' partition, please read about how to handle your job if it is preempted by using [[abstract:caviness:runjobs:schedule_jobs#Handling-System-Signals-aka-Checkpointing|Checkpointing]].
  
<file bash submit.qs>
#!/bin/bash -l
#
# Sections of this script that can/should be edited are delimited by a
# [EDIT] tag.  All Slurm job options are denoted by a line that starts
# with "#SBATCH " followed by flags that would otherwise be passed on
# the command line.  Slurm job options can easily be disabled in a
# script by inserting a space in the prefix, e.g. "# SLURM " and
# reenabled by deleting that space.
#
# This is a batch job template for a program using a single processor
# core/thread (a serial job).
#
#SBATCH --ntasks=1
#
# [EDIT] All jobs have memory limits imposed.  The default is 1 GB per
#        CPU allocated to the job.  The default can be overridden either
#        with a per-node value (--mem) or a per-CPU value (--mem-per-cpu)
#        with unitless values in MB and the suffixes K|M|G|T denoting
#        kibi, mebi, gibi, and tebibyte units.  Delete the space between
#        the "#" and the word SBATCH to enable one of them:
#
# SBATCH --mem=8G
# SBATCH --mem-per-cpu=1024M
#
# [EDIT] Each node in the cluster has local scratch disk of some sort
#        that is always mounted as /tmp.  Per-job and per-step temporary
#        directories are automatically created and destroyed by the
#        auto_tmpdir plugin in the /tmp filesystem.  To ensure a minimum
#        amount of free space on /tmp when your job is scheduled, the
#        --tmp option can be used; it has the same behavior unit-wise as
#        --mem and --mem-per-cpu.  Delete the space between the "#" and the
#        word SBATCH to enable:
#
# SBATCH --tmp=1T
#
# [EDIT] It can be helpful to provide a descriptive (terse) name for
#        the job:
#
#SBATCH --job-name=java_serial_job
#
# [EDIT] The partition determines which nodes can be used and with what
#        maximum runtime limits, etc.  Partition limits can be displayed
#        with the "sinfo --summarize" command.
#
#SBATCH --partition=devel
#
# [EDIT] The maximum runtime for the job; a single integer is interpreted
#        as a number of seconds, otherwise use the format
#
#          d-hh:mm:ss
#
#        Jobs default to the maximum runtime limit of the chosen partition
#        if this option is omitted.
#
#SBATCH --time=0-00:20:00
#
# [EDIT] By default SLURM sends the job's stdout to the file "slurm-<jobid>.out"
#        and the job's stderr to the file "slurm-<jobid>.err" in the working
#        directory.  Override by deleting the space between the "#" and the
#        word SBATCH on the following lines; see the man page for sbatch for
#        special tokens that can be used in the filenames:
#
# SBATCH --output=%x-%j.out
# SBATCH --error=%x-%j.out
#
# [EDIT] Slurm can send emails to you when a job transitions through various
#        states: NONE, BEGIN, END, FAIL, REQUEUE, ALL, TIME_LIMIT,
#        TIME_LIMIT_50, TIME_LIMIT_80, TIME_LIMIT_90, ARRAY_TASKS.  One or more
#        of these flags (separated by commas) are permissible for the
#        --mail-type flag.  You MUST set your mail address using --mail-user
#        for messages to get off the cluster.
#
# SBATCH --mail-user='my_address@udel.edu'
# SBATCH --mail-type=END,FAIL,TIME_LIMIT_90
#
# [EDIT] By default we DO NOT want to send the job submission environment
#        to the compute node when the job runs.
#
#SBATCH --export=NONE
#

#
# Do general job environment setup:
#
. /opt/shared/slurm/templates/libexec/common.sh

#
# [EDIT] Add your script statements hereafter, or execute a script or program
#        using the srun command.
#
java HelloWorld
</file>
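
As an aside, if your program needs information about the job it runs in, Slurm exports environment variables such as ''SLURM_JOB_ID'' into the job's environment. A minimal sketch (the ''JobInfo'' class is hypothetical, not part of the template):

<code java>
public class JobInfo
{
  public static void main(String[] args)
  {
    // SLURM_JOB_ID is set by Slurm inside a batch or salloc job;
    // it is null when the program runs outside of a job.
    String jobId = System.getenv("SLURM_JOB_ID");
    System.out.println("Slurm job ID: " + (jobId == null ? "(not in a job)" : jobId));
  }
}
</code>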
  
Now submit the job using ''sbatch submit.qs'' on the head node after you make sure you are in your workgroup. The example below shows the output from the job submission and how to view the results of the job run.

<code bash>
[traine@login00 ~]$ workgroup -g it_css
[(it_css:traine)@login00 ~]$ cd $WORKDIR/traine/java
[(it_css:traine)@login00 java]$ sbatch submit.qs
Submitted batch job 1231
[(it_css:traine)@login00 java]$ ls
HelloWorld.class  HelloWorld.java  slurm-1231.out  submit.qs
[(it_css:traine)@login00 java]$ cat slurm-1231.out
Hello, World!
[(it_css:traine)@login00 java]$
</code>
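
While the job is pending or running, you can check on it with the standard Slurm commands shown below (username and job ID as in the session above):

<code bash>
# List your pending and running jobs
$ squeue -u traine

# Show accounting information for a job after it finishes
$ sacct -j 1231
</code>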
  
<note tip>Please review the templates for job submission scripts in ''/opt/shared/templates'' on Caviness. There are ''README.md'' files in each subdirectory explaining the use of these templates. If you do not specify any resources, by default you will get the ''standard'' partition with 1 core, 1 GB of memory, and a 20 minute runtime (a simple serial job) on Caviness.</note>
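
As mentioned in the template comments above, you can display the partitions and their limits yourself:

<code bash>
# Summarize partitions, time limits, and node availability
$ sinfo --summarize
</code>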
  