technical:recipes:gnnunlock

technical:recipes:gnnunlock: created 2024-07-01 13:44 by frey; current revision 2024-07-05 11:15 by frey ([Build the C++ training program])
  * ''makefile.nomkl'' uses GNU C++ and embedded linear algebra functionality that may or may not be parallelized with OpenMP
  
Since we wish to use Intel oneAPI compilers and MKL, the ''makefile'' will be used in slightly altered form.  A patch file is supplied for this purpose -- download {{ :technical:recipes:makefile.oneapi.patch |makefile.oneapi.patch}} and copy it to the ''${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}/GNNUnlock/GraphSAINT'' directory created in this recipe:
  
<code diff>
--- A/makefile 2024-07-01 10:09:30.696062752 -0400
+++ B/makefile 2024-07-01 10:12:32.685255418 -0400
 _DEPS=global.h optm.h # global dependencies
 DEPS=$(patsubst %,$(IDIR)/%,$(_DEPS))
</code>
  
The patch gets applied in the source directory:
<code bash>
[frey@login01.caviness ipdps19_cpp]$ make
[frey@login01.caviness ipdps19_cpp]$ install train "${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}/bin/ipdps19-train"
</code>
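For readers unfamiliar with applying such a patch: GNU ''patch'' with ''-p1'' strips the leading ''A/''/''B/'' path component from the diff headers so the hunks apply to the file in the current directory.  The following is a self-contained illustration in a scratch directory; the file names and makefile contents are invented for the demonstration and are not the real ''makefile'':

```shell
# Scratch demonstration of patch -p1; the makefile contents are invented.
mkdir -p patch_demo/A patch_demo/B
printf 'CXX=g++\nCXXFLAGS=-O2\n' > patch_demo/A/makefile
printf 'CXX=icpx\nCXXFLAGS=-O2\n' > patch_demo/B/makefile
( cd patch_demo &&
  # diff exits non-zero when the files differ, hence the || true
  { diff -u A/makefile B/makefile > makefile.demo.patch || true; } &&
  cp A/makefile makefile &&
  # headers read "--- A/makefile" / "+++ B/makefile"; -p1 drops A/ and B/
  patch -p1 < makefile.demo.patch &&
  grep '^CXX=' makefile )   # prints: CXX=icpx
```

The same ''-p1'' convention applies to ''makefile.oneapi.patch'', since its headers also carry the ''A/''/''B/'' prefixes.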
  
The compiled program is installed in the ''bin'' directory for the virtualenv as ''ipdps19-train''; when the virtualenv is activated, the program can be executed with the bare command ''ipdps19-train''.
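The reason the bare command works can be sketched with throwaway paths; the directory and stub script below are stand-ins for the real install prefix.  Activating a virtualenv effectively prepends its ''bin'' directory to ''PATH'':

```shell
# Stand-in for the virtualenv layout; not the real install prefix.
mkdir -p venv_demo/bin
printf '#!/bin/sh\necho training\n' > venv_demo/bin/ipdps19-train
chmod +x venv_demo/bin/ipdps19-train
# This is, in essence, what "activate" does to the shell environment:
PATH="$PWD/venv_demo/bin:$PATH"
command -v ipdps19-train    # resolves inside venv_demo/bin
ipdps19-train               # prints: training
```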
  
Note the value of two environment variables used in this recipe before exiting and proceeding to the next section:
  
<code bash>
[frey@login01.caviness ipdps19_cpp]$ echo $GNNUNLOCK_PREFIX
/home/1001/sw/gnnunlock

[frey@login01.caviness ipdps19_cpp]$ echo $GNNUNLOCK_VERSION
2024.07.01
</code>
  
===== VALET package definition =====

Before going any further, a VALET package definition file should be created to facilitate the use of GNNUnlock in the future.  Since this recipe has created the virtualenv in the user's home directory, it makes sense to create the VALET package definition file there, as well.  For other installation locations (like workgroup storage), an alternative location may be appropriate for the package definition file.

Recall that ''intel-oneapi/2024'' was added to the environment at the beginning of this recipe: it is the sole dependency associated with this GNNUnlock version.  The VALET package definition file created at ''~/.valet/gnnunlock.vpkg_yaml'' would look like this (with the appropriate value of ''$GNNUNLOCK_PREFIX'' substituted for ''«GNNUNLOCK_PREFIX»'', etc.):
  
<file yaml gnnunlock.vpkg_yaml>
  
</file>

The package can be added to the environment of a new login shell:

<code bash>
[frey@login00.caviness ~]$ vpkg_require gnnunlock/2024.07.01
Adding dependency `binutils/2.35` to your environment
Adding dependency `gcc/12.1.0` to your environment
Adding dependency `intel-oneapi/2024.0.1.46` to your environment
Adding package `gnnunlock/2024.07.01` to your environment
</code>

The C++ training program is available as expected where it was installed:

<code bash>
[frey@login00.caviness ~]$ which ipdps19-train
~/sw/gnnunlock/2024.07.01/bin/ipdps19-train
</code>

The GNNUnlock and GraphSAINT repositories are easily referenced using the ''GNNUNLOCK_DIR'' and ''GRAPHSAINT_DIR'' variables set by the VALET package definition:

<code bash>
[frey@login00.caviness ~]$ cd $GRAPHSAINT_DIR
[frey@login00.caviness GraphSAINT]$ pwd
/home/1001/sw/gnnunlock/2024.07.01/GNNUnlock/GraphSAINT

[frey@login00.caviness GraphSAINT]$ cd $GNNUNLOCK_DIR
[frey@login00.caviness GNNUnlock]$ pwd
/home/1001/sw/gnnunlock/2024.07.01/GNNUnlock
</code>

At this point the shell is in the appropriate working directory for the GNNUnlock example.

===== Examples =====

<WRAP center round important 60%>
The use of a login node in this recipe is purely for illustrative purposes.  Computational work should be performed on a compute node and not on a login node.
</WRAP>

==== TensorFlow and Python ====

The GNNUnlock repository includes example circuit data that must be transformed to a graph format before GNNUnlock can be executed.  The directions in the GNNUnlock documentation can be followed:

<code bash>
[frey@login01.caviness GNNUnlock]$ mkdir -p Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552
[frey@login01.caviness GNNUnlock]$ pushd Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552/
[frey@login01.caviness anti_sat_iscas_c7552]$ cp ../../Parsers/graph_parser.py .
[frey@login01.caviness anti_sat_iscas_c7552]$ perl ../../Parsers/AntiSAT_bench_to_graph.pl -i ../../Circuits_datasets/ANTI_SAT_DATASET_c7552 > log.txt
Can't locate /Users/guest1/Desktop/GNNUnlock_Master/Netlist_to_graph/Parsers/theCircuit.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at ../../Parsers/AntiSAT_bench_to_graph.pl line 6.
</code>

The documentation //did state// that line 6 of that Perl script must be modified, but rather than changing it to the absolute path at which ''theCircuit.pm'' exists, a relative path and symbolic link will be leveraged.  First, edit ''../../Parsers/AntiSAT_bench_to_graph.pl'' and change line 6 to read:

<code perl>
require "./theCircuit.pm";
</code>

This instructs Perl to read the module file ''theCircuit.pm'' from the current working directory; a symbolic link in that working directory completes the fixup:

<code bash>
[frey@login01.caviness anti_sat_iscas_c7552]$ ln -s ../../Parsers/theCircuit.pm .
[frey@login01.caviness anti_sat_iscas_c7552]$ perl ../../Parsers/AntiSAT_bench_to_graph.pl -i ../../Circuits_datasets/ANTI_SAT_DATASET_c7552 > log.txt
                      AntiSAT_bench_to_graph.pl
                 Version 1.7  Released on 2021/02/09
                    Lilas Alrahis <lma387@nyu.edu>
                        NYUAD, Abu Dhabi, UAE

           'perl AntiSAT_bench_to_graph.pl -help' for help


Program completed in 443 sec without error.
</code>

The same "trick" with a relative path and symbolic link can be used in the ''SFLL_Verilog_to_graph.pl'' Perl script.  Finally, the Python graph parser is run on the data in the working directory:

<code bash>
[frey@login01.caviness anti_sat_iscas_c7552]$ python graph_parser.py
</code>

At long last, the GraphSAINT program can be used to train with the graph data.

<WRAP center round important 60%>
All GraphSAINT code (in both the GraphSAINT and GNNUnlock documentation) must be executed from the GraphSAINT repository directory.
</WRAP>

<code bash>
[frey@login01.caviness anti_sat_iscas_c7552]$ cd $GRAPHSAINT_DIR
[frey@login01.caviness GraphSAINT]$ python -m graphsaint.tensorflow_version.train \
    --data_prefix ../Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552 \
    --train_config ../DATE21.yml
</code>

Circa 40 iterations into training, the program was actively occupying around 3.5 GiB of memory and utilizing all 36 cores of the node:

<code>
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2893 frey      20   0 30.941g 3.479g  93744 S  3599  1.4  26:03.21 train.py
</code>

Memory usage appears to increase continually as training proceeds, so users are encouraged to benchmark their jobs and budget memory requests for GNNUnlock accordingly.
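
A benchmarked request might then be expressed in a Slurm batch script along the following lines.  This is only a sketch: the partition name and the memory, core, and walltime figures are illustrative placeholders to be replaced with values from your own benchmarking, not measured requirements.

```shell
#!/bin/bash
# Hypothetical Slurm job script for the TensorFlow training run above.
# Partition, memory, core count, and walltime are placeholders.
#SBATCH --job-name=gnnunlock-train
#SBATCH --partition=standard
#SBATCH --cpus-per-task=36
#SBATCH --mem=8G
#SBATCH --time=04:00:00

vpkg_require gnnunlock/2024.07.01
cd "$GRAPHSAINT_DIR"
python -m graphsaint.tensorflow_version.train \
    --data_prefix ../Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552 \
    --train_config ../DATE21.yml
```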

==== C++ train ====

The C++ training program was tested with Reddit data available in the [[https://drive.google.com/open?id=1hKG5Op7Ohwr1QDSNsyzx1Wc-oswe78ac|Google drive]] referenced by the documentation.  The ''reddit'' directory is downloaded as a ZIP archive and should be copied into a directory named ''data_cpp'' on Caviness -- in this recipe the ZIP file was uploaded to the user's home directory:

<code bash>
[frey@login00.caviness ~]$ mkdir ~/sw/gnnunlock/data_cpp
[frey@login00.caviness ~]$ mv ~/reddit-20240701T143527Z-001.zip ~/sw/gnnunlock/data_cpp
[frey@login00.caviness ~]$ cd ~/sw/gnnunlock/data_cpp
[frey@login00.caviness data_cpp]$ unzip reddit-20240701T143527Z-001.zip
[frey@login00.caviness data_cpp]$ ls -l reddit
total 1236252
-rw-r--r-- 1 frey everyone   92855352 Jan 20  2020 adj_full_indices.bin
-rw-r--r-- 1 frey everyone     931864 Jan 20  2020 adj_full_indptr.bin
-rw-r--r-- 1 frey everyone   43012952 Jan 20  2020 adj_train_indices.bin
-rw-r--r-- 1 frey everyone     931864 Jan 20  2020 adj_train_indptr.bin
-rw-r--r-- 1 frey everyone         44 Jan 20  2020 dims.bin
-rw-r--r-- 1 frey everyone 1121959440 Jan 20  2020 feats_norm_col.bin
-rw-r--r-- 1 frey everyone   76412520 Jan 20  2020 labels_col.bin
-rw-r--r-- 1 frey everyone     221336 Jan 20  2020 node_test.bin
-rw-r--r-- 1 frey everyone     615728 Jan 20  2020 node_train.bin
-rw-r--r-- 1 frey everyone      94796 Jan 20  2020 node_val.bin
</code>

Training must be performed from the ''data_cpp'' directory.  In this example, just 5 iterations will be executed on 4 threads:

<code bash>
[frey@login00.caviness data_cpp]$ vpkg_require gnnunlock/2024.07.01
Adding dependency `binutils/2.35` to your environment
Adding dependency `gcc/12.1.0` to your environment
Adding dependency `intel-oneapi/2024.0.1.46` to your environment
Adding package `gnnunlock/2024.07.01` to your environment

[frey@login00.caviness data_cpp]$ ipdps19-train reddit 5 4 softmax
OMP: Info #277: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
============
ITERATION 0
============
Sampling 4 subgraphs.
Thread 0 doubling from 207000 to 414000.
Thread 3 doubling from 207000 to 414000.
Thread 1 doubling from 207000 to 414000.
Thread 2 doubling from 207000 to 414000.
thread 0 finish in 113ms while pre use 4ms and post use 91ms.
thread 2 finish in 155ms while pre use 6ms and post use 118ms.
thread 1 finish in 159ms while pre use 7ms and post use 122ms.
thread 3 finish in 159ms while pre use 6ms and post use 123ms.
Sampling: total time 0.16187406s.
Training itr 0 f1_mic: 0.034096, f1_mac: 0.019856
============
ITERATION 1
============
Training itr 1 f1_mic: 0.206164, f1_mac: 0.050644
============
ITERATION 2
============
Training itr 2 f1_mic: 0.233685, f1_mac: 0.061633
============
ITERATION 3
============
Training itr 3 f1_mic: 0.253775, f1_mac: 0.060568
============
ITERATION 4
============
Sampling 4 subgraphs.
Thread 3 doubling from 207000 to 414000.
Thread 1 doubling from 207000 to 414000.
Thread 0 doubling from 207000 to 414000.
Thread 2 doubling from 207000 to 414000.
thread 2 finish in 109ms while pre use 1ms and post use 89ms.
thread 3 finish in 110ms while pre use 2ms and post use 92ms.
thread 1 finish in 111ms while pre use 2ms and post use 92ms.
thread 0 finish in 111ms while pre use 3ms and post use 92ms.
Sampling: total time 0.11241198s.
Training itr 4 f1_mic: 0.297525, f1_mac: 0.080492
--------------------
DENSE time: 0.451507
SPARSE time: 0.226233
RELU time: 0.037294
NORM time: 0.069778
LOOKUP time: 0.096633
BIAS time: 0.006502
MASK time: 0.002519
REDUCE time: 0.004366
SIGMOID time: 0.000000
SOFTMAX time: 0.000000
--------------------
Testing f1_mic: 0.365237, f1_mac: 0.107992
</code>

The OMP message indicates that the C++ code uses ''omp_set_nested()'', a routine that OpenMP 5.0 deprecated in favor of ''omp_set_max_active_levels()''.  The routine still works as expected but is likely to be removed from a future revision of the OpenMP standard, so the warning is effectively a reminder to the developer to update the source code.