Installing GNNUnlock on Caviness
The GNNUnlock Python code (available on GitHub) makes use of the GraphSAINT Python code (also available on GitHub), which in turn depends on TensorFlow and its underlying packages (e.g. numpy, scipy). GNNUnlock itself is pure Python (with two Perl helpers), but GraphSAINT includes compiled components. The recipe is thus:
- Create a Python virtual environment with the required TensorFlow dependencies
- Compile the GraphSAINT binary components in the virtual environment
- Optionally compile the GraphSAINT C++ training program
- Create a VALET package definition to manage the GNNUnlock virtual environment(s)
In the resulting virtual environment the following tasks can be performed:
- Download training data (e.g. the Reddit dataset) from Google Drive and process it with the C++ training program
- Convert the example GNNUnlock circuits data to graph format for GNNUnlock
- Perform GNNUnlock training with converted circuits data
Create the virtual environment
The GraphSAINT C++ training program is best compiled using Intel compilers and the MKL library. Intel oneAPI includes conda for virtual environment tasks, so it will be used to create and manage the virtual environment.
In this recipe a versioned software directory hierarchy will be created for GNNUnlock in the user's home directory. The procedure can be adapted to install to an alternative location by altering the value assigned to the GNNUNLOCK_PREFIX environment variable. Since the GNNUnlock repository on GitHub has no releases, branches, or tags, the date of download is adopted as the version.
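For example, to install into workgroup storage rather than the home directory, the prefix could be set under $WORKDIR after entering the workgroup. This is a hedged sketch only; my_workgroup is a placeholder for an actual workgroup id:

[frey@login01.caviness ~]$ workgroup -g my_workgroup
[(my_workgroup:frey)@login01.caviness ~]$ GNNUNLOCK_PREFIX="$WORKDIR/sw/gnnunlock"

This recipe proceeds with the home-directory install: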
[frey@login01.caviness ~]$ GNNUNLOCK_PREFIX=~/sw/gnnunlock
[frey@login01.caviness ~]$ GNNUNLOCK_VERSION=2024.07.01
[frey@login01.caviness ~]$ vpkg_require intel-oneapi/2024
[frey@login01.caviness ~]$ rm -rf ~/.conda/cache
[frey@login01.caviness ~]$ conda create --prefix "${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}" \
      --override-channels --channel intel --channel anaconda \
      python'>=3.6.8' \
      tensorflow'=1.15.2' \
      cython'>=0.29.2' \
      numpy'>=1.14.3' \
      scipy'>=1.1.0' \
      scikit-learn'>=0.19.1' \
      pyyaml'>=3.12'
[frey@login01.caviness ~]$ conda activate "${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}"
The conda cache is removed to prevent previously-downloaded packages from shadowing what is available online and to keep the user's home directory from growing too large. Restricting the virtualenv creation to the intel and anaconda channels keeps package-solving simpler and biases toward Intel-built packages that are likely well-optimized for Caviness' hardware.
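A quick check that the environment is usable is worthwhile before continuing; a minimal sketch, assuming the pinned TensorFlow 1.15.2 resolved successfully:

[frey@login01.caviness ~]$ conda list | grep -i -E 'tensorflow|numpy|scipy'
[frey@login01.caviness ~]$ python -c 'import tensorflow as tf; print(tf.__version__)'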
Clone the source code
The source repositories for both GNNUnlock and GraphSAINT will be cloned into the virtualenv directory itself, starting with GNNUnlock:
[frey@login01.caviness ~]$ cd "${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}"
[frey@login01.caviness 2024.07.01]$ git clone https://github.com/DfX-NYUAD/GNNUnlock.git
[frey@login01.caviness 2024.07.01]$ cd GNNUnlock
The examples presented in the GNNUnlock documentation assume that GraphSAINT has been cloned as a sub-directory of the GNNUnlock directory:
[frey@login01.caviness GNNUnlock]$ git clone https://github.com/GraphSAINT/GraphSAINT.git
[frey@login01.caviness GNNUnlock]$ pushd GraphSAINT
[frey@login01.caviness GraphSAINT]$
At this point the GraphSAINT repository is the current working directory.
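The resulting hierarchy should look approximately as follows (a sketch showing only the directories relevant to this recipe):

${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}/
    bin/                 virtualenv executables (ipdps19-train is added here later)
    ...                  other conda virtualenv content
    GNNUnlock/           GNNUnlock clone
        GraphSAINT/      GraphSAINT clone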
Build GraphSAINT binary components
GraphSAINT includes several binary (Cython) components that must be compiled in the current virtualenv. The GraphSAINT (and GNNUnlock) documentation provides the necessary command:
[frey@login01.caviness GraphSAINT]$ python graphsaint/setup.py build_ext --inplace
The binary components are installed in the graphsaint directory itself, where the Python code expects to find them.
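A quick sanity check is to confirm that shared objects were actually produced; a sketch (the exact file names vary with the Python version and GraphSAINT revision):

[frey@login01.caviness GraphSAINT]$ ls graphsaint/*.so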
Build the C++ training program
The ipdps19_cpp sub-directory contains the source code for the C++ training program. There are two makefiles:
- makefile: uses Intel (pre-oneAPI) C++ and the MKL library for linear algebra
- makefile.nomkl: uses GNU C++ and embedded linear algebra functionality that may or may not be parallelized with OpenMP
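If the Intel toolchain were unavailable, the no-MKL variant could be built instead; a minimal sketch, assuming a GNU C++ compiler is present in the environment (gcc is added as a dependency of intel-oneapi/2024):

[frey@login01.caviness GraphSAINT]$ make -C ipdps19_cpp -f makefile.nomkl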
Since we wish to use Intel oneAPI compilers and MKL, the makefile will be used in slightly altered form. A patch file is supplied for this purpose; download makefile.oneapi.patch and copy it to the ${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}/GNNUnlock/GraphSAINT directory created in this recipe:
--- A/makefile	2024-07-01 10:09:30.696062752 -0400
+++ B/makefile	2024-07-01 10:12:32.685255418 -0400
@@ -1,8 +1,8 @@
-CC=icc
+CC=icpx
 IDIR=./include
 ODIR=./obj
 
-LIBS=-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
-CFLAGS=-I${IDIR} -I${MKLROOT}/include -fopenmp -pthread -Wall -O3 --std=c++11
+LIBS=
+CFLAGS=-I${IDIR} -qmkl=parallel -qopenmp -pthread -Wall -O3 --std=c++11
 _DEPS=global.h optm.h # global dependencies
 DEPS=$(patsubst %,$(IDIR)/%,$(_DEPS))
The patch is applied in the source directory, after which the program is compiled and installed:
[frey@login01.caviness GraphSAINT]$ pushd ipdps19_cpp
[frey@login01.caviness ipdps19_cpp]$ patch -p1 < ../makefile.oneapi.patch
[frey@login01.caviness ipdps19_cpp]$ make
[frey@login01.caviness ipdps19_cpp]$ install train "${GNNUNLOCK_PREFIX}/${GNNUNLOCK_VERSION}/bin/ipdps19-train"
The compiled program is installed in the bin directory of the virtualenv as ipdps19-train; when the virtualenv is activated, the program can be executed with the bare command ipdps19-train.
Before exiting and proceeding to the next section, note the values of the two environment variables used in this recipe:
[frey@login01.caviness ipdps19_cpp]$ echo $GNNUNLOCK_PREFIX
/home/1001/sw/gnnunlock
[frey@login01.caviness ipdps19_cpp]$ echo $GNNUNLOCK_VERSION
2024.07.01
VALET package definition
Before going any further, a VALET package definition file should be created to facilitate the use of GNNUnlock in the future. Since this recipe has created the virtualenv in the user's home directory, it makes sense to create the VALET package definition file therein, as well. For other installation locations (like workgroup storage) an alternative location may be appropriate for the package definition file.
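For a home-directory install the default VALET search path is ~/.valet, which may need to be created first; for a workgroup install, $WORKDIR/sw/valet is the conventional location. A minimal sketch (adjust the paths to suit the chosen install location):

[frey@login01.caviness ~]$ mkdir -p ~/.valet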
Recall that intel-oneapi/2024 was added to the environment at the beginning of this recipe: it is the sole dependency associated with this GNNUnlock version. The VALET package definition file created at ~/.valet/gnnunlock.vpkg_yaml would look like this (with the appropriate value of $GNNUNLOCK_PREFIX substituted for «GNNUNLOCK_PREFIX», etc.):
- gnnunlock.vpkg_yaml
gnnunlock:
    prefix: «GNNUNLOCK_PREFIX»
    description: "Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking"
    url: "https://github.com/DfX-NYUAD/GNNUnlock"
    versions:
        "«GNNUNLOCK_VERSION»":
            description: sources cloned from github 2024 July 01
            dependencies:
                - intel-oneapi/2024
            actions:
                - action: source
                  script:
                      sh: intel-python.sh
                - variable: GNNUNLOCK_DIR
                  value: ${VALET_PATH_PREFIX}/GNNUnlock
                - variable: GRAPHSAINT_DIR
                  value: ${VALET_PATH_PREFIX}/GNNUnlock/GraphSAINT
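Once the file is saved, VALET should see the new package; a quick check (a sketch, best run from a new shell so the definition file is re-read):

[frey@login01.caviness ~]$ vpkg_versions gnnunlock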
The package can be added to the environment of a new login shell:
[frey@login00.caviness ~]$ vpkg_require gnnunlock/2024.07.01
Adding dependency `binutils/2.35` to your environment
Adding dependency `gcc/12.1.0` to your environment
Adding dependency `intel-oneapi/2024.0.1.46` to your environment
Adding package `gnnunlock/2024.07.01` to your environment
The C++ training program is available where it was installed, as expected:
[frey@login00.caviness ~]$ which ipdps19-train
~/sw/gnnunlock/2024.07.01/bin/ipdps19-train
The GNNUnlock and GraphSAINT repositories are easily referenced using the GNNUNLOCK_DIR and GRAPHSAINT_DIR variables set by the VALET package definition:
[frey@login00.caviness ~]$ cd $GRAPHSAINT_DIR
[frey@login00.caviness GraphSAINT]$ pwd
/home/1001/sw/gnnunlock/2024.07.01/GNNUnlock/GraphSAINT
[frey@login00.caviness GraphSAINT]$ cd $GNNUNLOCK_DIR
[frey@login00.caviness GNNUnlock]$ pwd
/home/1001/sw/gnnunlock/2024.07.01/GNNUnlock
At this point the shell is in the appropriate working directory for the GNNUnlock example.
Examples
The use of a login node in this recipe is purely for illustrative purposes. Computational work should be performed on a compute node and not on a login node.
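For production runs, the same commands would typically be wrapped in a Slurm batch script. The following is a hedged sketch only; the partition, core count, memory, and time limit are placeholders to be tuned to the actual workload (it runs the TensorFlow training example presented below):

#!/bin/bash
#SBATCH --job-name=gnnunlock_train
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=36
#SBATCH --mem=32G
#SBATCH --time=08:00:00

# Load GNNUnlock (and its dependencies) via VALET:
vpkg_require gnnunlock/2024.07.01

# GraphSAINT code must be executed from the GraphSAINT repository directory:
cd "$GRAPHSAINT_DIR"
python -m graphsaint.tensorflow_version.train \
    --data_prefix ../Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552 \
    --train_config ../DATE21.yml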
TensorFlow and Python
The GNNUnlock repository includes example circuit data that must be transformed to a graph format before GNNUnlock can be executed. The directions in the GNNUnlock documentation can be followed:
[frey@login01.caviness GNNUnlock]$ mkdir -p Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552
[frey@login01.caviness GNNUnlock]$ pushd Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552/
[frey@login01.caviness anti_sat_iscas_c7552]$ cp ../../Parsers/graph_parser.py .
[frey@login01.caviness anti_sat_iscas_c7552]$ perl ../../Parsers/AntiSAT_bench_to_graph.pl -i ../../Circuits_datasets/ANTI_SAT_DATASET_c7552 > log.txt
Can't locate /Users/guest1/Desktop/GNNUnlock_Master/Netlist_to_graph/Parsers/theCircuit.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at ../../Parsers/AntiSAT_bench_to_graph.pl line 6.
The documentation did state that line 6 of that Perl script must be modified, but rather than changing it to the absolute path at which theCircuit.pm exists, a relative path and symbolic link will be leveraged. First, edit ../../Parsers/AntiSAT_bench_to_graph.pl and change line 6 to read:
require "./theCircuit.pm";
This instructs Perl to read the module file theCircuit.pm from the current working directory; a symbolic link in that working directory completes the fixup:
[frey@login01.caviness anti_sat_iscas_c7552]$ ln -s ../../Parsers/theCircuit.pm .
[frey@login01.caviness anti_sat_iscas_c7552]$ perl ../../Parsers/AntiSAT_bench_to_graph.pl -i ../../Circuits_datasets/ANTI_SAT_DATASET_c7552 > log.txt
AntiSAT_bench_to_graph.pl Version 1.7
Released on 2021/02/09
Lilas Alrahis <lma387@nyu.edu>
NYUAD, Abu Dhabi, UAE
'perl AntiSAT_bench_to_graph.pl -help' for help
Program completed in 443 sec without error.
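For future fixups of this kind, the edit to line 6 can also be scripted rather than made in an editor; a minimal sketch using sed, assuming line 6 holds the lone require statement (as the error message above indicates):

[frey@login01.caviness anti_sat_iscas_c7552]$ sed -i '6s|.*|require "./theCircuit.pm";|' ../../Parsers/AntiSAT_bench_to_graph.pl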
The same "trick" with a relative path and symbolic link can be used in the SFLL_Verilog_to_graph.pl
Perl script. Finally, the Python graph parser is run on the data in the working directory:
[frey@login01.caviness anti_sat_iscas_c7552]$ python graph_parser.py
At long last, the GraphSAINT program can be used to train with the graph data.
All execution of GraphSAINT code (in both the GraphSAINT and GNNUnlock documentation) must be performed from the GraphSAINT repository directory.
[frey@login00.caviness anti_sat_iscas_c7552]$ cd $GRAPHSAINT_DIR
[frey@login00.caviness GraphSAINT]$ python -m graphsaint.tensorflow_version.train \
      --data_prefix ../Netlist_to_graph/Graphs_datasets/anti_sat_iscas_c7552 \
      --train_config ../DATE21.yml
About 40 iterations into the training, the program was occupying around 3.5 GiB of memory and utilizing all 36 cores of the node:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2893 frey      20   0 30.941g 3.479g  93744 S  3599  1.4  26:03.21 train.py
Memory usage appears to increase continually as training proceeds, so users are encouraged to benchmark their workloads and budget memory requests for GNNUnlock jobs accordingly.
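Slurm's accounting data provides an easy way to capture the memory high-water mark after a batch job completes; a hedged sketch, with «jobid» a placeholder for an actual job id:

[frey@login00.caviness ~]$ sacct -j «jobid» --format=JobID,MaxRSS,Elapsed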
C++ train
The C++ training program was tested with the Reddit data available in the Google Drive referenced by the documentation. The reddit directory is downloaded as a ZIP archive and should be copied into a directory named data_cpp on Caviness; in this recipe the ZIP file was uploaded to the user's home directory:
[frey@login00.caviness ~]$ mkdir ~/sw/gnnunlock/data_cpp
[frey@login00.caviness ~]$ mv ~/reddit-20240701T143527Z-001.zip ~/sw/gnnunlock/data_cpp
[frey@login00.caviness ~]$ cd ~/sw/gnnunlock/data_cpp
[frey@login00.caviness data_cpp]$ unzip reddit-20240701T143527Z-001.zip
[frey@login00.caviness data_cpp]$ ls -l reddit
total 1236252
-rw-r--r-- 1 frey everyone   92855352 Jan 20  2020 adj_full_indices.bin
-rw-r--r-- 1 frey everyone     931864 Jan 20  2020 adj_full_indptr.bin
-rw-r--r-- 1 frey everyone   43012952 Jan 20  2020 adj_train_indices.bin
-rw-r--r-- 1 frey everyone     931864 Jan 20  2020 adj_train_indptr.bin
-rw-r--r-- 1 frey everyone         44 Jan 20  2020 dims.bin
-rw-r--r-- 1 frey everyone 1121959440 Jan 20  2020 feats_norm_col.bin
-rw-r--r-- 1 frey everyone   76412520 Jan 20  2020 labels_col.bin
-rw-r--r-- 1 frey everyone     221336 Jan 20  2020 node_test.bin
-rw-r--r-- 1 frey everyone     615728 Jan 20  2020 node_train.bin
-rw-r--r-- 1 frey everyone      94796 Jan 20  2020 node_val.bin
Training must be performed from the data_cpp directory. In this example, just 5 iterations will be executed on 4 threads:
[frey@login00.caviness ~]$ vpkg_require gnnunlock/2024.07.01
Adding dependency `binutils/2.35` to your environment
Adding dependency `gcc/12.1.0` to your environment
Adding dependency `intel-oneapi/2024.0.1.46` to your environment
Adding package `gnnunlock/2024.07.01` to your environment
[frey@login00.caviness data_cpp]$ ipdps19-train reddit 5 4 softmax
OMP: Info #277: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
============ ITERATION 0 ============
Sampling 4 subgraphs.
Thread 0 doubling from 207000 to 414000.
Thread 3 doubling from 207000 to 414000.
Thread 1 doubling from 207000 to 414000.
Thread 2 doubling from 207000 to 414000.
thread 0 finish in 113ms while pre use 4ms and post use 91ms.
thread 2 finish in 155ms while pre use 6ms and post use 118ms.
thread 1 finish in 159ms while pre use 7ms and post use 122ms.
thread 3 finish in 159ms while pre use 6ms and post use 123ms.
Sampling: total time 0.16187406s.
Training itr 0 f1_mic: 0.034096, f1_mac: 0.019856
============ ITERATION 1 ============
Training itr 1 f1_mic: 0.206164, f1_mac: 0.050644
============ ITERATION 2 ============
Training itr 2 f1_mic: 0.233685, f1_mac: 0.061633
============ ITERATION 3 ============
Training itr 3 f1_mic: 0.253775, f1_mac: 0.060568
============ ITERATION 4 ============
Sampling 4 subgraphs.
Thread 3 doubling from 207000 to 414000.
Thread 1 doubling from 207000 to 414000.
Thread 0 doubling from 207000 to 414000.
Thread 2 doubling from 207000 to 414000.
thread 2 finish in 109ms while pre use 1ms and post use 89ms.
thread 3 finish in 110ms while pre use 2ms and post use 92ms.
thread 1 finish in 111ms while pre use 2ms and post use 92ms.
thread 0 finish in 111ms while pre use 3ms and post use 92ms.
Sampling: total time 0.11241198s.
Training itr 4 f1_mic: 0.297525, f1_mac: 0.080492
--------------------
DENSE   time: 0.451507
SPARSE  time: 0.226233
RELU    time: 0.037294
NORM    time: 0.069778
LOOKUP  time: 0.096633
BIAS    time: 0.006502
MASK    time: 0.002519
REDUCE  time: 0.004366
SIGMOID time: 0.000000
SOFTMAX time: 0.000000
--------------------
Testing f1_mic: 0.365237, f1_mac: 0.107992
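For reference, the positional arguments to ipdps19-train, as inferred from the invocation above, appear to be the dataset sub-directory (relative to data_cpp), the iteration count, the sampler thread count, and the output-layer loss function (softmax or sigmoid). A longer run might therefore look like:

[frey@login00.caviness data_cpp]$ ipdps19-train reddit 2000 4 softmax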
The OMP message at the start of the run indicates that the C++ code uses an API from an older OpenMP standard; the deprecated function still works as expected but is likely to be removed in a future release of OpenMP. The warning is aimed at the developer, who should eventually update the source code.