Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/01/08 19:10:24 UTC

[GitHub] jerrin92 opened a new issue #9350: Trouble building with mkl from source
URL: https://github.com/apache/incubator-mxnet/issues/9350
 
 
   We are trying to build MXNet from source with MKL. Even though all the required environment variables (PATH and LD_LIBRARY_PATH) point at the MKL installation, the build still complains that the MKL libraries cannot be found.
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   ('Version      :', '2.7.11')
   ('Compiler     :', 'GCC 4.8.5 20150623 (Red Hat 4.8.5-16)')
   ('Build        :', ('default', 'Jan  4 2018 11:21:44'))
   ('Arch         :', ('64bit', 'ELF'))
   ------------Pip Info-----------
   ('Version      :', '9.0.1')
   ('Directory    :', '/N/u/jerkatta/Carbonate/.local/lib/python2.7/site-packages/pip')
   ----------MXNet Info-----------
   No MXNet installed.
   ----------System Info----------
   ('Platform     :', 'Linux-3.10.0-693.11.1.el7.x86_64-x86_64-with-redhat-7.4-Maipo')
   ('system       :', 'Linux')
   ('node         :', 'e1.carbonate.uits.iu.edu')
   ('release      :', '3.10.0-693.11.1.el7.x86_64')
   ('version      :', '#1 SMP Fri Oct 27 05:39:05 EDT 2017')
   ----------Hardware Info----------
   ('machine      :', 'x86_64')
   ('processor    :', 'x86_64')
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                40
   On-line CPU(s) list:   0-39
   Thread(s) per core:    1
   Core(s) per socket:    20
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 85
   Model name:            Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
   Stepping:              4
   CPU MHz:               2400.000
   BogoMIPS:              4800.00
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              1024K
   L3 cache:              28160K
   NUMA node0 CPU(s):     0-19
   NUMA node1 CPU(s):     20-39
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0139 sec, LOAD: 0.5586 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0077 sec, LOAD: 0.0735 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0386 sec, LOAD: 0.3379 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0154 sec, LOAD: 0.0689 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0593 sec, LOAD: 0.0883 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0812 sec, LOAD: 0.6052 sec.
   
   ```
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   ```
   #-------------------------------------------------------------------------------
   #  Template configuration for compiling mxnet
   #
   #  If you want to change the configuration, please use the following
   #  steps. Assume you are on the root directory of mxnet. First copy this
   #  file so that any local changes will be ignored by git
   #
   #  $ cp make/config.mk .
   #
   #  Next modify the according entries, and then compile by
   #
   #  $ make
   #
   #  or build in parallel with 8 threads
   #
   #  $ make -j8
   #-------------------------------------------------------------------------------
   
   #---------------------
   # choice of compiler
   #--------------------
   
   export CC = gcc
   export CXX = g++
   export NVCC = nvcc
   
   # whether compile with options for MXNet developer
   DEV = 0
   
   # whether compile with debug
   DEBUG = 0
   
   # whether to compile with the profiler
   USE_PROFILER =
   
   # the additional link flags you want to add
   ADD_LDFLAGS = /N/soft/rhel7/intel/18.0.0/compilers_and_libraries/linux/mkl/lib/intel64_lin
   
   # the additional compile flags you want to add
   ADD_CFLAGS = /N/soft/rhel7/intel/18.0.0/compilers_and_libraries/linux/mkl/include
   
   #---------------------------------------------
   # matrix computation libraries for CPU/GPU
   #---------------------------------------------
   
   # whether use CUDA during compile
   USE_CUDA = 0
   
   # add the path to CUDA library to link and compile flag
   # if you have already added them to the environment variables, leave it as NONE
   # USE_CUDA_PATH = /usr/local/cuda
   USE_CUDA_PATH = NONE
   
   # whether use CuDNN R3 library
   USE_CUDNN = 0
   
   # whether use cuda runtime compiling for writing kernels in native language (i.e. Python)
   USE_NVRTC = 0
   
   # whether use opencv during compilation
   # you can disable it; however, you will not be able to use
   # the imbin iterator
   USE_OPENCV = 1
   
   # use openmp for parallelization
   USE_OPENMP = 1
   
   # MKL ML Library for Intel CPU/Xeon Phi
   # Please refer to MKL_README.md for details
   
   # MKL ML Library folder, need to be root for /usr/local
   # Change to User Home directory for standard user
   # For USE_BLAS!=mkl only
   MKLML_ROOT=/usr/local
   
   # whether use MKL2017 library
   USE_MKL2017 = 0
   
   # whether use MKL2017 experimental feature for high performance
   # Prerequisite USE_MKL2017=1
   USE_MKL2017_EXPERIMENTAL = 0
   
   # whether use NNPACK library
   USE_NNPACK = 0
   
   # choose the version of blas you want to use
   # can be: mkl, blas, atlas, openblas
   # by default, use atlas on Linux and apple on OSX
   UNAME_S := $(shell uname -s)
   ifeq ($(UNAME_S), Darwin)
   USE_BLAS = apple
   else
   USE_BLAS = atlas
   endif
   
   # whether use lapack during compilation
   # only effective when compiled with blas versions openblas/apple/atlas/mkl
   USE_LAPACK = 1
   
   # path to lapack library in case of a non-standard installation
   USE_LAPACK_PATH =
   
   # add path to intel library, you may need it for MKL, if you did not add the path
   # to environment variable
   USE_INTEL_PATH = NONE
   
   # If use MKL only for BLAS, choose static link automatically to allow python wrapper
   ifeq ($(USE_MKL2017), 0)
   ifeq ($(USE_BLAS), mkl)
   USE_STATIC_MKL = 1
   endif
   else
   USE_STATIC_MKL = NONE
   endif
   
   #----------------------------
   # Settings for power and arm arch
   #----------------------------
   ARCH := $(shell uname -a)
   ifneq (,$(filter $(ARCH), armv6l armv7l powerpc64le ppc64le aarch64))
   	USE_SSE=0
   else
   	USE_SSE=1
   endif
   
   #----------------------------
   # distributed computing
   #----------------------------
   
   # whether or not to enable multi-machine support
   USE_DIST_KVSTORE = 0
   
   # whether or not to allow reading and writing HDFS directly. If yes, then
   # Hadoop is required
   USE_HDFS = 0
   
   # path to libjvm.so. required if USE_HDFS=1
   LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server
   
   # whether or not to allow reading and writing AWS S3 directly. If yes, then
   # libcurl4-openssl-dev is required, it can be installed on Ubuntu by
   # sudo apt-get install -y libcurl4-openssl-dev
   USE_S3 = 0
   
   #----------------------------
   # additional operators
   #----------------------------
   
   # path to folders containing projects specific operators that you don't want to put in src/operators
   EXTRA_OPERATORS =
   
   #----------------------------
   # other features
   #----------------------------
   
   # Create C++ interface package
   USE_CPP_PACKAGE = 0
   
   #----------------------------
   # plugins
   #----------------------------
   
   # whether to use caffe integration. This requires installing caffe.
   # You also need to add CAFFE_PATH/build/lib to your LD_LIBRARY_PATH
   # CAFFE_PATH = $(HOME)/caffe
   # MXNET_PLUGINS += plugin/caffe/caffe.mk
   
   # whether to use torch integration. This requires installing torch.
   # You also need to add TORCH_PATH/install/lib to your LD_LIBRARY_PATH
   # TORCH_PATH = $(HOME)/torch
   # MXNET_PLUGINS += plugin/torch/torch.mk
   
   # WARPCTC_PATH = $(HOME)/warp-ctc
   # MXNET_PLUGINS += plugin/warpctc/warpctc.mk
   
   # whether to use SFrame integration. This requires building SFrame
   # git@github.com:dato-code/SFrame.git
   # SFRAME_PATH = $(HOME)/SFrame
   # MXNET_PLUGINS += plugin/sframe/plugin.mk
   
   ```
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   ```
   g++: error: /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.a: No such file or directory
   g++: error: /opt/intel/mkl/lib/intel64/libmkl_core.a: No such file or directory
   g++: error: /opt/intel/mkl/lib/intel64/libmkl_intel_thread.a: No such file or directory
   ```
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1. make USE_OPENCV=1 USE_BLAS=mkl
   
   ## What have you tried to solve it?
   
   1. Added /N/soft/rhel7/intel/18.0.0/compilers_and_libraries/linux/mkl/lib/intel64_lin, which contains the files that MXNet is complaining about, to PATH and LD_LIBRARY_PATH
   2. Added the same path to config.mk (ADD_LDFLAGS and ADD_CFLAGS)
   3. Passed the same path on the command line with -I
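   For reference, a sketch of what we plan to try next. Two assumptions here (both worth verifying locally): first, the error messages show g++ looking for static MKL libraries under the default /opt/intel/mkl, which suggests the build falls back to that path when USE_INTEL_PATH is left at NONE; second, ADD_LDFLAGS and ADD_CFLAGS appear to be passed verbatim to the compiler, so bare directory paths would be silently ignored and need -L / -I prefixes:

   ```shell
   # Hedged sketch: the MKL path is taken from this issue; the USE_INTEL_PATH
   # fallback behavior is an assumption about MXNet's Makefiles, not confirmed.
   MKLROOT=/N/soft/rhel7/intel/18.0.0/compilers_and_libraries/linux/mkl

   # Bare directories in ADD_LDFLAGS/ADD_CFLAGS do nothing; prefix with -L / -I.
   ADD_LDFLAGS="-L${MKLROOT}/lib/intel64_lin"
   ADD_CFLAGS="-I${MKLROOT}/include"

   # Print the make command instead of running it, so the flags can be inspected.
   # USE_INTEL_PATH should point at the directory containing mkl/, i.e. .../linux.
   echo make USE_OPENCV=1 USE_BLAS=mkl \
     "USE_INTEL_PATH=${MKLROOT%/mkl}" \
     "ADD_LDFLAGS=${ADD_LDFLAGS}" \
     "ADD_CFLAGS=${ADD_CFLAGS}"
   ```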
   
