Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/10/16 07:23:14 UTC

[GitHub] inglada opened a new issue #12837: Wrong GLIBCXX version for cpp-package examples

URL: https://github.com/apache/incubator-mxnet/issues/12837
 
 
   ## Description
    When running the cpp-package examples built from source, the following error occurs:
    ``./alexnet: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.23' not found (required by ./alexnet)``
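    
    A quick way to confirm the mismatch is to compare the symbol versions the binary requests with the ones the system library exports. A minimal diagnostic sketch, using the paths from the error message above:
    
    ```
    # GLIBCXX versions provided by the system libstdc++ (newest last)
    strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep '^GLIBCXX' | sort -V | tail -n 3
    
    # GLIBCXX versions the freshly built example binary requires
    objdump -T ./alexnet | grep GLIBCXX | sort -u
    ```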
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   ('Version      :', '2.7.13')
   ('Compiler     :', 'GCC 6.3.0 20170516')
   ('Build        :', ('default', 'Sep 26 2018 18:42:22'))
   ('Arch         :', ('64bit', ''))
   ------------Pip Info-----------
   ('Version      :', '18.0')
   ('Directory    :', '/usr/local/lib/python2.7/dist-packages/pip')
   ----------MXNet Info-----------
   No MXNet installed.
   ----------System Info----------
   ('Platform     :', 'Linux-4.14.0-0.bpo.3-amd64-x86_64-with-debian-9.5')
   ('system       :', 'Linux')
   ('node         :', 'pc-117-162')
   ('release      :', '4.14.0-0.bpo.3-amd64')
   ('version      :', '#1 SMP Debian 4.14.13-1~bpo9+1 (2018-01-14)')
   ----------Hardware Info----------
   ('machine      :', 'x86_64')
   ('processor    :', '')
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                4
   On-line CPU(s) list:   0-3
   Thread(s) per core:    2
   Core(s) per socket:    2
   Socket(s):             1
   NUMA node(s):          1
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 78
   Model name:            Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
   Stepping:              3
   CPU MHz:               529.946
   CPU max MHz:           3400.0000
   CPU min MHz:           400.0000
   BogoMIPS:              5616.00
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              256K
   L3 cache:              4096K
   NUMA node0 CPU(s):     0-3
   Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0072 sec, LOAD: 0.8300 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0308 sec, LOAD: 0.7744 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0537 sec, LOAD: 0.3521 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0248 sec, LOAD: 0.1212 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0909 sec, LOAD: 0.9658 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.2454 sec, LOAD: 0.9223 sec.
   ```
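    
    The dump above is the output of MXNet's environment-diagnostics script (the tuple-style lines come from running it under Python 2); assuming the standard issue-template invocation, it can be regenerated with:
    
    ```
    # Fetch and run MXNet's environment-diagnostics script
    curl --retry 10 -s https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py | python
    ```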
   
   Package used (Python/R/Scala/Julia):
   I'm using C++
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): g++ (GCC) 8.2.0
   
   MXNet commit hash: b89a36d94b5b694b8fd926e6249f7490b38432f6
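    
    Note that the binary was built with g++ 8.2.0, while the system toolchain reported in the environment info is GCC 6.3.0. `GLIBCXX_3.4.23` first appears in the libstdc++ shipped with GCC 7.1, so Debian 9's system `libstdc++.so.6` (built from GCC 6.3) cannot satisfy it. The runtime library bundled with the newer compiler can be located with the sketch below, assuming the `g++` on `PATH` is the 8.2.0 build:
    
    ```
    # Ask the compiler for the path of the libstdc++ it ships with;
    # this is the library the compiled examples need at run time
    g++ -print-file-name=libstdc++.so.6
    ```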
   
   Build config:
   
   ```
   # Licensed to the Apache Software Foundation (ASF) under one
   # or more contributor license agreements.  See the NOTICE file
   # distributed with this work for additional information
   # regarding copyright ownership.  The ASF licenses this file
   # to you under the Apache License, Version 2.0 (the
   # "License"); you may not use this file except in compliance
   # with the License.  You may obtain a copy of the License at
   #
   #   http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing,
   # software distributed under the License is distributed on an
   # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   # KIND, either express or implied.  See the License for the
   # specific language governing permissions and limitations
   # under the License.
   
   #-------------------------------------------------------------------------------
   #  Template configuration for compiling mxnet
   #
    #  If you want to change the configuration, please use the following
    #  steps. Assume you are in the root directory of mxnet. First copy this
    #  file so that any local changes will be ignored by git
   #
   #  $ cp make/config.mk .
   #
    #  Next modify the relevant entries, and then compile with
   #
   #  $ make
   #
   #  or build in parallel with 8 threads
   #
   #  $ make -j8
   #-------------------------------------------------------------------------------
   
   #---------------------
   # choice of compiler
   #--------------------
   
   ifndef CC
   export CC = gcc
   endif
   ifndef CXX
   export CXX = g++
   endif
   ifndef NVCC
   export NVCC = nvcc
   endif
   
    # whether to compile with options for MXNet developers
    DEV = 0
    
    # whether to compile in debug mode
    DEBUG = 0
   
   # whether to turn on segfault signal handler to log the stack trace
   USE_SIGNAL_HANDLER =
   
   # the additional link flags you want to add
   ADD_LDFLAGS =
   
   # the additional compile flags you want to add
   ADD_CFLAGS =
   
   #---------------------------------------------
   # matrix computation libraries for CPU/GPU
   #---------------------------------------------
   
    # whether to use CUDA during compilation
   USE_CUDA = 0
   
    # add the path to the CUDA library to the link and compile flags
    # if you have already added them to the environment variables, leave it as NONE
   # USE_CUDA_PATH = /usr/local/cuda
   USE_CUDA_PATH = NONE
   
   # whether to enable CUDA runtime compilation
   ENABLE_CUDA_RTC = 1
   
    # whether to use the CuDNN R3 library
   USE_CUDNN = 0
   
    # whether to use the NCCL library
    USE_NCCL = 0
    # add the path to the NCCL library
    USE_NCCL_PATH = NONE
   
    # whether to use OpenCV during compilation
    # you can disable it; however, you will not be able to use the
    # imbin iterator
   USE_OPENCV = 1
   
    # whether to use libjpeg-turbo for image decoding without the OpenCV wrapper
    USE_LIBJPEG_TURBO = 0
    # add the path to the libjpeg-turbo library
    USE_LIBJPEG_TURBO_PATH = NONE
   
   # use openmp for parallelization
   USE_OPENMP = 1
   
    # whether to use the MKL-DNN library
    USE_MKLDNN = 0
    
    # whether to use the NNPACK library
    USE_NNPACK = 0
   
    # choose the version of BLAS you want to use
    # can be: mkl, blas, atlas, openblas
    # by default, atlas is used on Linux and apple on OSX
   UNAME_S := $(shell uname -s)
   ifeq ($(UNAME_S), Darwin)
   USE_BLAS = apple
   else
   USE_BLAS = atlas
   endif
   
    # whether to use LAPACK during compilation
    # only effective when compiled with the BLAS versions openblas/apple/atlas/mkl
   USE_LAPACK = 1
   
   # path to lapack library in case of a non-standard installation
   USE_LAPACK_PATH =
   
    # add the path to the Intel library; you may need it for MKL if you did not
    # add the path to the environment variable
   USE_INTEL_PATH = NONE
   
    # If using MKL only for BLAS, choose static linking automatically to allow the python wrapper
   ifeq ($(USE_BLAS), mkl)
   USE_STATIC_MKL = 1
   else
   USE_STATIC_MKL = NONE
   endif
   
   #----------------------------
   # Settings for power and arm arch
   #----------------------------
   ARCH := $(shell uname -a)
   ifneq (,$(filter $(ARCH), armv6l armv7l powerpc64le ppc64le aarch64))
   	USE_SSE=0
   	USE_F16C=0
   else
   	USE_SSE=1
   endif
   
   #----------------------------
   # F16C instruction support for faster arithmetic of fp16 on CPU
   #----------------------------
   # For distributed training with fp16, this helps even if training on GPUs
   # If left empty, checks CPU support and turns it on.
   # For cross compilation, please check support for F16C on target device and turn off if necessary.
   USE_F16C =
   
   #----------------------------
   # distributed computing
   #----------------------------
   
    # whether or not to enable multi-machine support
   USE_DIST_KVSTORE = 0
   
    # whether or not to allow reading and writing HDFS directly. If yes, then
    # hadoop is required
   USE_HDFS = 0
   
   # path to libjvm.so. required if USE_HDFS=1
   LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server
   
    # whether or not to allow reading and writing AWS S3 directly. If yes, then
    # libcurl4-openssl-dev is required; it can be installed on Ubuntu by
    # sudo apt-get install -y libcurl4-openssl-dev
   USE_S3 = 0
   
   #----------------------------
   # performance settings
   #----------------------------
   # Use operator tuning
   USE_OPERATOR_TUNING = 1
   
   # Use gperftools if found
   USE_GPERFTOOLS = 1
   
   # path to gperftools (tcmalloc) library in case of a non-standard installation
   USE_GPERFTOOLS_PATH =
   
   # Link gperftools statically
   USE_GPERFTOOLS_STATIC =
   
   # Use JEMalloc if found, and not using gperftools
   USE_JEMALLOC = 1
   
   # path to jemalloc library in case of a non-standard installation
   USE_JEMALLOC_PATH =
   
   # Link jemalloc statically
   USE_JEMALLOC_STATIC =
   
   #----------------------------
   # additional operators
   #----------------------------
   
    # path to folders containing project-specific operators that you don't want to put in src/operators
   EXTRA_OPERATORS =
   
   #----------------------------
   # other features
   #----------------------------
   
   # Create C++ interface package
   USE_CPP_PACKAGE = 0
   
   #----------------------------
   # plugins
   #----------------------------
   
   # whether to use caffe integration. This requires installing caffe.
   # You also need to add CAFFE_PATH/build/lib to your LD_LIBRARY_PATH
   # CAFFE_PATH = $(HOME)/caffe
   # MXNET_PLUGINS += plugin/caffe/caffe.mk
   
   # WARPCTC_PATH = $(HOME)/warp-ctc
   # MXNET_PLUGINS += plugin/warpctc/warpctc.mk
   
    # whether to use sframe integration. This requires building sframe
   # git@github.com:dato-code/SFrame.git
   # SFRAME_PATH = $(HOME)/SFrame
   # MXNET_PLUGINS += plugin/sframe/plugin.mk
   ```
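    
    Note that the config above still shows the default `USE_CPP_PACKAGE = 0`; the C++ package is built anyway because variables passed on the `make` command line (step 3 below) take precedence over assignments inside the makefile:
    
    ```
    # Command-line variable assignments override config.mk defaults, so this
    # enables the C++ package even though config.mk sets USE_CPP_PACKAGE = 0
    make -j4 USE_CPP_PACKAGE=1 USE_OPENCV=0 USE_BLAS=openblas USE_CUDA=0
    ```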
   
    ## Error Message
    ``./alexnet: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.23' not found (required by ./alexnet)``
    
   ## Minimum reproducible example
   
   
   ## Steps to reproduce
   
   1. git clone --recursive https://github.com/apache/incubator-mxnet mxnet
   2. cd mxnet/
   3. make -j4 USE_CPP_PACKAGE=1 USE_OPENCV=0 USE_BLAS=openblas USE_CUDA=0
   4. cd cpp-package/example/
   5. make all MXNET_USE_CPU=1
    6. export LD_LIBRARY_PATH=../../lib/:$LD_LIBRARY_PATH
    7. ./alexnet
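    
    A possible workaround, sketched under the assumption that the GCC 8.2.0 installation ships its own libstdc++ (the `/opt/gcc-8` prefix below is hypothetical; substitute the directory reported by `g++ -print-file-name=libstdc++.so.6`):
    
    ```
    # Put the GCC 8 runtime ahead of the system libstdc++ before running the example
    export LD_LIBRARY_PATH=/opt/gcc-8/lib64:../../lib/:$LD_LIBRARY_PATH
    ./alexnet
    ```
    
    Alternatively, linking the examples with `-static-libstdc++` (for instance via the `ADD_LDFLAGS` entry in config.mk) removes the runtime dependency on the shared libstdc++ entirely.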
   
   
   
