Posted to commits@mxnet.apache.org by gi...@git.apache.org on 2017/08/25 23:39:36 UTC

[GitHub] DickJC123 opened a new pull request #7625: entire codebase build with mshadow_use_cblas=0

DickJC123 opened a new pull request #7625: entire codebase build with mshadow_use_cblas=0
URL: https://github.com/apache/incubator-mxnet/pull/7625
 
 
   The following changes are the result of resolving the build issues that arise across the entire codebase after setting MSHADOW_USE_CBLAS=0 in mshadow/make/mshadow.mk. The most common problem was that CPU versions of the linalg interface were missing at final link time; this PR supplies stubs that log a fatal message stating that the routine is unimplemented.

   In addition, I reworked the linalg_gemm<cpu, DType> specialization to be more consistent with the rest of the file: a '#define' that is instantiated with the float and double types. This clearly supplies a full function specialization, avoiding the partial-function-specialization error reported against an earlier version of the file. I also find the existing implementation confusing: the routine is templated on <xpu> yet never refers to xpu. Would this generate candidate functions for GPU operators that would then be ignored because their Tensor<...gpu...> arguments do not match the Tensor<...cpu...> parameters of the candidates? Again, this PR offers a simple approach consistent with the rest of the file.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services