Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/04/06 14:13:36 UTC

[GitHub] [incubator-mxnet] TaoLv commented on issue #17980: When compiled with MKL, fully_connected calls DNNL while dot and batch_dot call MKL

URL: https://github.com/apache/incubator-mxnet/issues/17980#issuecomment-609820709
 
 
   Thanks for raising the issue, @kpuatamazon. We plan to optimize the dot and batch_dot operators with the DNNL MatMul primitive. Please note, however, that the MatMul primitive was not well optimized until DNNL v1.3, which was released last week. That is why we did not integrate it when it was first introduced in DNNL v1.2.
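   
   For context, the two operators in question can be exercised like this (a small illustration with arbitrary shapes, not taken from the issue):
   
   ```python
   # Illustration of the two operators under discussion (arbitrary example shapes).
   import mxnet as mx
   
   a = mx.nd.ones((64, 512))
   b = mx.nd.ones((512, 256))
   c = mx.nd.dot(a, b)            # 2-D matrix multiply -> shape (64, 256)
   
   x = mx.nd.ones((32, 64, 512))
   y = mx.nd.ones((32, 512, 256))
   z = mx.nd.batch_dot(x, y)      # batched matrix multiply -> shape (32, 64, 256)
   ```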
   
   > When compiled with MKL, MXNet should call MKL directly from FullyConnected like it already does for dot and batch_dot.
   
   As mentioned above, dot and batch_dot will also be optimized with DNNL. Because DNNL is more focused on deep learning and carries a friendlier license, we will give it higher priority when both DNNL and MKL are enabled at compile time.
   
   Please provide a simple reproducer if you find that the DNNL-based FullyConnected is slower than the MKL BLAS-based one.
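   
   To make the comparison concrete, a timing check along these lines would help (a minimal sketch only; the shapes, iteration counts, and the use of `transpose_b` to match the FullyConnected layout are illustrative assumptions, not an official benchmark):
   
   ```python
   # Minimal sketch: time FullyConnected against dot on CPU for one shape.
   import time
   import mxnet as mx
   
   batch, in_dim, out_dim = 128, 1024, 1024
   data   = mx.nd.random.uniform(shape=(batch, in_dim))
   weight = mx.nd.random.uniform(shape=(out_dim, in_dim))
   bias   = mx.nd.zeros((out_dim,))
   
   def bench(fn, warmup=10, iters=100):
       # Warm up so one-time setup (e.g. primitive creation) is excluded.
       for _ in range(warmup):
           fn()
       mx.nd.waitall()
       start = time.time()
       for _ in range(iters):
           fn()
       mx.nd.waitall()
       return (time.time() - start) / iters
   
   fc_time  = bench(lambda: mx.nd.FullyConnected(data, weight, bias, num_hidden=out_dim))
   dot_time = bench(lambda: mx.nd.dot(data, weight, transpose_b=True))
   
   print("FullyConnected: %.3f ms/iter" % (fc_time * 1e3))
   print("dot           : %.3f ms/iter" % (dot_time * 1e3))
   ```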
   
