Posted to dev@singa.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2015/12/16 13:11:46 UTC

[jira] [Commented] (SINGA-100) Implement layers using CUDNN for GPU training

    [ https://issues.apache.org/jira/browse/SINGA-100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15059895#comment-15059895 ] 

ASF subversion and git services commented on SINGA-100:
-------------------------------------------------------

Commit 6f81adba402bcd217dd020faa4188968b3200bb7 in incubator-singa's branch refs/heads/master from [~zhongle]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=6f81adb ]

SINGA-100 Implement layers using CUDNN for GPU training

Fix include path problems with cudnn.
A reminder:
Users should configure their library paths (LD_LIBRARY_PATH and LIBRARY_PATH) after they install the cudnn libraries.
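As a minimal sketch of the reminder above — assuming cudnn was unpacked under /usr/local/cudnn (an illustrative path; substitute the actual install location) — the environment could be configured like this before building SINGA:

```shell
# Illustrative install location; adjust to where the cudnn libs actually live.
CUDNN_HOME=/usr/local/cudnn

# LIBRARY_PATH is consulted by the compiler/linker at build time.
export LIBRARY_PATH=$CUDNN_HOME/lib64:$LIBRARY_PATH

# LD_LIBRARY_PATH is consulted by the dynamic loader at run time.
export LD_LIBRARY_PATH=$CUDNN_HOME/lib64:$LD_LIBRARY_PATH

# Header search path, so #include <cudnn.h> resolves.
export CPLUS_INCLUDE_PATH=$CUDNN_HOME/include:$CPLUS_INCLUDE_PATH
```

Putting these exports in the shell profile (e.g. ~/.bashrc) makes the setting persistent across sessions.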


> Implement layers using CUDNN for GPU training
> ---------------------------------------------
>
>                 Key: SINGA-100
>                 URL: https://issues.apache.org/jira/browse/SINGA-100
>             Project: Singa
>          Issue Type: New Feature
>            Reporter: wangwei
>
> NVIDIA has released the cudnn library, optimized for CNN operations such as convolution and pooling, which achieves good overall performance. Hence, it is essential to add cudnn-supported layers in SINGA for efficient GPU training (SINGA-41).
> We will use the cudnn library to implement CNN layers, namely,
>  cudnnConvolutionLayer, cudnnPoolingLayer, cudnnLRNLayer, cudnnSoftmaxLayer, cudnnReLULayer, cudnnSigmoidLayer, cudnnTanhLayer, cudnnDivNormLayer.
> The float-16 data type will not be considered in this ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)