Posted to dev@singa.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2015/11/16 07:09:11 UTC

[jira] [Commented] (SINGA-80) New Blob Level and Address Level Math Operation Interface

    [ https://issues.apache.org/jira/browse/SINGA-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006252#comment-15006252 ] 

ASF subversion and git services commented on SINGA-80:
------------------------------------------------------

Commit a65a9535e4d3df21faf0d67b7891df065a006fff in incubator-singa's branch refs/heads/master from seaok
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=a65a953 ]

SINGA-80 New Blob Level and Address Level Math Operation Interface

Update math functions for GPU.
Fix compile bug in Makefile.gpu caused by 32-bit vs. 64-bit differences.

close #74


> New Blob Level and Address Level Math Operation Interface
> ---------------------------------------------------------
>
>                 Key: SINGA-80
>                 URL: https://issues.apache.org/jira/browse/SINGA-80
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: Jinyang Gao
>            Assignee: Jinyang Gao
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> We are going to provide a new two-level math interface to replace the current mshadow. The higher, blob-level interface will be used by the layer level. It is device (CPU/GPU) transparent, and will support general matrix, element-wise, reduce/expand, pack/unpack operations, etc. at the Blob level. There is no longer any need to convert a Blob object into a Tensor object before a math operation. The lower, address-level interface will support efficient CPU/GPU computation on plain data arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)