Posted to dev@singa.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2016/04/07 15:12:25 UTC

[jira] [Commented] (SINGA-80) New Blob Level and Address Level Math Operation Interface

    [ https://issues.apache.org/jira/browse/SINGA-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230204#comment-15230204 ] 

ASF subversion and git services commented on SINGA-80:
------------------------------------------------------

Commit 8ade7d76dbe64b75088693febba7019e28d39c30 in incubator-singa's branch refs/heads/master from seaok
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=8ade7d7 ]

SINGA-80 New Blob Level and Address Level Math Operation Interface

Unified the signatures of the CPU and GPU implementations.
Fixed bugs in MVAddRow() and OuterProduct().
All tests pass.


> New Blob Level and Address Level Math Operation Interface
> ---------------------------------------------------------
>
>                 Key: SINGA-80
>                 URL: https://issues.apache.org/jira/browse/SINGA-80
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: Jinyang Gao
>            Assignee: Jinyang Gao
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> We are going to provide a new two-level math interface to replace the current mshadow. The higher, blob-level interface will be used by the layer level. It is xpu-transparent and supports general matrix, element-wise, reduce/expand, and pack/unpack operations at the blob level, so there is no longer any need to convert a Blob object into a tensor object before a math operation. The lower, address-level interface supports efficient CPU/GPU computation on raw data arrays.
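
As a rough illustration of the two levels described above, here is a minimal, self-contained C++ sketch. The names and signatures used here (Blob, MVAddRow, cpu::mv_add_row) are assumptions for illustration only and are not SINGA's actual declarations; only the CPU path is shown, and a GPU counterpart would expose the same address-level signature so that the blob-level call site stays unchanged.

// Minimal sketch of the two-level math interface, under assumed names.
#include <cstddef>
#include <iostream>
#include <vector>

// --- Address level: plain routines over raw data arrays. ---------------
// A GPU counterpart would expose the same signature and launch a kernel.
namespace cpu {
void mv_add_row(const float* row, float* mat, size_t nrow, size_t ncol) {
  // Add `row` (length ncol) to every row of the nrow x ncol matrix `mat`.
  for (size_t r = 0; r < nrow; ++r)
    for (size_t c = 0; c < ncol; ++c)
      mat[r * ncol + c] += row[c];
}
}  // namespace cpu

// --- Blob level: xpu-transparent wrapper used by layers. ---------------
struct Blob {
  std::vector<float> data;  // host storage; a real Blob may live on the GPU
  size_t nrow;
  size_t ncol;
};

// Layers call this directly on Blobs; no conversion to a tensor type.
// Internally it would dispatch to the CPU or GPU address-level routine
// depending on where the Blob's memory resides (only CPU is sketched).
void MVAddRow(const Blob& row, Blob* mat) {
  cpu::mv_add_row(row.data.data(), mat->data.data(), mat->nrow, mat->ncol);
}

int main() {
  Blob row{{1.f, 2.f, 3.f}, 1, 3};
  Blob mat{{0.f, 0.f, 0.f, 10.f, 10.f, 10.f}, 2, 3};
  MVAddRow(row, &mat);
  for (float v : mat.data) std::cout << v << ' ';  // prints: 1 2 3 11 12 13
  std::cout << '\n';
  return 0;
}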



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)