Posted to dev@singa.apache.org by "wangwei (JIRA)" <ji...@apache.org> on 2015/10/07 04:06:26 UTC

[jira] [Commented] (SINGA-80) New Blob Level and Address Level Math Operation Interface

    [ https://issues.apache.org/jira/browse/SINGA-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946133#comment-14946133 ] 

wangwei commented on SINGA-80:
------------------------------

I think we can separate this feature into a few steps:

1. Update the Blob class, e.g., add a transpose_ field and replace the type of shape_ from vector<int> to int s[4] plus an int dim_ (see the sketch after this list); anything else?
2. Implement the float-level math functions.
3. Implement the Blob-level math functions.
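
For step 1, a minimal sketch of what the changed fields could look like (the Reshape/count helpers and the trailing-dimension handling are my assumptions, not an agreed design; only transpose_, shape_, and dim_ come from the list above):

    // Sketch only: replaces vector<int> shape_ with a fixed array plus a
    // dimension count, and adds the transpose_ flag.
    template <typename Dtype>
    class Blob {
     public:
      // Hypothetical helper showing how shape_ and dim_ would be filled in.
      void Reshape(int d0, int d1 = 1, int d2 = 1, int d3 = 1) {
        shape_[0] = d0; shape_[1] = d1; shape_[2] = d2; shape_[3] = d3;
        dim_ = 4;
        while (dim_ > 1 && shape_[dim_ - 1] == 1) --dim_;  // ignore trailing 1s
      }
      void set_transpose(bool t) { transpose_ = t; }
      bool transpose() const { return transpose_; }
      int count() const {  // total number of elements
        int c = 1;
        for (int i = 0; i < dim_; ++i) c *= shape_[i];
        return c;
      }

     private:
      int shape_[4] = {1, 1, 1, 1};  // replaces vector<int> shape_
      int dim_ = 0;                  // number of valid entries in shape_
      bool transpose_ = false;       // new field from step 1
    };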

You can use this ticket as a milestone ticket, create multiple sub-tickets, and add links to them. The code will be easier to merge that way.

> New Blob Level and Address Level Math Operation Interface
> ---------------------------------------------------------
>
>                 Key: SINGA-80
>                 URL: https://issues.apache.org/jira/browse/SINGA-80
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: Jinyang Gao
>            Assignee: Jinyang Gao
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> We are going to provide a new two-level math interface to replace the current mshadow. The higher, Blob-level interface will be used by the layer level. It is xpu-transparent and will support general matrix, element-wise, reduce/expand, and pack/unpack operations, etc., at the Blob level. There will be no further need to convert a Blob object into a tensor object before a math operation. The lower, address-level interface will support efficient CPU/GPU computation on simple data arrays.
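
A minimal sketch of how the two levels could fit together, assuming hypothetical names (cpu_add, Add, and the simplified Blob struct are illustrations, not the actual SINGA API): a Blob-level call stays xpu-transparent by dispatching to an address-level routine that only sees raw arrays.

    #include <cstddef>
    #include <vector>

    // Address-level: element-wise add over a raw float array (CPU version; a GPU
    // kernel with the same signature would be chosen when the data lives on GPU).
    void cpu_add(const float* a, const float* b, size_t n, float* c) {
      for (size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    // Hypothetical stand-in for Blob, just enough to show the dispatch.
    struct Blob {
      std::vector<float> data;
      bool on_gpu = false;
      size_t count() const { return data.size(); }
    };

    // Blob-level: called by layers directly, no conversion to a tensor object.
    void Add(const Blob& a, const Blob& b, Blob* c) {
      // Real code would branch on the device here (xpu transparency);
      // only the CPU path is sketched.
      cpu_add(a.data.data(), b.data.data(), a.count(), c->data.data());
    }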


