Posted to issues@systemml.apache.org by "Mike Dusenberry (JIRA)" <ji...@apache.org> on 2016/07/13 22:52:20 UTC

[jira] [Updated] (SYSTEMML-540) Deep Learning

     [ https://issues.apache.org/jira/browse/SYSTEMML-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Dusenberry updated SYSTEMML-540:
-------------------------------------
    Description: 
This epic covers the addition of deep learning to SystemML, including:

* Core DML layer abstractions for deep (convolutional, recurrent) neural nets, with a simple forward/backward API: affine, convolution (starting with 2D), max pooling, non-linearities (ReLU, sigmoid, softmax), dropout, and loss functions (see the layer sketch after this list).
* Modularized DML optimizers: (mini-batch, stochastic) gradient descent with momentum, etc. (see the optimizer sketch after this list).
* Additional DML language support as necessary (tensors, built-in functions such as convolution, function pointers, list structures, etc.).
* Integration with other deep learning frameworks (Caffe, Torch, Theano, TensorFlow, etc.) via automatic DML code generation.
* etc.
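
To make the intended layer API concrete, here is a hypothetical DML sketch of an affine (fully-connected) layer with the simple forward/backward contract described above; the function and argument names are illustrative, not the final library API.

{code}
# Hypothetical affine layer with the simple forward/backward API.
# X: inputs (N x D), W: weights (D x M), b: biases (1 x M).
forward = function(matrix[double] X, matrix[double] W, matrix[double] b)
    return (matrix[double] out) {
  out = X %*% W + b  # b broadcasts across the N rows
}

backward = function(matrix[double] dout, matrix[double] X,
                    matrix[double] W, matrix[double] b)
    return (matrix[double] dX, matrix[double] dW, matrix[double] db) {
  dX = dout %*% t(W)  # gradient w.r.t. inputs
  dW = t(X) %*% dout  # gradient w.r.t. weights
  db = colSums(dout)  # gradient w.r.t. biases
}
{code}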

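Similarly, a modularized optimizer reduces to a small stateful update function. Below is a hypothetical sketch of SGD with classical momentum; the names (update, lr, mu, v) are illustrative, not the final library API.

{code}
# Hypothetical SGD-with-momentum update for one parameter matrix.
# lr: learning rate, mu: momentum, v: velocity state.
update = function(matrix[double] W, matrix[double] dW,
                  double lr, double mu, matrix[double] v)
    return (matrix[double] W, matrix[double] v) {
  v = mu * v - lr * dW  # decay the velocity, then add the new gradient step
  W = W + v             # step the parameters along the velocity
}
{code}
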
[**DONE**] Phase 1:  *MVPs*
* Create a mathematically correct DML deep learning library for running basic feed-forward and convolutional neural nets in singlenode operation.
* Create mathematically correct built-in operators for convolution and max pooling for singlenode operation (a usage sketch follows this list).
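
For reference, a hypothetical usage sketch of the built-in operators on a flattened image batch: SystemML represents an N x C x Hin x Win tensor as an N x (C*Hin*Win) matrix. The exact argument names below are an assumption from memory of the builtins.

{code}
# Toy batch: N images, C channels, Hin x Win pixels, flattened per row.
N = 64; C = 3; Hin = 28; Win = 28
X = rand(rows=N, cols=C*Hin*Win)

# F filters of size C x Hf x Wf, also flattened per row.
F = 32; Hf = 3; Wf = 3
W = rand(rows=F, cols=C*Hf*Wf, pdf="normal") * 0.01

# 2D convolution with stride 1 and padding 1 ("same"-size output).
out = conv2d(X, W, input_shape=[N,C,Hin,Win], filter_shape=[F,C,Hf,Wf],
             stride=[1,1], padding=[1,1])

# 2x2 max pooling with stride 2 halves each spatial dimension.
out = max_pool(out, input_shape=[N,F,Hin,Win], pool_size=[2,2],
               stride=[2,2], padding=[0,0])
{code}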

[**CURRENT**] Phase 2:  *Singlenode*
* Improve performance of the DML deep learning library in singlenode operation.
* Expand the DML deep learning library to include additional commonly used layers, such as RNNs and LSTMs, as well as additional optimizers.
* Improve built-in operators for convolution and max pooling to be highly performant in singlenode operation.
* Implement performant GPU acceleration for built-in operators (and end-to-end deep learning algorithms) in singlenode operation.
* Add general engine improvements to address bottlenecks, such as left-indexing within DML-bodied functions (illustrated after this list).
* Add end-to-end deep learning algorithm examples, such as a "LeNet" convolutional neural net.
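
As a concrete illustration of the left-indexing bottleneck mentioned above (names hypothetical): each left-indexed assignment inside a DML-bodied function could historically copy the enclosing matrix, which dominates runtime when performed per row inside a loop.

{code}
# Hypothetical example of left-indexing inside a DML-bodied function.
set_row = function(matrix[double] M, matrix[double] row, int i)
    return (matrix[double] M) {
  M[i,] = row  # left-indexed update of row i; the engine work in this
               # phase aims to make such updates in-place and cheap
}
{code}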

Phase 3: *Distributed*
* Expand deep learning support to include *distributed operations* on large models.  This covers the DML deep learning library, the built-in operators, GPU acceleration, and general engine improvements.

Phase 4: *APIs/Wrappers*
* 

  was:
This epic covers the addition of deep learning to SystemML, including:

* Core DML layer abstractions for deep (convolutional, recurrent) neural nets, with simple forward/backward API: affine, convolution (start with 2D), max-pooling, non-linearities (relu, sigmoid, softmax), dropout, loss functions.
* Modularized DML optimizers: (mini-batch, stochastic) gradient descent (w/ momentum, etc.).
* Additional DML language support as necessary (tensors, built-in functions such as convolution, function pointers, list structures, etc.).
* Integration with other deep learning frameworks (Caffe, Torch, Theano, TensorFlow, etc.) via automatic DML code generation.
* etc.


> Deep Learning
> -------------
>
>                 Key: SYSTEMML-540
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-540
>             Project: SystemML
>          Issue Type: Epic
>            Reporter: Mike Dusenberry
>            Assignee: Mike Dusenberry
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)