Posted to issues@systemml.apache.org by "Janardhan (JIRA)" <ji...@apache.org> on 2017/12/08 17:21:00 UTC
[jira] [Commented] (SYSTEMML-2041) Implement Block-Sparse GPU Kernels
[ https://issues.apache.org/jira/browse/SYSTEMML-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283869#comment-16283869 ]
Janardhan commented on SYSTEMML-2041:
-------------------------------------
cc: [~nakul02] [~niketanpansare]
> Implement Block-Sparse GPU Kernels
> ----------------------------------
>
> Key: SYSTEMML-2041
> URL: https://issues.apache.org/jira/browse/SYSTEMML-2041
> Project: SystemML
> Issue Type: New Feature
> Components: Infrastructure
> Reporter: Janardhan
> Attachments: GPU Kernels for Block-Sparse Weights.pdf
>
>
> Sparsity enables, for example, training of neural networks that are much wider and deeper than otherwise possible within a given parameter and computational budget, such as LSTMs with tens of thousands of hidden units (the largest LSTMs trained today have only thousands of hidden units).
> *Resource:* OpenAI's TensorFlow implementation - https://github.com/openai/blocksparse
> *Best supported architectures:* Maxwell and Pascal, with limited functionality on Kepler and Volta
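> For intuition, a minimal NumPy sketch of the core idea: weights are stored only for the nonzero blocks of a block-level sparsity mask, and the matmul iterates over those blocks. This is an illustrative emulation under assumed names (`block_sparse_matmul`, `block_size`, the mask layout), not the blocksparse library's API or SystemML's kernel design:

```python
import numpy as np

def block_sparse_matmul(x, w_blocks, mask, block_size):
    """Emulate y = x @ W where W is block-sparse.

    x          : (n, k_in) dense input
    mask       : (k_in // block_size, k_out // block_size) boolean block mask
    w_blocks   : dict mapping (i, j) -> (block_size, block_size) weight block,
                 defined only where mask[i, j] is True
    """
    n = x.shape[0]
    kb_in, kb_out = mask.shape
    out = np.zeros((n, kb_out * block_size))
    for i in range(kb_in):
        for j in range(kb_out):
            if mask[i, j]:
                # Only nonzero blocks contribute; zero blocks are skipped
                # entirely, which is where the compute savings come from.
                xs = x[:, i * block_size:(i + 1) * block_size]
                out[:, j * block_size:(j + 1) * block_size] += xs @ w_blocks[(i, j)]
    return out
```

> A real GPU kernel would launch one thread block per nonzero weight block instead of looping, but the memory layout (per-block weight storage plus a small block mask) is the same idea.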
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)