Posted to issues@systemml.apache.org by "Nakul Jindal (JIRA)" <ji...@apache.org> on 2017/03/24 19:04:41 UTC

[jira] [Created] (SYSTEMML-1436) Improve Sparse matrix support for GPU operations

Nakul Jindal created SYSTEMML-1436:
--------------------------------------

             Summary: Improve Sparse matrix support for GPU operations
                 Key: SYSTEMML-1436
                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1436
             Project: SystemML
          Issue Type: Task
          Components: Runtime
            Reporter: Nakul Jindal


SystemML has a preliminary set of GPU implementations for its primitive operations (matrix multiplication, reductions, and neural network operations, among others). Currently, these GPU operations work when SystemML is run on a single machine (in either Standalone or Spark mode). Programs written in the external DSLs (DML & PyDML) and the internal DSLs (Python and Scala) can enable the use of these GPU operations.

SystemML is aware of sparsity in matrix blocks and encodes them differently. It supports three sparse formats (CSR, COO, and a custom MCSR). Many of the GPU operations are implemented only for dense matrix blocks; for some GPU operations, when a sparse matrix is encountered, it is first converted to dense form and then sent to the GPU.
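
For concreteness, here is a minimal sketch of the CSR layout such kernels would consume. The struct and field names below are illustrative assumptions and do not reflect SystemML's actual sparse block classes:

  // CSR layout of a small example matrix (illustrative only):
  //   [ 0 5 0 0 ]
  //   [ 0 0 0 0 ]
  //   [ 7 0 0 2 ]
  // values = {5, 7, 2}, colIdx = {1, 0, 3}, rowPtr = {0, 1, 1, 3}
  // (rowPtr[i+1] - rowPtr[i] gives the number of non-zeros in row i)
  typedef struct {
      int     rows;     // number of rows
      int     cols;     // number of columns
      int     nnz;      // number of non-zero values
      int    *rowPtr;   // row pointers, length rows + 1
      int    *colIdx;   // column index of each non-zero, length nnz
      double *values;   // non-zero values, length nnz
  } CsrMatrix;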

- This project is to implement CUDA kernels that operate directly on sparse matrix blocks (a minimal kernel sketch is given after this list)
- Operations to be implemented include reductions, element-wise operations, and neural network operations, among others
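
As a rough illustration of the kind of kernel this task calls for, below is a minimal sketch of a row-wise sum reduction over a CSR-encoded block, using one thread per row. It is an assumption-laden example, not SystemML's actual kernel; the names, signature, and launch configuration are all illustrative:

  #include <cuda_runtime.h>

  // Sum the non-zeros of each row of a CSR matrix; one thread per row.
  // (Illustrative sketch only; not SystemML's actual GPU kernel.)
  __global__ void csr_row_sum(const int *rowPtr, const double *values,
                              int rows, double *rowSums) {
      int row = blockIdx.x * blockDim.x + threadIdx.x;
      if (row < rows) {
          double sum = 0.0;
          for (int j = rowPtr[row]; j < rowPtr[row + 1]; j++) {
              sum += values[j];
          }
          rowSums[row] = sum;
      }
  }

  // Example launch, assuming device buffers d_rowPtr, d_values, d_rowSums
  // have already been allocated and populated:
  //   int threads = 256;
  //   int blocks  = (rows + threads - 1) / threads;
  //   csr_row_sum<<<blocks, threads>>>(d_rowPtr, d_values, rows, d_rowSums);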

This project is fairly isolated from the internal compiler & optimizer, so a thorough knowledge of the entire system is not required.

Knowledge of CUDA programming is preferred.
For an initial implementation, the most efficient CUDA kernels are not required.

Rating - Medium

Mentors - [~nakul02], (optionally [~niketanpansare])


