Posted to dev@systemml.apache.org by Matthias Boehm <mb...@gmail.com> on 2017/12/09 02:41:46 UTC

[DISCUSS] Roadmap SystemML 1.1 and beyond

Hi all,

with our SystemML 1.0 release around the corner, I think we should start
the discussion on the roadmap for SystemML 1.1 and beyond. Below is an
initial list as a starting point, but please help to add relevant items,
especially for algorithms and APIs, which are barely covered so far.

1) Deep Learning
 * Full compiler integration of the GPU backend
 * Extended sparse operations on CPU/GPU
 * Extended single-precision support on CPU
 * Distributed DL operations?

2) GPU Backend
 * Full support for sparse operations
 * Automatic decisions on CPU vs GPU operations
 * Graduate GPU backends (enable by default; a minimal opt-in sketch follows below)
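
For reference, a minimal Python sketch of today's explicit opt-in via
MLContext (assuming the systemml Python package, a GPU-enabled build, and
the setGPU/setStatistics flags; names may differ per version). Graduation
would make such flags unnecessary for the common case:

  from pyspark import SparkContext
  from systemml import MLContext, dml

  ml = MLContext(SparkContext.getOrCreate())
  ml.setGPU(True)          # explicit opt-in today; graduated = on by default
  ml.setStatistics(True)   # report which operators ran on CPU vs GPU

  script = dml("""
  X = rand(rows=2000, cols=2000)
  s = sum(X %*% t(X))
  """).output("s")
  print(ml.execute(script).get("s"))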

3) Code generation
 * Graduate code generation (enable by default; see the config sketch after this list)
 * Support for deep learning operations
 * Code generation for heterogeneous hardware, incl. GPUs
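
Similarly, a sketch of today's codegen opt-in (assumptions: the
sysml.codegen.enabled property name and a setConfigProperty hook on
MLContext; both are best-effort guesses and may differ per build):

  from pyspark import SparkContext
  from systemml import MLContext, dml

  ml = MLContext(SparkContext.getOrCreate())
  ml.setConfigProperty("sysml.codegen.enabled", "true")  # opt-in today

  # a cell-wise chain the codegen compiler could fuse into a single operator
  script = dml("""
  X = rand(rows=10000, cols=100)
  y = rowSums(exp(2 * X) + 1)
  """).output("y")
  y = ml.execute(script).get("y")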

4) Compressed Linear Algebra
 * Support for matrix-matrix multiplications
 * Support for deep learning operations
 * Improvements for ultra-sparse datasets

5) Misc Runtime
 * Large dense matrix blocks > 16GB
 * NUMA-awareness (thread pools, matrix partitioning)
 * Unified memory management (ops, bufferpool, RDDs/broadcasts)
 * Support the Feather format for matrices and frames
 * Parfor support for broadcasts
 * Extended support for multi-threaded operations
 * Boolean matrices

6) Misc Compiler
 * Support single-output UDFs in expressions
 * Consolidate the replicated compilation chain (e.g., across different APIs)
 * Holistic sum-product optimization and operator fusion
 * Extended sparsity estimators
 * Rewrites and compiler improvements for mini-batching
 * Parfor optimizer support for shared reads

7) APIs
 * Python binding for the JMLC API
 * Consistency between the Python and Java APIs


Regards,
Matthias

Re: [DISCUSS] Roadmap SystemML 1.1 and beyond

Posted by Janardhan Pulivarthi <ja...@gmail.com>.
Hi all, here are my $0.02; I am working through these one by one.

Please add the following to the above list:

0. Algorithms
* Factorization machines, with regression & classification capabilities built
on top of the nn layers [SYSTEMML-1437] (a tiny sketch of the model equation
follows below).
* A test suite for the nn optimizers, using well-known optimization test
functions [SYSTEMML-1974].
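
For context, a tiny numpy sketch of the second-order FM model equation;
this is only the math, not the planned nn-layer implementation:

  import numpy as np

  def fm_predict(X, w0, w, V):
      # X: (n, d) features, w0: scalar bias, w: (d,) weights, V: (d, k) factors
      linear = X @ w
      # pairwise terms via the O(d*k) identity:
      # sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [ (XV)_f^2 - (X^2 V^2)_f ]
      XV = X @ V
      pairwise = 0.5 * np.sum(XV ** 2 - (X ** 2) @ (V ** 2), axis=1)
      return w0 + linear + pairwise

  np.random.seed(0)
  X = np.random.randn(5, 8)
  print(fm_predict(X, 0.1, np.random.randn(8), np.random.randn(8, 3)))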

1. Deep Learning
* I am working on model selection + hyperparameter optimization; a basic
implementation should be possible by January [SYSTEMML-1973]. Some of its
components are in the testing phase now (a generic search sketch follows
below).
* I think distributed DL is a great idea, and it may be necessary now.
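
As a placeholder for that discussion, a generic random-search sketch; this
is not the SYSTEMML-1973 design, and the search space and scoring function
below are made up:

  import random

  def random_search(train_and_score, space, budget=20, seed=42):
      rng = random.Random(seed)
      best_score, best_cfg = float("-inf"), None
      for _ in range(budget):
          cfg = {name: rng.choice(values) for name, values in space.items()}
          score = train_and_score(cfg)  # e.g., validation accuracy
          if score > best_score:
              best_score, best_cfg = score, cfg
      return best_score, best_cfg

  space = {"lr": [1e-1, 1e-2, 1e-3], "batch_size": [32, 64], "epochs": [5, 10]}
  print(random_search(lambda c: c["epochs"] * 0.01 - c["lr"], space))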

2. GPU backends
* Support for sparse operations - [SYSTEMML-2041] Implementation of block
sparse kernel enables us to model LSTM
with 10,000 hidden units, instead current state-of-the-art 1000 hidden units
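
To illustrate the idea only (this is not the actual GPU kernel), a toy
block-sparse matrix-vector product in numpy that stores and multiplies
just the nonzero blocks:

  import numpy as np

  def block_sparse_matvec(blocks, x, n_rows, b):
      # blocks: dict mapping (block_row, block_col) -> dense (b, b) array
      y = np.zeros(n_rows)
      for (i, j), B in blocks.items():
          y[i*b:(i+1)*b] += B @ x[j*b:(j+1)*b]
      return y

  np.random.seed(0)
  b, nb = 4, 3                              # 12x12 matrix as a 3x3 block grid
  blocks = {(0, 0): np.random.randn(b, b),  # only two blocks are nonzero
            (2, 1): np.random.randn(b, b)}
  x = np.random.randn(nb * b)
  print(block_sparse_matvec(blocks, x, nb * b, b))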

6. Misc. compiler
* Support for single-output UDFs in expressions
* SPOOF compiler improvements
* Rewrites

8. Builtin functions
* Well-known distribution functions, e.g., Weibull and gamma (see the sketch
below)
* Generalization of logical operations such as xor and and, among others
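
As a reference for the intended semantics (plain formula only, not a
proposed DML signature), the Weibull CDF in Python; the gamma case would
follow the same pattern:

  import numpy as np

  def weibull_cdf(x, shape, scale):
      # F(x) = 1 - exp(-(x/scale)^shape) for x >= 0, and 0 otherwise
      x = np.asarray(x, dtype=float)
      return np.where(x >= 0,
                      1.0 - np.exp(-(np.maximum(x, 0.0) / scale) ** shape),
                      0.0)

  print(weibull_cdf([0.5, 1.0, 2.0], shape=1.5, scale=1.0))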

9. Documentation improvements.

Thanks,
Janardhan
