Posted to issues@systemml.apache.org by "Matthias Boehm (JIRA)" <ji...@apache.org> on 2017/10/18 06:03:00 UTC
[jira] [Created] (SYSTEMML-1967) Spark rand/seq instructions generate too many partitions
Matthias Boehm created SYSTEMML-1967:
----------------------------------------
Summary: Spark rand/seq instructions generate too many partitions
Key: SYSTEMML-1967
URL: https://issues.apache.org/jira/browse/SYSTEMML-1967
Project: SystemML
Issue Type: Bug
Reporter: Matthias Boehm
The existing Spark rand instruction uses an optimizer utils method that estimates the total size in a sparsity-aware manner in order to determine the number of partitions. Since this method was later overloaded, the instruction currently calls the wrong primitive, passing the number of non-zeros (nnz) where a sparsity fraction is expected, which leads to dense size estimates and hence too many partitions.
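The failure mode can be sketched with a minimal, self-contained example. The class and method names below are illustrative, not SystemML's actual API: two overloads differ only in whether the third argument means a sparsity fraction (double in [0,1]) or an absolute non-zero count (long), so passing one where the other is intended silently produces a grossly inflated, dense-like estimate.

```java
// Hypothetical sketch of the overload confusion (names are illustrative,
// not SystemML's actual OptimizerUtils API).
public class SizeEstimate {

    // Sparsity-aware estimate: sp is a fraction in [0,1].
    static long estimateSize(long rows, long cols, double sp) {
        return (long) (rows * cols * sp) * 8; // ~8 bytes per non-zero value
    }

    // Overload taking the absolute non-zero count instead.
    static long estimateSize(long rows, long cols, long nnz) {
        return nnz * 8;
    }

    public static void main(String[] args) {
        long rows = 1000, cols = 1000, nnz = 10_000; // sparsity = 0.01

        // Intended call: 10,000 non-zeros -> ~80 KB estimate.
        long good = estimateSize(rows, cols, 0.01);

        // Bug pattern: the nnz value ends up in the sparsity-fraction slot,
        // so 10,000 is treated as a "sparsity" and the estimate explodes
        // to a dense-like size -> far too many partitions downstream.
        long bad = estimateSize(rows, cols, (double) nnz);

        System.out.println(good); // 80000
        System.out.println(bad);  // 80000000000
    }
}
```

Dividing such an inflated size estimate by a fixed per-partition budget is what yields the excessive partition count described above.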
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)