Posted to issues@systemml.apache.org by "Matthias Boehm (JIRA)" <ji...@apache.org> on 2016/09/16 18:40:20 UTC
[jira] [Closed] (SYSTEMML-913) Performance matrix-vector multiplication w/ tall rhs vector
[ https://issues.apache.org/jira/browse/SYSTEMML-913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Matthias Boehm closed SYSTEMML-913.
-----------------------------------
> Performance matrix-vector multiplication w/ tall rhs vector
> -----------------------------------------------------------
>
> Key: SYSTEMML-913
> URL: https://issues.apache.org/jira/browse/SYSTEMML-913
> Project: SystemML
> Issue Type: Task
> Reporter: Matthias Boehm
> Assignee: Matthias Boehm
> Fix For: SystemML 0.11
>
>
> So far, we compute matrix-vector multiplication with simple row-wise dot products. This works very well for the common case of tall&skinny matrices, where the right-hand-side vector is very small. However, for scenarios with many features, and hence a tall rhs vector, this approach suffers from cache-unfriendly behavior because the entire rhs vector is streamed through the cache once per row. Accordingly, this task tracks the introduction of a dedicated cache-conscious matrix-vector multiplication for both sparse and dense matrices.
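
The cache-blocking idea described above can be sketched as follows. This is a hypothetical illustration, not SystemML's actual implementation: the class name, the `BLOCK` tuning constant, and the 2D-array matrix layout are all assumptions for the sake of a minimal, self-contained example. The key change is moving the column-block loop to the outside, so a slice of the rhs vector `b` stays cache-resident while it is reused across all rows.

```java
import java.util.Arrays;

// Sketch of cache-conscious dense matrix-vector multiplication (c = A * b).
// The row-wise variant computes each c[i] as a full dot product, streaming the
// tall vector b through the cache once per row; the blocked variant tiles the
// column dimension so each slice of b is reused across all rows while hot.
public class BlockedMatVec {
    // Assumed column block size; in practice this would be tuned to L1/L2 size.
    static final int BLOCK = 1024;

    // Naive row-wise dot products.
    static void matVecRowWise(double[][] A, double[] b, double[] c) {
        int m = A.length, n = b.length;
        for (int i = 0; i < m; i++) {
            double sum = 0;
            for (int j = 0; j < n; j++)
                sum += A[i][j] * b[j];
            c[i] = sum;
        }
    }

    // Blocked variant: outer loop over column blocks, inner loop over rows,
    // accumulating partial sums into c so results match the naive version.
    static void matVecBlocked(double[][] A, double[] b, double[] c) {
        int m = A.length, n = b.length;
        Arrays.fill(c, 0);
        for (int bj = 0; bj < n; bj += BLOCK) {
            int bjEnd = Math.min(bj + BLOCK, n);
            for (int i = 0; i < m; i++) {
                double sum = c[i];
                for (int j = bj; j < bjEnd; j++)
                    sum += A[i][j] * b[j];
                c[i] = sum;
            }
        }
    }

    public static void main(String[] args) {
        int m = 7, n = 2500; // n > BLOCK, so multiple blocks are exercised
        double[][] A = new double[m][n];
        double[] b = new double[n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                A[i][j] = (i + 1) * 0.001 + j * 0.0001;
        for (int j = 0; j < n; j++)
            b[j] = 1.0 / (j + 1);

        double[] c1 = new double[m], c2 = new double[m];
        matVecRowWise(A, b, c1);
        matVecBlocked(A, b, c2);
        for (int i = 0; i < m; i++)
            if (Math.abs(c1[i] - c2[i]) > 1e-9)
                throw new AssertionError("mismatch at row " + i);
        System.out.println("blocked and row-wise results match");
    }
}
```

For tall&skinny matrices (small rhs) the naive loop is already cache-friendly, so a real implementation would dispatch between the two code paths based on the number of columns.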
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)