Posted to issues@systemml.apache.org by "Fei Hu (JIRA)" <ji...@apache.org> on 2017/07/26 17:07:02 UTC
[jira] [Updated] (SYSTEMML-1809) Optimize the performance of the distributed MNIST_LeNet_Sgd model training
[ https://issues.apache.org/jira/browse/SYSTEMML-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fei Hu updated SYSTEMML-1809:
-----------------------------
Description:
For the current version, there are two bottlenecks for the distributed MNIST_LeNet_Sgd model training:
# Data locality: for {{RemoteParForSpark}}, the tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data volume is large;
# Result merge: the current experiments indicate that the result merge takes more time than the model training itself. After these optimizations, we need to compare the performance with distributed TensorFlow.
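The two bottlenecks above can be illustrated with a minimal, hypothetical sketch (this is not SystemML or Spark code; the partition-to-host map, worker names, and averaging merge are all illustrative assumptions). The first function assigns each task to the worker already hosting its data partition, avoiding the shuffle that locality-unaware placement causes; the second merges per-worker models by element-wise averaging rather than collecting full results.

```python
# Illustrative sketch only: locality-aware task placement and a cheap
# result merge. Names and data structures are hypothetical, not SystemML APIs.

def assign_tasks(partitions, workers):
    """Assign each task (partition id) to a worker, preferring the worker
    that already hosts the partition's data so nothing is shuffled.

    partitions: dict mapping partition id -> host that stores the partition
    workers:    list of available worker hosts
    """
    assignments = {}
    for pid, host in partitions.items():
        if host in workers:
            # Data-local placement: run the task where its input lives.
            assignments[pid] = host
        else:
            # Fall back to round-robin when the hosting node is unavailable.
            assignments[pid] = workers[pid % len(workers)]
    return assignments

def merge_results(models):
    """Merge per-worker model vectors by element-wise averaging, one simple
    way to keep the result-merge step cheaper than the training itself."""
    n = len(models)
    return [sum(m[i] for m in models) / n for i in range(len(models[0]))]
```

A locality-unaware scheduler would instead hand out partitions in arbitrary order, forcing remote reads for most tasks; the averaging merge stands in for whatever aggregation the result-merge step actually performs.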
was:For the current version, there are two bottlenecks for the distributed MNIST_LeNet_Sgd model training: 1) data locality: for {{RemoteParForSpark}}, the tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data volume is large; 2) result merge: the current experiments indicate that the result merge takes more time than the model training itself. After these optimizations, we need to compare the performance with distributed TensorFlow.
> Optimize the performance of the distributed MNIST_LeNet_Sgd model training
> --------------------------------------------------------------------------
>
> Key: SYSTEMML-1809
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1809
> Project: SystemML
> Issue Type: Task
> Affects Versions: SystemML 1.0
> Reporter: Fei Hu
> Labels: RemoteParForSpark, deeplearning
>
> For the current version, there are two bottlenecks for the distributed MNIST_LeNet_Sgd model training:
> # Data locality: for {{RemoteParForSpark}}, the tasks are parallelized without considering data locality, which causes a lot of data shuffling when the input data volume is large;
> # Result merge: the current experiments indicate that the result merge takes more time than the model training itself. After these optimizations, we need to compare the performance with distributed TensorFlow.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)