Posted to issues@mxnet.apache.org by "Lin Yuan (JIRA)" <ji...@apache.org> on 2018/10/08 21:28:00 UTC

[jira] [Updated] (MXNET-1050) As a user, I would like to speed up training job using multithreading

     [ https://issues.apache.org/jira/browse/MXNET-1050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lin Yuan updated MXNET-1050:
----------------------------
    Description: 
Currently, multithreading in training is achieved through OpenMP at the operator level. Several issues about OpenMP in MXNet have been filed on GitHub, and there is no clear benchmark of its performance impact across platforms.

*Acceptance Criteria*
- Fix or address all open issues related to OpenMP
- Understand the performance impact of OpenMP through benchmarks
- Provide recommended thread settings for users on different computing platforms
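The benchmarking criterion above could be sketched as a small harness. This is only an illustration, not the actual MXNet benchmark: the trivial workload below is a stand-in for a real training step, and OMP_NUM_THREADS is the standard OpenMP control variable. Because the OpenMP runtime reads the variable at process startup, each setting is tested in a fresh subprocess.

```python
import os
import subprocess
import sys
import time

# Stand-in workload: a real benchmark would run a training step here.
# It prints the thread setting it sees so we can confirm propagation.
WORKLOAD = "import os; print(os.environ.get('OMP_NUM_THREADS'))"

def run_with_threads(n):
    """Run the workload in a fresh process with OMP_NUM_THREADS=n.

    Returns (observed thread setting, elapsed wall-clock seconds).
    """
    env = dict(os.environ, OMP_NUM_THREADS=str(n))
    t0 = time.perf_counter()
    result = subprocess.run(
        [sys.executable, "-c", WORKLOAD],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip(), time.perf_counter() - t0

if __name__ == "__main__":
    # Sweep a few thread counts; a real harness would also sweep
    # engine settings such as MXNET_CPU_WORKER_NTHREADS.
    for n in (1, 2, 4):
        seen, elapsed = run_with_threads(n)
        print(f"OMP_NUM_THREADS={seen}: {elapsed:.3f}s")
```

A harness like this makes the "optimal thread setting" criterion measurable: run the sweep per platform and record which setting minimizes wall-clock time.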

> As a user, I would like to speed up training job using multithreading
> ---------------------------------------------------------------------
>
>                 Key: MXNET-1050
>                 URL: https://issues.apache.org/jira/browse/MXNET-1050
>             Project: Apache MXNet
>          Issue Type: Story
>          Components: Apache MXNet Backend
>            Reporter: Lin Yuan
>            Priority: Major
>
> Currently, multithreading in training is achieved through OpenMP at the operator level. Several issues about OpenMP in MXNet have been filed on GitHub, and there is no clear benchmark of its performance impact across platforms.
> *Acceptance Criteria*
> - Fix or address all open issues related to OpenMP
> - Understand the performance impact of OpenMP through benchmarks
> - Provide recommended thread settings for users on different computing platforms



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@mxnet.apache.org
For additional commands, e-mail: issues-help@mxnet.apache.org