Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/08/04 08:30:49 UTC

[GitHub] ctcyang opened a new pull request #12031: Fix CPUPinned unexpected behaviour

URL: https://github.com/apache/incubator-mxnet/pull/12031
 
 
   ## Description ##
   MXNet's cpu_pinned Context has unexpected behaviour: an NDArray created on cpu_pinned(N) reports device N, but the pinned allocation is actually made through GPU 0 (see Comments below).
   This PR fixes this unexpected behaviour.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Fix unexpected behaviour with the cpu_pinned Context
   - [x] Add a class mxnet::common::cuda::SetDevice that records which device was active before cudaSetDevice was called, and restores that device once MXNet has done what it needs to do, i.e. when the SetDevice object goes out of scope (RAII)
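   The save-and-restore pattern behind this guard can be sketched as a minimal RAII class. This is an illustration only, not MXNet's actual implementation: the mock device functions below are hypothetical stand-ins for cudaGetDevice/cudaSetDevice so the sketch runs without a GPU.

   ```cpp
   #include <cassert>
   #include <cstdio>

   // Hypothetical stand-ins for cudaGetDevice/cudaSetDevice.
   static int g_current_device = 0;
   static void mockSetDevice(int d) { g_current_device = d; }
   static int  mockGetDevice()      { return g_current_device; }

   // RAII guard in the spirit of mxnet::common::cuda::SetDevice:
   // record the active device on construction, restore it on destruction.
   class DeviceGuard {
    public:
     explicit DeviceGuard(int new_device) : prev_(mockGetDevice()) {
       mockSetDevice(new_device);
     }
     ~DeviceGuard() { mockSetDevice(prev_); }
    private:
     int prev_;
   };

   int main() {
     mockSetDevice(0);
     {
       DeviceGuard guard(7);           // switch to device 7 for an allocation
       assert(mockGetDevice() == 7);
     }                                  // guard destroyed: previous device restored
     assert(mockGetDevice() == 0);
     std::printf("device restored to %d\n", mockGetDevice());
     return 0;
   }
   ```

   Because the restore happens in the destructor, the previous device is reinstated on every exit path from the scope, which is why a guard is safer than pairing manual cudaSetDevice calls.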
   
   ## Comments ##
   On master branch, if you do:
   ```
   >>> import mxnet as mx
   >>> data = mx.nd.zeros((2,3), ctx=mx.Context('cpu_pinned', 7))
   >>> print(data)
   
   
   [[0. 0. 0.]
    [0. 0. 0.]]
   <NDArray 2x3 @cpu_pinned(7)>
   ```
   
   This looks right. However, `nvidia-smi` tells me that this data is actually located on GPU 0:
   
   ```
   $ nvidia-smi
   +-----------------------------------------------------------------------------+
   | Processes:                                                       GPU Memory |
   |  GPU       PID   Type   Process name                             Usage      |
   |=============================================================================|
   |    0    109140      C   python3                                      524MiB |
   +-----------------------------------------------------------------------------+
   ```
   
   To fix this unexpected behaviour, I changed four things:
   1) call cudaSetDevice to make the GPU we want to allocate pinned memory on the current device (src/storage/storage.cc)
   2) make Context::real_dev_id return the stored dev_id for kCPUPinned as well as kGPU; previously it returned the GPU id for kGPU but 0 for both kCPU and kCPUPinned, even though the dev_id the user passed into the constructor is stored as a private member (include/mxnet/base.h)
   3) add a guard to every place cudaSetDevice is called in the MXNet C++ code; the guard is implemented using the class mxnet::common::cuda::SetDevice
   4) change hardcoded CPUPinned(0) to CPU(0) (comm.h, comm_tree.h, kvstore_nccl.h). This change does not cause a performance regression, and it allows `example/image-classification/train_imagenet.py` to be run with `--gpus 7` entirely on GPU 7 (given a slight change in the script `example/image-classification/common/data.py`). Currently, on the master branch this command uses both GPU 0 and GPU 7:
   
   ```
   $ nvidia-smi
   +-----------------------------------------------------------------------------+
   | Processes:                                                       GPU Memory |
   |  GPU       PID   Type   Process name                             Usage      |
   |=============================================================================|
   |    0     37369      C   python                                       524MiB |
   |    7     37369      C   python                                     10626MiB |
   +-----------------------------------------------------------------------------+
   ```
   
   Using this PR, the result becomes as expected:
   ```
   +-----------------------------------------------------------------------------+
   | Processes:                                                       GPU Memory |
   |  GPU       PID   Type   Process name                             Usage      |
   |=============================================================================|
   |    7     53228      C   python                                     13026MiB |
   +-----------------------------------------------------------------------------+
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services