Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/06/28 18:20:53 UTC

[GitHub] apeforest opened a new pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator

URL: https://github.com/apache/incubator-mxnet/pull/11466
 
 
   ## Description ##
   This PR addresses the request for a native temperature parameter in the softmax functions. See [Issue](https://github.com/apache/incubator-mxnet/issues/11016) for a more detailed discussion.
   
   I have added the temperature parameter to the softmax operator. The default value is 1.0, with which the operator behaves exactly the same as when no temperature is specified.
   
   I verified the change with the following Python code:
   
   ```
   import mxnet as mx
   
   data = mx.sym.Variable('data')
   net = mx.sym.softmax(data=data, temperature=10)
   
    x = mx.nd.array([1, 2, 3])
    
    ex = net.bind(mx.cpu(), args={'data': x})
    print(ex.forward())
   ```
    Running this should print:
   ```
   [
   [ 0.30060961  0.33222499  0.3671654 ]
    <NDArray 3 @cpu(0)>]
   ```
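    
    As an independent sanity check (not part of the PR itself), the same values can be reproduced in a few lines of NumPy directly from the temperature-softmax formula exp(x_i / T) / sum_j exp(x_j / T):
    
    ```
    import numpy as np
    
    x = np.array([1.0, 2.0, 3.0])
    T = 10.0
    
    # Temperature softmax: exp(x_i / T) / sum_j exp(x_j / T)
    z = np.exp(x / T)
    print(z / z.sum())  # -> [0.30060961 0.33222499 0.3671654 ]
    ```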
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
    - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
    - [x] All changes have test coverage:
   - Added a unit test to cover the new parameter
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
    - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
    - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
    - [x] Softmax operator with new temperature parameter (unit test at test_operator.py:test_softmax_with_temperature; a rough sketch of such a test is shown below)
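    
    The actual test lives in test_operator.py:test_softmax_with_temperature; the sketch below is only an illustrative approximation of what it checks (the helper names and shapes here are hypothetical), comparing the operator against a NumPy reference across several temperatures:
    
    ```
    import mxnet as mx
    import numpy as np
    
    def np_softmax(x, temperature=1.0):
        # NumPy reference: temperature softmax along the last axis
        z = np.exp(x / temperature)
        return z / z.sum(axis=-1, keepdims=True)
    
    def check_softmax_temperature(temperature):
        x = mx.nd.random.normal(0, 1.0, shape=(2, 5))
        data = mx.sym.Variable('data')
        net = mx.sym.softmax(data=data, temperature=temperature)
        ex = net.bind(mx.cpu(), args={'data': x})
        out = ex.forward()[0].asnumpy()
        assert np.allclose(out, np_softmax(x.asnumpy(), temperature), atol=1e-5)
    
    for t in [0.5, 1.0, 10.0]:
        check_softmax_temperature(t)
    ```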
   ## Comments ##
    - This change is backward compatible. The default value of temperature is 1.0f.
    - Because, empirically, users will call this operator with the default temperature about 90% of the time, I have added an if/else branch to the CPU computation that skips the unnecessary divide-by-one operation. This is not done in the GPU kernel because 1) adding a branch can have a negative runtime impact on GPU, and 2) the CUDA compiler may perform additional optimization for this edge case anyway. A Python sketch of the branching logic follows the timing code below.
    - I used the check_speed function to verify this optimization on CPU. In my experiment, I chose a vector of size 10000 and ran the forward pass 10 times; the runtime reduction is about 25%. The experiment code is pasted below:
   ```
    import mxnet as mx
    from mxnet.test_utils import check_speed
    
    data_shape = (1, 10000)
    data = mx.sym.Variable(name='data', shape=data_shape)
    ctx = mx.cpu(0)
    x = mx.nd.random.normal(0, 1.0, shape=data_shape, ctx=ctx)
    net = mx.sym.softmax(data=data)
    softmax_time = check_speed(sym=net, location={'data': x}, ctx=ctx,
                               N=10, grad_req='null', typ='forward') * 1000
    print(softmax_time)
   ```
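    
    The fast-path idea can be illustrated in Python as below. This is only a sketch of the branching logic under the assumptions described above, not the actual C++ kernel code:
    
    ```
    import numpy as np
    
    def softmax_with_temperature(x, temperature=1.0):
        # Shift by the row max for numerical stability.
        shifted = x - x.max(axis=-1, keepdims=True)
        if temperature == 1.0:
            # Fast path, taken ~90% of the time: skip the divide-by-one.
            z = np.exp(shifted)
        else:
            z = np.exp(shifted / temperature)
        return z / z.sum(axis=-1, keepdims=True)
    ```
    
    The GPU kernel performs the division unconditionally, since a data-dependent branch would diverge across threads and the divide-by-one case is cheap there.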
