Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/29 23:56:23 UTC

[GitHub] [incubator-mxnet] ChaiBapchya removed a comment on issue #17449: Implemented large tensor flag for opperf testing

URL: https://github.com/apache/incubator-mxnet/pull/17449#issuecomment-579949961
 
 
   Actually, if MXNet is built with large tensor support (LTS) ON, the user
   can simply pass a custom shape larger than 2**32 and use the opperf utility.
   
   ```python
   import mxnet as mx
   from mxnet import nd

   from benchmark.opperf.utils.benchmark_utils import run_performance_test

   # Benchmark elementwise add on tensors with more than 2**32 elements
   run_performance_test(nd.add, run_backward=True, dtype='float32', ctx=mx.cpu(),
                        inputs=[{"lhs": (2**32 + 1, 1),
                                 "rhs": (2**32 + 1, 1)}],
                        warmup=0, runs=1)
   ```
   
   This flag serves as a quick way of testing large tensor ops. For example,
   if the user doesn't want to add custom shapes for each operator and just
   wants to see perf times for all operators, then this flag comes in handy.
   
   ```shell
   python incubator-mxnet/benchmark/opperf/opperf.py --output-format json --output-file mxnet_operator_benchmark_results.json --large-tensor ON
   ```
   
   So yes, both are separate use cases and both are possible, with the obvious
   assumption that MXNet is built with USE_INT64_TENSOR_SIZE = ON.
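   
   As background, a minimal sketch in plain Python (not MXNet internals) of why
   the USE_INT64_TENSOR_SIZE build flag matters: the shape used in the example
   above has more elements than a signed 32-bit index can address, so 64-bit
   indexing is required.
   
   ```python
   # Plain-Python illustration: element counts vs. index-type limits.
   INT32_MAX = 2**31 - 1   # largest signed 32-bit index
   INT64_MAX = 2**63 - 1   # largest signed 64-bit index
   
   large_shape = (2**32 + 1, 1)                    # shape from the opperf example above
   num_elements = large_shape[0] * large_shape[1]
   
   print(num_elements > INT32_MAX)    # overflows 32-bit indexing
   print(num_elements <= INT64_MAX)   # fits comfortably in 64-bit indexing
   ```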
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services