Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/09/15 05:11:51 UTC

[GitHub] sxjscience commented on a change in pull request #12446: [WIP][Bugfix] Fix flaky topk

URL: https://github.com/apache/incubator-mxnet/pull/12446#discussion_r217876156
 
 

 ##########
 File path: src/operator/tensor/ordering_op-inl.h
 ##########
 @@ -455,8 +457,7 @@ void TopKImpl(const RunContext &ctx,
   // Cast `ret_indices` from int to real_t could introduce conversion error when the element_num
   // is large enough.
   if (param.ret_typ == topk_enum::kReturnMask) {
-    Tensor<xpu, 2, DType> ret_mask =
-      ret[0].get_with_shape<xpu, 2, DType>(Shape2(ret[0].Size(), 1), s);
+    Tensor<xpu, 1, DType> ret_mask = ret[0].FlatTo1D<xpu, DType>(s);
     ret_mask = scalar<DType>(0);
 
 Review comment:
   Now it raises a really weird "CUDA Misaligned Memory Error". I currently have no idea what triggers it; it actually happens when we initialize ret_mask to all zeros.
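
   One plausible explanation (an assumption on my part, not confirmed in this thread) is that the ret_mask view ends up pointing at device memory that is not aligned to sizeof(DType), since TopKImpl carves several typed tensors out of a single byte workspace. The standalone CUDA sketch below (hypothetical names, not MXNet code) reproduces that failure mode: a float view taken at an odd byte offset makes the zero-fill kernel fault with "misaligned address".

// Hedged sketch (assumption, not the MXNet code): a typed view carved out of a
// raw byte workspace at an offset that is not a multiple of sizeof(float)
// makes a zero-fill kernel fail with "misaligned address".
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill_zero(float* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] = 0.0f;  // faults if `data` is not 4-byte aligned
}

int main() {
  void* workspace = nullptr;
  cudaMalloc(&workspace, 1024);
  // Hypothetical 1-byte offset: breaks the natural alignment CUDA requires
  // for float loads/stores.
  float* ret_mask = reinterpret_cast<float*>(static_cast<char*>(workspace) + 1);
  fill_zero<<<1, 128>>>(ret_mask, 128);
  cudaError_t err = cudaDeviceSynchronize();
  printf("%s\n", cudaGetErrorString(err));  // expected: "misaligned address"
  cudaFree(workspace);
  return 0;
}

   If the workspace offsets are the cause, padding each slice to a multiple of sizeof(DType) before taking the FlatTo1D view would be one way to avoid the fault; that is a guess, not something established in this comment.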

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services