Posted to issues@mxnet.apache.org by GitBox <gi...@apache.org> on 2022/02/09 17:38:39 UTC

[GitHub] [incubator-mxnet] gregoryfdel commented on issue #17633: Windows link failures in Debug mode

gregoryfdel commented on issue #17633:
URL: https://github.com/apache/incubator-mxnet/issues/17633#issuecomment-1034023839


   To build on a previous answer, here is what I added to the following files to resolve these link errors. I added each explicit instantiation directly below its declaration; a minimal, generic sketch of the pattern follows the last file's block.
   
   **src/operator/image/image_random.cu**
   ```
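    // Explicit bf16 instantiations for ToTensorImplCUDA (3-D and 4-D inputs)
    // and the Normalize forward/backward kernels.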
   template
   void ToTensorImplCUDA
       <mshadow::bfloat::bf16_t, Tensor<gpu, 3, mshadow::bfloat::bf16_t>, Tensor<gpu, 3, float>>
       (mshadow::Stream<gpu>*,
         const Tensor<gpu, 3, mshadow::bfloat::bf16_t>,
         const Tensor<gpu, 3, float>,
         const int, 
         const float);
   
   template
   void ToTensorImplCUDA
       <mshadow::bfloat::bf16_t, Tensor<gpu, 4, mshadow::bfloat::bf16_t>, Tensor<gpu, 4, float>>
       (mshadow::Stream<gpu>*,
         const Tensor<gpu, 4, mshadow::bfloat::bf16_t>,
         const Tensor<gpu, 4, float>,
         const int, 
         const float);
   
   template
   void NormalizeImplCUDA
       <mshadow::bfloat::bf16_t>
       (mshadow::Stream<gpu> *s,
         const mshadow::bfloat::bf16_t*,
         mshadow::bfloat::bf16_t*,
         const int,
         const int,
         const int,
         const int,
         const int,
         const float,
         const float,
         const float,
         const float,
         const float,
         const float);
   
   template
   void NormalizeBackwardImplCUDA
       <mshadow::bfloat::bf16_t>
       (mshadow::Stream<gpu> *s,
         const mshadow::bfloat::bf16_t*,
         mshadow::bfloat::bf16_t*,
         const int,
         const int,
         const int,
         const int,
         const int,
         const float,
         const float,
         const float);
   
   ```
   
   **src/operator/image/resize.cu**
   ```
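    // Explicit bf16 instantiations of ResizeImplCUDA for 3-D and 4-D tensors.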
   template 
   void ResizeImplCUDA
       <mshadow::bfloat::bf16_t, Tensor<gpu, 3, mshadow::bfloat::bf16_t>, float>
       (Stream<gpu>*, 
         const Tensor<gpu, 3, mshadow::bfloat::bf16_t>, 
         const Tensor<gpu, 3, mshadow::bfloat::bf16_t>);
   
   template 
   void ResizeImplCUDA
       <mshadow::bfloat::bf16_t, Tensor<gpu, 4, mshadow::bfloat::bf16_t>, float>
       (Stream<gpu>*, 
         const Tensor<gpu, 4, mshadow::bfloat::bf16_t>, 
         const Tensor<gpu, 4, mshadow::bfloat::bf16_t>);
   ```
   
   **src/operator/numpy/random/np_multinomial_op.cu**
   ```
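    // Explicit bf16 instantiation of CheckPvalGPU.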
   template 
   void CheckPvalGPU
       <mshadow::bfloat::bf16_t>
       (const OpContext&, 
         mshadow::bfloat::bf16_t* , 
         int);
   ```
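    
    For anyone hitting the same class of error in other files, the additions above are the standard explicit-instantiation pattern. Here is a minimal, self-contained sketch of it (the function name `ScaleImpl` and the float/double type arguments are placeholders, not MXNet's actual signatures; in the blocks above the missing type argument is `mshadow::bfloat::bf16_t`):
    
    ```
    // A function template whose body lives in one translation unit (e.g. a .cu
    // file). Other files only see the declaration, so the compiler emits code
    // here only for the types that are explicitly instantiated below.
    template <typename DType>
    void ScaleImpl(const DType* src, float* dst, int size) {
      for (int i = 0; i < size; ++i)
        dst[i] = static_cast<float>(src[i]) / 255.0f;
    }
    
    // Explicit instantiations placed directly below the definition. Without the
    // line for a given DType, callers in other files link against a symbol that
    // was never emitted -- the "unresolved external symbol" errors seen in
    // Debug builds.
    template void ScaleImpl<float>(const float*, float*, int);
    template void ScaleImpl<double>(const double*, float*, int);
    ```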
   

