Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/21 16:56:20 UTC

[GitHub] [tvm] AndrewZhaoLuo opened a new issue #8295: Vulkan Support for Mixed Precision Pass

AndrewZhaoLuo opened a new issue #8295:
URL: https://github.com/apache/tvm/issues/8295


   Track and fix the issues needed to support the Vulkan target for the mixed precision pass introduced in https://github.com/apache/tvm/pull/8069
   
   Initial issues, as described by @Lunderberg:
   
   > On the Vulkan side, it's something similar, with the validation checks failing an alignment rule.
   
   > > Check failed: res == SPV_SUCCESS (-10 vs. 0) :  index=27 error:Structure id 12 decorated as Block for variable in StorageBuffer storage class must follow standard storage buffer layout rules: member 0 contains an array with stride 6 not satisfying alignment to 8
   > > %_struct_12 = OpTypeStruct %_runtimearr_v3half
   
   This issue is complete when the unit tests pass for the Vulkan target.
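   The quoted validation failure can be reproduced arithmetically. Under the standard storage-buffer layout rules that the SPIR-V validator enforces, a 3-component vector is aligned (and an array of them strided) like a 4-component vector, so a runtime array of `half3` (3 x 2 bytes = 6) needs an 8-byte stride. A minimal sketch of that rule, with illustrative helper names that are not TVM or SPIR-V APIs:

   ```python
   # Sketch of the std430-style rule behind the validator message: a
   # 3-component vector aligns like a 4-component one, so an array of
   # half3 must use an 8-byte stride, not the naive 6.
   # Helper names are illustrative, not real TVM or SPIR-V APIs.

   def scalar_size_bytes(bits: int) -> int:
       return bits // 8

   def required_array_stride(bits: int, lanes: int) -> int:
       # 3-component vectors are padded out to 4 components for alignment.
       effective_lanes = 4 if lanes == 3 else lanes
       return scalar_size_bytes(bits) * effective_lanes

   naive_stride = scalar_size_bytes(16) * 3      # 6: what the failing codegen emitted
   valid_stride = required_array_stride(16, 3)   # 8: what the validator demands
   print(naive_stride, valid_stride)             # → 6 8
   ```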


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] Lunderberg commented on issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-884911702


   Regarding the numerical accuracy, I had a few maybe-similar issues when putting together the unit tests in #8529.  There are a decent number of schedules that perform poorly if the accumulator dtype is float16.  I had a short discussion with @AndrewZhaoLuo last week on how best to implement float32 accumulation in the mixed precision pass, but haven't looked into it much yet.
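   The accumulator-dtype problem is easy to demonstrate outside of TVM. A small NumPy sketch (illustrative only, not the pass itself) comparing an fp16 accumulator against an fp32 accumulator over the same fp16 inputs:

   ```python
   import numpy as np

   # Illustrative only: summing fp16 inputs with an fp16 accumulator loses
   # low-order bits once the running total grows large, while accumulating
   # the same fp16 inputs in fp32 stays close to the true sum (~999.76).
   x = np.full(10000, 0.1, dtype=np.float16)

   acc16 = np.float16(0.0)
   for v in x:
       acc16 = np.float16(acc16 + v)       # fp16 accumulator, as in a naive schedule

   acc32 = x.astype(np.float32).sum()      # fp32 accumulator over the same fp16 inputs

   print(acc16, acc32)  # the fp16 accumulator stalls far below the true sum
   ```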





[GitHub] [tvm] Lunderberg commented on issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
Lunderberg commented on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-884903117


   I ran into a few issues with vectorization when I was running ResNet50 with float16.  If you apply PR #8528, is it still necessary to disable the vectorization?





[GitHub] [tvm] masahi edited a comment on issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-884813492


   I can confirm that TF2 SSD MobileNet V2 can be converted to fp16 and runs on Vulkan (AMD) and OpenCL (Intel Ice Lake), if I disable vectorization on fp16 at https://github.com/apache/tvm/blob/main/python/tvm/topi/cuda/injective.py#L54-L55 (cc @Lunderberg).
   
   But the output from fp16 is a bit off compared to fp32 (on both Vulkan and OpenCL). Also, on other models I got a `float32 vs float16` type mismatch error at the Relay level.
   ```
   fp32
   Mean Squared Error of output 0 and shape (1, 100, 4) is 9.562732618023824e-15
   Mean Squared Error of output 1 and shape (1, 100) is 0.0
   Mean Squared Error of output 2 and shape (1, 100) is 4.539840725570343e-13
   Mean Squared Error of output 3 and shape (1,) is 0.0
   Mean Squared Error of output 4 and shape (1, 12804, 4) is 3.1784283863710294e-13
   Mean Squared Error of output 5 and shape (1, 12804, 91) is 2.194374375133825
   
   fp16
   Mean Squared Error of output 0 and shape (1, 100, 4) is 0.01756046526134014
   Mean Squared Error of output 1 and shape (1, 100) is 8.5600004196167
   Mean Squared Error of output 2 and shape (1, 100) is 5.59057809823571e-07
   Mean Squared Error of output 3 and shape (1,) is 0.0
   Mean Squared Error of output 4 and shape (1, 12804, 4) is 5.098227120470256e-07
   Mean Squared Error of output 5 and shape (1, 12804, 91) is 2.664001463870136e-09
   ``` 
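   The per-output numbers above can be produced by a small helper along these lines (a sketch; the actual measurement script is not part of this thread):

   ```python
   import numpy as np

   def report_mse(ref_outputs, test_outputs):
       # Hypothetical helper reconstructing the report format above, where
       # ref_outputs are fp32 reference results and test_outputs are the
       # fp16 results for the same inputs. Returns the per-output MSEs.
       mses = []
       for i, (ref, out) in enumerate(zip(ref_outputs, test_outputs)):
           mse = float(np.mean((ref.astype(np.float32) - out.astype(np.float32)) ** 2))
           mses.append(mse)
           print(f"Mean Squared Error of output {i} and shape {ref.shape} is {mse}")
       return mses
   ```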





[GitHub] [tvm] AndrewZhaoLuo commented on issue #8295: Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
AndrewZhaoLuo commented on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-865193509


   cc @Lunderberg 





[GitHub] [tvm] masahi commented on issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
masahi commented on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-885204716


   With #8528, I get this error:
   ```
     1: tvm::codegen::CodeGenSPIRV::VisitStmt_(tvm::tir::StoreNode const*)
     0: tvm::codegen::CodeGenSPIRV::StorageInfo::CheckContentType(tvm::runtime::DataType, int)
     File "/home/masa/projects/dev/tvm/src/target/spirv/codegen_spirv.h", line 160
   TVMError: 
   ---------------------------------------------------------------
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   ---------------------------------------------------------------
   
     Check failed: type == expected_type (int32 vs. float32) : Attempted to access buffer T_reshape as element type int32 using an index of size 1 when the element type is float32
   ```
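   For context, the check that fires here is simple in spirit: the SPIR-V codegen binds each buffer with a single element type and rejects any load or store that accesses it as a different type. A rough Python sketch of that invariant (illustrative, not the actual C++ in `codegen_spirv.h`):

   ```python
   # Rough sketch of the invariant behind CheckContentType: a buffer is
   # declared with one element dtype, and every access must match it.
   # Illustrative Python, not the actual C++ implementation.

   def check_content_type(declared_dtype: str, access_dtype: str, index_lanes: int = 1) -> None:
       if access_dtype != declared_dtype:
           raise TypeError(
               f"Attempted to access buffer as element type {access_dtype} "
               f"using an index of size {index_lanes} "
               f"when the element type is {declared_dtype}"
           )

   check_content_type("float32", "float32")    # fine
   # check_content_type("float32", "int32")    # raises, like the error above
   ```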






[GitHub] [tvm] masahi commented on issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
masahi commented on issue #8295:
URL: https://github.com/apache/tvm/issues/8295#issuecomment-902388931


   Vulkan support for fp16 is now fully functional. Thanks, @Lunderberg!





[GitHub] [tvm] masahi closed issue #8295: [AMP] Vulkan Support for Mixed Precision Pass

Posted by GitBox <gi...@apache.org>.
masahi closed issue #8295:
URL: https://github.com/apache/tvm/issues/8295


   

