Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/02/04 23:23:23 UTC

[GitHub] [tvm] icemelon9 commented on a change in pull request #7404: [Relay][Topi][CPU] Dense with weight transform

icemelon9 commented on a change in pull request #7404:
URL: https://github.com/apache/tvm/pull/7404#discussion_r570606734



##########
File path: src/relay/op/nn/nn.h
##########
@@ -88,6 +90,29 @@ bool DenseRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   return true;
 }
 
+template <typename AttrType>

Review comment:
       Should we keep this function in `nn.cc`?

##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -364,14 +364,13 @@ def conv1d_strategy_cpu(attrs, inputs, out_type, target):
 def dense_strategy_cpu(attrs, inputs, out_type, target):
     """dense x86 strategy"""
     strategy = _op.OpStrategy()
-    m, _ = inputs[0].shape
     same_type = inputs[0].dtype == inputs[1].dtype == out_type.dtype
     dtype = inputs[0].dtype
     u8s8s32 = dtype == "uint8" and inputs[1].dtype == "int8" and out_type.dtype == "int32"
     strategy.add_implementation(
-        wrap_compute_dense(topi.x86.dense_nopack),
-        wrap_topi_schedule(topi.x86.schedule_dense_nopack),
-        name="dense_nopack.x86",
+        wrap_compute_dense(topi.x86.dense_pack),
+        wrap_topi_schedule(topi.x86.schedule_dense_pack),
+        name="dense_pack.x86",

Review comment:
       You could add both implementations, but give `dense_pack` a higher plevel.
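
A minimal sketch of what that could look like inside `dense_strategy_cpu`, keeping both kernels registered; the `plevel` values below are illustrative, not taken from the PR:

```python
    strategy = _op.OpStrategy()
    # Prefer the new packed-weight dense kernel by registering it with a
    # higher priority level (plevel).
    strategy.add_implementation(
        wrap_compute_dense(topi.x86.dense_pack),
        wrap_topi_schedule(topi.x86.schedule_dense_pack),
        name="dense_pack.x86",
        plevel=10,  # illustrative value; the highest plevel wins by default
    )
    # Keep the existing no-pack kernel as a lower-priority alternative, so it
    # remains selectable (e.g. via tuning records).
    strategy.add_implementation(
        wrap_compute_dense(topi.x86.dense_nopack),
        wrap_topi_schedule(topi.x86.schedule_dense_nopack),
        name="dense_nopack.x86",
        plevel=5,  # illustrative value
    )
```

`OpStrategy.add_implementation` accepts an optional `plevel` argument; when several implementations are registered for an op, Relay falls back to the one with the highest `plevel` unless tuning logs indicate otherwise.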




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org