Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/05/14 13:48:23 UTC

[GitHub] [tvm] Hzfengsy opened a new pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Hzfengsy opened a new pull request #8044:
URL: https://github.com/apache/tvm/pull/8044


   The last PR of M1c, which enables lowering and building for TensorIR.
   Please note that this PR changes the build process when the input is an IRModule.
   
   Previously, we assumed that an IRModule had already gone through all the passes by the time we built it into a runnable Module.
   
   However, TensorIR regards the IRModule as the schedule unit, so we may want to build an IRModule at any stage (with none, some, or all of the passes applied). We now apply all passes to an IRModule when we build it.
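
   The dispatch described above (accepting a TE schedule, a PrimFunc, or an IRModule, and applying the full pass pipeline at build time) can be sketched in plain Python. This is only an illustration: the classes and pass names below are simplified stand-ins, not the actual TVM API.

   ```python
   # Illustrative sketch of the input dispatch in tvm.lower after this PR.
   # Schedule, PrimFunc, IRModule, and the pass names are simplified
   # stand-ins for the real TVM classes and tir.transform passes.

   class Schedule: ...        # legacy TE schedule
   class PrimFunc: ...        # TensorIR function
   class IRModule:            # the schedule unit for TensorIR
       def __init__(self, funcs=None):
           self.funcs = funcs or {}

   def lower(inputs, args=None, name="main"):
       if isinstance(inputs, Schedule):
           # Legacy TE flow: args are required to form an IRModule.
           if args is None:
               raise ValueError("args must be given for lowering from TE schedule")
           mod = IRModule({name: "formed-from-te-schedule"})
           legacy = True
       elif isinstance(inputs, PrimFunc):
           mod = IRModule({name: inputs})
           legacy = False
       elif isinstance(inputs, IRModule):
           # An IRModule may arrive at any stage of scheduling;
           # all passes are applied when it is built.
           mod = inputs
           legacy = False
       else:
           raise TypeError(f"expected Schedule, PrimFunc or IRModule, got {type(inputs)}")
       # Legacy TE schedules and TensorIR inputs take different phase-1 passes.
       passes = (["InjectPrefetch", "StorageFlatten"] if legacy
                 else ["LowerInitBlock", "FlattenBuffer"])
       return mod, passes
   ```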
   
   cc @tqchen @junrushao1994 @comaniac @jcf94 @xqdan 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] junrushao1994 commented on pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #8044:
URL: https://github.com/apache/tvm/pull/8044#issuecomment-841377189


   Let's fix the CI and get it merged :-)




[GitHub] [tvm] xqdan commented on a change in pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
xqdan commented on a change in pull request #8044:
URL: https://github.com/apache/tvm/pull/8044#discussion_r632896193



##########
File path: python/tvm/driver/build_module.py
##########
@@ -160,16 +173,38 @@ def lower(sch, args, name="main", binds=None, simple_mode=False):
     lower_phase3 = [x[1] for x in add_lower_pass if x[0] > 2]
 
     # Phase 0
-    if isinstance(sch, schedule.Schedule):
-        mod = form_irmodule(sch, args, name, binds)
+    pass_list = lower_phase0
+    is_legacy_te_schedule: bool = False
+
+    if isinstance(inputs, schedule.Schedule):
+        if args is None:
+            raise ValueError("args must be given for lowering from TE schedule")
+        mod = form_irmodule(inputs, args, name, binds)
+        is_legacy_te_schedule = True
+    elif isinstance(inputs, PrimFunc):
+        func = inputs.with_attr("global_symbol", name)
+        if pass_ctx.config.get("tir.noalias", True):
+            func = func.with_attr("tir.noalias", True)
+        mod = tvm.IRModule({name: func})
+    elif isinstance(inputs, IRModule):
+        mod = inputs
     else:
-        mod = sch
+        raise TypeError(
+            f"tvm.lower expected te.Schedule, PrimFunc or IRModule, but got {type(inputs)}"
+        )
 
-    pass_list = lower_phase0
     # Phase 1
+    if is_legacy_te_schedule:
+        pass_list += [
+            tvm.tir.transform.InjectPrefetch(),
+            tvm.tir.transform.StorageFlatten(64, instrument_bound_checkers),
+        ]
     pass_list += [
-        tvm.tir.transform.InjectPrefetch(),
-        tvm.tir.transform.StorageFlatten(64, instrument_bound_checkers),
+        tvm.tir.transform.LowerInitBlock(),
+        tvm.tir.transform.PlanAndUpdateBufferAllocationLocation(),
+        tvm.tir.transform.ConvertBlocksToOpaque(),
+        tvm.tir.transform.CompactBufferAllocation(),
+        tvm.tir.transform.FlattenBuffer(),

Review comment:
       For the is_legacy_te_schedule flow, do we need both StorageFlatten and FlattenBuffer?






[GitHub] [tvm] tqchen merged pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
tqchen merged pull request #8044:
URL: https://github.com/apache/tvm/pull/8044


   




[GitHub] [tvm] tqchen commented on a change in pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
tqchen commented on a change in pull request #8044:
URL: https://github.com/apache/tvm/pull/8044#discussion_r632955612



##########
File path: python/tvm/driver/build_module.py
##########
@@ -160,16 +173,38 @@ def lower(sch, args, name="main", binds=None, simple_mode=False):
     lower_phase3 = [x[1] for x in add_lower_pass if x[0] > 2]
 
     # Phase 0
-    if isinstance(sch, schedule.Schedule):
-        mod = form_irmodule(sch, args, name, binds)
+    pass_list = lower_phase0
+    is_legacy_te_schedule: bool = False
+
+    if isinstance(inputs, schedule.Schedule):
+        if args is None:
+            raise ValueError("args must be given for lowering from TE schedule")
+        mod = form_irmodule(inputs, args, name, binds)
+        is_legacy_te_schedule = True
+    elif isinstance(inputs, PrimFunc):
+        func = inputs.with_attr("global_symbol", name)
+        if pass_ctx.config.get("tir.noalias", True):
+            func = func.with_attr("tir.noalias", True)
+        mod = tvm.IRModule({name: func})
+    elif isinstance(inputs, IRModule):
+        mod = inputs
     else:
-        mod = sch
+        raise TypeError(
+            f"tvm.lower expected te.Schedule, PrimFunc or IRModule, but got {type(inputs)}"
+        )
 
-    pass_list = lower_phase0
     # Phase 1
+    if is_legacy_te_schedule:
+        pass_list += [
+            tvm.tir.transform.InjectPrefetch(),
+            tvm.tir.transform.StorageFlatten(64, instrument_bound_checkers),
+        ]
     pass_list += [
-        tvm.tir.transform.InjectPrefetch(),
-        tvm.tir.transform.StorageFlatten(64, instrument_bound_checkers),
+        tvm.tir.transform.LowerInitBlock(),
+        tvm.tir.transform.PlanAndUpdateBufferAllocationLocation(),
+        tvm.tir.transform.ConvertBlocksToOpaque(),
+        tvm.tir.transform.CompactBufferAllocation(),
+        tvm.tir.transform.FlattenBuffer(),

Review comment:
       We do not. I agree it is helpful to put the new TIR-schedule-specific passes in an else block for now.
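
       The restructuring agreed on here, with the legacy TE passes and the new TIR-schedule passes in separate branches so StorageFlatten and FlattenBuffer never run together, might look like this. It is only a sketch: the pass names are strings standing in for the `tvm.tir.transform` constructors in the diff above.

       ```python
       # Sketch of the phase-1 pass split discussed above. Strings stand in
       # for tvm.tir.transform pass constructors; this is not the real API.

       def phase1_passes(is_legacy_te_schedule, instrument_bound_checkers=False):
           if is_legacy_te_schedule:
               # Legacy TE flow keeps the old flattening pass.
               return [
                   "InjectPrefetch",
                   f"StorageFlatten(64, {instrument_bound_checkers})",
               ]
           else:
               # New TIR-schedule-specific passes; FlattenBuffer replaces
               # StorageFlatten in this branch.
               return [
                   "LowerInitBlock",
                   "PlanAndUpdateBufferAllocationLocation",
                   "ConvertBlocksToOpaque",
                   "CompactBufferAllocation",
                   "FlattenBuffer",
               ]
       ```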







[GitHub] [tvm] tqchen commented on a change in pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
tqchen commented on a change in pull request #8044:
URL: https://github.com/apache/tvm/pull/8044#discussion_r632955391



##########
File path: python/tvm/driver/build_module.py
##########
@@ -119,32 +125,39 @@ def form_irmodule(sch, args, name, binds):
     return tvm.IRModule({name: func})
 
 
-def lower(sch, args, name="main", binds=None, simple_mode=False):
+def lower(
+    inputs: Union[schedule.Schedule, PrimFunc, IRModule],
+    args: Optional[List[Union[Buffer, tensor.Tensor, Var]]] = None,
+    name: str = "main",
+    binds: Optional[Mapping[tensor.Tensor, Buffer]] = None,
+    simple_mode: bool = False,
+) -> IRModule:
     """Lowering step before build into target.
 
     Parameters
     ----------
-    sch : tvm.te.schedule.Schedule
-        The schedule to be built
+    inputs : Union[schedule.Schedule, PrimFunc, IRModule]
+        The TE schedule or TensorIR PrimFunc/IRModule to be built
 
-    args : list of Buffer or Tensor or Var
-        The argument lists to the function.
+    args : Optional[List[Union[Buffer, tensor.Tensor, Var]]]
+        The argument lists to the function for TE schedule.
+        It should be None if we want to lower TensorIR.
 
-    name : str, optional
+    name : str
         The name of result function.
 
-    binds : dict of :any:`Tensor` to :any:`Buffer`, optional
+    binds : Optional[Mapping[tensor.Tensor, Buffer]]
         Dictionary that maps the Tensor to Buffer which specified the data layout
         requirement of the function. By default, a new compact buffer is created
         for each tensor in the argument.
 
-    simple_mode : bool, optional
+    simple_mode : bool
         Whether only output simple and compact statement, this will skip
         LoopPartition, api wrapper generation and Unrolling.
 
     Returns
     -------
-    m : IRModule or Stmt
+    m : IRModule
        The result IRModule, if simple_mode=False
        Then the Stmt before make api is returned.

Review comment:
       I agree. @Hzfengsy, let us remove the simple_mode comment from the Returns section since it no longer applies.






[GitHub] [tvm] xqdan commented on a change in pull request #8044: [TensorIR][M1c] Lower and build TensorIR

Posted by GitBox <gi...@apache.org>.
xqdan commented on a change in pull request #8044:
URL: https://github.com/apache/tvm/pull/8044#discussion_r632895555



##########
File path: python/tvm/driver/build_module.py
##########
@@ -119,32 +125,39 @@ def form_irmodule(sch, args, name, binds):
     return tvm.IRModule({name: func})
 
 
-def lower(sch, args, name="main", binds=None, simple_mode=False):
+def lower(
+    inputs: Union[schedule.Schedule, PrimFunc, IRModule],
+    args: Optional[List[Union[Buffer, tensor.Tensor, Var]]] = None,
+    name: str = "main",
+    binds: Optional[Mapping[tensor.Tensor, Buffer]] = None,
+    simple_mode: bool = False,
+) -> IRModule:
     """Lowering step before build into target.
 
     Parameters
     ----------
-    sch : tvm.te.schedule.Schedule
-        The schedule to be built
+    inputs : Union[schedule.Schedule, PrimFunc, IRModule]
+        The TE schedule or TensorIR PrimFunc/IRModule to be built
 
-    args : list of Buffer or Tensor or Var
-        The argument lists to the function.
+    args : Optional[List[Union[Buffer, tensor.Tensor, Var]]]
+        The argument lists to the function for TE schedule.
+        It should be None if we want to lower TensorIR.
 
-    name : str, optional
+    name : str
         The name of result function.
 
-    binds : dict of :any:`Tensor` to :any:`Buffer`, optional
+    binds : Optional[Mapping[tensor.Tensor, Buffer]]
         Dictionary that maps the Tensor to Buffer which specified the data layout
         requirement of the function. By default, a new compact buffer is created
         for each tensor in the argument.
 
-    simple_mode : bool, optional
+    simple_mode : bool
         Whether only output simple and compact statement, this will skip
         LoopPartition, api wrapper generation and Unrolling.
 
     Returns
     -------
-    m : IRModule or Stmt
+    m : IRModule
        The result IRModule, if simple_mode=False
        Then the Stmt before make api is returned.

Review comment:
       We may need to update the Stmt-related comment here.



