Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/04 21:42:09 UTC

[GitHub] [tvm] Mousius commented on a change in pull request #8914: Run InferType in LowerTEPass and remove per_target_modules_ from the Interpreter

Mousius commented on a change in pull request #8914:
URL: https://github.com/apache/tvm/pull/8914#discussion_r702331382



##########
File path: src/relay/backend/te_compiler.cc
##########
@@ -918,33 +918,15 @@ Map<Target, IRModule> GetPerTargetModules(IRModule mod) {
   return per_target_modules;
 }
 
-IRModule GetMainModule(IRModule mod) {
-  IRModule main_module;
-  // Copy the type defs
-  for (const auto& kv : mod->type_definitions) {
-    main_module->AddTypeDef(kv.first, kv.second);
-  }
-  // Copy all Relay functions (we don't include PrimFuncs)
-  for (auto kv : mod->functions) {
-    const GlobalVar& var = kv.first;
-    const BaseFunc& func = kv.second;
-    if (func->IsInstance<tvm::relay::FunctionNode>()) {
-      main_module->Add(var, func);
-    }
-  }
-  return main_module;
-}
-
 Pass LowerTEPass(TargetMap targets, DeviceMap device_context_map,
                  backend::StaticMemoryPlan memory_plan, const String& module_name,
                  std::function<void(Function)> process_fn) {
   runtime::TypedPackedFunc<IRModule(IRModule, PassContext)> pass_func = [=](IRModule module,
                                                                             PassContext ctx) {
     return LowerTE(module, targets, device_context_map, memory_plan, module_name, process_fn);
   };
-  // TODO(@electriclilies, mbs): Fold InferType() pass into LowerTEPass since it will always need to
-  // be called afterwards
-  return tvm::transform::CreateModulePass(pass_func, 1, "LowerTE", {});
+  return tvm::transform::Sequential(
+      {tvm::transform::CreateModulePass(pass_func, 1, "LowerTE", {}), InferType()});

Review comment:
       `opt_level` should be `0` for `LowerTE` so it always runs - it wasn't checked before this change, but `Sequential` checks it when deciding whether to run the pass (that's why you're seeing the weird graph executor codegen failures: the test runs with `opt_level=0`)
   
   As a side note, we should keep the test to ensure things compile successfully at `opt_level=0` even after the compile engine stuff is removed :smile_cat: 
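    To make the gating behaviour concrete, here is a minimal standalone sketch of how a `Sequential`-style runner skips passes by `opt_level` - this is a simplified toy model for illustration, not TVM's actual `Sequential` implementation, and all names in it are hypothetical:

    ```cpp
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical, simplified model of pass gating -- not TVM's real code.
    struct PassContext { int opt_level; };

    struct Pass {
      std::string name;
      int opt_level;  // minimum context opt_level required to run this pass
      std::function<void()> body;
    };

    // A Sequential-style runner: a pass is skipped when its required
    // opt_level exceeds the context's opt_level. A pass registered at
    // opt_level 0 therefore always runs, even under PassContext{0}.
    void RunSequential(const std::vector<Pass>& passes, const PassContext& ctx,
                       std::vector<std::string>* ran) {
      for (const auto& pass : passes) {
        if (pass.opt_level > ctx.opt_level) continue;  // gated out
        pass.body();
        ran->push_back(pass.name);
      }
    }
    ```

    Under this model, a `LowerTE` registered at level 1 would be silently skipped when the context runs at `opt_level=0`, which matches the failure mode described above.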

##########
File path: src/relay/ir/transform.cc
##########
@@ -133,9 +133,8 @@ IRModule FunctionPassNode::operator()(IRModule mod, const PassContext& pass_ctx)
   DLOG(INFO) << "Executing function pass : " << pass_info->name
              << " with opt level: " << pass_info->opt_level;
 
-  // Execute the pass function and return a new module.
   IRModule updated_mod =
-      IRModule(mod->functions, mod->type_definitions, mod->Imports(), mod->source_map);
+      IRModule(mod->functions, mod->type_definitions, mod->Imports(), mod->source_map, mod->attrs);

Review comment:
       Is this getting to the point where a copy constructor would be more appropriate so it'd just be `IRModule(mod)` rather than having to replicate the arguments?
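    For illustration, here is a toy sketch of the two styles being contrasted - a stripped-down stand-in for `IRModule` with purely hypothetical field and constructor names, not TVM's actual API:

    ```cpp
    #include <map>
    #include <string>
    #include <utility>

    // Hypothetical, stripped-down stand-in for IRModule.
    struct Module {
      std::map<std::string, std::string> functions;
      std::map<std::string, std::string> type_definitions;
      std::map<std::string, int> attrs;  // the field the diff had to thread through

      Module() = default;

      // Style (a): a field-by-field constructor. Every call site must list
      // every field, so a newly added field (like attrs) is easy to drop.
      Module(std::map<std::string, std::string> funcs,
             std::map<std::string, std::string> tydefs,
             std::map<std::string, int> a)
          : functions(std::move(funcs)),
            type_definitions(std::move(tydefs)),
            attrs(std::move(a)) {}

      // Style (b): a copy constructor. `Module copy(mod);` carries over all
      // fields automatically, present and future.
      Module(const Module& other) = default;
    };
    ```

    With style (b), adding a new field to the struct requires no changes at the copy sites, which is the maintenance win the comment is pointing at.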

##########
File path: src/ir/module.cc
##########
@@ -43,7 +43,8 @@ namespace tvm {
 
 IRModule::IRModule(tvm::Map<GlobalVar, BaseFunc> functions,
                    tvm::Map<GlobalTypeVar, TypeData> type_definitions,
-                   std::unordered_set<String> import_set, parser::SourceMap source_map) {
+                   std::unordered_set<String> import_set, parser::SourceMap source_map,
+                   DictAttrs attrs) {

Review comment:
       Should we add `attrs` to `SEqualReduce` (`return equal(attrs, other->attrs);`) and `SHashReduce` (`hash_reduce(attrs)`) ?
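    The concern here can be sketched with a toy structural-equality / structural-hash protocol, loosely modeled on TVM's `SEqualReduce`/`SHashReduce` - the signatures below are illustrative, not TVM's actual ones:

    ```cpp
    #include <cstddef>
    #include <functional>
    #include <string>

    // Toy node loosely modeled on the SEqualReduce/SHashReduce protocol.
    struct Node {
      std::string name;
      int attrs;  // stand-in for the module-level attrs dictionary

      // Fold every semantically relevant field into the equality check.
      // Forgetting a field (e.g. attrs) makes two modules that differ only
      // in that field compare as structurally equal.
      bool SEqualReduce(const Node& other) const {
        return name == other.name && attrs == other.attrs;
      }

      // The hash must reduce the same fields as equality, so that
      // structurally equal nodes always hash equally.
      std::size_t SHashReduce() const {
        std::size_t h = std::hash<std::string>{}(name);
        h ^= std::hash<int>{}(attrs) + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
      }
    };
    ```

    The point being raised: if `attrs` is folded into equality but not into hashing (or vice versa), the two protocols disagree and hashed containers can misbehave, so the two reductions should be updated together.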




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org