Posted to discuss-archive@tvm.apache.org by Nam Nguyen Duc via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/08/29 08:18:25 UTC

[Apache TVM Discuss] [Questions] How to apply best history after Auto Scheduler for relay.vm.compile


I'm using the auto-scheduler to find the best performance for my model. However, when I apply the best history like this:
```python
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"], config={"relay.backend.use_auto_scheduler": True}):
            vm_exec = relay.vm.compile(mod, target=TARGET, params=params)
```
a lot of warnings are thrown to the terminal, and the compiled model is no faster:
```
Cannot find config for target=llvm -keys=cpu -libs=mkl -link-params=0 -mcpu=core-avx2, workload=('conv2d_NCHWc.x86' .....
```
My model cannot be run with `relay.build`, so how can I apply the log file with `relay.vm.compile`?
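As an aside, the `Cannot find config for target...` lines come from AutoTVM's fallback-configuration path rather than from the auto-scheduler itself, so they can be quieted independently of whether the tuning records apply. A minimal sketch, assuming the warnings are emitted on TVM's standard `"autotvm"` Python logger:

```python
import logging

# Assumption: TVM emits the "Cannot find config for target=..." fallback
# warnings on the "autotvm" logger. Raising its level hides them; note
# this only silences the output -- it does not make the auto-scheduler
# records apply (that is controlled by ApplyHistoryBest plus the
# "relay.backend.use_auto_scheduler" PassContext config key).
logging.getLogger("autotvm").setLevel(logging.ERROR)
```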





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/email/unsubscribe/19d012fd4a2e8aa280a49b22d3ea2068e209c5a75d87bfc16b48d56daa47bf3a).


Posted by Nam Nguyen Duc via Apache TVM Discuss <no...@discuss.tvm.ai>.

Are there any other solutions?
I spent a lot of time tuning the model, but now I can't use the results. :disappointed_relieved:





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/3) to respond.



Posted by Andrey Malyshev via Apache TVM Discuss <no...@discuss.tvm.ai>.

[quote="namduc, post:1, topic:10908"]
`relay.vm.compile`
[/quote]
Virtual machine execution cannot be tuned so far.





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/2) to respond.



Posted by Nam Nguyen Duc via Apache TVM Discuss <no...@discuss.tvm.ai>.

@jwfromm Thanks for your support!
I'm using a model architecture customized from Mask R-CNN.
Here is my tuning script:
```python
import time

import torch
import tvm
from tvm import relay, auto_scheduler
from tvm.runtime.vm import VirtualMachine

TARGET = tvm.target.Target("llvm -mcpu=broadwell")
log_file = "card_extraction-autoschedule.json"

# `model` and `sample` are defined earlier in the original script.
dummy_input = torch.randn(1, 3, 800, 800, device="cpu", requires_grad=True)
model = torch.jit.trace(model, dummy_input)
mod, params = relay.frontend.from_pytorch(model, input_infos=[("input0", dummy_input.shape)])

print("Extract tasks...")
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, TARGET)

for idx, task in enumerate(tasks):
    print("========== Task %d (workload key: %s) ==========" % (idx, task.workload_key))
    print(task.compute_dag)

def run_tuning():
    print("Begin tuning...")
    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
    tune_option = auto_scheduler.TuningOptions(
        num_measure_trials=20000,
        runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
    tuner.tune(tune_option)

run_tuning()

# I apply the log file here when compiling the model
with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3,
        disabled_pass=["FoldScaleAxis"],
        config={"relay.backend.use_auto_scheduler": True},
    ):
        vm_exec = relay.vm.compile(mod, target=TARGET, params=params)

dev = tvm.cpu()
vm = VirtualMachine(vm_exec, dev)
start_t = time.time()
vm.set_input("main", **{"input0": sample.cpu().numpy()})
tvm_res = vm.run()
print(tvm_res[0].numpy().tolist())
print("Inference time of model after tuning: {:0.4f}".format(time.time() - start_t))
```
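One quick sanity check on a script like this is whether the log file actually contains tuning records, and for how many distinct workloads. The sketch below uses only the standard library and assumes the auto-scheduler measure-record layout in which each line is a JSON object whose workload key sits at `record["i"][0][0]`; the file name and the synthetic records are illustrative, not taken from the thread.

```python
import json
from collections import Counter

def count_records(log_path):
    """Count tuning records per workload key in an auto-scheduler JSON log.

    Assumption: each non-empty line is a JSON object with the workload
    key string at record["i"][0][0], as in the auto-scheduler
    measure-record format.
    """
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            counts[rec["i"][0][0]] += 1
    return counts

# Tiny synthetic log standing in for card_extraction-autoschedule.json,
# written in the assumed record layout so the sketch runs on its own.
with open("fake_log.json", "w") as f:
    f.write(json.dumps({"i": [["wk1", "llvm"], []], "r": [[0.1], 0, 1, 0]}) + "\n")
    f.write(json.dumps({"i": [["wk1", "llvm"], []], "r": [[0.2], 0, 1, 0]}) + "\n")

print(count_records("fake_log.json"))
```

If a task printed by `extract_tasks` has no matching key in the counts, `ApplyHistoryBest` has nothing to apply for that task, which would explain fallback warnings at compile time.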





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/5) to respond.



Posted by Josh Fromm via Apache TVM Discuss <no...@discuss.tvm.ai>.

Hi @namduc, deploying with the VM after auto-scheduling should be fine, and it's not clear why the auto-scheduler thinks your logs don't apply to your model. Would it be possible for you to post your tuning script as well?





---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-apply-best-history-after-auto-scheduler-for-relay-vm-compile/10908/4) to respond.
