Posted to discuss-archive@tvm.apache.org by 李泽旭 via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/11/17 08:21:46 UTC

[Apache TVM Discuss] [Questions] [heterogeneous execution] How to use heterogeneous execution on multiple GPU?


[quote="Maxwell-Hu, post:1, topic:4347, full:true"]
I'm interested in heterogeneous execution on multiple GPUs; the basic idea is to schedule different ops to different GPUs.

What I expected was:
- GPU-0 executes 'sigmoid' and 'tanh'
- GPU-1 executes 'nn.dense'

However, it seems that all the operators are executed on GPU-0: I observed that only GPU-0 is busy, even when I place all ops on GPU-1.

Any comments and suggestions are greatly appreciated.
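
Concretely, the expected placement corresponds to `on_device` annotations along these lines (a minimal sketch only; the shapes and variable names are made up for illustration):

    import tvm
    from tvm import relay

    # Toy inputs, just for illustration.
    x = relay.var("x", shape=(8, 16))
    w = relay.var("w", shape=(32, 16))

    # sigmoid and tanh pinned to GPU-0, nn.dense pinned to GPU-1.
    h = relay.annotation.on_device(relay.sigmoid(x), tvm.gpu(0))
    h = relay.annotation.on_device(relay.tanh(h), tvm.gpu(0))
    out = relay.annotation.on_device(relay.nn.dense(h, w), tvm.gpu(1))
    func = relay.Function([x, w], out)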

![image|558x312](upload://eJHnE862ilt11RzMtdqIiw4TcvG.png) 

The code is adapted from this [Heterogeneous execution example](https://discuss.tvm.ai/t/relay-homogenous-compilation-example/2539/5?u=maxwell-hu).

    import tvm
    from tvm import relay
    from tvm.relay.expr_functor import ExprMutator

    # Look up the Relay op objects for the ops intended for each GPU.
    annotated_ops_list_gpu0 = {"sigmoid", "tanh"}
    annotated_relay_ops_gpu0 = [tvm.relay.op.get(op) for op in annotated_ops_list_gpu0]
    annotated_ops_list_gpu1 = {"nn.dense"}
    annotated_relay_ops_gpu1 = [tvm.relay.op.get(op) for op in annotated_ops_list_gpu1]

    class ScheduleDense(ExprMutator):
        """Annotate each call so its op group is pinned to the intended device."""

        def __init__(self, device_0, device_1):
            self.device_0 = device_0
            self.device_1 = device_1
            super(ScheduleDense, self).__init__()

        def visit_call(self, expr):
            visit = super().visit_call(expr)
            if expr.op in annotated_relay_ops_gpu0:
                return relay.annotation.on_device(visit, self.device_0)
            elif expr.op in annotated_relay_ops_gpu1:
                return relay.annotation.on_device(visit, self.device_1)
            else:
                return visit

    def schedule_dense_on_gpu(expr):
        # sigmoid/tanh -> GPU-0, nn.dense -> GPU-1
        sched = ScheduleDense(tvm.gpu(0), tvm.gpu(1))
        return sched.visit(expr)
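
For completeness, here is roughly how the annotation pass above is driven end to end, following the linked example (a minimal sketch: the `RewriteAnnotatedOps`, `relay.build`, and `graph_runtime.create` calls mirror that example's older TVM API, and the toy network is made up for illustration):

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_runtime

    # Toy network containing the three ops of interest.
    x = relay.var("x", shape=(8, 16))
    w = relay.var("w", shape=(32, 16))
    y = relay.nn.dense(relay.tanh(relay.sigmoid(x)), w)
    func = relay.Function([x, w], y)

    # Attach the on_device annotations, then rewrite them into explicit
    # device_copy nodes; GPU-0 is used as the fallback device here.
    mod = tvm.IRModule.from_expr(schedule_dense_on_gpu(func))
    mod = relay.transform.RewriteAnnotatedOps(tvm.gpu(0).device_type)(mod)

    with tvm.transform.PassContext(opt_level=3):
        graph, lib, params = relay.build(mod, target="cuda")

    # Hand both GPU contexts to the graph runtime; per-node placement is
    # taken from the device annotations baked into the graph.
    m = graph_runtime.create(graph, lib, [tvm.gpu(0), tvm.gpu(1)])
    m.set_input(**params)
    m.set_input("x", np.random.rand(8, 16).astype("float32"))
    m.set_input("w", np.random.rand(32, 16).astype("float32"))
    m.run()

Note that the linked example mixes two different device types (CPU and GPU); whether the graph runtime distinguishes two CUDA contexts that differ only in device id is exactly what this question is about.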
[/quote]
Hello, is this problem solved?





---
[Visit Topic](https://discuss.tvm.apache.org/t/heterogeneous-execution-how-to-use-heterogeneous-execution-on-multiple-gpu/4347/4) to respond.
