Posted to discuss-archive@tvm.apache.org by chenugray via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/12/30 09:02:21 UTC

[Apache TVM Discuss] [Questions] Bert-large masked lm pre-quantization model build failed


    import os

    import torch
    from pytorch_pretrained_bert import BertForMaskedLM

    import tvm
    from tvm import relay

    def main():
        # Load the FP32 BERT-large masked-LM model and a dummy batch of token ids.
        bert_model_origin = BertForMaskedLM.from_pretrained("bert-large-uncased")
        example_tensor = torch.randint(0, 100, (1, 256))
        # Dynamically quantize the Linear layers to int8 (the keyword is qconfig_spec).
        model_int8 = torch.quantization.quantize_dynamic(
            bert_model_origin, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
        model_int8.eval()
        # Trace the quantized model so TVM's PyTorch frontend can ingest it.
        trace_model = torch.jit.trace(model_int8, [example_tensor])
        trace_model.eval()
        # Recover input names and shapes from the traced graph (skip the self arg).
        shape_list = [(i.debugName().split('.')[0], i.type().sizes())
                      for i in list(trace_model.graph.inputs())[1:]]
        mod_bert, params_bert = tvm.relay.frontend.pytorch.from_pytorch(trace_model, shape_list)
        # Compile for CPU and export the resulting library.
        target = tvm.target.Target(target="llvm", host="llvm")
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod_bert, target=target, params=params_bert)
            lib.export_library(os.path.realpath("net_int8_cpu.tar"))

    if __name__ == "__main__":
        main()
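
As a quick isolation step, the traced module can be run in plain PyTorch first; if this forward pass fails as well, the problem is in quantization or tracing rather than in the TVM frontend or relay.build (a minimal check, reusing the variables from the script above):

    with torch.no_grad():
        # A traced BertForMaskedLM is expected to return the prediction scores tensor.
        out = trace_model(example_tensor)
    print(type(out))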

See the code above: when building this pre-quantized BERT-large masked-LM model, the build fails like this:
![image|690x233](upload://lSAlz1g5rkG9VbI3TOkfSZMdFaV.png)
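
In case the screenshot is unreadable in the archive, the complete traceback can be captured by wrapping the conversion and build steps (a minimal sketch, reusing the variables from the script above):

    import traceback

    try:
        mod_bert, params_bert = tvm.relay.frontend.pytorch.from_pytorch(trace_model, shape_list)
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod_bert, target=target, params=params_bert)
    except Exception:
        # Print the full error text instead of relying on the screenshot.
        print(traceback.format_exc())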

---
[Visit Topic](https://discuss.tvm.apache.org/t/bert-large-masked-lm-pre-quantization-model-build-failed/11800/1) to respond.
