Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/24 11:55:58 UTC

[GitHub] [tvm] jiangjiajun opened a new pull request, #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun opened a new pull request, #11108:
URL: https://github.com/apache/tvm/pull/11108

   The `tvmc` command throws an error when the passed model path does not exist. But a PaddlePaddle model consists of two files, `model_name.pdmodel` and `model_name.pdiparams`, and we pass only their common prefix, e.g. `inference_model/model_name`.
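A minimal illustrative sketch (not code from the PR) of why a plain existence check trips over the Paddle two-file layout: the bare prefix is not itself a file, even when both model files are present.

```python
import os
import tempfile

# A Paddle "model" path is a prefix naming two files: <prefix>.pdmodel
# and <prefix>.pdiparams. A naive os.path.exists() check on the bare
# prefix fails even though both files exist on disk.
workdir = tempfile.mkdtemp()
prefix = os.path.join(workdir, "model_name")
for suffix in (".pdmodel", ".pdiparams"):
    open(prefix + suffix, "w").close()

print(os.path.exists(prefix))  # False: the bare prefix is not a file
print(all(os.path.exists(prefix + s) for s in (".pdmodel", ".pdiparams")))  # True
```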
   @leandron 
   
   Thanks for contributing to TVM! Please refer to the guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in the pull request thread.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] gromero commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

Posted by GitBox <gi...@apache.org>.
gromero commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1125834292

   @jiangjiajun Hi. Let me know if you need any help with the test or with the fix itself.




[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1131406335

   @leandron @gromero Hi, is there any remaining problem? I think this pull request is ready to merge.




[GitHub] [tvm] junrushao1994 commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

junrushao1994 commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1156260572

   > I have only one comment in there. Once that is fixed, I'm happy for this PR to be merged.
   
   Looks like this thread has been inactive for a month, while according to the conversation above the PR itself is mature enough to merge. To this end, and to quickly unblock our open-source contributors, I would love to propose that we create a new PR based on the latest HEAD and get it merged as soon as we can.




[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1151139719

   Hi, after I removed the tflite test from the command-line tests, the pull request passed CI.
   
   Since this pull request is meant to provide command-line testing for the Paddle frontend, I removed the unrelated code.
   
   Could you help review and merge this PR? @gromero @leandron 




[GitHub] [tvm] Liliyaw commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

Liliyaw commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1154663814

   @gromero Hi Gromero, we need this fixed soon, so we hope to get your support in reviewing it. Thank you so much for the great support!




[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1126709806

   @gromero I have updated the code; could you help review this PR? Also, a strange error happened in CI. Is it caused by this PR? I have tested in my environment without any failures.




[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1126073336

   @gromero Sorry, I'm supporting a new project on the Paddle team these days. I saw you committed code for the command line; I'll check it and update this pull request next week.




[GitHub] [tvm] gromero commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

gromero commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1108613361

   @jiangjiajun Hi. Thanks for reporting this issue and for the fix.
   
   I think the Paddle import kind of violates the expectation that `FILE` is always a file in `tvmc compile`...
   
   @jiangjiajun @leandron My preference, for consistency, is to keep the `FILE` argument passed to `tvmc compile` as only a file (not a directory). In this case we would need to change the current behavior and load a Paddle model by passing a `model_name.pdmodel` file, for example:
   
   ```$ tvmc compile --target "llvm" ~/scripts/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/inference.pdmodel```
   
   instead of a directory as we do currently. Then it would be necessary to change the [load() method](https://github.com/apache/tvm/blob/main/python/tvm/driver/tvmc/frontends.py#L273) of `class PaddleFrontend`, perhaps using `os.path.splitext()`, to acquire the "Directory path to save model + model name without suffix" expected by Paddle's [load_inference_model](https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/static/load_inference_model_en.html).
   
   This would also have the benefit that the model format could now be guessed for Paddle as well. (@jiangjiajun I think that before this issue one needed to pass the model format explicitly for Paddle models (i.e. `--model-format paddle`): although we have `suffixes()` properly in place for Paddle, there is no way `guess_frontend()` can use it when `FILE` is in the `inference_model/model_name` form?)
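A rough sketch of the prefix derivation described above, using a hypothetical helper (not code from the PR or from `frontends.py`):

```python
import os.path

def paddle_prefix(pdmodel_path):
    # Hypothetical helper: strip the ".pdmodel" extension to recover the
    # "directory path + model name without suffix" prefix that Paddle's
    # load_inference_model expects.
    prefix, ext = os.path.splitext(pdmodel_path)
    if ext != ".pdmodel":
        raise ValueError(f"expected a .pdmodel file, got {pdmodel_path!r}")
    return prefix

print(paddle_prefix("inference_model/model_name.pdmodel"))
# inference_model/model_name
```

With `FILE` being a real `.pdmodel` file, the `.pdmodel` suffix is also visible to `guess_frontend()`, so `--model-format paddle` could become optional.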
   
   Finally, we need a test case for compiling a simple Paddle model to avoid this kind of regression (to be submitted with this PR as well).
   
   @jiangjiajun @leandron WDYT?




[GitHub] [tvm] leandron commented on a diff in pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

leandron commented on code in PR #11108:
URL: https://github.com/apache/tvm/pull/11108#discussion_r876891742


##########
tests/python/driver/tvmc/test_command_line.py:
##########
@@ -155,3 +157,41 @@ def test_tvmc_tune_file_check(capsys, invalid_input):
     )
     on_assert_error = f"'tvmc tune' failed to check invalid FILE: {invalid_input}"
     assert captured.err == expected_err, on_assert_error
+
+
+@pytest.fixture
+def paddle_model(paddle_resnet50):
+    # If we can't import "paddle" module, skip testing paddle as the input model.
+    if pytest.importorskip("paddle", reason="'paddle' module not installed"):
+        return paddle_resnet50
+
+
+@pytest.fixture
+def tflite_model(tflite_mobilenet_v1_1_quant):
+    if pytest.importorskip("tflitex", reason="'tflite' module not installed"):

Review Comment:
   ```suggestion
       if pytest.importorskip("tflite", reason="'tflite' module not installed"):
   ```



##########
tests/python/driver/tvmc/test_command_line.py:
##########
@@ -155,3 +157,41 @@ def test_tvmc_tune_file_check(capsys, invalid_input):
     )
     on_assert_error = f"'tvmc tune' failed to check invalid FILE: {invalid_input}"
     assert captured.err == expected_err, on_assert_error
+
+
+@pytest.fixture
+def paddle_model(paddle_resnet50):
+    # If we can't import "paddle" module, skip testing paddle as the input model.
+    if pytest.importorskip("paddle", reason="'paddle' module not installed"):
+        return paddle_resnet50
+
+
+@pytest.fixture
+def tflite_model(tflite_mobilenet_v1_1_quant):
+    if pytest.importorskip("tflitex", reason="'tflite' module not installed"):

Review Comment:
   I think this is being always skipped because we don't have a `tflitex` module.





[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1132370741

   > Sorry I forgot to review this one again. I have only one comment in there. Once that is fixed, I'm happy for this PR to be merged.
   
   @leandron Fixed. But CI failed again, and I have no idea why this error happened. Is it caused by this pull request? https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-11108/10/pipeline




[GitHub] [tvm] gromero commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

gromero commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1110471946

   > @gromero @leandron I have no problem with that. Could you provide an example of a test case for another tvmc frontend? I will update this PR.
   
   @jiangjiajun So, since I understand we need to test the code path between tvmc's `main()` (in `main.py`) and `compile_model()` (in `compiler.py`), which includes exercising any logic/checks that validate the `args` attributes passed to `frontends.load_model()`, plus the load method itself (i.e. it's a command line test), I suggest something like:
   
   ```
   diff --git a/tests/python/driver/tvmc/test_command_line.py b/tests/python/driver/tvmc/test_command_line.py
   index 5b15492aa..1d9180fed 100644
   --- a/tests/python/driver/tvmc/test_command_line.py
   +++ b/tests/python/driver/tvmc/test_command_line.py
   @@ -20,9 +20,11 @@ import pytest
    import shutil
    
    from pytest_lazyfixture import lazy_fixture
   +from unittest import mock
   +
    from tvm.driver.tvmc.main import _main
    from tvm.driver.tvmc.model import TVMCException
   -
   +from tvm.driver.tvmc import compiler
    
    @pytest.mark.skipif(
        platform.machine() == "aarch64",
   @@ -155,3 +157,25 @@ def test_tvmc_tune_file_check(capsys, invalid_input):
        )
        on_assert_error = f"'tvmc tune' failed to check invalid FILE: {invalid_input}"
        assert captured.err == expected_err, on_assert_error
   +
   +@pytest.fixture
   +def paddle_model(paddle_resnet50):
   +    # If we can't import "paddle" module, skip testing paddle
   +    if pytest.importorskip("paddle", reason="'paddle' module not installed"):
   +        return paddle_resnet50
   +
   +@pytest.mark.parametrize("model", [lazy_fixture("paddle_model"), ])
   +# compile_model() can take too long and is tested elsewhere, hence it's mocked below
   +@mock.patch.object(compiler, "compile_model")
   +def test_tvmc_compile_input_model(mock_compile_model, tmpdir_factory, model):
   +
   +    output_dir = tmpdir_factory.mktemp("output")
   +    output_file = output_dir / "model.tar"
   +
   +    compile_cmd =  f"tvmc compile --target 'llvm' {model} --model-format paddle --output {output_file}"
   +    run_arg = compile_cmd.split(" ")[1:]
   +
   +    _main(run_arg)
   +
   +    mock_compile_model.assert_called_once()
   ```
   
   This way we could also easily extend `test_tvmc_compile_input_model` to test other input formats, for instance a `tflite` model, by adding:
   
   ```
   diff --git a/tests/python/driver/tvmc/test_command_line.py b/tests/python/driver/tvmc/test_command_line.py
   index 505faea65..8e4a116b4 100644
   --- a/tests/python/driver/tvmc/test_command_line.py
   +++ b/tests/python/driver/tvmc/test_command_line.py
   @@ -164,7 +164,14 @@ def paddle_model(paddle_resnet50):
        if pytest.importorskip("paddle", reason="'paddle' module not installed"):
            return paddle_resnet50
    
   -@pytest.mark.parametrize("model", [lazy_fixture("paddle_model"), ])
   +
   +@pytest.fixture
   +def tflite_model(tflite_mobilenet_v1_1_quant):
   +    if pytest.importorskip("tflite", reason="'tflite' module not installed"):
   +        return tflite_mobilenet_v1_1_quant
   +
   +
   +@pytest.mark.parametrize("model", [lazy_fixture("paddle_model"), lazy_fixture("tflite_model"), ])
    # compile_model() can take too long and is tested elsewhere, hence it's mocked below
    @mock.patch.object(compiler, "compile_model")
    # @mock.patch.object(compiler, "compile_model")
   ```
   
   I've also mocked `compile_model()` because it takes too long and is also tested elsewhere.
   
   Also, `compile_cmd` in the test won't need `--model-format paddle` after the fix in the sense we've discussed, so it would work for other formats too.
   
   Note I've mentioned the `tflite` format just as an example on how to extend the test in the future, it's out of scope for the PR.
   
   You probably know it already, but just in case: you can use the pytest `-k` flag to select only that test to run, for example:
   
   ```
   $ pytest -k test_tvmc_compile_input_model -vvvv ./test_command_line.py 
   [02:37:03] /home/gromero/git/tvm/src/target/target_kind.cc:163: Warning: Unable to detect CUDA version, default to "-arch=sm_20" instead
   enabled targets: llvm
   pytest marker: 
   ==================================================================================== test session starts =====================================================================================
   platform linux -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
   cachedir: .pytest_cache
   rootdir: /home/gromero/git/tvm/tests/python/driver/tvmc
   plugins: lazy-fixture-0.6.3
   collected 10 items / 8 deselected / 2 selected                                                                                                                                               
   
   test_command_line.py::test_tvmc_compile_input_model[paddle_model] PASSED                                                                                                               [ 50%]
   test_command_line.py::test_tvmc_compile_input_model[tflite_model] SKIPPED ('tflite' module not installed)                                                                              [100%]
   
   ========================================================================= 1 passed, 1 skipped, 8 deselected in 1.67s =========================================================================
   ```
   
   Feel free to adapt it or use your own test version.
   
   HTH.




[GitHub] [tvm] leandron commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

leandron commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1108782359

   > @jiangjiajun @leandron WDYT?
   
   I agree with @gromero, that we can use the path to a provided `*.pdmodel` to infer the path to the other required files. Does that seem feasible @jiangjiajun?




[GitHub] [tvm] jiangjiajun commented on pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

jiangjiajun commented on PR #11108:
URL: https://github.com/apache/tvm/pull/11108#issuecomment-1109226100

   @gromero @leandron I have no problem with that. Could you provide an example of a test case for another tvmc frontend? I will update this PR.




[GitHub] [tvm] junrushao1994 closed pull request #11108: [TVMC] Fix error while compile paddle model with tvmc

junrushao1994 closed pull request #11108: [TVMC] Fix error while compile paddle model with tvmc
URL: https://github.com/apache/tvm/pull/11108

