Posted to issues@flink.apache.org by "Hequn Cheng (Jira)" <ji...@apache.org> on 2019/11/19 09:41:00 UTC

[jira] [Commented] (FLINK-14581) Support to run Python UDF jobs in a YARN cluster

    [ https://issues.apache.org/jira/browse/FLINK-14581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977308#comment-16977308 ] 

Hequn Cheng commented on FLINK-14581:
-------------------------------------

Resolved in 1.10.0 via df9b757b73b59e18a0b739c207bfe3f149a075d7

> Support to run Python UDF jobs in a YARN cluster
> ------------------------------------------------
>
>                 Key: FLINK-14581
>                 URL: https://issues.apache.org/jira/browse/FLINK-14581
>             Project: Flink
>          Issue Type: Sub-task
>          Components: API / Python
>            Reporter: Dian Fu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.10.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the following exception is thrown when submitting a Python UDF job to a YARN cluster:
> {code:java}
> Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.io.IOException: Cannot run program "null/bin/pyflink-udf-runner.sh": error=2, No such file or directory
>   at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966)
>   at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:211)
>   at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:202)
>   at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)
>   at org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:171)
>   at org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractPythonScalarFunctionOperator.java:177)
>   at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:114)
>   at org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:137)
>   at org.apache.flink.table.runtime.operators.python.PythonScalarFunctionOperator.open(PythonScalarFunctionOperator.java:70)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:565)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:412)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:696)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)
>   ... 1 more
> {code}
> The reason is that pyflink-udf-runner.sh is not shipped to the YARN cluster and is therefore not available to the operator.
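
The "null/bin/pyflink-udf-runner.sh" path in the stack trace suggests that a directory lookup returned null and was then concatenated into the script path. The following is a minimal, hypothetical Java sketch of how such a path string arises; the runnerDirectory() method is purely illustrative and does not reflect Flink's actual code:

```java
public class NullPathDemo {

    // Hypothetical stand-in for a config/environment lookup that returns
    // null when the Python UDF runner directory was never shipped to the
    // cluster (illustrative only, not Flink's real API).
    static String runnerDirectory() {
        return null;
    }

    public static void main(String[] args) {
        // Java string concatenation converts a null reference to the
        // literal text "null", silently producing the broken path seen
        // in the exception message instead of failing fast.
        String script = runnerDirectory() + "/bin/pyflink-udf-runner.sh";
        System.out.println(script); // prints "null/bin/pyflink-udf-runner.sh"
    }
}
```

A null check with an explicit error message at the lookup site would surface the missing file at submission time rather than as a confusing "No such file or directory" failure inside the operator.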



--
This message was sent by Atlassian Jira
(v8.3.4#803005)