Posted to user@flink.apache.org by ri...@sina.cn on 2017/12/25 13:13:54 UTC

flink yarn-cluster run job --files

Hi all,
In Spark, spark-submit supports --files, described as "Comma-separated list of files to be placed in the working directory of each executor."
Is there an equivalent in Flink? I tried --classpath file:///****, but the job fails because the file is not found on the nodes.

Re: flink yarn-cluster run job --files

Posted by Ufuk Celebi <uc...@apache.org>.
The file URL needs to be accessible from all nodes, e.g. something
like s3://... or hdfs://...

From the CLI:

```
Adds a URL to each user code classloader on all nodes in the cluster.
The paths must specify a protocol (e.g. file://) and be accessible on
all nodes (e.g. by means of a NFS share). You can use this option
multiple times for specifying more than one URL. The protocol must be
supported by the {@link java.net.URLClassLoader}.
```

Is this the case?

I'm not sure whether this works for accessing arbitrary files you provide, though...
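As a rough sketch of what this could look like on YARN (all paths and jar names below are hypothetical): the key point from the help text above is that the URL must resolve to the same file on every node, e.g. via an NFS mount, since file:// is a protocol java.net.URLClassLoader understands.

```shell
# Make the file available at the same path on every node,
# e.g. by copying it onto an NFS share that all nodes mount
# (paths are placeholders).
cp extra-lib.jar /mnt/nfs/flink-deps/extra-lib.jar

# Submit on YARN; --classpath adds the URL to each user-code
# classloader on all nodes. The protocol (here file://) must be
# supported by java.net.URLClassLoader.
flink run -m yarn-cluster \
  --classpath file:///mnt/nfs/flink-deps/extra-lib.jar \
  my-job.jar
```

Note that --classpath extends the classloader rather than copying files into a working directory, so it is not a one-to-one replacement for Spark's --files.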



On Mon, Dec 25, 2017 at 2:13 PM,  <ri...@sina.cn> wrote:
> Hi all,
>
> In Spark, spark-submit supports --files, described as "Comma-separated
> list of files to be placed in the working directory of each executor."
>
> Is there an equivalent in Flink? I tried --classpath file:///****, but
> the job fails because the file is not found on the nodes.
>