Posted to issues@flink.apache.org by "Adrian Zhong (Jira)" <ji...@apache.org> on 2022/04/08 04:00:00 UTC

[jira] [Updated] (FLINK-27130) unable to read custom System properties in job class

     [ https://issues.apache.org/jira/browse/FLINK-27130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrian Zhong updated FLINK-27130:
---------------------------------
    Description: 
I'm using Flink YARN-PER-JOB mode to submit a job.

I'm wondering what is wrong, and how Flink's CliFrontend prevents the job class from reading system properties, which should be visible JVM-wide.

 

I have searched all related issues and read the unit tests for CliFrontend and DynamicProperties, but I still can't figure it out.

 

Here is my job class:
{code:java}
public static void main(String[] args) {
    // expected to be set via -Dkafka.start_from_timestamp=1648828800000
    String property = System.getProperty("kafka.start_from_timestamp");
    if (property == null) {
        System.err.println("-Dkafka.start_from_timestamp not found");
        System.err.println("These are the properties found in this JVM:");
        System.err.println(System.getProperties().stringPropertyNames());
    } else {
        System.err.println("-Dkafka.start_from_timestamp is " + property);
    }
    // ...
} {code}
outputs:
{code:java}
-Dkafka.start_from_timestamp not found
These are the properties found in this JVM:
[zookeeper.sasl.client, java.runtime.name, sun.boot.library.path, java.vm.version, java.vm.vendor, java.vendor.url, path.separator, java.vm.name, file.encoding.pkg, user.country, sun.java.launcher, sun.os.patch.level, java.vm.specification.name, user.dir, java.runtime.version, java.awt.graphicsenv, java.endorsed.dirs, os.arch, java.io.tmpdir, line.separator, java.vm.specification.vendor, os.name, log4j.configuration, sun.jnu.encoding, java.library.path, java.specification.name, java.class.version, sun.management.compiler, os.version, user.home, user.timezone, java.awt.printerjob, file.encoding, java.specification.version, log4j.configurationFile, user.name, java.class.path, log.file, java.vm.specification.version, sun.arch.data.model, java.home, sun.java.command, java.specification.vendor, user.language, awt.toolkit, java.vm.info, java.version, java.ext.dirs, sun.boot.class.path, java.vendor, logback.configurationFile, java.security.auth.login.config, file.separator, java.vendor.url.bug, sun.cpu.endian, sun.io.unicode.encoding, sun.cpu.isalist] {code}
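As a fallback I am considering (the class name, helper name, and `--key value` flag below are illustrative, not from my actual job): since everything after the jar path is forwarded to main() unchanged by bin/flink, the timestamp can be read from a program argument when the system property is missing:

```java
// Hypothetical fallback sketch: prefer the JVM system property, but fall
// back to a "--key value" program argument, since program arguments after
// the jar path always reach main().
public class StartFromTimestamp {

    static long resolve(String[] args) {
        // First choice: the system property, if the JVM actually received it.
        String prop = System.getProperty("kafka.start_from_timestamp");
        if (prop != null) {
            return Long.parseLong(prop);
        }
        // Fallback: scan program arguments for "--kafka.start_from_timestamp <value>".
        for (int i = 0; i + 1 < args.length; i++) {
            if ("--kafka.start_from_timestamp".equals(args[i])) {
                return Long.parseLong(args[i + 1]);
            }
        }
        throw new IllegalArgumentException("kafka.start_from_timestamp not set");
    }

    public static void main(String[] args) {
        System.err.println("kafka.start_from_timestamp = " + resolve(args));
    }
}
```

This sidesteps the system-property propagation question entirely, at the cost of a slightly different launch command.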
Environment:

JDK: Oracle 1.8/25.121-b13

Flink: 1.13.0

 

What I have tried:
{code:java}
-Denv.java.opts.client="-Dkafka.start_from_timestamp=1648828800000"
-Denv.java.opts="-Dkafka.start_from_timestamp=1648828800001"
-Dkafka.start_from_timestamp=1648828800002
-yD env.java.opts="kafka.start_from_timestamp=1648828800003" {code}
submit command:
{code:java}
bin/flink run --yarnjobManagerMemory 1G --yarntaskManagerMemory 1G --yarnqueue root.users.appuser --yarnslots 1 --yarnname SocketWindowWordCount -m yarn-cluster --class com.slankka.learn.rtc.SocketWindowWordCount -Denv.java.opts="-Dkafka.start_from_timestamp=1648828800001" -Dkafka.start_from_timestamp=1648828800002 -yD env.java.opts="kafka.start_from_timestamp=1648828800003" -d /data/files_upload/socketWindowWordCount.jar --hostname 10.11.159.156 --port 7890 {code}
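One thing I notice in hindsight: in the last `-yD` attempt above, the value given to env.java.opts is missing its own `-D` prefix, so even if the option were applied it would not define a system property. Assuming the env.java.opts keys behave as documented for Flink 1.13, the value would need to be a complete JVM flag, e.g. in flink-conf.yaml:

```yaml
# flink-conf.yaml sketch (assumption: env.java.opts is appended to the JVM
# command line as documented; the value must itself be a full -D flag)
env.java.opts: -Dkafka.start_from_timestamp=1648828800000
```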
 


> unable to read custom System properties in job class
> ----------------------------------------------------
>
>                 Key: FLINK-27130
>                 URL: https://issues.apache.org/jira/browse/FLINK-27130
>             Project: Flink
>          Issue Type: Bug
>          Components: Client / Job Submission
>    Affects Versions: 1.13.0, 1.13.6
>            Reporter: Adrian Zhong
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)