Posted to commits@seatunnel.apache.org by GitBox <gi...@apache.org> on 2022/04/24 08:44:20 UTC

[GitHub] [incubator-seatunnel] BenJFan opened a new issue, #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

BenJFan opened a new issue, #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733

   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   The SeaTunnel program can't run on a Spark cluster when Spark is deployed standalone and the job is submitted in cluster mode. The likely cause is that `plugins.tar.gz` resolves to a different path in client mode than in cluster mode.
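   
   For background on why the path differs: a bare relative file name such as `plugins.tar.gz` is resolved against the JVM working directory (`user.dir`), which in client mode is wherever the launch script runs, but in cluster mode is the driver's per-job work dir. A minimal illustrative sketch (not SeaTunnel code):
   
   ```java
   import java.io.File;
   
   public class RelativePathDemo {
       public static void main(String[] args) {
           // The same relative name points at different files depending on
           // where the JVM was started (client machine vs. driver work dir).
           System.out.println("user.dir    = " + System.getProperty("user.dir"));
           System.out.println("resolves to = " + new File("plugins.tar.gz").getAbsolutePath());
       }
   }
   ```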
   
   ### SeaTunnel Version
   
   dev
   
   ### SeaTunnel Config
   
   ```conf
   env {
   
     spark.app.name = "SeaTunnel"
     spark.executor.instances = 2
     spark.executor.cores = 1
     spark.executor.memory = "1g"
   }
   
   source {
     # This is an example input plugin **only for testing and demonstrating the input plugin feature**
     Fake {
       result_table_name = "my_dataset"
     }
   
   }
   
   transform {
    
   }
   
   sink {
     # choose stdout output plugin to output data to console
     Console {}
   }
   ```
   
   
   ### Running Command
   
   ```shell
   ./bin/start-seatunnel-spark.sh --config ~/spark.batch.conf -m spark://localhost:7077 -e cluster
   ```
   
   
   ### Error Exception
   
   ```log
   2022-04-24 16:38:57 INFO  BaseTaskExecuteCommand:136 - preparing cluster mode work dir files...
   2022-04-24 16:38:57 WARN  BaseTaskExecuteCommand:140 - 	 list file: /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/work/driver-20220424163856-0002/./stdout 
   2022-04-24 16:38:57 WARN  BaseTaskExecuteCommand:140 - 	 list file: /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/work/driver-20220424163856-0002/./stderr 
   2022-04-24 16:38:57 WARN  BaseTaskExecuteCommand:140 - 	 list file: /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/work/driver-20220424163856-0002/./seatunnel-core-spark.jar 
   2022-04-24 16:38:57 INFO  CompressionUtils:146 - Unzipping /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/work/driver-20220424163856-0002/plugins.tar.gz to dir /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/work/driver-20220424163856-0002/..
   2022-04-24 16:38:57 ERROR BaseTaskExecuteCommand:149 - failed to decompress plugins.tar.gz
   java.io.FileNotFoundException: plugins.tar.gz (No such file or directory)
   	at java.base/java.io.FileInputStream.open0(Native Method)
   	at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
   	at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
   	at org.apache.seatunnel.utils.CompressionUtils.unGzip(CompressionUtils.java:150)
   	at org.apache.seatunnel.command.BaseTaskExecuteCommand.deployModeCheck(BaseTaskExecuteCommand.java:146)
   	at org.apache.seatunnel.command.BaseTaskExecuteCommand.baseCheckConfig(BaseTaskExecuteCommand.java:59)
   	at org.apache.seatunnel.command.spark.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:50)
   	at org.apache.seatunnel.command.spark.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:36)
   	at org.apache.seatunnel.Seatunnel.run(Seatunnel.java:48)
   	at org.apache.seatunnel.SeatunnelSpark.main(SeatunnelSpark.java:27)
   	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
   	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
   	at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:65)
   	at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
   2022-04-24 16:38:57 INFO  SparkContext:54 - Invoking stop() from shutdown hook
   ```
   
   
   ### Flink or Spark Version
   
   spark 2.4.0
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   




[GitHub] [incubator-seatunnel] BenJFan commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
BenJFan commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1119302511

   Closed by #1757




[GitHub] [incubator-seatunnel] yx91490 commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
yx91490 commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1109168014

   Got it, I'll take some time to solve it.




[GitHub] [incubator-seatunnel] BenJFan commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
BenJFan commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1109231004

   > @BenJFan Can you rerun the script with `bash -x ./bin/start-seatunnel-spark.sh ...` and paste the log here?
   
   @yx91490 here
   ```shell
   ➜  apache-seatunnel-incubating-2.1.1-SNAPSHOT git:(dev) ✗ bash -x ./bin/start-seatunnel-spark.sh --config ~/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster
   + set -eu
   +++ dirname ./bin/start-seatunnel-spark.sh
   ++ cd ./bin/../
   ++ pwd
   + APP_DIR=/Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT
   + CONF_DIR=/Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT/config
   + APP_JAR=/Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT/lib/seatunnel-core-spark.jar
   + '[' -f /Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT/config/seatunnel-env.sh ']'
   + . /Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT/config/seatunnel-env.sh
   ++ SPARK_HOME=/Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6
   ++ FLINK_HOME=/Users/fanjia/Tool/flink-1.13.6
   ++ export SEATUNNEL_PRINT_ASCII_LOGO=true
   ++ SEATUNNEL_PRINT_ASCII_LOGO=true
   ++ java -cp /Users/fanjia/Code/seatunnel-fork/seatunnel-dist/target/apache-seatunnel-incubating-2.1.1-SNAPSHOT/lib/seatunnel-core-spark.jar org.apache.seatunnel.SparkStarter --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster
   + CMD='${SPARK_HOME}/bin/spark-submit --class "org.apache.seatunnel.SeatunnelSpark" --name "SeaTunnelGhy" --master "spark://localhost:7077" --deploy-mode "cluster" --files "plugins.tar.gz,/Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf" --conf "spark.executor.memory=512m" --conf "spark.master=local" --conf "spark.executor.cores=1" --conf "spark.app.name=SeaTunnelGhy" --conf "spark.executor.instances=2" lib/seatunnel-core-spark.jar --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster'
   + EXIT_CODE=0
   + '[' 0 -eq 234 ']'
   + '[' 0 -eq 0 ']'
   + echo 'Execute SeaTunnel Flink Job: ${SPARK_HOME}/bin/spark-submit --class "org.apache.seatunnel.SeatunnelSpark" --name "SeaTunnelGhy" --master "spark://localhost:7077" --deploy-mode "cluster" --files "plugins.tar.gz,/Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf" --conf "spark.executor.memory=512m" --conf "spark.master=local" --conf "spark.executor.cores=1" --conf "spark.app.name=SeaTunnelGhy" --conf "spark.executor.instances=2" lib/seatunnel-core-spark.jar --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster'
   Execute SeaTunnel Flink Job: ${SPARK_HOME}/bin/spark-submit --class "org.apache.seatunnel.SeatunnelSpark" --name "SeaTunnelGhy" --master "spark://localhost:7077" --deploy-mode "cluster" --files "plugins.tar.gz,/Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf" --conf "spark.executor.memory=512m" --conf "spark.master=local" --conf "spark.executor.cores=1" --conf "spark.app.name=SeaTunnelGhy" --conf "spark.executor.instances=2" lib/seatunnel-core-spark.jar --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster
   + eval '${SPARK_HOME}/bin/spark-submit' --class '"org.apache.seatunnel.SeatunnelSpark"' --name '"SeaTunnelGhy"' --master '"spark://localhost:7077"' --deploy-mode '"cluster"' --files '"plugins.tar.gz,/Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf"' --conf '"spark.executor.memory=512m"' --conf '"spark.master=local"' --conf '"spark.executor.cores=1"' --conf '"spark.app.name=SeaTunnelGhy"' --conf '"spark.executor.instances=2"' lib/seatunnel-core-spark.jar --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster
   ++ /Users/fanjia/Tool/spark-2.4.0-bin-hadoop2.6/bin/spark-submit --class org.apache.seatunnel.SeatunnelSpark --name SeaTunnelGhy --master spark://localhost:7077 --deploy-mode cluster --files plugins.tar.gz,/Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf --conf spark.executor.memory=512m --conf spark.master=local --conf spark.executor.cores=1 --conf spark.app.name=SeaTunnelGhy --conf spark.executor.instances=2 lib/seatunnel-core-spark.jar --config /Users/fanjia/Code/seatunnel-fork/seatunnel-examples/seatunnel-spark-examples/src/main/resources/examples/spark.batch.conf -m spark://localhost:7077 -e cluster
   2022-04-26 10:21:43 WARN  Utils:66 - Your hostname, fanjias-MacBook-Pro.local resolves to a loopback address: 127.0.0.1; using 192.168.5.104 instead (on interface en0)
   2022-04-26 10:21:43 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
   2022-04-26 10:21:43 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   ```
   




[GitHub] [incubator-seatunnel] quanzhian commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
quanzhian commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1110617046

   @BenJFan There is an error in the decompression code. The fix is as follows.
   
   The affected class is `org.apache.seatunnel.utils.CompressionUtils`.
   
   Fixed code:
   
   ```java
       /**
        * Untar an input file into an output file.
        * <p>
        * The output file is created in the output folder, having the same name
        * as the input file, minus the '.tar' extension.
        *
        * @param inputFile the input .tar file
        * @param outputDir the output directory file.
        * @throws IOException           io exception
        * @throws FileNotFoundException file not found exception
        * @throws ArchiveException      archive exception
        */
       public static void unTar(final File inputFile, final File outputDir) throws  IOException, ArchiveException {
   
           LOGGER.info("Untaring {} to dir {}.", inputFile.getAbsolutePath(), outputDir.getAbsolutePath());
   
           final List<File> untaredFiles = new LinkedList<>();
           try (final InputStream is = new FileInputStream(inputFile);
                final TarArchiveInputStream debInputStream = (TarArchiveInputStream) new ArchiveStreamFactory().createArchiveInputStream("tar", is)) {
               TarArchiveEntry entry = null;
               while ((entry = (TarArchiveEntry) debInputStream.getNextEntry()) != null) {
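                   // Normalize the entry path up front: redundant segments such as "./" in
                   // entry names or a trailing "/.." in the output dir (as in the log above)
                   // are collapsed before any file is created.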
                   final File outputFile = new File(outputDir, entry.getName()).toPath().normalize().toFile();
                   if (entry.isDirectory()) {
                       LOGGER.info("Attempting to write output directory {}.", outputFile.getAbsolutePath());
                       if (!outputFile.exists()) {
                           LOGGER.info("Attempting to create output directory {}.", outputFile.getAbsolutePath());
                           if (!outputFile.mkdirs()) {
                               throw new IllegalStateException(String.format("Couldn't create directory %s.", outputFile.getAbsolutePath()));
                           }
                       }
                   } else {
                       LOGGER.info("Creating output file {}.", outputFile.getAbsolutePath());
                       File outputParentFile = outputFile.getParentFile();
                       if (outputParentFile != null && !outputParentFile.exists()) {
                           outputParentFile.mkdirs();
                       }
                       final OutputStream outputFileStream = new FileOutputStream(outputFile);
                       IOUtils.copy(debInputStream, outputFileStream);
                       outputFileStream.close();
                   }
                   untaredFiles.add(outputFile);
               }
           }
       }
   ```
   
   Old code (the version containing the bug):
   
   ```java
      /**
        * Untar an input file into an output file.
        * <p>
        * The output file is created in the output folder, having the same name
        * as the input file, minus the '.tar' extension.
        *
        * @param inputFile the input .tar file
        * @param outputDir the output directory file.
        * @throws IOException           io exception
        * @throws FileNotFoundException file not found exception
        * @throws ArchiveException      archive exception
        */
       public static void unTar(final File inputFile, final File outputDir) throws  IOException, ArchiveException {
   
           LOGGER.info("Untaring {} to dir {}.", inputFile.getAbsolutePath(), outputDir.getAbsolutePath());
   
           final List<File> untaredFiles = new LinkedList<>();
           try (final InputStream is = new FileInputStream(inputFile);
                final TarArchiveInputStream debInputStream = (TarArchiveInputStream) new ArchiveStreamFactory().createArchiveInputStream("tar", is)) {
               TarArchiveEntry entry = null;
               while ((entry = (TarArchiveEntry) debInputStream.getNextEntry()) != null) {
                   final File outputFile = new File(outputDir, entry.getName());
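                   // Suspected bug: outputDir itself is never normalized, so when it contains
                   // ".." segments (the cluster-mode work dir above ends in "/..") this
                   // element-wise startsWith check can reject valid entries.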
                   if (!outputFile.toPath().normalize().startsWith(outputDir.toPath())) {
                       throw new IllegalStateException("Bad zip entry");
                   }
                   if (entry.isDirectory()) {
                       LOGGER.info("Attempting to write output directory {}.", outputFile.getAbsolutePath());
                       if (!outputFile.exists()) {
                           LOGGER.info("Attempting to create output directory {}.", outputFile.getAbsolutePath());
                           if (!outputFile.mkdirs()) {
                               throw new IllegalStateException(String.format("Couldn't create directory %s.", outputFile.getAbsolutePath()));
                           }
                       }
                   } else {
                       LOGGER.info("Creating output file {}.", outputFile.getAbsolutePath());
                       final OutputStream outputFileStream = new FileOutputStream(outputFile);
                       IOUtils.copy(debInputStream, outputFileStream);
                       outputFileStream.close();
                   }
                   untaredFiles.add(outputFile);
               }
           }
       }
   ```
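   
   For reference, here is a self-contained sketch (not code from this issue) showing how the two versions diverge when the output directory contains a ".." segment, as the standalone driver work dir in the error log does. It assumes commons-compress on the classpath and the `unTar` method above in `org.apache.seatunnel.utils.CompressionUtils`:
   
   ```java
   import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
   import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
   
   import java.io.File;
   import java.io.FileOutputStream;
   import java.nio.charset.StandardCharsets;
   import java.nio.file.Files;
   
   public class UnTarRepro {
       public static void main(String[] args) throws Exception {
           File workDir = Files.createTempDirectory("driver-0001").toFile();
           File tar = new File(workDir, "plugins.tar");
   
           // Build a one-entry tar with no leading directory entry.
           try (TarArchiveOutputStream out = new TarArchiveOutputStream(new FileOutputStream(tar))) {
               byte[] payload = "dummy".getBytes(StandardCharsets.UTF_8);
               TarArchiveEntry entry = new TarArchiveEntry("plugins/fake/lib.txt");
               entry.setSize(payload.length);
               out.putArchiveEntry(entry);
               out.write(payload);
               out.closeArchiveEntry();
           }
   
           // An output dir with a ".." segment, like <spark>/work/driver-.../.. in the log.
           File outputDir = new File(workDir, "sub/..");
           // Old code: throws IllegalStateException("Bad zip entry") because the normalized
           // entry path no longer starts with the un-normalized outputDir.
           // Fixed code: extracts to <workDir>/plugins/fake/lib.txt.
           CompressionUtils.unTar(tar, outputDir);
       }
   }
   ```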
   
   Here are my test details:
   
   ```log
   [xxxxxx@bigdata-app03 apache-seatunnel-incubating-2.1.1-SNAPSHOT]# ./bin/start-seatunnel-spark.sh --master yarn --deploy-mode cluster --config /mnt/services/seatunnel/spark_batch.conf
   22/04/27 14:33:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   22/04/27 14:33:44 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
   22/04/27 14:33:44 INFO EsServiceCredentialProvider: Loaded EsServiceCredentialProvider
   22/04/27 14:33:44 INFO Client: Requesting a new application from cluster with 5 NodeManagers
   22/04/27 14:33:44 INFO Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.4.0-315/0/resource-types.xml
   22/04/27 14:33:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
   22/04/27 14:33:44 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
   22/04/27 14:33:44 INFO Client: Setting up container launch context for our AM
   22/04/27 14:33:44 INFO Client: Setting up the launch environment for our AM container
   22/04/27 14:33:44 INFO Client: Preparing resources for our AM container
   22/04/27 14:33:45 INFO EsServiceCredentialProvider: Hadoop Security Enabled = [false]
   22/04/27 14:33:45 INFO EsServiceCredentialProvider: ES Auth Method = [SIMPLE]
   22/04/27 14:33:45 INFO EsServiceCredentialProvider: Are creds required = [false]
   22/04/27 14:33:45 INFO Client: Source and destination file systems are the same. Not copying hdfs:/hdp/apps/3.1.4.0-315/spark2/spark2-hdp-yarn-archive.tar.gz
   22/04/27 14:33:45 INFO Client: Uploading resource file:/mnt/services/seatunnel/apache-seatunnel-incubating-2.1.1-SNAPSHOT/lib/seatunnel-core-spark.jar -> hdfs://nameservice1/user/xxx_user/.sparkStaging/application_1643094720025_42454/seatunnel-core-spark.jar
   22/04/27 14:33:46 INFO Client: Uploading resource file:/mnt/services/seatunnel/apache-seatunnel-incubating-2.1.1-SNAPSHOT/plugins.tar.gz -> hdfs://nameservice1/user/xxx_user/.sparkStaging/application_1643094720025_42454/plugins.tar.gz
   22/04/27 14:33:46 INFO Client: Uploading resource file:/mnt/services/seatunnel/spark_batch.conf -> hdfs://nameservice1/user/xxx_user/.sparkStaging/application_1643094720025_42454/spark_batch.conf
   22/04/27 14:33:46 INFO Client: Uploading resource file:/tmp/spark-5d399c9e-df19-4881-8a0b-67dd57f3f6c2/__spark_conf__1201408946509169751.zip -> hdfs://nameservice1/user/xxx_user/.sparkStaging/application_1643094720025_42454/__spark_conf__.zip
   22/04/27 14:33:46 INFO SecurityManager: Changing view acls to: xxxxxx,xxx_user
   22/04/27 14:33:46 INFO SecurityManager: Changing modify acls to: xxxxxx,xxx_user
   22/04/27 14:33:46 INFO SecurityManager: Changing view acls groups to: 
   22/04/27 14:33:46 INFO SecurityManager: Changing modify acls groups to: 
   22/04/27 14:33:46 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(xxxxxx, xxx_user); groups with view permissions: Set(); users  with modify permissions: Set(xxxxxx, xxx_user); groups with modify permissions: Set()
   22/04/27 14:33:46 INFO Client: Submitting application application_1643094720025_42454 to ResourceManager
   22/04/27 14:33:46 INFO YarnClientImpl: Submitted application application_1643094720025_42454
   22/04/27 14:33:47 INFO Client: Application report for application_1643094720025_42454 (state: ACCEPTED)
   22/04/27 14:33:47 INFO Client: 
   	 client token: N/A
   	 diagnostics: AM container is launched, waiting for AM container to Register with RM
   	 ApplicationMaster host: N/A
   	 ApplicationMaster RPC port: -1
   	 queue: default
   	 start time: 1651041226887
   	 final status: UNDEFINED
   	 tracking URL: http://bigdata-master01:8088/proxy/application_1643094720025_42454/
   	 user: xxx_user
   22/04/27 14:33:48 INFO Client: Application report for application_1643094720025_42454 (state: ACCEPTED)
   22/04/27 14:33:49 INFO Client: Application report for application_1643094720025_42454 (state: ACCEPTED)
   22/04/27 14:33:50 INFO Client: Application report for application_1643094720025_42454 (state: ACCEPTED)
   22/04/27 14:33:51 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:51 INFO Client: 
   	 client token: N/A
   	 diagnostics: N/A
   	 ApplicationMaster host: 172.18.247.16
   	 ApplicationMaster RPC port: 0
   	 queue: default
   	 start time: 1651041226887
   	 final status: UNDEFINED
   	 tracking URL: http://bigdata-master01:8088/proxy/application_1643094720025_42454/
   	 user: xxx_user
   22/04/27 14:33:52 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:53 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:54 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:55 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:56 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:57 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:58 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:33:59 INFO Client: Application report for application_1643094720025_42454 (state: RUNNING)
   22/04/27 14:34:00 INFO Client: Application report for application_1643094720025_42454 (state: FINISHED)
   22/04/27 14:34:00 INFO Client: 
   	 client token: N/A
   	 diagnostics: N/A
   	 ApplicationMaster host: 172.18.247.16
   	 ApplicationMaster RPC port: 0
   	 queue: default
   	 start time: 1651041226887
   	 final status: SUCCEEDED
   	 tracking URL: http://bigdata-master01:8088/proxy/application_1643094720025_42454/
   	 user: xxx_user
   22/04/27 14:34:00 INFO Client: Deleted staging directory hdfs://nameservice1/user/xxx_user/.sparkStaging/application_1643094720025_42454
   22/04/27 14:34:00 INFO ShutdownHookManager: Shutdown hook called
   22/04/27 14:34:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-5d399c9e-df19-4881-8a0b-67dd57f3f6c2
   22/04/27 14:34:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-121ad009-6b38-468d-a4eb-a5faf4dbb28d
   
   ```
   
   
   
   




[GitHub] [incubator-seatunnel] quanzhian commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
quanzhian commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1110632165

   > > @BenJFan There is an error in the decompression code. [the full fix, old code, and test log are quoted here verbatim; elided as a duplicate of quanzhian's comment above]
   > 
   > @quanzhian Can you create a PR to fix that? Welcome to the contributor family!
   
   ok




[GitHub] [incubator-seatunnel] yx91490 commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
yx91490 commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1109201145

   @BenJFan Can you rerun the script with `bash -x ./bin/start-seatunnel-spark.sh ...` and paste the log here?




[GitHub] [incubator-seatunnel] BenJFan closed issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
BenJFan closed issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.
URL: https://github.com/apache/incubator-seatunnel/issues/1733




[GitHub] [incubator-seatunnel] BenJFan commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
BenJFan commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1110628446

   > @BenJFan There is an error in the decompression code. [the full fix, old code, and test log are quoted here verbatim; elided as a duplicate of quanzhian's comment above]
   
   @quanzhian Can you create a PR to fix that? Welcome to the contributor family!




[GitHub] [incubator-seatunnel] ruanwenjun commented on issue #1733: [Bug] [Spark] Can't run SeaTunnel on spark standalone cluster.

Posted by GitBox <gi...@apache.org>.
ruanwenjun commented on issue #1733:
URL: https://github.com/apache/incubator-seatunnel/issues/1733#issuecomment-1107993945

   @yx91490 Please take a look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@seatunnel.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org