Posted to dev@zeppelin.apache.org by GitBox <gi...@apache.org> on 2021/09/28 07:49:13 UTC

[GitHub] [zeppelin] martin-g edited a comment on pull request #4238: ZEPPELIN-5543 Add CircleCI config to test on Linux ARM64

martin-g edited a comment on pull request #4238:
URL: https://github.com/apache/zeppelin/pull/4238#issuecomment-928919100


   The new build at CircleCI failed due to:
   ```
   INFO [2021-09-28 06:53:04,243] ({main} DownloadUtils.java[runShellCommand]:136) - Starting shell commands: wget https://dlcdn.apache.org//spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz -P /home/circleci/.cache/spark
    WARN [2021-09-28 06:53:04,401] ({main} DownloadUtils.java[download]:113) - Failed to download spark from mirror site, fallback to use apache archive
   java.io.IOException: Fail to run shell commands: wget https://dlcdn.apache.org//spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz -P /home/circleci/.cache/spark
   	at org.apache.zeppelin.interpreter.integration.DownloadUtils.runShellCommand(DownloadUtils.java:143)
   	at org.apache.zeppelin.interpreter.integration.DownloadUtils.download(DownloadUtils.java:110)
   	at org.apache.zeppelin.interpreter.integration.DownloadUtils.download(DownloadUtils.java:132)
   	at org.apache.zeppelin.interpreter.integration.DownloadUtils.downloadSpark(DownloadUtils.java:58)
   	at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncherTest.setUp(SparkInterpreterLauncherTest.java:55)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           ...
   
   Tests run: 7, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 59.461 sec <<< FAILURE! - in org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncherTest
   testYarnClusterMode_1(org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncherTest)  Time elapsed: 0.642 sec  <<< ERROR!
   java.io.IOException: Fail to set additional jars for spark interpreter
   	at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncher.buildEnvFromProperties(SparkInterpreterLauncher.java:139)
   	at org.apache.zeppelin.interpreter.launcher.StandardInterpreterLauncher.launchDirectly(StandardInterpreterLauncher.java:77)
   	at org.apache.zeppelin.interpreter.launcher.InterpreterLauncher.launch(InterpreterLauncher.java:110)
   	at org.apache.zeppelin.interpreter.launcher.SparkInterpreterLauncherTest.testYarnClusterMode_1(SparkInterpreterLauncherTest.java:194)
   ```
   
   https://dlcdn.apache.org//spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz returns a 404.
   Why does it try to download Spark 2.4.4 when the active Maven profile is `-Pspark-3.0`?
   
   **Update**: It seems the archive is actually downloaded from the fallback URL:
   ```
   INFO [2021-09-28 06:53:04,405] ({main} DownloadUtils.java[runShellCommand]:136) - Starting shell commands: wget https://archive.apache.org/dist/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz -P /home/circleci/.cache/spark
    INFO [2021-09-28 06:53:09,454] ({Thread-2} DownloadUtils.java[run]:166) -   2350K .......... .......... .......... .......... ..........  1%  519K 7m3s
   ...
   ```
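
   For reference, the behavior visible in the two logs (try the dlcdn mirror first, fall back to archive.apache.org on failure) corresponds to a pattern roughly like the sketch below. This is hand-written for illustration, not the actual `DownloadUtils` code; the class, method and constant names are made up, only the two URLs come from the logs.
   ```java
   import java.io.File;
   import java.io.IOException;

   // Illustrative sketch of the mirror-then-archive fallback suggested by the logs above.
   public class SparkDownloadSketch {
       private static final String MIRROR = "https://dlcdn.apache.org/spark";
       private static final String ARCHIVE = "https://archive.apache.org/dist/spark";

       static void downloadSpark(String version, File targetDir) throws IOException, InterruptedException {
           String path = "spark-" + version + "/spark-" + version + "-bin-hadoop2.7.tgz";
           try {
               // First attempt: the dlcdn mirror (the request that returned 404 above).
               wget(MIRROR + "/" + path, targetDir);
           } catch (IOException e) {
               // Fallback: the long-term archive, which is what succeeded in the second log.
               wget(ARCHIVE + "/" + path, targetDir);
           }
       }

       private static void wget(String url, File targetDir) throws IOException, InterruptedException {
           Process p = new ProcessBuilder("wget", url, "-P", targetDir.getAbsolutePath())
                   .inheritIO()
                   .start();
           if (p.waitFor() != 0) {
               throw new IOException("Fail to run shell command: wget " + url);
           }
       }
   }
   ```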
   
   It seems the problem is here:
   ```java
   Path scalaFolder = Paths.get(zConf.getZeppelinHome(), "/interpreter/spark/scala-" + scalaVersion);
   if (!scalaFolder.toFile().exists()) {
     throw new IOException("spark scala folder " + scalaFolder.toFile() + " doesn't exist");
   }
   ```
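
   As a quick check, a throwaway helper like the one below could print what actually exists under `${ZEPPELIN_HOME}/interpreter/spark`, to see whether any `scala-<version>` folder was produced by the build at all. This is a hypothetical standalone snippet, not code from the PR or from Zeppelin.
   ```java
   import java.io.File;
   import java.nio.file.Paths;

   // Hypothetical debugging helper (not part of the PR): list what is actually under
   // ${ZEPPELIN_HOME}/interpreter/spark so a missing scala-<version> folder becomes obvious.
   public class ScalaFolderCheck {
       public static void main(String[] args) {
           String zeppelinHome = args.length > 0 ? args[0] : System.getenv("ZEPPELIN_HOME");
           if (zeppelinHome == null) {
               System.err.println("Pass ZEPPELIN_HOME as the first argument or set the env variable");
               return;
           }
           File sparkDir = Paths.get(zeppelinHome, "interpreter", "spark").toFile();
           File[] entries = sparkDir.listFiles();
           if (entries == null) {
               System.out.println(sparkDir + " does not exist or is not a directory");
               return;
           }
           for (File entry : entries) {
               // Expect folders like scala-2.11 / scala-2.12 here, matching the check in
               // SparkInterpreterLauncher.buildEnvFromProperties quoted above.
               System.out.println(entry.getName());
           }
       }
   }
   ```
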
   CircleCI allows connecting via SSH to the build node for debugging! I will check the full stack trace and debug the issue!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@zeppelin.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org