Posted to issues@spark.apache.org by "WangTaoTheTonic (JIRA)" <ji...@apache.org> on 2014/09/17 03:51:33 UTC
[jira] [Updated] (SPARK-3547) Maybe we should not simply make return code 1 equal to CLASS_NOT_FOUND
[ https://issues.apache.org/jira/browse/SPARK-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
WangTaoTheTonic updated SPARK-3547:
-----------------------------------
Description:
A runtime exception occurs when the Hadoop version is not in A.B.* format, which Hive detects. The JVM then exits with return code 1, which is the same as CLASS_NOT_FOUND_EXIT_STATUS in the start-thriftserver.sh script. This proves that even a runtime exception can make the JVM exit with code 1.
Should we just modify the misleading error message in the script?
The error message in the script:
CLASS_NOT_FOUND_EXIT_STATUS=1
if [[ exit_status -eq CLASS_NOT_FOUND_EXIT_STATUS ]]; then
  echo
  echo "Failed to load Hive Thrift server main class $CLASS."
  echo "You need to build Spark with -Phive."
fi
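To see why exit code 1 is a weak signal, note that many unrelated failures produce it. A minimal shell demonstration (plain shell, no Spark involved; the commands below are arbitrary examples, not taken from the script):

```shell
# Three unrelated failures all report exit status 1, so the status alone
# cannot tell a missing class apart from any other error.
s1=0; sh -c 'false' || s1=$?                  # a generic failure
s2=0; sh -c 'grep needle /dev/null' || s2=$?  # grep with no match also exits 1
s3=0; sh -c 'exit 1' || s3=$?                 # explicit exit 1, like an uncaught JVM exception
echo "false=$s1 grep=$s2 exit=$s3"            # prints: false=1 grep=1 exit=1
```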
Below is the exception stack I encountered:
[omm@dc1-rack1-host2 sbin]$ ./start-thriftserver.sh
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:286)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:54)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:332)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:79)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthenticator(HiveUtils.java:368)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:278)
... 9 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator.setConf(HadoopDefaultAuthenticator.java:53)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthenticator(HiveUtils.java:365)
... 10 more
Caused by: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:141)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:113)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:80)
at org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator.setConf(HadoopDefaultAuthenticator.java:51)
... 13 more
Failed to load Hive Thrift server main class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.
You need to build Spark with -Phive.
I tested a runtime exception and an IOException today, and the JVM also returned exit code 1 in both cases. Below are my code and the errors it produced.
The code throws ArrayIndexOutOfBoundsException or FileNotFoundException depending on its first argument:
import java.io.FileInputStream

object ExitCodeWithRuntimeException {
  def main(args: Array[String]): Unit = {
    if (args(0).equals("array"))
      arrayIndexOutOfBoundsException(args)
    else if (args(0).equals("file"))
      fileNotFoundException()
  }

  // args(args.length) is one past the last valid index, so this always throws
  def arrayIndexOutOfBoundsException(args: Array[String]): Unit = {
    println(args(args.length))
  }

  // Opening a file that does not exist throws FileNotFoundException
  def fileNotFoundException(): Unit = {
    val fis = new FileInputStream("filedoesnotexist")
  }
}
Error:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
at scala.ExitCodeWithRuntimeException$.arrayIndexOutOfBoundsException(ExitCodeWithRuntimeException.scala:24)
at scala.ExitCodeWithRuntimeException$.main(ExitCodeWithRuntimeException.scala:17)
at scala.ExitCodeWithRuntimeException.main(ExitCodeWithRuntimeException.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Process finished with exit code 1
Exception in thread "main" java.io.FileNotFoundException: filedoesnotexist (The system cannot find the file specified.)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at java.io.FileInputStream.<init>(FileInputStream.java:79)
at scala.ExitCodeWithRuntimeException$.fileNotFoundException(ExitCodeWithRuntimeException.scala:29)
at scala.ExitCodeWithRuntimeException$.main(ExitCodeWithRuntimeException.scala:19)
at scala.ExitCodeWithRuntimeException.main(ExitCodeWithRuntimeException.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Process finished with exit code 1
So we need to change the exit code that represents ClassNotFoundException to a special, distinct value.
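A sketch of that direction, with a hypothetical value (101 is an arbitrary choice, not the real Spark constant; the actual value is up to the eventual patch, it only needs to be one the JVM is unlikely to produce for other failures):

```shell
# Hypothetical sketch: reserve a distinct exit code for class-not-found
# instead of overloading 1, so a plain RuntimeException no longer triggers
# the misleading "-Phive" hint.
CLASS_NOT_FOUND_EXIT_STATUS=101   # assumed value for illustration only

exit_status=1   # what any uncaught runtime exception yields today

if [ "$exit_status" -eq "$CLASS_NOT_FOUND_EXIT_STATUS" ]; then
  msg="Failed to load Hive Thrift server main class."
else
  msg="JVM exited with code $exit_status; see the stack trace for the real cause."
fi
echo "$msg"
```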
was:
A runtime exception occurs when the Hadoop version is not in A.B.* format, which Hive detects. The JVM then exits with return code 1, which is the same as CLASS_NOT_FOUND_EXIT_STATUS in the start-thriftserver.sh script. This proves that even a runtime exception can make the JVM exit with code 1.
Should we just modify the misleading error message in the script?
The error message in the script:
CLASS_NOT_FOUND_EXIT_STATUS=1
if [[ exit_status -eq CLASS_NOT_FOUND_EXIT_STATUS ]]; then
echo
echo "Failed to load Hive Thrift server main class $CLASS."
echo "You need to build Spark with -Phive."
fi
Below is the exception stack I encountered:
[omm@dc1-rack1-host2 sbin]$ ./start-thriftserver.sh
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:286)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:54)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:332)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:79)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthenticator(HiveUtils.java:368)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:278)
... 9 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator.setConf(HadoopDefaultAuthenticator.java:53)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthenticator(HiveUtils.java:365)
... 10 more
Caused by: java.lang.RuntimeException: Illegal Hadoop Version: V100R001C00 (expected A.B.* format)
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:141)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:113)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:80)
at org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator.setConf(HadoopDefaultAuthenticator.java:51)
... 13 more
Failed to load Hive Thrift server main class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.
You need to build Spark with -Phive.
> Maybe we should not simply make return code 1 equal to CLASS_NOT_FOUND
> ----------------------------------------------------------------------
>
> Key: SPARK-3547
> URL: https://issues.apache.org/jira/browse/SPARK-3547
> Project: Spark
> Issue Type: Improvement
> Components: Deploy
> Reporter: WangTaoTheTonic
> Priority: Minor
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org