Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:35:21 UTC

[jira] [Resolved] (SPARK-10961) Specified metastore 0.12.0 but spark-shell still using metastore classes for 0.13+

     [ https://issues.apache.org/jira/browse/SPARK-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-10961.
----------------------------------
    Resolution: Incomplete

> Specified metastore 0.12.0 but spark-shell still using metastore classes for 0.13+
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-10961
>                 URL: https://issues.apache.org/jira/browse/SPARK-10961
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell, SQL
>    Affects Versions: 1.5.1
>            Reporter: Curtis Wilde
>            Priority: Major
>              Labels: bulk-closed
>
> After setting the metastore version to 0.12.0 in {{spark-defaults.conf}}, spark-shell still uses classes from Hive 0.13 or later. The method {{get_functions}} does not appear in {{org.apache.hadoop.hive.metastore.HiveMetaStore.java}} until 0.13.x.
> From {{spark-defaults.conf}}:
> {{spark.sql.hive.metastore.jars                           /usr/lib/spark/lib/commons-logging-1.1.1.jar:/usr/lib/spark/lib/hadoop-client-2.2.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hadoop-common-2.2.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-common-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-exec-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-metastore-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-serde-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/slf4j-api-1.7.5.jar:/usr/lib/spark/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/spark/lib/hadoop-mapreduce-client-core-2.2.0.2.0.10.0-1.jar:/usr/lib/spark/lib/commons-configuration-1.6.jar:/usr/lib/spark/lib/commons-lang-2.6.jar:/usr/lib/spark/lib/hadoop-auth-2.2.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-jdbc-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-contrib-0.12.0.2.0.10.0-1.jar:/usr/lib/spark/lib/hive-cli-0.12.0.2.0.10.0-1.jar}}
> {{spark.sql.hive.metastore.version                        0.12.0}}
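> As an aside, the same two settings can also be supplied programmatically when building the context. A minimal sketch against the Spark 1.5.x API (the jar list is abbreviated and the app name is made up for illustration):
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.sql.hive.HiveContext
>
> // spark.sql.* entries on the SparkConf are copied into the SQLConf when the
> // HiveContext is constructed, so the metastore client should be built for 0.12.0.
> val conf = new SparkConf()
>   .setAppName("hive-0.12-metastore-check")   // hypothetical app name
>   .set("spark.sql.hive.metastore.version", "0.12.0")
>   .set("spark.sql.hive.metastore.jars",
>        "/usr/lib/spark/lib/hive-metastore-0.12.0.2.0.10.0-1.jar:" +
>        "/usr/lib/spark/lib/hive-exec-0.12.0.2.0.10.0-1.jar")   // abbreviated list
>
> val sc = new SparkContext(conf)
> val sqlContext = new HiveContext(sc)
> {code}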
> Output from spark-shell:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/usr/lib/spark-1.5.1-bin-hadoop2.3/lib/spark-assembly-1.5.1-hadoop2.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/lib/spark-1.5.1-bin-hadoop2.3/lib/spark-examples-1.5.1-hadoop2.3.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 15/10/06 14:12:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.5.1
>       /_/
> Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.8.0_31)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 15/10/06 14:12:51 WARN SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
> 15/10/06 14:12:51 WARN SparkConf:
> SPARK_CLASSPATH was detected (set to '/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/guava-12.0.1.jar').
> This is deprecated in Spark 1.0+.
> Please instead use:
>  - ./spark-submit with --driver-class-path to augment the driver classpath
>  - spark.executor.extraClassPath to augment the executor classpath
> 15/10/06 14:12:51 WARN SparkConf: Setting 'spark.executor.extraClassPath' to '/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/guava-12.0.1.jar' as a work-around.
> 15/10/06 14:12:51 WARN SparkConf: Setting 'spark.driver.extraClassPath' to '/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/guava-12.0.1.jar' as a work-around.
> 15/10/06 14:12:51 WARN SparkConf:
> SPARK_WORKER_INSTANCES was detected (set to '4').
> This is deprecated in Spark 1.0+.
> Please instead use:
>  - ./spark-submit with --num-executors to specify the number of executors
>  - Or set SPARK_EXECUTOR_INSTANCES
>  - spark.executor.instances to configure the number of instances in the spark config.
> 15/10/06 14:12:51 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
> Spark context available as sc.
> 15/10/06 14:12:53 WARN HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
> 15/10/06 14:12:53 WARN HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
> 15/10/06 14:12:53 WARN HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
> 15/10/06 14:12:53 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect.
> org.apache.thrift.TApplicationException: Invalid method name: 'get_functions'
>         at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_functions(ThriftHiveMetastore.java:3256)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_functions(ThriftHiveMetastore.java:3242)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunctions(HiveMetaStoreClient.java:2044)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
>         at com.sun.proxy.$Proxy12.getFunctions(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getFunctions(Hive.java:3244)
>         at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
>         at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
>         at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
>         at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:162)
>         at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:160)
>         at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:391)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:235)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:234)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:234)
>         at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>         at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>         at $line4.$read$$iwC$$iwC.<init>(<console>:9)
>         at $line4.$read$$iwC.<init>(<console>:18)
>         at $line4.$read.<init>(<console>:20)
>         at $line4.$read$.<init>(<console>:24)
>         at $line4.$read$.<clinit>(<console>)
>         at $line4.$eval$.<init>(<console>:7)
>         at $line4.$eval$.<clinit>(<console>)
>         at $line4.$eval.$print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
>         at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
>         at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>         at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>         at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>         at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>         at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>         at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>         at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 15/10/06 14:12:54 WARN Hive: Failed to access metastore. This class should not accessed in runtime.
> org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Invalid method name: 'get_functions'
>         at org.apache.hadoop.hive.ql.metadata.Hive.getFunctions(Hive.java:3246)
>         at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
>         at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
>         at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
>         at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:162)
>         at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:160)
>         at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:391)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:235)
>         at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:234)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:234)
>         at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>         at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>         at $line4.$read$$iwC$$iwC.<init>(<console>:9)
>         at $line4.$read$$iwC.<init>(<console>:18)
>         at $line4.$read.<init>(<console>:20)
>         at $line4.$read$.<init>(<console>:24)
>         at $line4.$read$.<clinit>(<console>)
>         at $line4.$eval$.<init>(<console>:7)
>         at $line4.$eval$.<clinit>(<console>)
>         at $line4.$eval.$print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
>         at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
>         at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>         at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>         at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>         at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>         at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>         at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>         at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'get_functions'
>         at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_functions(ThriftHiveMetastore.java:3256)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_functions(ThriftHiveMetastore.java:3242)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunctions(HiveMetaStoreClient.java:2044)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:483)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
>         at com.sun.proxy.$Proxy12.getFunctions(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getFunctions(Hive.java:3244)
>         ... 67 more
> 15/10/06 14:12:54 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/10/06 14:12:54 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/10/06 14:12:54 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/10/06 14:12:54 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/10/06 14:12:54 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/10/06 14:12:54 WARN HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
> 15/10/06 14:12:54 WARN HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
> 15/10/06 14:12:54 WARN HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
> 15/10/06 14:12:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> SQL context available as sqlContext.
> scala> val props = sys.props("java.class.path")
> props: String = /usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/guava-12.0.1.jar:/usr/lib/spark/conf/:/usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/conf/
> scala> val sysprops = props.split(":")
> sysprops: Array[String] = Array(/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar, /usr/lib/spark/lib/datanucleus-core-3.2.10.jar, /usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar, /usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar, /usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar, /usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar, /usr/lib/spark/lib/guava-12.0.1.jar, /usr/lib/spark/conf/, /usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar, /usr/lib/spark/lib/datanucleus-core-3.2.10.jar, /usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar, /usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar, /usr/lib/spark/conf/)
> scala> sysprops.foreach { println }
> /usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar
> /usr/lib/spark/lib/datanucleus-core-3.2.10.jar
> /usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar
> /usr/lib/spark/lib/spark-1.5.1-yarn-shuffle.jar
> /usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar
> /usr/lib/spark/lib/spark-examples-1.5.1-hadoop2.3.0.jar
> /usr/lib/spark/lib/guava-12.0.1.jar
> /usr/lib/spark/conf/
> /usr/lib/spark/lib/spark-assembly-1.5.1-hadoop2.3.0.jar
> /usr/lib/spark/lib/datanucleus-core-3.2.10.jar
> /usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar
> /usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar
> /usr/lib/spark/conf/
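> A hedged follow-up one could try in the same session is to ask the SQLConf what it thinks the metastore version is and where the metastore client class was actually loaded from (standard Spark 1.5 / JDK calls; output omitted because it depends on the installation):
> {code}
> scala> sqlContext.getConf("spark.sql.hive.metastore.version")
>
> scala> val cls = Class.forName("org.apache.hadoop.hive.metastore.HiveMetaStoreClient")
> scala> cls.getProtectionDomain.getCodeSource.getLocation
> // If the location is the spark-assembly jar rather than hive-metastore-0.12.0*.jar,
> // the shell is still resolving the built-in (0.13+) metastore classes.
> {code}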



