Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/07/14 19:38:21 UTC

[jira] [Resolved] (SPARK-15118) Spark couldn't get Hive properties in hive-site.xml

     [ https://issues.apache.org/jira/browse/SPARK-15118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-15118.
-------------------------------
    Resolution: Invalid

> Spark couldn't get Hive properties in hive-site.xml 
> -----------------------------------------------------
>
>                 Key: SPARK-15118
>                 URL: https://issues.apache.org/jira/browse/SPARK-15118
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Deploy
>    Affects Versions: 1.6.1
>         Environment: hadoop-2.7.1.tar.gz;
> apache-hive-2.0.0-bin.tar.gz; 
> spark-1.6.1-bin-hadoop2.6.tgz; 
> scala-2.11.8.tgz
>            Reporter: eksmile
>
> I have three questions.
> First:
> I've already put "hive-site.xml" in $SPARK_HOME/conf, but when I run spark-sql it tells me "HiveConf of name *** does not exist", repeated many times.
> All of these settings are in "hive-site.xml", so why do these warnings appear?
> I'm not sure whether this is a bug or not.
> Second:
> In the middle of the logs below there is a line: "Failed to get database default, returning NoSuchObjectException".
> Is something wrong there?
> Third:
> In the middle of the logs there is a line: "metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY",
> but at the end of the logs there is a line: "metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL".
> My Hive metastore is MySQL. Is something wrong here?
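For reference, a MySQL-backed metastore is normally configured in hive-site.xml with the javax.jdo.option.* properties; a minimal sketch is below. The hostname, database name, and credentials here are placeholders for illustration, not values taken from this report:

```xml
<configuration>
  <!-- JDBC URL of the MySQL database backing the metastore (placeholder host/db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <!-- JDBC driver class; the connector jar must be on the classpath -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- Placeholder credentials -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>
```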
> spark-env.sh is as follows: 
> export JAVA_HOME=/usr/java/jdk1.8.0_73
> export SCALA_HOME=/home/scala
> export SPARK_MASTER_IP=192.168.124.129
> export SPARK_WORKER_MEMORY=1g
> export HADOOP_CONF_DIR=/usr/hadoop/etc/hadoop
> export HIVE_HOME=/opt/hive
> export HIVE_CONF_DIR=/opt/hive/conf
> export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/hive/lib/mysql-connector-java-5.1.38-bin.jar
> export HADOOP_HOME=/usr/hadoop
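As the SparkConf warning later in the logs notes, SPARK_CLASSPATH is deprecated in Spark 1.x; the JDBC connector jar is usually supplied instead with --driver-class-path or via spark-defaults.conf. A sketch, reusing the jar path from the environment above:

```
# Preferred over exporting SPARK_CLASSPATH (deprecated since Spark 1.0):
bin/spark-sql --master spark://master:7077 \
  --driver-class-path /opt/hive/lib/mysql-connector-java-5.1.38-bin.jar

# Or equivalently, persistent settings in conf/spark-defaults.conf:
# spark.driver.extraClassPath   /opt/hive/lib/mysql-connector-java-5.1.38-bin.jar
# spark.executor.extraClassPath /opt/hive/lib/mysql-connector-java-5.1.38-bin.jar
```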
> Thanks for reading. 
> Here are the logs:
> [yezt@Master spark]$ bin/spark-sql --master spark://master:7077   
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.direct does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.stats does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.client.consistent.splits does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.start does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.ttl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.management.acl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.delegation.token.lifetime does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.strict.checks.large.query does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.min does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.user does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.wait.queue.comparator.class.name does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.am.liveness.heartbeat.interval.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.object.cache.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.location does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.delay.seconds does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.file.cleaner.threads does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.test.fail.compaction does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.service.metrics.class does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.vectorized.execution.reducesink.new.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.max.num.delta does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.attempted does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.initiator.failed.compacts.threshold does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.service.metrics.reporter does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.execution.mode does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.mode does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.threadpool.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.use.spnego does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.service.metrics.file.frequency does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.fastpath does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.file.cleanup.delay.seconds does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.management.rpc.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.tree does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.failed does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.orc.splits.ms.footer.cache.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup.min does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.service.refresh.interval.sec does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.output.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.cache.allow.synthetic.fileid does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.hash.table.inflation.factor does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.hbase.ttl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.enforce.vectorized does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.exec.schema.evolution does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.allow.uber does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.llap.concurrent.queries does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.auth does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.orc.splits.include.fileid does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.communicator.num.threads does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.partitions does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.shuffle.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.response.header.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.conf.internal.variable.list does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductionpercentage does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.retry.limit does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.orc.splits.directory.batch.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.reader.wait does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.max.timeout.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.execution.mode does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.catalog.cache.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.wait.queue.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.cache.entries does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.memory.ttl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.cache.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.clean.until does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.metrics.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.cli.tez.session.async does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.tez.bucket.pruning.compat does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.spnego.principal does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.shuffle.dir.watcher.enabled does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.arena.count does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.communicator.connection.timeout.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.transpose.aggr.join does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.spark.dynamic.partition.pruning.max.data.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggr.stats.invalidator.frequency does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.lrfu does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.conf.hidden.list does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.timedout.txn.reaper.interval does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.max.variance does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.lrfu.lambda does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.point.lookup does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.ssl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.history.retention.succeeded does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.use.fileid.path does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.hashtable.probe.percent does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.reenable.min.timeout.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.support.special.characters.tablename does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.yarn.container.mb does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.task.scheduler.enable.preemption does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.num.executors does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.full does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.connection.class does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.password does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.cache.max.writer.wait does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.request.header.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.webui.max.threads does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.limittranspose.reductiontuples does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.test.rollbacktxn does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.num.schedulable.tasks.per.node does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.acl does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.memory.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.strict.checks.type.safety does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.auto.max.input.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.tez.enable.memory.manager does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.server2.tez.session.lifetime.jitter does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.web.port does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.strict.checks.cartesian.product does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.rpc.num.handlers does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.vcpus.per.instance does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.optimize.partition.columns.separate does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.locality.delay does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.txn.heartbeat.threadpool.size does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
> 16/05/04 16:17:16 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
> 16/05/04 16:17:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 16/05/04 16:17:18 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/04 16:17:18 INFO metastore.ObjectStore: ObjectStore, initialize called
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property datanucleus.schema.validateColumns unknown - will be ignored
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property datanucleus.schema.validateConstraints unknown - will be ignored
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property datanucleus.schema.autoCreateAll unknown - will be ignored
> 16/05/04 16:17:18 INFO DataNucleus.Persistence: Property datanucleus.schema.validateTables unknown - will be ignored
> 16/05/04 16:17:18 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
> 16/05/04 16:17:19 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
> 16/05/04 16:17:21 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
> 16/05/04 16:17:21 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
> 16/05/04 16:17:21 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
> 16/05/04 16:17:21 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
> ......
> ( I deleted these messages; they are the same as above )
> ......
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
> 16/05/04 16:17:22 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
> 16/05/04 16:17:22 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 16/05/04 16:17:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:25 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:25 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:25 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
> 16/05/04 16:17:25 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/04 16:17:25 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
> 16/05/04 16:17:26 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
> 16/05/04 16:17:26 INFO metastore.HiveMetaStore: Added admin role in metastore
> 16/05/04 16:17:26 INFO metastore.HiveMetaStore: Added public role in metastore
> 16/05/04 16:17:26 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
> 16/05/04 16:17:26 INFO metastore.HiveMetaStore: 0: get_all_databases
> 16/05/04 16:17:26 INFO HiveMetaStore.audit: ugi=yezt    ip=unknown-ip-addr      cmd=get_all_databases
> 16/05/04 16:17:26 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
> 16/05/04 16:17:26 INFO HiveMetaStore.audit: ugi=yezt    ip=unknown-ip-addr      cmd=get_functions: db=default pat=*
> 16/05/04 16:17:26 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:27 INFO session.SessionState: Created local directory: /home/yezt/hive/iotmp
> 16/05/04 16:17:28 INFO session.SessionState: Created HDFS directory: /tmp/hive/yezt/abe15f39-7f47-40ae-87e1-42108c2286ee
> 16/05/04 16:17:28 INFO session.SessionState: Created local directory: /home/yezt/hive/iotmp/abe15f39-7f47-40ae-87e1-42108c2286ee
> 16/05/04 16:17:28 INFO session.SessionState: Created HDFS directory: /tmp/hive/yezt/abe15f39-7f47-40ae-87e1-42108c2286ee/_tmp_space.db
> 16/05/04 16:17:28 INFO spark.SparkContext: Running Spark version 1.6.1
> 16/05/04 16:17:28 WARN spark.SparkConf: 
> SPARK_CLASSPATH was detected (set to ':/opt/hive/lib/mysql-connector-java-5.1.38-bin.jar').
> This is deprecated in Spark 1.0+.
> Please instead use:
>  - ./spark-submit with --driver-class-path to augment the driver classpath
>  - spark.executor.extraClassPath to augment the executor classpath
>         
> 16/05/04 16:17:28 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/opt/hive/lib/mysql-connector-java-5.1.38-bin.jar' as a work-around.
> 16/05/04 16:17:28 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/opt/hive/lib/mysql-connector-java-5.1.38-bin.jar' as a work-around.
> 16/05/04 16:17:28 INFO spark.SecurityManager: Changing view acls to: yezt
> 16/05/04 16:17:28 INFO spark.SecurityManager: Changing modify acls to: yezt
> 16/05/04 16:17:28 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yezt); users with modify permissions: Set(yezt)
> 16/05/04 16:17:29 INFO util.Utils: Successfully started service 'sparkDriver' on port 33667.
> 16/05/04 16:17:30 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 16/05/04 16:17:30 INFO Remoting: Starting remoting
> 16/05/04 16:17:30 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.124.129:46614]
> 16/05/04 16:17:30 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 46614.
> 16/05/04 16:17:30 INFO spark.SparkEnv: Registering MapOutputTracker
> 16/05/04 16:17:30 INFO spark.SparkEnv: Registering BlockManagerMaster
> 16/05/04 16:17:30 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-5933a2ad-a70b-4a2a-98fc-143740ea7721
> 16/05/04 16:17:30 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
> 16/05/04 16:17:30 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 16/05/04 16:17:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 16/05/04 16:17:31 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 16/05/04 16:17:31 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
> 16/05/04 16:17:31 INFO ui.SparkUI: Started SparkUI at http://192.168.124.129:4040
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Connecting to master spark://master:7077...
> 16/05/04 16:17:31 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160504161731-0003
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor added: app-20160504161731-0003/0 on worker-20160504160506-192.168.124.129-52461 (192.168.124.129:52461) with 1 cores
> 16/05/04 16:17:31 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160504161731-0003/0 on hostPort 192.168.124.129:52461 with 1 cores, 1024.0 MB RAM
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor added: app-20160504161731-0003/1 on worker-20160504160503-192.168.124.130-60681 (192.168.124.130:60681) with 1 cores
> 16/05/04 16:17:31 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40724.
> 16/05/04 16:17:31 INFO netty.NettyBlockTransferService: Server created on 40724
> 16/05/04 16:17:31 INFO storage.BlockManagerMaster: Trying to register BlockManager
> 16/05/04 16:17:31 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160504161731-0003/1 on hostPort 192.168.124.130:60681 with 1 cores, 1024.0 MB RAM
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor added: app-20160504161731-0003/2 on worker-20160504160504-192.168.124.131-45538 (192.168.124.131:45538) with 1 cores
> 16/05/04 16:17:31 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160504161731-0003/2 on hostPort 192.168.124.131:45538 with 1 cores, 1024.0 MB RAM
> 16/05/04 16:17:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.124.129:40724 with 517.4 MB RAM, BlockManagerId(driver, 192.168.124.129, 40724)
> 16/05/04 16:17:31 INFO storage.BlockManagerMaster: Registered BlockManager
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160504161731-0003/0 is now RUNNING
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160504161731-0003/1 is now RUNNING
> 16/05/04 16:17:31 INFO client.AppClient$ClientEndpoint: Executor updated: app-20160504161731-0003/2 is now RUNNING
> 16/05/04 16:17:32 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
> 16/05/04 16:17:34 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
> 16/05/04 16:17:34 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
> 16/05/04 16:17:34 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
> SET hive.support.sql11.reserved.keywords=false
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
> ......
> ( I deleted these messages; they are the same as above )
> ......
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
> 16/05/04 16:17:35 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
> 16/05/04 16:17:35 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
> 16/05/04 16:17:35 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
> 16/05/04 16:17:35 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
> 16/05/04 16:17:35 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
> 16/05/04 16:17:36 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (Slave01:59932) with ID 1
> 16/05/04 16:17:36 INFO storage.BlockManagerMasterEndpoint: Registering block manager Slave01:40271 with 517.4 MB RAM, BlockManagerId(1, Slave01, 40271)
> 16/05/04 16:17:36 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (Slave02:39350) with ID 2
> 16/05/04 16:17:36 INFO storage.BlockManagerMasterEndpoint: Registering block manager Slave02:47549 with 517.4 MB RAM, BlockManagerId(2, Slave02, 47549)
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
> ......
> ( I deleted these messages; they are the same as above )
> ......
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
> 16/05/04 16:17:37 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
> 16/05/04 16:17:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/04 16:17:38 INFO metastore.ObjectStore: ObjectStore, initialize called
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property datanucleus.schema.validateColumns unknown - will be ignored
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property datanucleus.schema.validateConstraints unknown - will be ignored
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property datanucleus.schema.autoCreateAll unknown - will be ignored
> 16/05/04 16:17:38 INFO DataNucleus.Persistence: Property datanucleus.schema.validateTables unknown - will be ignored
> 16/05/04 16:17:39 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
> 16/05/04 16:17:39 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.metastore.hbase.aggregate.stats.false.positive.probability does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.io.orc.time.counters does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.server2.metrics.enabled does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.timeout.ms does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.client.connect.retry.limit does not exist
> ......
> ( repeated warnings omitted; identical to the lines above )
> ......
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.task.scheduler.node.disable.backoff.factor does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.am.liveness.connection.sleep.between.retries.ms does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.daemon.memory.per.instance.mb does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.msck.path.validation does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.compactor.history.reaper.interval does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
> 16/05/04 16:17:40 WARN conf.HiveConf: HiveConf of name hive.llap.io.allocator.alloc.max does not exist
> 16/05/04 16:17:40 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 16/05/04 16:17:42 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (Master:51517) with ID 0
> 16/05/04 16:17:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager Master:60869 with 517.4 MB RAM, BlockManagerId(0, Master, 60869)
> 16/05/04 16:17:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:43 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
> 16/05/04 16:17:43 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
> 16/05/04 16:17:43 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/04 16:17:44 INFO metastore.HiveMetaStore: Added admin role in metastore
> 16/05/04 16:17:44 INFO metastore.HiveMetaStore: Added public role in metastore
> 16/05/04 16:17:44 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
> 16/05/04 16:17:45 INFO metastore.HiveMetaStore: 0: get_all_databases
> 16/05/04 16:17:45 INFO HiveMetaStore.audit: ugi=yezt    ip=unknown-ip-addr      cmd=get_all_databases
> 16/05/04 16:17:45 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
> 16/05/04 16:17:45 INFO HiveMetaStore.audit: ugi=yezt    ip=unknown-ip-addr      cmd=get_functions: db=default pat=*
> 16/05/04 16:17:45 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
> 16/05/04 16:17:45 INFO session.SessionState: Created HDFS directory: /tmp/hive/yezt/a9749204-8d91-436d-8d67-8c2d8f5d524f
> 16/05/04 16:17:45 INFO session.SessionState: Created local directory: /home/yezt/hive/iotmp/a9749204-8d91-436d-8d67-8c2d8f5d524f
> 16/05/04 16:17:45 INFO session.SessionState: Created HDFS directory: /tmp/hive/yezt/a9749204-8d91-436d-8d67-8c2d8f5d524f/_tmp_space.db
> SET spark.sql.hive.version=1.2.1
> SET spark.sql.hive.version=1.2.1
> spark-sql> 
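
Note the `SET spark.sql.hive.version=1.2.1` lines at the end of the log: Spark 1.6.1 talks to the metastore through a built-in Hive 1.2.1 client, which predates configuration keys introduced in Hive 2.0 (e.g. the `hive.llap.*` family), so each such key in hive-site.xml triggers a "HiveConf of name ... does not exist" warning. A minimal sketch of how one could list the offending keys from a hive-site.xml, using only property names that actually appear in the log above (the sample XML fragment and prefix list are illustrative assumptions, not an exhaustive catalogue of Hive 1.2.1's vocabulary):

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Illustrative hive-site.xml fragment; the first property name is taken
# from the warnings in the log above, the second is a key Hive 1.2.1 knows.
HIVE_SITE = """<configuration>
  <property><name>hive.msck.path.validation</name><value>ignore</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://Master:9083</value></property>
</configuration>"""

# Prefixes of keys the embedded Hive 1.2.1 client warned about in this log;
# a real Hive 2.0 hive-site.xml may contain many more unknown keys.
UNKNOWN_PREFIXES = ("hive.llap.", "hive.msck.", "hive.compactor.history.")

def keys_unknown_to_hive121(xml_text):
    """Return property names that would trigger 'HiveConf of name ... does not exist'."""
    root = ET.parse(StringIO(xml_text)).getroot()
    names = [p.findtext("name") for p in root.iter("property")]
    return [n for n in names if n and n.startswith(UNKNOWN_PREFIXES)]

print(keys_unknown_to_hive121(HIVE_SITE))  # ['hive.msck.path.validation']
```

The warnings are harmless for keys the old client does not need; they do not by themselves indicate that hive-site.xml was ignored.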


