Posted to issues@kylin.apache.org by "yukunpeng (Jira)" <ji...@apache.org> on 2021/03/31 02:20:00 UTC

[jira] [Commented] (KYLIN-4949) Spark build Cube java.lang.NullPointerException

    [ https://issues.apache.org/jira/browse/KYLIN-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17311970#comment-17311970 ] 

yukunpeng commented on KYLIN-4949:
----------------------------------

The command is: 
export HADOOP_CONF_DIR=/etc/hadoop/conf && /usr/hdp/current/spark2-client//bin/spark-submit \
  --class org.apache.kylin.common.util.SparkEntry \
  --name "Build Dimension Dictionary with Spark" \
  --conf spark.executor.cores=1 \
  --conf spark.hadoop.yarn.timeline-service.enabled=false \
  --conf spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec \
  --conf spark.executor.extraJavaOptions=-Dhdp.version=current \
  --conf spark.master=yarn \
  --conf spark.hadoop.mapreduce.output.fileoutputformat.compress=true \
  --conf spark.executor.instances=40 \
  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=current \
  --conf spark.executor.memory=4G \
  --conf spark.yarn.queue=default \
  --conf spark.submit.deployMode=cluster \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.network.timeout=600 \
  --conf spark.hadoop.dfs.replication=2 \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.dynamicAllocation.executorIdleTimeout=300 \
  --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history \
  --conf spark.driver.memory=2G \
  --conf spark.driver.extraJavaOptions=-Dhdp.version=current \
  --conf spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec \
  --conf spark.eventLog.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.eventLog.dir=hdfs:///kylin/spark-history \
  --conf spark.dynamicAllocation.maxExecutors=1000 \
  --conf spark.dynamicAllocation.enabled=true \
  --jars /usr/local/apache-kylin-3.1.1-bin-hbase1x/lib/kylin-job-3.1.1.jar \
  /usr/local/apache-kylin-3.1.1-bin-hbase1x/lib/kylin-job-3.1.1.jar \
  -className org.apache.kylin.engine.spark.SparkBuildDictionary \
  -input hdfs://hzwtest/kylin/kylin_metadata/kylin-429f7533-a1a4-bcbb-8228-b5472f45c409/abc/fact_distinct_columns \
  -cubingJobId 429f7533-a1a4-bcbb-8228-b5472f45c409 \
  -counterOutput hdfs://hzwtest/kylin/kylin_metadata/kylin-429f7533-a1a4-bcbb-8228-b5472f45c409/abc/counter \
  -dictPath hdfs://hzwtest/kylin/kylin_metadata/kylin-429f7533-a1a4-bcbb-8228-b5472f45c409/abc/dict \
  -segmentId fd220fc9-ec1e-959b-356e-30d29a91c1d9 \
  -metaUrl kylin_metadata@hdfs,path=hdfs://hzwtest/kylin/kylin_metadata/kylin-429f7533-a1a4-bcbb-8228-b5472f45c409/abc/metadata \
  -cubename abc

> Spark build Cube java.lang.NullPointerException
> -----------------------------------------------
>
>                 Key: KYLIN-4949
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4949
>             Project: Kylin
>          Issue Type: Bug
>    Affects Versions: v3.1.1
>            Reporter: yukunpeng
>            Priority: Major
>
> 21/03/31 09:40:37 INFO Client: 
>  client token: N/A
>  diagnostics: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkBuildDictionary. Root cause: Job aborted due to stage failure: Task 9 in stage 0.0 failed 4 times, most recent failure: Lost task 9.3 in stage 0.0 (TID 12, bigdata-platform-test-04, executor 4): java.lang.NullPointerException
>  at org.apache.kylin.common.KylinConfig.getManager(KylinConfig.java:462)
>  at org.apache.kylin.cube.CubeManager.getInstance(CubeManager.java:106)
>  at org.apache.kylin.engine.spark.SparkBuildDictionary$DimensionDictsBuildFunction.init(SparkBuildDictionary.java:246)
>  at org.apache.kylin.engine.spark.SparkBuildDictionary$DimensionDictsBuildFunction.call(SparkBuildDictionary.java:257)
>  at org.apache.kylin.engine.spark.SparkBuildDictionary$DimensionDictsBuildFunction.call(SparkBuildDictionary.java:219)
>  at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
>  at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
>  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
>  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
>  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
>  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
>  at scala.collection.AbstractIterator.to(Iterator.scala:1336)
>  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
>  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
>  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
>  at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
>  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
>  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
>  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
>  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>  at org.apache.spark.scheduler.Task.run(Task.scala:109)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
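
The trace shows the NPE surfacing inside KylinConfig.getManager when DimensionDictsBuildFunction lazily initializes on a Spark executor, which suggests the executor-side KylinConfig was never set up (e.g. the metadata at -metaUrl was not reachable from that JVM). The sketch below is NOT Kylin's actual code; ManagerCache, getManager, and NpeSketch are hypothetical names used only to illustrate how a per-JVM manager cache keyed by a config object fails with a NullPointerException when the config reference is null:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a per-JVM manager cache keyed by a config
// object (similar in spirit to KylinConfig.getManager()).
class ManagerCache {
    private static final ConcurrentHashMap<Object, Object> CACHE =
            new ConcurrentHashMap<>();

    static Object getManager(Object config) {
        // ConcurrentHashMap rejects null keys, so an uninitialized
        // (null) config surfaces here as a NullPointerException,
        // analogous to the executor-side failure in the stack trace.
        return CACHE.computeIfAbsent(config, c -> new Object());
    }
}

public class NpeSketch {
    public static void main(String[] args) {
        boolean npe = false;
        try {
            // Driver initialized its config; this executor JVM did not.
            ManagerCache.getManager(null);
        } catch (NullPointerException e) {
            npe = true;
        }
        System.out.println("NPE on null config: " + npe);
    }
}
```

Because each executor is a fresh JVM, the driver having a valid config does not help; every executor must be able to reconstruct the config from the serialized job metadata, which is why checking that the -metaUrl HDFS path is readable from the worker nodes is a reasonable first step.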



--
This message was sent by Atlassian Jira
(v8.3.4#803005)