Posted to issues@kylin.apache.org by "Dayue Gao (JIRA)" <ji...@apache.org> on 2015/09/18 04:36:04 UTC

[jira] [Commented] (KYLIN-953) when running the cube job at "Convert Cuboid Data to HFile" step, an error is throw

    [ https://issues.apache.org/jira/browse/KYLIN-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804879#comment-14804879 ] 

Dayue Gao commented on KYLIN-953:
---------------------------------

[~qhzhou]

[HBASE-13010|https://issues.apache.org/jira/browse/HBASE-13010] changed the partitioner's path from "/tmp" to ${hadoop.tmp.dir}; this change is included in 0.98.11.

[HBASE-13625|https://issues.apache.org/jira/browse/HBASE-13625] further changed it to conf.get("hadoop.tmp.dir"), introducing a new config key "hadoop.tmp.dir" BUT WITHOUT any default value; this change is included in 0.98.13.
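The failure mechanism can be sketched without the Hadoop jars. The stand-in classes below are simplified assumptions (Hadoop's real Configuration and Path are more involved), but the null check mirrors the one in org.apache.hadoop.fs.Path.checkPathArg that produces the exception in the stack trace:

```java
import java.util.HashMap;
import java.util.Map;

public class NullTmpDirSketch {
    // Stand-in for Hadoop's Configuration: conf.get(key) returns null for an unset key
    static final Map<String, String> conf = new HashMap<>();

    // Mirrors the null check in org.apache.hadoop.fs.Path.checkPathArg
    static String checkPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException("Can not create a Path from a null string");
        }
        return path;
    }

    public static void main(String[] args) {
        // After HBASE-13625, the key is read with no default, so an unset key yields null
        String tmpDir = conf.get("hadoop.tmp.dir");
        try {
            checkPathArg(tmpDir); // new Path(null) fails here
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        // Once the key is set (as in kylin_job_conf.xml), the Path is created normally
        conf.put("hadoop.tmp.dir", "/tmp/hadoop");
        System.out.println(checkPathArg(conf.get("hadoop.tmp.dir")));
    }
}
```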

To summarize, if a user changes Kylin's HBase dependency "hbase-hadoop2.version" from 0.98.8-hadoop2 to 0.98.13-hadoop2, he/she must set the config "hadoop.tmp.dir" in kylin_job_conf.xml.
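For example, a minimal property entry in kylin_job_conf.xml would look like the following (the value /tmp/hadoop is only an illustration; pick a directory appropriate for your cluster):

```xml
<property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop</value>
    <description>Base temporary directory; required because HBase 0.98.13
    reads this key with no default when writing the partitioner file.</description>
</property>
```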

> when running the cube job at "Convert Cuboid Data to HFile" step, an error is throw
> -----------------------------------------------------------------------------------
>
>                 Key: KYLIN-953
>                 URL: https://issues.apache.org/jira/browse/KYLIN-953
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: v0.7.2
>            Reporter: JerryShao
>            Assignee: ZhouQianhao
>
> when the cube job runs at the "Convert Cuboid Data to HFile" step, it throws an error like the one below:
> [pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR][org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)] - error in CubeHFileJob
> java.lang.IllegalArgumentException: Can not create a Path from a null string
>         at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:135)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:89)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:545)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:394)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:88)
>         at org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:89)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:133)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)