Posted to issues@kylin.apache.org by "JerryShao (JIRA)" <ji...@apache.org> on 2015/08/23 10:43:45 UTC

[jira] [Comment Edited] (KYLIN-953) when running the cube job at "Convert Cuboid Data to HFile" step, an error is thrown

    [ https://issues.apache.org/jira/browse/KYLIN-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706292#comment-14706292 ] 

JerryShao edited comment on KYLIN-953 at 8/23/15 8:43 AM:
----------------------------------------------------------

My environment is as follows:
   hadoop 2.6.0
   hive 0.14.0
   hbase 0.98
   zookeeper 3.4.6
These components are installed standalone, not via the sandbox.

I also traced the source code and found that org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:545) is
Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
which cannot possibly produce a null value, and I cannot find any environmental problem either.
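For reference, a minimal sketch (a simplified stand-in, not the actual Hadoop source) of the null check in org.apache.hadoop.fs.Path illustrates why the quoted line cannot be the origin of the null: "partitions_" + UUID.randomUUID() always yields a non-null, non-empty string, so the null string must reach Path's constructor through some other value.

```java
import java.util.UUID;

// Simplified stand-in for org.apache.hadoop.fs.Path.checkPathArg:
// it rejects null or empty strings with IllegalArgumentException,
// producing exactly the message seen in the stack trace below.
class PathSketch {
    static String checkPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException("Can not create a Path from a null string");
        }
        if (path.isEmpty()) {
            throw new IllegalArgumentException("Can not create a Path from an empty string");
        }
        return path;
    }

    public static void main(String[] args) {
        // The child argument from the quoted Kylin line is always non-null:
        String child = "partitions_" + UUID.randomUUID();
        checkPathArg(child); // never throws

        // A null argument, however, reproduces the reported exception:
        try {
            checkPathArg(null);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```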


was (Author: jerryshao2015):
My environment is as follows:
   hadoop 2.6.0
   hive 3.4.6
   hbase 0.98
   zookeeper 3.4.6
These components are installed standalone, not via the sandbox.

I also traced the source code and found that org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:545) is
Path partitionsPath = new Path("/tmp", "partitions_" + UUID.randomUUID());
which cannot possibly produce a null value, and I cannot find any environmental problem either.

> when running the cube job at "Convert Cuboid Data to HFile" step, an error is thrown
> -----------------------------------------------------------------------------------
>
>                 Key: KYLIN-953
>                 URL: https://issues.apache.org/jira/browse/KYLIN-953
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: v0.7.2
>            Reporter: JerryShao
>            Assignee: ZhouQianhao
>
> When the cube job runs at the "Convert Cuboid Data to HFile" step, it throws an error like below:
> [pool-5-thread-8]:[2015-08-18 09:43:15,854][ERROR][org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:98)] - error in CubeHFileJob
> java.lang.IllegalArgumentException: Can not create a Path from a null string
>         at org.apache.hadoop.fs.Path.checkPathArg(Path.java:123)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:135)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:89)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:545)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:394)
>         at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:88)
>         at org.apache.kylin.job.hadoop.cube.CubeHFileJob.run(CubeHFileJob.java:89)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at org.apache.kylin.job.common.MapReduceExecutable.doWork(MapReduceExecutable.java:112)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:133)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
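Since the quoted source line uses only a string literal and a fresh UUID, one plausible failure mode (an assumption, not something confirmed by this report) is that the HFileOutputFormat2 class actually loaded on the cluster differs from the one being read, and the code at that line pulls a directory name from an unset configuration key. A hedged sketch of that mechanism, with a hypothetical key name:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only: if the class on the cluster's classpath
// builds its partitions path from a configuration key (the key name below is
// invented for this sketch), an unset key returns null and triggers the same
// "Can not create a Path from a null string" exception as in the trace.
class NullConfigSketch {
    static String pathFromConf(Map<String, String> conf, String key) {
        String dir = conf.get(key); // null when the key is unset
        if (dir == null) {
            throw new IllegalArgumentException("Can not create a Path from a null string");
        }
        return dir + "/partitions";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>(); // key never set
        try {
            pathFromConf(conf, "hypothetical.tmp.dir");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If this were the mechanism, checking which HBase jar the job actually loads (and whether its HFileOutputFormat2.java:545 matches the source being read) would be the next diagnostic step.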



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)