Posted to dev@kylin.apache.org by yu feng <ol...@gmail.com> on 2016/01/19 15:23:42 UTC

Convert Cuboid Data to HFile failed when hbase in different HDFS

The step 'Convert Cuboid Data to HFile' failed to execute; the error log is:


java.io.IOException: Failed to run job : Unable to map logical nameservice URI 'hdfs://A' to a NameNode. Local configuration does not have a failover proxy provider configured.
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:129)
    at org.apache.kylin.storage.hbase.steps.CubeHFileJob.run(CubeHFileJob.java:93)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:119)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:124)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

I think this is because the node managers in the Hadoop cluster cannot
resolve hdfs://A from their configuration. So I have to transform the path
hdfs://A/path/to/hfile to hdfs://namenode_ip:port/path/to/hfile before
executing this step, and that works for me.
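
For illustration, here is a minimal sketch of that rewrite; the class name
is hypothetical, and the nameservice 'A' and NameNode host:port are
placeholders from my setup, not Kylin code:

import org.apache.hadoop.fs.Path;

// Hypothetical helper sketching the workaround above: replace the logical
// nameservice prefix with an explicit NameNode address before the
// 'Convert Cuboid Data to HFile' step runs.
public class NameServiceRewriter {
    // Placeholders: substitute your own nameservice and NameNode host:port.
    private static final String LOGICAL_PREFIX = "hdfs://A";
    private static final String PHYSICAL_PREFIX = "hdfs://namenode_ip:8020";

    public static Path rewrite(Path path) {
        String s = path.toString();
        if (s.startsWith(LOGICAL_PREFIX)) {
            return new Path(PHYSICAL_PREFIX + s.substring(LOGICAL_PREFIX.length()));
        }
        return path; // already explicit or unrelated, leave untouched
    }
}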

I have created a JIRA here: https://issues.apache.org/jira/browse/KYLIN-1280

If you have a better solution, please reply.

Re: Convert Cuboid Data to HFile failed when hbase in different HDFS

Posted by yu feng <ol...@gmail.com>.
This problem is caused by deploying HBase on a separate cluster from
Hadoop; our Hadoop cluster cannot recognize the HDFS nameservice that
HBase depends on.
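
As a sketch, making the nameservice known on the Hadoop side would mean
setting the standard HDFS HA client properties that the error message
refers to, roughly as below; the class name, NameNode ids, and host:port
values are placeholders, and these settings normally live in the Hadoop
cluster's hdfs-site.xml rather than in code:

import org.apache.hadoop.conf.Configuration;

// Sketch only: the standard HDFS HA client settings for nameservice 'A'.
// Normally these belong in the Hadoop cluster's hdfs-site.xml; the
// NameNode ids and host:port values below are placeholders.
public class NameServiceClientConfig {
    public static Configuration withNameserviceA(Configuration conf) {
        conf.set("dfs.nameservices", "A");
        conf.set("dfs.ha.namenodes.A", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.A.nn1", "namenode1_ip:8020");
        conf.set("dfs.namenode.rpc-address.A.nn2", "namenode2_ip:8020");
        conf.set("dfs.client.failover.proxy.provider.A",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        return conf;
    }
}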

2016-01-22 15:55 GMT+08:00 Li Yang <li...@apache.org>:

> Sounds the same as https://issues.apache.org/jira/browse/KYLIN-957

Re: Convert Cuboid Data to HFile failed when hbase in different HDFS

Posted by Li Yang <li...@apache.org>.
Sounds the same as https://issues.apache.org/jira/browse/KYLIN-957

