Posted to dev@kylin.apache.org by "fengYu (JIRA)" <ji...@apache.org> on 2016/01/04 07:21:39 UTC
[jira] [Created] (KYLIN-1280) Convert Cuboid Data to HFile failed when hbase in different HDFS
fengYu created KYLIN-1280:
-----------------------------
Summary: Convert Cuboid Data to HFile failed when hbase in different HDFS
Key: KYLIN-1280
URL: https://issues.apache.org/jira/browse/KYLIN-1280
Project: Kylin
Issue Type: Bug
Affects Versions: 2.0
Reporter: fengYu
I deployed kylin-2.0 with an HBase cluster that relies on a different HDFS than the Hadoop cluster, so I configured the property 'kylin.hbase.cluster.fs' = hdfs://A. This nameservice is different from 'fs.defaultFS' of the Hadoop cluster, which is hdfs://B.
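For reference, a minimal sketch of the two settings involved (the nameservices A and B are the placeholders used in this report):

    # kylin.properties on the Kylin server: HBase lives on its own HDFS
    kylin.hbase.cluster.fs=hdfs://A

    <!-- core-site.xml on the Hadoop (MapReduce) cluster -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://B</value>
    </property>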
The step 'Convert Cuboid Data to HFile' fails with the following error:
java.io.IOException: Failed to run job : Unable to map logical nameservice URI 'hdfs://A' to a NameNode. Local configuration does not have a failover proxy provider configured.
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:129)
at org.apache.kylin.storage.hbase.steps.CubeHFileJob.run(CubeHFileJob.java:93)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:119)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I think this is because the NodeManagers in the Hadoop cluster cannot resolve hdfs://A from their own configuration. As a workaround, I transform the path hdfs://A/path/to/hfile to hdfs://namenode_ip:port/path/to/hfile before executing this step, and it works for me.
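For illustration only (this is not the attached patch), a minimal Java sketch of that kind of rewrite, assuming a standard HDFS HA configuration where dfs.namenode.rpc-address.<nameservice>.<nnid> maps the logical nameservice to a concrete NameNode address; the class name and the nn id "nn1" are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class NameServiceRewrite {
        /**
         * Rewrites a path under a logical nameservice (e.g. hdfs://A/path/to/hfile)
         * to use a concrete NameNode address (e.g. hdfs://namenode_ip:port/path/to/hfile),
         * so that nodes without the failover proxy provider config can still resolve it.
         */
        public static Path toConcreteNameNode(Configuration conf, Path hfilePath) {
            URI uri = hfilePath.toUri();             // e.g. hdfs://A/path/to/hfile
            String nameservice = uri.getAuthority(); // "A"
            // Standard HDFS HA key, e.g. dfs.namenode.rpc-address.A.nn1 = namenode_ip:port
            String rpcAddress = conf.get("dfs.namenode.rpc-address." + nameservice + ".nn1");
            if (rpcAddress == null) {
                return hfilePath; // unknown nameservice; leave the path unchanged
            }
            return new Path("hdfs://" + rpcAddress + uri.getPath());
        }
    }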
Here is my patch.