Posted to dev@hbase.apache.org by "Lars Hofhansl (JIRA)" <ji...@apache.org> on 2013/10/28 05:56:32 UTC
[jira] [Reopened] (HBASE-9825) LoadIncrementalHFiles fails to load from remote cluster in hadoop 2
[ https://issues.apache.org/jira/browse/HBASE-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lars Hofhansl reopened HBASE-9825:
----------------------------------
> LoadIncrementalHFiles fails to load from remote cluster in hadoop 2
> -------------------------------------------------------------------
>
> Key: HBASE-9825
> URL: https://issues.apache.org/jira/browse/HBASE-9825
> Project: HBase
> Issue Type: Bug
> Components: hadoop2
> Affects Versions: 0.94.12
> Reporter: Jerry He
> Assignee: Ted Yu
> Attachments: HBASE-9825-0.94-only.patch
>
>
> Running on Hadoop 2, LoadIncrementalHFiles throws the following exception when loading from a remote cluster.
> {code}
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@455e455e, java.io.IOException: java.io.IOException: java.lang.UnsupportedOperationException: Immutable Configuration
> at org.apache.hadoop.hbase.regionserver.CompoundConfiguration.setClass(CompoundConfiguration.java:516)
> at org.apache.hadoop.ipc.RPC.setProtocolEngine(RPC.java:195)
> at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:250)
> at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:169)
> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:130)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:482)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:445)
> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2429)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2463)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2445)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:363)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:283)
> at org.apache.hadoop.hbase.regionserver.Store.assertBulkLoadHFileOk(Store.java:571)
> at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3689)
> at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3637)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFiles(HRegionServer.java:2939)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> at java.lang.reflect.Method.invoke(Method.java:611)
> at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
> at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:186)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.tryAtomicRegionLoad(LoadIncrementalHFiles.java:567)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:317)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:315)
> {code}
> This does not happen when loading from the same FileSystem.
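Reading the trace bottom-up: Store.assertBulkLoadHFileOk resolves the HFile's FileSystem via Path.getFileSystem, passing in the region server's CompoundConfiguration. When the HFile lives on the same cluster, the FileSystem cache returns an existing instance; for a remote cluster a new DFSClient must be built, and RPC.setProtocolEngine then tries to write into that configuration, which CompoundConfiguration.setClass forbids. A minimal sketch of this failure mode, using hypothetical stand-in classes (SimpleConf, ImmutableConf, setProtocolEngine are illustrations, not the real Hadoop/HBase APIs), together with the obvious copy-into-a-mutable-Configuration workaround:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a plain, mutable Hadoop Configuration.
class SimpleConf {
    protected final Map<String, String> props = new HashMap<>();
    public void set(String key, String value) { props.put(key, value); }
    public String get(String key) { return props.get(key); }
    public Map<String, String> snapshot() { return new HashMap<>(props); }
}

// Read-only view, like HBase's CompoundConfiguration: any write throws.
class ImmutableConf extends SimpleConf {
    ImmutableConf(SimpleConf base) { props.putAll(base.snapshot()); }
    @Override
    public void set(String key, String value) {
        throw new UnsupportedOperationException("Immutable Configuration");
    }
}

public class ImmutableConfDemo {
    // Stand-in for RPC.setProtocolEngine, which mutates the conf it is given.
    static void setProtocolEngine(SimpleConf conf) {
        conf.set("rpc.engine.ClientProtocol", "ProtobufRpcEngine");
    }

    public static void main(String[] args) {
        SimpleConf serverConf = new SimpleConf();
        serverConf.set("fs.defaultFS", "hdfs://remote-cluster:8020");
        ImmutableConf regionConf = new ImmutableConf(serverConf);

        try {
            // Building a client for a remote FileSystem ends up here and fails:
            setProtocolEngine(regionConf);
        } catch (UnsupportedOperationException e) {
            System.out.println("caught: " + e.getMessage());
        }

        // Workaround sketch: copy into a fresh, mutable conf before use.
        SimpleConf copy = new SimpleConf();
        regionConf.snapshot().forEach(copy::set);
        setProtocolEngine(copy); // no exception on the mutable copy
    }
}
```

This also explains the last observation: a same-cluster load hits the FileSystem cache and never constructs a new DFSClient, so the immutable view is never written to.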
--
This message was sent by Atlassian JIRA
(v6.1#6144)