Posted to hdfs-dev@hadoop.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2013/05/25 17:57:21 UTC
[jira] [Resolved] (HDFS-4853) Handle unresolved InetAddress for favored nodes in DFSClient#create()
[ https://issues.apache.org/jira/browse/HDFS-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ted Yu resolved HDFS-4853.
--------------------------
Resolution: Fixed
Duplicate of HDFS-4827
> Handle unresolved InetAddress for favored nodes in DFSClient#create()
> ---------------------------------------------------------------------
>
> Key: HDFS-4853
> URL: https://issues.apache.org/jira/browse/HDFS-4853
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.0.5-beta
> Reporter: Ted Yu
> Priority: Minor
>
> The underlying API is:
> {code}
> public DFSOutputStream create(String src,
>     FsPermission permission,
>     EnumSet<CreateFlag> flag,
>     boolean createParent,
>     short replication,
>     long blockSize,
>     Progressable progress,
>     int buffersize,
>     ChecksumOpt checksumOpt,
>     InetSocketAddress[] favoredNodes) throws IOException {
> {code}
> Here is related code:
> {code}
> favoredNodeStrs[i] =
>     favoredNodes[i].getAddress().getHostAddress() + ":"
>         + favoredNodes[i].getPort();
> {code}
> I observed the following stack trace in HBase unit test:
> {code}
> testFavoredNodes(org.apache.hadoop.hbase.regionserver.TestRegionFavoredNodes) Time elapsed: 0.106 sec <<< ERROR!
> org.apache.hadoop.hbase.exceptions.DroppedSnapshotException: region: table,rrr,1369355298031.1fdb1b446b02b497f0869a08adad7745.
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1568)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1429)
> at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1347)
> at org.apache.hadoop.hbase.MiniHBaseCluster.flushcache(MiniHBaseCluster.java:531)
> at org.apache.hadoop.hbase.HBaseTestingUtility.flush(HBaseTestingUtility.java:961)
> at org.apache.hadoop.hbase.regionserver.TestRegionFavoredNodes.testFavoredNodes(TestRegionFavoredNodes.java:132)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runners.Suite.runChild(Suite.java:127)
> at org.junit.runners.Suite.runChild(Suite.java:26)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: java.lang.NullPointerException
> at org.apache.hadoop.hbase.util.FSUtils.create(FSUtils.java:293)
> at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.createOutputStream(AbstractHFileWriter.java:268)
> at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:427)
> at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:791)
> at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.<init>(StoreFile.java:733)
> at org.apache.hadoop.hbase.regionserver.StoreFile$WriterBuilder.build(StoreFile.java:671)
> at org.apache.hadoop.hbase.regionserver.HStore.createWriterInTmp(HStore.java:799)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:704)
> at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1813)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1543)
> ... 33 more
> Caused by: java.lang.NullPointerException
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1306)
> at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:283)
> {code}
> It looks like one of the favored nodes was unresolved, so favoredNodes[i].getAddress() returned null and the subsequent getHostAddress() call threw the NullPointerException.
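A minimal sketch of the failure mode and a defensive formatting helper. This is not the actual HDFS-4827 patch; the class and method names below are hypothetical. It relies on the documented behavior that InetSocketAddress.getAddress() returns null for an unresolved address, and that getHostString() returns the host literal without triggering resolution:

```java
import java.net.InetSocketAddress;

public class FavoredNodeFormat {
    /** Formats a favored node as "host:port" without dereferencing a possibly-null InetAddress. */
    static String format(InetSocketAddress node) {
        if (node.isUnresolved()) {
            // getAddress() returns null for an unresolved address, which is
            // exactly the NPE path in DFSClient#create(); fall back to the
            // literal host string instead.
            return node.getHostString() + ":" + node.getPort();
        }
        return node.getAddress().getHostAddress() + ":" + node.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved never attempts DNS resolution, so getAddress() is null
        InetSocketAddress unresolved =
                InetSocketAddress.createUnresolved("datanode1.example", 50010);
        System.out.println(unresolved.getAddress() == null);  // the null behind the NPE
        System.out.println(format(unresolved));
    }
}
```

An alternative, as the issue title suggests, would be to reject unresolved favored nodes up front in DFSClient#create() with a clear IOException rather than letting the null propagate.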
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira