Posted to notifications@accumulo.apache.org by "Josh Elser (JIRA)" <ji...@apache.org> on 2014/01/08 23:14:51 UTC

[jira] [Resolved] (ACCUMULO-1895) FILE_READ table error with HDFS trash enabled and user's home directory nonexistent

     [ https://issues.apache.org/jira/browse/ACCUMULO-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser resolved ACCUMULO-1895.
----------------------------------

       Resolution: Invalid
    Fix Version/s:     (was: 1.5.1)
                       (was: 1.6.0)

Looked through the compaction code again. Looks like we're doing the right thing. I'm guessing the READ failures, which are what previously threw me off, came from a new flush file being created after a recovery while a scan was still open against the old file?

Not positive, but either way, I couldn't find anything we were doing wrong.
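
For reference, the MajC failure in the first log below boils down to the foobar user having no HDFS home directory and no permission to create one: TrashPolicyDefault tries to mkdir /user/foobar/.Trash/Current/... and /user is owned hdfs:hdfs drwxr-xr-x. A minimal workaround sketch, assuming the user/group names and path from the log (run as the HDFS superuser, using only the standard FileSystem API):

{noformat}
// Sketch of a workaround, assuming the "foobar" user and path from the log below:
// pre-create the Accumulo user's HDFS home directory as the HDFS superuser so
// that TrashPolicyDefault can create .Trash/Current under it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateAccumuloUserHome {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();          // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);              // run this as the HDFS superuser
    Path home = new Path("/user/foobar");              // the missing home directory from the log
    fs.mkdirs(home, new FsPermission((short) 0755));   // create it
    fs.setOwner(home, "foobar", "foobar");             // chown it to the accumulo user
  }
}
{noformat}

The equivalent `hdfs dfs -mkdir /user/foobar` plus `hdfs dfs -chown foobar /user/foobar` as the superuser should avoid the trash error, though that only sidesteps the symptom the report describes.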

> FILE_READ table error with HDFS trash enabled and user's home directory nonexistent
> -----------------------------------------------------------------------------------
>
>                 Key: ACCUMULO-1895
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1895
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>    Affects Versions: 1.5.0
>         Environment: Apache Hadoop-2.2.0, Centos 6.4
>            Reporter: Josh Elser
>
> Found the following odd situation. 
> # Create a new user for Accumulo to run as; HDFS is running with trash enabled and permissions enforced. The accumulo user is *not* in the HDFS superusergroup.
> # As the HDFS superuser, create the directory configured as instance.dfs.dir in Accumulo and chown it to the accumulo user
> # Run `accumulo init` successfully
> # Run `bin/start-all.sh`
> # Wait, or run `compact -t !METADATA`
> # FILE_READ error on !METADATA
> {noformat}
> 2013-11-14 17:09:01,637 [tabletserver.TabletServer] DEBUG: Got flush message from user: !SYSTEM
> 2013-11-14 17:09:01,706 [tabletserver.TabletServer] DEBUG: ScanSess tid 192.168.56.170:43295 !0 5 entries in 0.02 secs, nbTimes = [22 22 22.00 1]
> 2013-11-14 17:09:01,800 [tabletserver.NativeMap] DEBUG: Allocated native map 0x0000000002835310
> 2013-11-14 17:09:01,804 [tabletserver.Tablet] DEBUG: MinC initiate lock 0.00 secs
> 2013-11-14 17:09:01,892 [tabletserver.TabletServer] DEBUG: ScanSess tid 192.168.56.170:43295 !0 5 entries in 0.03 secs, nbTimes = [34 34 34.00 1]
> 2013-11-14 17:09:01,936 [tabletserver.LargestFirstMemoryManager] DEBUG: BEFORE compactionThreshold = 0.500 maxObserved = 1,364
> 2013-11-14 17:09:01,937 [tabletserver.LargestFirstMemoryManager] DEBUG: AFTER compactionThreshold = 0.550
> 2013-11-14 17:09:02,005 [tabletserver.TabletServer] DEBUG: Got flush message from user: !SYSTEM
> 2013-11-14 17:09:02,026 [tabletserver.MinorCompactor] DEBUG: Begin minor compaction /foobar/tables/!0/root_tablet/F0000000.rf_tmp !0;!0<<
> 2013-11-14 17:09:02,053 [tabletserver.TabletServer] DEBUG: Got flush message from user: !SYSTEM
> 2013-11-14 17:09:02,063 [Configuration.deprecation] INFO : dfs.block.size is deprecated. Instead, use dfs.blocksize
> 2013-11-14 17:09:02,066 [tabletserver.TabletServer] DEBUG: ScanSess tid 192.168.56.170:43295 !0 7 entries in 0.05 secs, nbTimes = [47 47 47.00 1]
> 2013-11-14 17:09:02,571 [tabletserver.TabletServer] DEBUG: ScanSess tid 192.168.56.170:43909 !0 0 entries in 0.02 secs, nbTimes = [3 3 3.00 1]
> 2013-11-14 17:09:02,954 [tabletserver.Compactor] DEBUG: Compaction !0;!0<< 18 read | 4 written |     87 entries/sec |  0.206 secs
> 2013-11-14 17:09:03,067 [tabletserver.Tablet] DEBUG: Logs for memory compacted: !0;!0<< 192.168.56.170+9998/8b43794d-0400-405b-a5f4-690017787063
> 2013-11-14 17:09:03,070 [tabletserver.Tablet] DEBUG: Logs for current memory: !0;!0<< 192.168.56.170+9998/8b43794d-0400-405b-a5f4-690017787063
> 2013-11-14 17:09:03,082 [log.TabletServerLogger] DEBUG:  wrote MinC finish  8: writeTime:9ms
> 2013-11-14 17:09:03,084 [tabletserver.Tablet] TABLET_HIST: !0;!0<< MinC [memory] -> /root_tablet/F0000000.rf
> 2013-11-14 17:09:03,086 [tabletserver.Tablet] DEBUG: MinC finish lock 0.00 secs !0;!0<<
> 2013-11-14 17:09:03,090 [tabletserver.NativeMap] DEBUG: Deallocating native map 0x000000000205b210
> 2013-11-14 17:09:03,161 [tabletserver.Tablet] DEBUG: MajC initiate lock 0.01 secs, wait 0.00 secs
> 2013-11-14 17:09:03,212 [tabletserver.Tablet] DEBUG: Starting MajC !0;!0<< (NORMAL) [/root_tablet/00000_00000.rf, /root_tablet/F0000000.rf] --> /root_tablet/A0000001.rf_tmp  []
> 2013-11-14 17:09:04,297 [tabletserver.TabletServer] DEBUG: ScanSess tid 192.168.56.170:43295 !0 4 entries in 0.21 secs, nbTimes = [205 205 205.00 1]
> 2013-11-14 17:09:04,475 [tabletserver.TabletServer] DEBUG: Got compact message from user: !SYSTEM
> 2013-11-14 17:09:04,727 [tabletserver.Compactor] DEBUG: Compaction !0;!0<< 20 read | 10 written |     16 entries/sec |  1.220 secs
> 2013-11-14 17:09:04,785 [fs.TrashPolicyDefault] INFO : Namenode trash configuration: Deletion interval = 21600000 minutes, Emptier interval = 0 minutes.
> 2013-11-14 17:09:05,066 [fs.TrashPolicyDefault] WARN : Can't create trash directory: hdfs://magmst.ctolab.hortonworks.com:8020/user/foobar/.Trash/Current/foobar/tables/!0/root_tablet
> 2013-11-14 17:09:05,067 [tabletserver.Tablet] ERROR: MajC Failed, extent = !0;!0<<
> 2013-11-14 17:09:05,075 [tabletserver.Tablet] ERROR: MajC Failed, message = Failed to move to trash: /foobar/tables/!0/root_tablet/delete+A0000001.rf+00000_00000.rf
> java.io.IOException: Failed to move to trash: /foobar/tables/!0/root_tablet/delete+A0000001.rf+00000_00000.rf
>         at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:160)
>         at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
>         at org.apache.accumulo.server.tabletserver.Tablet$DatafileManager.bringMajorCompactionOnline(Tablet.java:1045)
>         at org.apache.accumulo.server.tabletserver.Tablet$DatafileManager.bringMajorCompactionOnline(Tablet.java:977)
>         at org.apache.accumulo.server.tabletserver.Tablet._majorCompact(Tablet.java:3342)
>         at org.apache.accumulo.server.tabletserver.Tablet.majorCompact(Tablet.java:3419)
>         at org.apache.accumulo.server.tabletserver.Tablet.access$4800(Tablet.java:152)
>         at org.apache.accumulo.server.tabletserver.Tablet$CompactionRunner.run(Tablet.java:2901)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:42)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:42)
>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=foobar, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:158)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5193)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5175)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5149)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3396)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3366)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3340)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:724)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:502)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59598)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2396)
>         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2365)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:817)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:813)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:813)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:806)
>         at org.apache.accumulo.server.trace.TraceFileSystem.mkdirs(TraceFileSystem.java:794)
>         at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:136)
>         ... 13 more
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=foobar, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:158)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5193)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5175)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5149)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3396)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3366)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3340)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:724)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:502)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59598)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:467)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2394)
>         ... 21 more
> {noformat}
> Then, a short while later
> {noformat}
> 2013-11-14 17:09:30,545 [tabletserver.Tablet] DEBUG: MajC initiate lock 0.25 secs, wait 0.00 secs
> 2013-11-14 17:09:30,552 [tabletserver.Tablet] DEBUG: Starting MajC !0;!0<< (NORMAL) [/root_tablet/00000_00000.rf, /root_tablet/F0000000.rf] --> /root_tablet/A0000002.rf_tmp  []
> 2013-11-14 17:09:30,605 [tabletserver.Compactor] WARN : Some problem opening map file /foobar/tables/!0/root_tablet/00000_00000.rf File does not exist: /foobar/tables/!0/root_tablet/00000_00000.rf
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1540)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1483)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1463)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1437)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
> java.io.FileNotFoundException: File does not exist: /foobar/tables/!0/root_tablet/00000_00000.rf
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1540)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1483)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1463)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1437)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1066)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1054)
>         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1044)
>         at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>         at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>         at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1212)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>         at org.apache.accumulo.server.trace.TraceFileSystem.open(TraceFileSystem.java:81)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBCFile(CachableBlockFile.java:256)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.access$000(CachableBlockFile.java:143)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader$MetaBlockLoader.get(CachableBlockFile.java:212)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getBlock(CachableBlockFile.java:313)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:367)
>         at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$Reader.getMetaBlock(CachableBlockFile.java:143)
>         at org.apache.accumulo.core.file.rfile.RFile$Reader.<init>(RFile.java:834)
>         at org.apache.accumulo.core.file.rfile.RFileOperations.openReader(RFileOperations.java:79)
>         at org.apache.accumulo.core.file.DispatchingFileFactory.openReader(FileOperations.java:71)
>         at org.apache.accumulo.server.tabletserver.Compactor.openMapDataFiles(Compactor.java:376)
>         at org.apache.accumulo.server.tabletserver.Compactor.compactLocalityGroup(Compactor.java:419)
>         at org.apache.accumulo.server.tabletserver.Compactor.call(Compactor.java:308)
>         at org.apache.accumulo.server.tabletserver.Tablet._majorCompact(Tablet.java:3335)
>         at org.apache.accumulo.server.tabletserver.Tablet.majorCompact(Tablet.java:3419)
>         at org.apache.accumulo.server.tabletserver.Tablet.access$4800(Tablet.java:152)
>         at org.apache.accumulo.server.tabletserver.Tablet$CompactionRunner.run(Tablet.java:2901)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:42)
>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /foobar/tables/!0/root_tablet/00000_00000.rf
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1540)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1483)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1463)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1437)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:468)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2053)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2047)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:188)
>         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1064)
>         ... 34 more
> 2013-11-14 17:09:30,607 [problems.ProblemReports] DEBUG: Filing problem report !0 FILE_READ /foobar/tables/!0/root_tablet/00000_00000.rf
> {noformat}
> Seems to be reproducible for me.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)