Posted to notifications@accumulo.apache.org by "Andrew Hulbert (JIRA)" <ji...@apache.org> on 2016/03/11 17:13:01 UTC

[jira] [Commented] (ACCUMULO-3727) FileNotFoundException on failed/data during recovery

    [ https://issues.apache.org/jira/browse/ACCUMULO-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191091#comment-15191091 ] 

Andrew Hulbert commented on ACCUMULO-3727:
------------------------------------------

I believe we are seeing a similar thing happen repeatedly on Accumulo 1.6.1.

Looks like it tried to recover 4 logs for an extent and failed on the 3rd.

FYI, the layout in HDFS is not what the code expects:
hdfs://nn:port/accumulo/recovery/<recoveryid>/failed is a file, not a directory,
so of course hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data cannot be found.
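
A quick way to confirm that layout is to list the recovery dir directly. This is just a diagnostic sketch using the plain Hadoop FileSystem API, with placeholder namenode/recovery-id values, not Accumulo code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RecoveryDirCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder path: substitute the real namenode and the <recoveryid> from the logs.
    Path recoveryDir = new Path("hdfs://namenode:8020/accumulo/recovery/RECOVERY_ID");
    FileSystem fs = FileSystem.get(recoveryDir.toUri(), new Configuration());

    // Print each child of the recovery dir and whether it is a file or a directory.
    // In our case "failed" shows up as a plain file, so "failed/data" can never exist.
    for (FileStatus status : fs.listStatus(recoveryDir)) {
      System.out.printf("%s  %s%n", status.isDirectory() ? "dir " : "file",
          status.getPath().getName());
    }
  }
}
{code}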

I retyped the stack trace by hand, so it may be slightly mangled:

{code}
[tserver.TabletServer] INFO : adding tablet tableid;startkey:endkey back to assignment pool (retry 30)
[tserver.TabletServer] INFO : tservername:9997: got assignment from master: tableid;startkey:endkey
[tserver.TabletServer] DEBUG : Loading extent: tableid;startkey:endkey
[tserver.TabletServer] DEBUG : verifying extent: tableid;startkey:endkey
[tserver.Tablet] DEBUG: Looking at metadata {tableid:......}
[tserver.Tablet] DEBUG: got {tableid;startkey; hdfs://nn:port/accumulo/wal/server+9997/<a different recovery id> (4115), tableid;startkey; hdfs://nn:port/accumulo/wal/server+9997/<a different recovery id> (4115),
tableid;startkey; hdfs://nn:port/accumulo/wal/server+9997/<recovery id> (4115),tableid;startkey; hdfs://nn:port/accumulo/wal/server+9997/<a different recovery id> (4115) for logs  tableid;startkey:endkey)
[constraints.ConstraintChecker] INFO : Loaded constraint org.apache.accumulo.core.constraints.DefaultKeySizeConstraint for tableid
[tserver.Tablet] INFO : Started Write-Ahead Log recovery for tableid;startkey:endkey
[tserver.TabletServer] INFO : Looking for hdfs://nn:port/accumulo/recovery/<a different recovery id>/finished
[tserver.TabletServer] INFO : Looking for hdfs://nn:port/accumulo/recovery/<a different recovery id>/finished
[tserver.TabletServer] INFO : Looking for hdfs://nn:port/accumulo/recovery/<recoveryid>/finished
[log.SortedLogRecovery] INFO : Looking at mutations for hdfs://nn:port/accumulo/recovery/<a different recovery id> for tableid;startkey:endkey
[log.SortedLogRecovery] DEBUG : Found tid, seq 4115 73
[log.SortedLogRecovery] DEBUG : minor compaction into hdfs://nn:port/accumulo/tables/tableid/t-xxxxx/Fxxxx.rf finished, but was still in the METADATA
[log.SortedLogRecovery] INFO : Looking at mutations for hdfs://nn:port/accumulo/recovery/<a different recovery id> for tableid;startkey:endkey
[log.SortedLogRecovery] DEBUG : Found tid, seq 4115 73
[log.SortedLogRecovery] INFO : Looking at mutations for hdfs://nn:port/accumulo/recovery/<recoveryid> for tableid;startkey:endkey
[tserver.TabletServer] WARN : exception trying to assign tablet tableid;startkey:endkey hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data
java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data
     at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1410)
     at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1233)
     at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1089)
     at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1066)
     at org.apache.accumulo.tserver.TabletServer$AssignmentHandler.run(TabletServer.java:2923)
     ...more
Caused by: java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data
     at org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:427)
     at org.apache.accumulo.tserver.TabletServer.recover(TabletServer.java:3714)
     at org.apache.accumulo.tserver.Tablet.<init>(Tablet.java:1378)
     ...more
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data
     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1132)
     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
     at org.apache.hadoop.io.MapFile$Reader.createDataFileReader(MapFile.java:455)
     at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:428)
     at org.apache.hadoop.io.MapFile$Reader.<init>(MapFile.java:398)
     at org.apache.hadoop.io.MapFile$Reader.<init>(MapFile.java:407)
     at org.apache.accumulo.tserver.log.MultiReader.<init>(MultiReader.java:101)
     at org.apache.accumulo.tserver.log.SortedLogRecovery.recover(SortedLogRecovery.java:100)
     at org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:425)
     ...more
[tserver.TabletServer] WARN : java.io.IOException: java.io.FileNotFoundException: File does not exist: hdfs://nn:port/accumulo/recovery/<recoveryid>/failed/data
[tserver.TabletServer] WARN : failed to open tablet tableid;startkey:endkey reporting failure to master
[tserver.TabletServer] WARN : rescheduling tablet load in 600.00 seconds
{code}
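
For what it's worth, the bottom of that trace is Hadoop's MapFile.Reader resolving its data file: it treats the path it is handed as a MapFile directory and opens <path>/data, so if the failed marker path ever ends up in the list of recovery dirs, the failed/data lookup above is exactly what you get. A minimal standalone sketch of that behavior (placeholder paths, not Accumulo code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;

public class FailedMarkerRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder standing in for hdfs://nn:port/accumulo/recovery/<recoveryid>/failed
    Path failedMarker = new Path("hdfs://namenode:8020/accumulo/recovery/RECOVERY_ID/failed");

    // MapFile.Reader assumes the given path is a directory containing "data" and
    // "index" SequenceFiles; when "failed" is a zero-length marker file instead,
    // this throws FileNotFoundException for .../failed/data, as in the trace above.
    try (MapFile.Reader reader = new MapFile.Reader(failedMarker, new Configuration())) {
      // never reached in the failure case
    }
  }
}
{code}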

Any ideas?

> FileNotFoundException on failed/data during recovery
> ----------------------------------------------------
>
>                 Key: ACCUMULO-3727
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3727
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>    Affects Versions: 1.5.2
>            Reporter: William Slacum
>
> Overnight there was a mass failure of Accumulo (most likely due to too many mappers for a job). After restarting Accumulo, one of the metadata tablets failed to load. There was a log message showing a `FileNotFoundException` on the file `hdfs:///accumulo/recovery/<log id>/failed/data`. Removing the `<log id>` directory from HDFS seemed to unclog the jam and things came back (though potentially with data loss).
> I wanted to investigate why somewhere in the plumbing of `TabletServer`, `TabletServerLogger`, and `SortedLogRecovery`, an attempt was made to use the `failed` marker file.
> I see in `SortedLogRecovery#sort` where the marker file gets created:
> {code}
> public void sort(String name, Path srcPath, String destPath) {
> ...
>       } catch (Throwable t) {
>         try {
>           // parent dir may not exist
>           fs.mkdirs(new Path(destPath));
>           fs.create(new Path(destPath, "failed")).close();
>         } catch (IOException e) {
>           log.error("Error creating failed flag file " + name, e);
>         }
>         log.error(t, t);
>       } finally {
> ...
> {code}
> I have not stepped through the code to figure out where/why the `failed` file gets included in the list of recovered data directories.
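> A minimal sketch of the kind of filtering that would avoid this (names below are hypothetical, not the actual Accumulo code): whatever builds the list of sorted-log directories handed to `MultiReader` would need to skip the `failed`/`finished` marker files, otherwise `MapFile.Reader` ends up probing `failed/data`.
> {code}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> // Hypothetical helper, not the real tserver code: collect only the directories
> // under a recovery dir (the sorted MapFile parts), skipping plain files such as
> // the "failed" and "finished" status markers.
> public class RecoveryDirs {
>   static List<Path> sortedLogDirs(FileSystem fs, Path recoveryDir) throws IOException {
>     List<Path> dirs = new ArrayList<>();
>     for (FileStatus status : fs.listStatus(recoveryDir)) {
>       if (status.isDirectory()) {
>         dirs.add(status.getPath()); // each MapFile part dir contains data + index
>       }
>     }
>     return dirs;
>   }
> }
> {code}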



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)