Posted to issues@hbase.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/03/05 20:56:00 UTC

[jira] [Commented] (HBASE-20123) Backup test fails against hadoop 3

    [ https://issues.apache.org/jira/browse/HBASE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386757#comment-16386757 ] 

Steve Loughran commented on HBASE-20123:
----------------------------------------

That looks like a branch-2 stack trace; HADOOP-13626 changed CopyListingFileStatus to not be a subclass of FileStatus, instead explicitly marshalling the permissions.

At the same time, that getSymlink() call in readFields() is a branch-3 operation; it's in an assert at the end:
{code}
    assert (isDirectory() && getSymlink() == null) || !isDirectory();
{code}

I believe that assertion is wrong. It assumes that getSymlink() returns null if there is no symlink, but instead it raises an exception.
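
For reference, here's roughly what the branch-3 FileStatus.getSymlink() looks like (paraphrased from memory, not the exact source); the key point is that it throws rather than returning null when there is no symlink:

{code}
  /** Sketch of the branch-3 behaviour: throws when this isn't a symlink. */
  public Path getSymlink() throws IOException {
    if (!isSymlink()) {
      // this is the "Path ... is not a symbolic link" in the stack trace below
      throw new IOException("Path " + getPath() + " is not a symbolic link");
    }
    return symlink;
  }
{code}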

And as it's an assert(), it's only going to show up in JVMs with assertions turned on (which, if I remember right, Maven surefire enables by default for test runs).

I'd suggest that someone (you?) files a JIRA against Hadoop with a patch that changes the assertion to something like

{code}
    assert !(isDirectory() && isSymlink());
{code}

that is, you can't be both a dir and a symlink.
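
For anyone wanting to reproduce this outside the HBase test, here's a minimal sketch (class name and paths are mine, purely hypothetical, not from the test): round-trip a directory's FileStatus through its Writable form with -ea on, and readFields() fails in exactly this way, because merely evaluating the current assert calls getSymlink() on a directory.

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

/** Hypothetical repro; run with -ea against branch-3 hadoop-common. */
public class FileStatusAssertRepro {
  public static void main(String[] args) throws IOException {
    // a plain directory status: no symlink involved at all
    FileStatus dir = new FileStatus(0, true, 0, 0, 0,
        new Path("hdfs://localhost/backupUT/.tmp"));

    // round-trip through the Writable wire format, as the distcp
    // sequence-file read in the stack trace does
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    dir.write(new DataOutputStream(bytes));

    FileStatus copy = new FileStatus();
    // with assertions on, evaluating the assert in readFields() calls
    // getSymlink() on the directory, which raises the
    // "Path ... is not a symbolic link" IOException seen in the report
    copy.readFields(new DataInputStream(
        new ByteArrayInputStream(bytes.toByteArray())));
  }
}
{code}

With the suggested !(isDirectory() && isSymlink()) form, the same round trip should pass, since isSymlink() only checks whether the symlink field is set.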




> Backup test fails against hadoop 3
> ----------------------------------
>
>                 Key: HBASE-20123
>                 URL: https://issues.apache.org/jira/browse/HBASE-20123
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Priority: Major
>
> When running the backup unit test against hadoop3, I saw:
> {code}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 88.862 s <<< FAILURE! - in org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes
> [ERROR] testBackupMultipleDeletes(org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes)  Time elapsed: 86.206 s  <<< ERROR!
> java.io.IOException: java.io.IOException: Failed copy from hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to hdfs://localhost:40578/backupUT
>   at org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> Caused by: java.io.IOException: Failed copy from hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to hdfs://localhost:40578/backupUT
>   at org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> {code}
> In the test output, I found:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
> java.io.IOException: Path hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:196)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
>   at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
>   at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:290)
>   at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
> {code}
> It seems the failure was related to how we use distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)