Posted to dev@hbase.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2018/03/03 20:28:00 UTC

[jira] [Created] (HBASE-20123) Backup test fails against hadoop 3

Ted Yu created HBASE-20123:
------------------------------

             Summary: Backup test fails against hadoop 3
                 Key: HBASE-20123
                 URL: https://issues.apache.org/jira/browse/HBASE-20123
             Project: HBase
          Issue Type: Bug
            Reporter: Ted Yu


When running the backup unit test against Hadoop 3, I saw:
{code}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 88.862 s <<< FAILURE! - in org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes
[ERROR] testBackupMultipleDeletes(org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes)  Time elapsed: 86.206 s  <<< ERROR!
java.io.IOException: java.io.IOException: Failed copy from hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to hdfs://localhost:40578/backupUT
  at org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
Caused by: java.io.IOException: Failed copy from hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to hdfs://localhost:40578/backupUT
  at org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
{code}
In the test output, I found:
{code}
2018-03-03 14:46:10,858 ERROR [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
java.io.IOException: Path hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
  at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
  at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
  at org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
  at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
  at org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
  at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
  at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
  at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
  at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:196)
  at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
  at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
  at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
  at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:290)
  at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
{code}
It seems the failure is related to how we use DistCp: CopyListing.validateFinalListing fails while deserializing the listing entries, because FileStatus.readFields ends up calling getSymlink() on a path that is not a symbolic link.
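To illustrate the failing pattern from the stack trace, here is a minimal sketch. These are hypothetical mini classes, not the real Hadoop API: in Hadoop, FileStatus.getSymlink() throws IOException for non-symlink entries, so any readFields()-style copy that fetches the symlink field without first checking isSymlink() will fail on every ordinary file or directory in the DistCp listing.
{code}
import java.io.IOException;

// Hypothetical stand-in for a FileStatus-like record.
class MiniStatus {
    private final String path;
    private final String symlink; // null for regular files and directories

    MiniStatus(String path, String symlink) {
        this.path = path;
        this.symlink = symlink;
    }

    boolean isSymlink() { return symlink != null; }

    // Mirrors FileStatus.getSymlink(): throws when the entry is not a symlink.
    String getSymlink() throws IOException {
        if (!isSymlink()) {
            throw new IOException("Path " + path + " is not a symbolic link");
        }
        return symlink;
    }

    // Failing pattern: read the symlink field unconditionally.
    static String copySymlinkUnsafe(MiniStatus other) throws IOException {
        return other.getSymlink();
    }

    // Guarded pattern: only ask for the symlink when one exists.
    static String copySymlinkGuarded(MiniStatus other) throws IOException {
        return other.isSymlink() ? other.getSymlink() : null;
    }
}

public class SymlinkGuardDemo {
    public static void main(String[] args) throws IOException {
        MiniStatus regular =
            new MiniStatus("hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047", null);

        // The guarded copy handles a non-symlink entry cleanly.
        System.out.println("guarded copy -> " + MiniStatus.copySymlinkGuarded(regular));

        // The unguarded copy throws, matching the message seen in the test output.
        try {
            MiniStatus.copySymlinkUnsafe(regular);
        } catch (IOException e) {
            System.out.println("unsafe copy  -> " + e.getMessage());
        }
    }
}
{code}
Under this reading, the fix direction would be to guard the symlink access (or write/read the listing with matching serialization), rather than changing the DistCp invocation itself.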



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)