Posted to hdfs-dev@hadoop.apache.org by "Brahma Reddy Battula (JIRA)" <ji...@apache.org> on 2014/07/08 15:29:04 UTC
[jira] [Created] (HDFS-6641) [ HDFS- File Concat ] Concat will fail when block is not full
Brahma Reddy Battula created HDFS-6641:
------------------------------------------
Summary: [ HDFS- File Concat ] Concat will fail when block is not full
Key: HDFS-6641
URL: https://issues.apache.org/jira/browse/HDFS-6641
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Usually we can't ensure the last block is always full... please let me know the purpose of the following check:
long blockSize = trgInode.getPreferredBlockSize();
// check the end block to be full
final BlockInfo last = trgInode.getLastBlock();
if (blockSize != last.getNumBytes()) {
  throw new HadoopIllegalArgumentException("The last block in " + target
      + " is not full; last block size = " + last.getNumBytes()
      + " but file block size = " + blockSize);
}
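The precondition quoted above can be exercised standalone. The following is a minimal sketch in plain Java (hypothetical class and method names, not the actual FSNamesystem code) showing why a 14-byte last block is rejected against the default 128 MB block size, matching the trace below:

```java
// Standalone sketch of the concat precondition quoted above.
// ConcatCheckSketch and checkLastBlockFull are illustrative names only.
public class ConcatCheckSketch {
    // HDFS default block size: 128 MB = 134217728 bytes.
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024;

    // Mirrors the quoted check: concat requires the target file's
    // last block to be exactly one full block.
    static void checkLastBlockFull(String target, long blockSize,
                                   long lastBlockBytes) {
        if (blockSize != lastBlockBytes) {
            throw new IllegalArgumentException("The last block in " + target
                + " is not full; last block size = " + lastBlockBytes
                + " but file block size = " + blockSize);
        }
    }

    public static void main(String[] args) {
        // A target whose last block is full passes the check.
        checkLastBlockFull("/Full.txt", DEFAULT_BLOCK_SIZE, DEFAULT_BLOCK_SIZE);
        try {
            // Reproduces the reported failure: /Test.txt ends with a
            // 14-byte last block, so the check throws.
            checkLastBlockFull("/Test.txt", DEFAULT_BLOCK_SIZE, 14);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So any small file whose final block is only partially written (the common case) cannot be used as the concat target under this check.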
If it is an issue, I'll file a JIRA.
The following is the trace:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException): The last block in /Test.txt is not full; last block size = 14 but file block size = 134217728
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:1887)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:1833)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:1795)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:704)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:512)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
--
This message was sent by Atlassian JIRA
(v6.2#6252)