Posted to hdfs-user@hadoop.apache.org by ch huang <ju...@gmail.com> on 2014/07/25 03:15:11 UTC
issue about distcp "Source and target differ in block-size. Use -pb to preserve block-sizes during copy."
hi, maillist:
I am trying to copy data from my old cluster to a new cluster, but I get the
following error. How should I handle this?
14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
attempt_1406182801379_0004_m_000000_1, Status : FAILED
Error: java.io.IOException: File copy failed:
webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
webhdfs://develop/tmp/pipe_url_bak/part-m-00001
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to webhdfs://develop/tmp/pipe_url_bak/part-m-00001
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
    ... 10 more
Caused by: java.io.IOException: Error writing request body to server
    at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
    at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
attempt_1406182801379_0004_m_000000_2, Status : FAILED
Error: java.io.IOException: File copy failed:
webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
webhdfs://develop/tmp/pipe_url_bak/part-m-00001
    at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to webhdfs://develop/tmp/pipe_url_bak/part-m-00001
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
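[A note on the subject line: DistCp prints the "-pb" hint because its post-copy checksum comparison can fail when source and target block sizes differ. A minimal sketch of retrying the copy with block sizes preserved follows; the paths are taken from the error above, and the -update flag is an optional addition not mentioned in the original post. The command is built into a variable here since it needs a live cluster to actually run.]

```shell
# Sketch only: paths copied from the failing transfer above; verify them
# against your own clusters before running.
SRC="webhdfs://CH22:50070/mytest/pipe_url_bak"
DST="webhdfs://develop/tmp/pipe_url_bak"

# -pb preserves the source block size on the target, as the warning suggests;
# -update re-copies only files that are missing or differ at the target.
CMD="hadoop distcp -pb -update $SRC $DST"
echo "$CMD"
```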
Re: issue about distcp "Source and target differ in block-size. Use -pb to preserve block-sizes during copy."
Posted by Stanley Shi <ss...@gopivotal.com>.
Your client-side log was captured at "14/07/24 18:35:58 INFO mapreduce.Job:
T***", but the NN log you pasted is from "2014-07-24 17:39:34,255"; the time
ranges do not match.
By the way, which version of HDFS are you using?
Regards,
*Stanley Shi,*
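[The quoted NameNode log below opens with StandbyException warnings, which usually mean a request reached the standby NameNode of an HA pair. A couple of hedged diagnostics are sketched here; "nn1" and "nn2" are placeholder NameNode IDs, not from this thread (look yours up with `hdfs getconf -confKey dfs.ha.namenodes.<nameservice>`). The commands are printed rather than executed, since they require a live cluster.]

```shell
# Placeholder service IDs; substitute the IDs configured for your nameservice.
CHECK_VERSION="hadoop version"
CHECK_NN1="hdfs haadmin -getServiceState nn1"
CHECK_NN2="hdfs haadmin -getServiceState nn2"

# Print the diagnostic commands; run them on a node with cluster access.
printf '%s\n' "$CHECK_VERSION" "$CHECK_NN1" "$CHECK_NN2"
```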
On Fri, Jul 25, 2014 at 10:36 AM, ch huang <ju...@gmail.com> wrote:
> 2014-07-24 17:33:04,783 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:05,742 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:33,179 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
> expecting start txid #62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
> 192.168.10.51:50010
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> .................................
>
> 2014-07-24 17:35:33,573 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
> expecting start txid #62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
> 192.168.10.50:50010
> 2014-07-24 17:35:33,871 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> of size 1385 edits # 16 loaded in 0 seconds
> 2014-07-24 17:35:33,872 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
> starting from txid 62720
> 2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:37:33,915 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:37:34,194 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
> expecting start txid #62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:37:34,224 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0
> :
> 2014-07-24 17:37:34,225 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
> starting from txid 62736
> 2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:39:34,255 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
>
>
> On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Would you please also past the corresponding namenode log?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>>
>>> hi,maillist:
>>> i try to copy data from my old cluster to new cluster,i get
>>> error ,how to handle this?
>>>
>>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>>> ... 10 more
>>> Caused by: java.io.IOException: Error writing request body to server
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>>> ... 11 more
>>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>>
>>
>>
>
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by Stanley Shi <ss...@gopivotal.com>.
Your client side was running at "14/07/24 18:35:58 INFO mapreduce.Job:
T***", But you are pasting NN log at "2014-07-24 17:39:34,255";
By the way, which version of HDFS are you using?
Regards,
*Stanley Shi,*
On Fri, Jul 25, 2014 at 10:36 AM, ch huang <ju...@gmail.com> wrote:
> 2014-07-24 17:33:04,783 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:05,742 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:33,179 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
> expecting start txid #62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
> 192.168.10.51:50010
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> .................................
>
> 2014-07-24 17:35:33,573 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
> expecting start txid #62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
> 192.168.10.50:50010
> 2014-07-24 17:35:33,871 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> of size 1385 edits # 16 loaded in 0 seconds
> 2014-07-24 17:35:33,872 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
> starting from txid 62720
> 2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:37:33,915 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:37:34,194 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
> expecting start txid #62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:37:34,224 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0
> :
> 2014-07-24 17:37:34,225 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
> starting from txid 62736
> 2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:39:34,255 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
>
>
> On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Would you please also paste the corresponding namenode log?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>>
>>> Hi, mailing list:
>>> I am trying to copy data from my old cluster to a new cluster and I get
>>> the error below. How should I handle this?
>>>
>>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>>> ... 10 more
>>> Caused by: java.io.IOException: Error writing request body to server
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>>> ... 11 more
>>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>>
>>
>>
>
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by Stanley Shi <ss...@gopivotal.com>.
Your client side was running at "14/07/24 18:35:58 INFO mapreduce.Job:
T***", but the NN log you pasted is from "2014-07-24 17:39:34,255"; the
timestamps do not match.
By the way, which version of HDFS are you using?
Regards,
*Stanley Shi,*
On Fri, Jul 25, 2014 at 10:36 AM, ch huang <ju...@gmail.com> wrote:
> 2014-07-24 17:33:04,783 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:05,742 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:33,179 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
> expecting start txid #62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
> 192.168.10.51:50010
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> .................................
>
> 2014-07-24 17:35:33,573 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
> expecting start txid #62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
> 192.168.10.50:50010
> 2014-07-24 17:35:33,871 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> of size 1385 edits # 16 loaded in 0 seconds
> 2014-07-24 17:35:33,872 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
> starting from txid 62720
> 2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:37:33,915 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:37:34,194 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
> expecting start txid #62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:37:34,224 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0
> :
> 2014-07-24 17:37:34,225 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
> starting from txid 62736
> 2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:39:34,255 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
>
>
> On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Would you please also paste the corresponding namenode log?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>>
>>> Hi, mailing list:
>>> I am trying to copy data from my old cluster to a new cluster and I get
>>> the error below. How should I handle this?
>>>
>>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>>> ... 10 more
>>> Caused by: java.io.IOException: Error writing request body to server
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>>> ... 11 more
>>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>>
>>
>>
>
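[Editor's note] For readers landing on this thread via the subject line: DistCp compares checksums after each file copy, and HDFS checksums are computed per block, so copying between clusters whose files were written with different block sizes can fail unless the source block size is preserved. A minimal sketch of the invocation follows; the paths and the `develop` nameservice are taken from the error output above, everything else is an assumption, and the command is echoed rather than executed here since this sketch has no cluster to run against:

```shell
# Source and destination from the failing job above; adjust to your clusters.
SRC="webhdfs://CH22:50070/mytest/pipe_url_bak"
DST="hdfs://develop/tmp/pipe_url_bak"

# -pb preserves the source block size on the target so the post-copy
# checksum comparison can match. (Alternatively, -skipcrccheck together
# with -update skips the comparison if a different target block size is
# deliberate.) Echoed instead of run, as a sketch:
echo hadoop distcp -pb "$SRC" "$DST"
```

Note this addresses the block-size warning in the subject line only; the "Error writing request body to server" in the trace is an HTTP-level failure from the WebHDFS write path and may have a separate cause (e.g. the standby-NameNode exceptions visible in the pasted NN log).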
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by Stanley Shi <ss...@gopivotal.com>.
Your client side was running at "14/07/24 18:35:58 INFO mapreduce.Job:
T***", But you are pasting NN log at "2014-07-24 17:39:34,255";
By the way, which version of HDFS are you using?
Regards,
*Stanley Shi,*
On Fri, Jul 25, 2014 at 10:36 AM, ch huang <ju...@gmail.com> wrote:
> 2014-07-24 17:33:04,783 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:05,742 WARN
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
> Operation category READ is not supported in state standby
> 2014-07-24 17:33:33,179 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
> expecting start txid #62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
>
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,442 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62525
> 2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
> 192.168.10.51:50010
> 2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> .................................
>
> 2014-07-24 17:35:33,573 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
> expecting start txid #62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,826 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62721
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.49:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.51:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
> blockMap updated: 192.168.10.50:50010 is added to blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION,
> primaryNodeIndex=-1,
> replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
> ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
> size 0
> 2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
> 192.168.10.50:50010
> 2014-07-24 17:35:33,871 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> of size 1385 edits # 16 loaded in 0 seconds
> 2014-07-24 17:35:33,872 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
> starting from txid 62720
> 2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
> 2014-07-24 17:37:33,915 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
> 2014-07-24 17:37:34,194 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
> expecting start txid #62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
> http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,195 INFO
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
> stream '
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
> to transaction ID 62737
> 2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
> blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
> 192.168.10.50:50010
> 2014-07-24 17:37:34,224 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
> http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0
> :
> 2014-07-24 17:37:34,225 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
> starting from txid 62736
> 2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
> 2014-07-24 17:39:34,255 INFO
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
> roll on remote NameNode hz24/192.168.10.24:8020
>
>
> On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Would you please also paste the corresponding namenode log?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>>
>>> Hi, maillist:
>>> I am trying to copy data from my old cluster to a new cluster and I
>>> get an error. How should I handle this?
>>>
>>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>>> ... 10 more
>>> Caused by: java.io.IOException: Error writing request body to server
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>>> at
>>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>> at
>>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>>> at
>>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>>> ... 11 more
>>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>>> Error: java.io.IOException: File copy failed:
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>>> at
>>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>>> at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>>> at
>>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>>
>>
>>
>
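For reference, a minimal sketch of the retry the error message suggests, using the source and target paths from the stack trace above (hostnames and paths are the ones quoted in this thread and may need adjusting):

```shell
# Re-run the copy with -pb so distcp preserves the source block size;
# without it, post-copy CRC comparison fails when block sizes differ.
# Using hdfs:// on the destination (instead of webhdfs://) avoids the
# WebHDFS HTTP streaming path that raised "Error writing request body
# to server" -- this assumes both clusters run RPC-compatible HDFS
# versions; keep webhdfs:// if they do not.
hadoop distcp -pb \
    webhdfs://CH22:50070/mytest/pipe_url_bak \
    hdfs://develop/tmp/pipe_url_bak
```

If the versions are incompatible and webhdfs:// must be kept on both sides, `-skipcrccheck` is the usual fallback when `-pb` alone does not resolve the checksum mismatch.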
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by ch huang <ju...@gmail.com>.
2014-07-24 17:33:04,783 WARN
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
Operation category READ is not supported in state standby
2014-07-24 17:33:05,742 WARN
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
Operation category READ is not supported in state standby
2014-07-24 17:33:33,179 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
expecting start txid #62525
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62525
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62525
2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
192.168.10.50:50010
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
192.168.10.51:50010
2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
.................................
2014-07-24 17:35:33,573 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
expecting start txid #62721
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62721
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62721
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
192.168.10.50:50010
2014-07-24 17:35:33,871 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
of size 1385 edits # 16 loaded in 0 seconds
2014-07-24 17:35:33,872 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
starting from txid 62720
2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
2014-07-24 17:37:33,915 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:37:34,194 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
expecting start txid #62737
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62737
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62737
2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
192.168.10.50:50010
2014-07-24 17:37:34,224 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0
:
2014-07-24 17:37:34,225 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
starting from txid 62736
2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
2014-07-24 17:39:34,255 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
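The StandbyException lines at the top of this log are expected in an HA pair when a client tries the standby NameNode first, and are usually benign. A quick way to confirm which NameNode is active (the service IDs `nn1`/`nn2` below are illustrative; check `dfs.ha.namenodes.<nameservice>` in hdfs-site.xml for the real ones):

```shell
# Query the HA state of each configured NameNode in the nameservice.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```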
On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
> Would you please also paste the corresponding namenode log?
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>
>> Hi, maillist:
>> I am trying to copy data from my old cluster to a new cluster and I
>> get an error. How should I handle this?
>>
>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>> ... 10 more
>> Caused by: java.io.IOException: Error writing request body to server
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>> ... 11 more
>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>
>
>
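To confirm the block-size mismatch the distcp warning refers to, the block layout of the source file can be inspected directly (path as quoted in the thread):

```shell
# List the blocks backing the file on the source cluster; the reported
# block length shows whether it was written with a non-default block size.
hdfs fsck /mytest/pipe_url_bak/part-m-00001 -files -blocks
```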
InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
2014-07-24 17:39:34,255 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
> Would you please also paste the corresponding namenode log?
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>
>> hi, maillist:
>> I am trying to copy data from my old cluster to a new cluster, but I get
>> the following error. How can I handle this?
>>
>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>> ... 10 more
>> Caused by: java.io.IOException: Error writing request body to server
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>> ... 11 more
>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>
>
>
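The subject line already names the usual remedy: the source and target clusters use different block sizes, and DistCp's post-copy validation fails unless the source block size is preserved. A minimal sketch of the re-run, using the paths that appear in the thread (hosts, ports, and paths should be adjusted for your clusters):

```shell
# Re-run the copy preserving the source block size (-pb).
# Source/target paths are taken from the stack traces in this thread;
# adjust them for your own clusters.
# -update skips files that were already copied successfully.
hadoop distcp -pb -update \
  webhdfs://CH22:50070/mytest/pipe_url_bak \
  webhdfs://develop/tmp/pipe_url_bak
```

If preserving block sizes is not an option, checksum comparison can instead be disabled with `-skipcrccheck` (used together with `-update`), at the cost of weaker copy verification.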
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by ch huang <ju...@gmail.com>.
2014-07-24 17:33:04,783 WARN
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
Operation category READ is not supported in state standby
2014-07-24 17:33:05,742 WARN
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException:
Operation category READ is not supported in state standby
2014-07-24 17:33:33,179 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@67698344
expecting start txid #62525
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62525
2014-07-24 17:33:33,442 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62525&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62525
2014-07-24 17:33:33,480 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753268_12641 192.168.10.51:50010 192.168.10.49:50010
192.168.10.50:50010
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,482 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753337_12710{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,484 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753338_12711{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW]]}
size 0
2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753338_12711 192.168.10.50:50010 192.168.10.49:50010
192.168.10.51:50010
2014-07-24 17:33:33,485 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753339_12712{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
.................................
2014-07-24 17:35:33,573 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a7ff649
expecting start txid #62721
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62721
2014-07-24 17:35:33,826 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62721
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.49:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.51:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,868 INFO BlockStateChange: BLOCK* addStoredBlock:
blockMap updated: 192.168.10.50:50010 is added to
blk_1073753367_12740{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a4cfa75c-28f4-4e73-9e17-b6e3f129864f:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-23f57228-24d8-4e51-afe9-c13a8b47a0a5:NORMAL|RBW],
ReplicaUnderConstruction[[DISK]DS-7496d6a7-2a8f-4884-8a8f-f3a0f3037c0e:NORMAL|RBW]]}
size 0
2014-07-24 17:35:33,869 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753270_12643 192.168.10.49:50010 192.168.10.51:50010
192.168.10.50:50010
2014-07-24 17:35:33,871 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
http://hz23:8480/getJournal?jid=develop&segmentTxId=62721&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
of size 1385 edits # 16 loaded in 0 seconds
2014-07-24 17:35:33,872 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 16 edits
starting from txid 62720
2014-07-24 17:35:34,042 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753270_12643]
2014-07-24 17:35:37,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753270_12643]
2014-07-24 17:35:40,043 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753270_12643]
2014-07-24 17:37:33,915 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
2014-07-24 17:37:34,194 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Reading
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5ed5ecda
expecting start txid #62737
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c,
http://hz23:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62737
2014-07-24 17:37:34,195 INFO
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding
stream '
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c'
to transaction ID 62737
2014-07-24 17:37:34,223 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073753271_12644 192.168.10.51:50010 192.168.10.49:50010
192.168.10.50:50010
2014-07-24 17:37:34,224 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file
http://hz24:8480/getJournal?jid=develop&segmentTxId=62737&storageInfo=-55%3A466484546%3A0%3ACID-a140fb1a-ac10-4053-8b91-8f19f2809b7c
2014-07-24 17:37:34,225 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Loaded 3 edits
starting from txid 62736
2014-07-24 17:37:37,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.51:50010 to delete [blk_1073753271_12644]
2014-07-24 17:37:40,050 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.49:50010 to delete [blk_1073753271_12644]
2014-07-24 17:37:43,051 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
InvalidateBlocks: ask 192.168.10.50:50010 to delete [blk_1073753271_12644]
2014-07-24 17:39:34,255 INFO
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log
roll on remote NameNode hz24/192.168.10.24:8020
On Fri, Jul 25, 2014 at 10:25 AM, Stanley Shi <ss...@gopivotal.com> wrote:
> Would you please also paste the corresponding namenode log?
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
>
>> Hi, mailing list:
>> I am trying to copy data from my old cluster to a new cluster and I get
>> this error. How should I handle it?
>>
>> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_1, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
>> ... 10 more
>> Caused by: java.io.IOException: Error writing request body to server
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
>> at
>> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>> at java.io.DataOutputStream.write(DataOutputStream.java:107)
>> at
>> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
>> at
>> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>> ... 11 more
>> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
>> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
>> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
>> attempt_1406182801379_0004_m_000000_2, Status : FAILED
>> Error: java.io.IOException: File copy failed:
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
>> at
>> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
>> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
>> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
>> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
>> at
>> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>>
>
>
Re: issue about distcp " Source and target differ in block-size. Use
-pb to preserve block-sizes during copy."
Posted by Stanley Shi <ss...@gopivotal.com>.
Would you please also paste the corresponding namenode log?
Regards,
*Stanley Shi,*
On Fri, Jul 25, 2014 at 9:15 AM, ch huang <ju...@gmail.com> wrote:
> Hi, mailing list:
> I am trying to copy data from my old cluster to a new cluster and I get
> this error. How should I handle it?
>
> 14/07/24 18:35:58 INFO mapreduce.Job: Task Id :
> attempt_1406182801379_0004_m_000000_1, Status : FAILED
> Error: java.io.IOException: File copy failed:
> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
> at
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
> at
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
> at
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
> at
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
> at
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
> ... 10 more
> Caused by: java.io.IOException: Error writing request body to server
> at
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
> at
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
> at
> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
> at
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at
> java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
> at
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:231)
> at
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToTmpFile(RetriableFileCopyCommand.java:164)
> at
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:118)
> at
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
> at
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
> ... 11 more
> 14/07/24 18:35:59 INFO mapreduce.Job: map 16% reduce 0%
> 14/07/24 18:39:39 INFO mapreduce.Job: map 17% reduce 0%
> 14/07/24 19:04:27 INFO mapreduce.Job: Task Id :
> attempt_1406182801379_0004_m_000000_2, Status : FAILED
> Error: java.io.IOException: File copy failed:
> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 -->
> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
> at
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
> at
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
> at
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying
> webhdfs://CH22:50070/mytest/pipe_url_bak/part-m-00001 to
> webhdfs://develop/tmp/pipe_url_bak/part-m-00001
> at
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>
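As the subject line suggests, the copy can be retried with block sizes preserved. A minimal sketch of the DistCp invocation, using the source and target paths from the stack trace above; the `-update` flag is an assumption (added so a retry skips files that already copied successfully), and the command is only echoed here rather than run:

```shell
# Sketch only: re-run DistCp preserving block sizes (-pb) so the target
# file's block size matches the source and the post-copy CRC check passes.
# Paths are taken from the stack trace above; -update is an assumption
# (it lets a retry skip files that were already copied).
SRC="webhdfs://CH22:50070/mytest/pipe_url_bak"
DST="webhdfs://develop/tmp/pipe_url_bak"
CMD="hadoop distcp -pb -update $SRC $DST"
echo "$CMD"   # run on a node of the destination cluster
```

Without `-pb`, the file is written with the destination cluster's default block size, and DistCp's post-copy checksum comparison fails when the two clusters' block sizes differ.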