Posted to mapreduce-user@hadoop.apache.org by Ramesh Rocky <rm...@outlook.com> on 2015/03/27 05:48:52 UTC

Data doesn't write in HDFS

Hi,
I am trying to write data into HDFS using Flume on Windows machines. When Flume and Hadoop are configured on the same machine, writing into HDFS works perfectly.
However, when Hadoop and Flume are configured on different machines (both Windows), writing to HDFS fails with the following error:
15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
15/03/27 09:46:36 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 28 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 46
15/03/27 09:46:37 WARN security.UserGroupInformation: No groups available for user SYSTEM
15/03/27 09:46:39 INFO hdfs.StateChange: BLOCK* allocateBlock: /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp. BP-412829692-192.168.56.1-1427371070417 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-57962794-b57c-476e-a811-ebcf871f4f12:NORMAL:192.168.56.1:50010|RBW]]}
15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
15/03/27 09:46:42 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
15/03/27 09:46:42 INFO ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.15.242:57416 Call#7 Retry#0
java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
15/03/27 09:46:46 WARN namenode.FSNamesystem: trying to get DT with no secret manager running
Does anybody know about this issue?

Thanks & Regards,
Ramesh

Re: Data doesn't write in HDFS

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi 

Have a closer look at:

java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
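This line says the write pipeline could not include the cluster's only datanode, i.e. the NameNode accepted the request but the datanode was excluded. When the client runs on a different machine, a common cause is that the datanode's data-transfer port (50010 by default here) is blocked by the Windows firewall or the datanode advertises an address the client cannot reach. A minimal reachability sketch from the Flume host; the host and ports are assumptions taken from the log above, so adjust them for your cluster:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Addresses from the log: NameNode RPC on 9000, datanode data transfer on 50010.
    print("namenode 9000:", can_reach("192.168.56.1", 9000))
    print("datanode 50010:", can_reach("192.168.56.1", 50010))
```

If the namenode port answers but the datanode port does not, the write will fail exactly as in the trace above, because block data goes directly from the client to the datanode.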

BR,
 AL
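If the port is reachable but the write still fails, another common culprit in multi-machine setups is the datanode registering with the NameNode under an address the remote client cannot use. A client-side hdfs-site.xml sketch that makes the client connect to datanodes by hostname instead of the registered IP; this is a diagnostic suggestion, not a guaranteed fix, and it assumes the Flume host can resolve the Hadoop host's name:

```xml
<!-- hdfs-site.xml on the Flume (client) machine -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```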


> On 27 Mar 2015, at 05:48, Ramesh Rocky <rm...@outlook.com> wrote:
> 
> Hi,
> 
> I try to write the data in hdfs using flume on windows machine. Here I configure flume and hadoop on same machine and write data into hdfs its works perfectly.
> 
> But configure hadoop and flume on different machines (both are windows machines). I try to write data in hdfs it shows the following error.
> 
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:36 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 28 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 46
> 15/03/27 09:46:37 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:39 INFO hdfs.StateChange: BLOCK* allocateBlock: /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp. BP-412829692-192.168.56.1-1427371070417
>  blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-57962794-b57c-476e-a811-ebcf871f4f12:NORMAL:192.168.56.1:50010|RBW]]}
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[],
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) 
> For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 15/03/27 09:46:42 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1,
>  selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 15/03/27 09:46:42 INFO ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.15.242:57416 Call#7 Retry#0
> java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 15/03/27 09:46:46 WARN namenode.FSNamesystem: trying to get DT with no secret manager running
> 
> Please anybody know about this issue..
> Thanks & Regards
> Ramesh

