Posted to common-user@hadoop.apache.org by Budianto Lie <po...@gmail.com> on 2009/10/16 10:46:29 UTC

How to handle "Exception in createBlockOutputStream"

Hello,

I set up a Hadoop cluster (1 master, 2 slaves). In hdfs-site.xml I set
dfs.replication = 3.
When I shut down one slave node and then try to put a file into HDFS through
the master node, the file can't be saved; Hadoop generates an error message like
this:
"hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.0.48:50010 ...."
When I check in HDFS, the file size is zero.
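For reference, the replication setting described above corresponds to a fragment like this in hdfs-site.xml (assuming all three machines run a datanode; with a node down, the NameNode may still include it in the write pipeline for several minutes until it is marked dead, which would match the "Bad connect ack with firstBadLink" error above):

```xml
<!-- hdfs-site.xml fragment: target replication for each block -->
<property>
  <name>dfs.replication</name>
  <value>3</value> <!-- one replica per datanode in a 3-datanode cluster -->
</property>
```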

My questions are:
1. How do I configure Hadoop so it can keep operating while one or more slave
nodes are down, and automatically re-synchronize the files once they come
back up?
2. Which command replaces or appends to a file in HDFS? Currently I delete it
first and then put it again :)
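The delete-then-put workaround from question 2 looks like the following (paths are hypothetical examples); newer Hadoop releases also offer direct overwrite and append, but whether these are available depends on your version:

```shell
# Replace a file in HDFS: remove the old copy, then upload the new one
# (paths are hypothetical examples)
hadoop fs -rm /data/report.txt
hadoop fs -put report.txt /data/report.txt

# Later Hadoop releases support these shortcuts:
hadoop fs -put -f report.txt /data/report.txt       # -f overwrites the destination
hadoop fs -appendToFile more.txt /data/report.txt   # append (needs append support enabled)
```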

Regards,
Budianto