Posted to user@hbase.apache.org by neuron005 <ne...@gmail.com> on 2012/01/26 18:17:15 UTC

Error While using hbase with hadoop

Hi there,
I previously used HBase locally, with my ext3 filesystem as HBase's storage, and
that worked fine :). Now I have moved on to the next step of setting it up on HDFS.
I am using hadoop-0.20.2 and hbase-0.90.4 in pseudo-distributed mode, and I am
getting this error in my log:

2012-01-26 22:37:50,629 DEBUG org.apache.hadoop.hbase.util.FSUtils: Created version file at hdfs://89neuron:9000/hbase set its version at:7
2012-01-26 22:37:50,637 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/hbase.version could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

	at org.apache.hadoop.ipc.Client.call(Client.java:740)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
	at $Proxy6.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at $Proxy6.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

2012-01-26 22:37:50,638 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2012-01-26 22:37:50,638 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/hbase.version" - Aborting...
2012-01-26 22:37:50,638 WARN org.apache.hadoop.hbase.util.FSUtils: Unable to create version file at hdfs://89neuron:9000/hbase, retrying: java.io.IOException: File /hbase/hbase.version could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

It looks like dfs.replication, which is set to 1, is the problem, but I cannot
confirm that it actually is. Please help me out.
Thanks in advance
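
(For reference, the pseudo-distributed wiring described above usually comes down to
three small pieces of configuration, roughly as sketched below. The hostname, port
and dfs.replication=1 come from the log and description above; treat the rest as an
illustrative sketch rather than the actual files in use here.)

conf/core-site.xml (Hadoop):
  <property>
    <name>fs.default.name</name>
    <value>hdfs://89neuron:9000</value>
  </property>

conf/hdfs-site.xml (Hadoop; a single node can only hold one replica):
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

conf/hbase-site.xml (HBase; must point at the same NameNode, with the /hbase root seen in the log):
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://89neuron:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>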
-- 
View this message in context: http://old.nabble.com/Error-While-using-hbase-with-hadoop-tp33208913p33208913.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Error While using hbase with hadoop

Posted by neuron005 <ne...@gmail.com>.
I did not know what the issue was and was unable to solve it. I have now installed
Cloudera's distribution of Hadoop, CDH3u2, and that works well. But I would like to
know the installation steps using a tarball, which I think will be easier to put in
place and deploy. Once the system is formatted, all data is gone; with a tarball I
would be able to keep my configuration. Can anyone guide me through installing
Cloudera's Hadoop from a tarball?
Thanks in advance
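
(A plain tarball install on a single machine goes roughly like the sketch below;
the tarball name, install directory and JAVA_HOME are placeholders for whichever
build is downloaded, not verified CDH paths.)

  # Sketch of a single-node tarball install; adapt names and paths to your download.
  tar -xzf hadoop-0.20.2-cdh3u2.tar.gz -C /usr/local
  cd /usr/local/hadoop-0.20.2-cdh3u2
  export JAVA_HOME=/usr/lib/jvm/java-6-sun      # or set it in conf/hadoop-env.sh
  # Edit conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml for your
  # cluster (see the pseudo-distributed sketch in the first message), then:
  bin/hadoop namenode -format                   # first time only; wipes HDFS metadata
  bin/start-dfs.sh
  bin/start-mapred.sh
  # Keeping conf/ in version control (or symlinked from elsewhere) is what lets the
  # configuration survive upgrades, which is the advantage asked about above.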


-- 
View this message in context: http://old.nabble.com/Error-While-using-hbase-with-hadoop-tp33208913p33223599.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Error While using hbase with hadoop

Posted by neuron005 <ne...@gmail.com>.
Thanks for your reply, Harsh.
I checked my logs and found that the namenode and datanode IDs did not match; maybe
the datanode data got corrupted. I removed all files from the /app/tmp/dfs folder
and formatted the namenode again, and it started working :)
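
(As a sketch, the recovery described above amounts to something like the following;
/app/tmp/dfs is the path mentioned in this thread, and note that reformatting the
namenode erases everything stored in HDFS.)

  bin/stop-hbase.sh              # from the HBase directory
  bin/stop-all.sh                # from the Hadoop directory: stop DFS and MapReduce
  rm -rf /app/tmp/dfs/*          # remove the stale namenode/datanode directories
  bin/hadoop namenode -format    # re-create the namenode metadata (destroys HDFS data)
  bin/start-all.sh
  bin/start-hbase.sh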
Then I ran HBase on it. It worked, but I got a warning like
"You are currently running the HMaster without HDFS append support enabled".
I downloaded Cloudera's CDH3 update 3 and replaced my hadoop-0.20 with it. But now
when I start it, it says:
starting namenode, logging to /usr/local/hadoop-0.20.2/logs/hadoop-root-namenode-89neuron.out
May not run daemons as root. Please specify HADOOP_NAMENODE_USER
and the same for the datanode, tasktracker, jobtracker, and secondary namenode.
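
(Two common ways around that check, roughly: run the daemons as a dedicated non-root
user, or set the *_USER variables the start scripts ask for. Both are sketches; the
user name below is a placeholder.)

  # Option 1: create a non-root user, give it the install and data dirs, start as that user
  sudo adduser --system --group hadoop
  sudo chown -R hadoop:hadoop /usr/local/hadoop-0.20.2 /app/tmp
  sudo -u hadoop /usr/local/hadoop-0.20.2/bin/start-all.sh

  # Option 2: export the variable named in the message (and the analogous ones the
  # other daemons ask for), e.g. in conf/hadoop-env.sh:
  export HADOOP_NAMENODE_USER=hadoop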

I did not found documentation of installation using a tarball. I am using
ubuntu 10.04. I know there are deb files for installation but I want to
install from tarball. Please help me out
Thanks:-):-) 

Harsh J wrote:
> 
> Try the options listed here:
> http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
> 

-- 
View this message in context: http://old.nabble.com/Error-While-using-hbase-with-hadoop-tp33208913p33216495.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Error While using hbase with hadoop

Posted by Harsh J <ha...@cloudera.com>.
Try the options listed here:
http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
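
Roughly, the checks in that FAQ entry come down to making sure a live DataNode is
actually registered with the NameNode and has space to accept blocks, for example
(hostname taken from this thread, everything else generic):

  jps                                # should show NameNode, DataNode, SecondaryNameNode
  bin/hadoop dfsadmin -report        # live datanodes and the capacity they report
  df -h                              # a full disk makes the DataNode unusable for new blocks
  tail -n 50 logs/hadoop-*-datanode-*.log   # datanode-side errors (e.g. namespaceID mismatch)
  # The NameNode web UI, http://89neuron:50070 here, also lists live and dead nodes.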


-- 
Harsh J
Customer Ops. Engineer, Cloudera