Posted to user@hbase.apache.org by lztaomin <lz...@163.com> on 2013/04/17 16:55:05 UTC
hbase hbase.rootdir configuration
Hi,
I am using Hadoop HA, and it works very well, but with the following HBase configuration:
<property>
<name>hbase.rootdir</name>
<value>hdfs://cluster/hbase</value>
</property>
HBase cannot access HDFS. How should I configure hbase.rootdir correctly? Thanks very much.
My core-site.xml configuration
<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster</value>
</property>
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
My hdfs-site.xml configuration
<property>
<name>dfs.federation.nameservices</name>
<value>cluster</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>8192</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/ytxt/hadoopData</value>
</property>
<property>
<name>dfs.ha.namenodes.cluster</name>
<value>nn0,nn1</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster.nn0</name>
<value>sy-hadoop-namenode1.189read.com:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster.nn1</name>
<value>sy-hadoop-namenode2.189read.com:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster.nn0</name>
<value>sy-hadoop-namenode1.189read.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster.nn1</name>
<value>sy-hadoop-namenode2.189read.com:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>/HAshared</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.cluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>sy-hadoop-namenode1.189read.com,sy-hadoop-namenode2.189read.com,datanode1:2181,datanode2:2181,datanode3:2181</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/ytxt/hadoopData</value>
</property>
lztaomin
Reply: hbase hbase.rootdir configuration
Posted by lztaomin <lz...@163.com>.
Hi,
Now my hbase.rootdir is configured as hdfs://cluster:9000/hbase.
But HBase does not connect to the address I expect. How should hbase.rootdir be configured so that HBase is guaranteed to connect to the active NameNode of the Hadoop HA pair?
Thanks very much.
My hdfs-site.xml configuration
<property>
<name>dfs.federation.nameservices</name>
<value>cluster</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster.nn0</name>
<value>sy-hadoop-namenode1.189read.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster.nn1</name>
<value>sy-hadoop-namenode2.189read.com:50070</value>
</property>
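[Editor's note: to reach whichever NameNode is currently active, the client must address the logical nameservice without a port and have the failover proxy provider configured; the DFS client then resolves cluster to the active of nn0/nn1 at call time. A sketch of the relevant client-side properties, assuming the names used earlier in the thread:

```xml
<!-- hdfs-site.xml (client side): lets hdfs://cluster/... resolve to the active NameNode -->
<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn0,nn1</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With these visible to HBase, hbase.rootdir should be hdfs://cluster/hbase with no port; appending :9000 makes the URI look like a plain host:port pair and defeats the HA resolution.]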
lztaomin
From: lztaomin
Sent: 2013-04-17 22:55
To: user
Subject: hbase hbase.rootdir configuration
Re: hbase hbase.rootdir configuration
Posted by shashwat shriparv <dw...@gmail.com>.
On Wed, Apr 17, 2013 at 8:25 PM, lztaomin <lz...@163.com> wrote:
> hdfs://cluster/hbase</val
>
Where is the port number, bro?
*Thanks & Regards *
∞
Shashwat Shriparv
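[Editor's note on the port question: when hbase.rootdir names an HA logical nameservice, no port belongs in the URI. The client maps the nameservice to the concrete NameNode RPC addresses (which do carry ports) through the HA properties, roughly as below; the sketch uses the current dfs.nameservices key, while the thread's config uses the older dfs.federation.nameservices name:

```xml
<!-- resolution chain for hdfs://cluster/hbase (host names taken from this thread) -->
<!-- 1. "cluster" is declared as a nameservice -->
<property><name>dfs.nameservices</name><value>cluster</value></property>
<!-- 2. the nameservice lists its NameNodes -->
<property><name>dfs.ha.namenodes.cluster</name><value>nn0,nn1</value></property>
<!-- 3. each NameNode has a concrete host:port for RPC -->
<property><name>dfs.namenode.rpc-address.cluster.nn0</name><value>sy-hadoop-namenode1.189read.com:9000</value></property>
<property><name>dfs.namenode.rpc-address.cluster.nn1</name><value>sy-hadoop-namenode2.189read.com:9000</value></property>
```

So hdfs://cluster/hbase is well-formed as long as these properties are visible to the HBase processes.]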