Posted to user@hbase.apache.org by Jason Huang <ja...@icare.com> on 2012/09/14 00:42:16 UTC

Can not access HBase Shell.

Hello,

I am trying to set up HBase in pseudo-distributed mode on my MacBook.
I was able to install Hadoop and HBase and start the nodes.

$ jps
5417 TaskTracker
5083 NameNode
5761 HRegionServer
5658 HMaster
6015 Jps
5613 HQuorumPeer
5171 DataNode
5327 JobTracker
5262 SecondaryNameNode

However, when I tried ./hbase shell I got the following error:
Trace/BPT trap: 5

Looking at the log from the master server I found:
2012-09-13 18:33:46,842 DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Looked up root region location,
connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
serverName=192.168.1.124,60020,1347575067207
2012-09-13 18:34:18,981 DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Looked up root region location,
connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
serverName=192.168.1.124,60020,1347575067207
2012-09-13 18:34:18,982 DEBUG
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
locateRegionInMeta parentTable=-ROOT-,
metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
because: HRegionInfo was null or empty in -ROOT-,
row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
.META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}

I don't quite understand what this error is and how to fix it. Any
suggestions?  Thanks!

Here are my config files:

hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
     <name>hbase.master</name>
     <value>localhost:60000</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>

hdfs-site.xml
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>localhost:9000</value>
  </property>
  <property>
     <name>dfs.replication</name>
     <value>1</value>
  </property>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>/Users/jasonhuang/hdfs/name</value>
  </property>
  <property>
     <name>dfs.datanode.data.dir</name>
     <value>/Users/jasonhuang/hdfs/data</value>
  </property>
  <property>
     <name>dfs.datanode.max.xcievers</name>
     <value>4096</value>
  </property>
</configuration>

mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx512m</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://localhost:54311</value>
    </property>
</configuration>
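
Two things stand out in these files, for anyone comparing against their
own setup: mapred.job.tracker is declared twice with conflicting values
(localhost:9001 and hdfs://localhost:54311), so only one of them wins when
the file is parsed, and fs.default.name conventionally lives in
core-site.xml with an hdfs:// scheme. A minimal consistent sketch
(illustrative only, keeping the same ports):

core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>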

Re: Can not access HBase Shell.

Posted by Xiang Hua <be...@gmail.com>.
Hi,

     A non-root user may not see all the Hadoop processes.

     1. The root user sees all the processes:
[root@hadoop1 ~]# jps
17452 SecondaryNameNode
18266 Main
7759 Jps
32095 QuorumPeerMain
17108 JobTracker
16955 TaskTracker
17566 HMaster
17177 NameNode
17765 HRegionServer
19424 ThriftServer
17303 DataNode

   2. But a non-root user only sees some:


-bash-4.1$ jps
17452 SecondaryNameNode
7736 Jps
17177 NameNode
17303 DataNode

3. Maybe a .bash_profile problem?
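
For reference, jps only lists JVMs whose perf data it can read (one
/tmp/hsperfdata_<user> directory per user), so daemons started as root
will not show up for other users. A quick check, assuming the daemons
were started as root:

$ sudo jps               # shows every user's JVMs
$ ls /tmp/hsperfdata_*   # one directory per user with running JVMs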

Thanks!

beatls@gmail.com



On Fri, Sep 14, 2012 at 6:42 AM, Jason Huang <ja...@icare.com> wrote:

> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I was able to install Hadoop and HBase and start the nodes.
>
> $ jps
> 5417 TaskTracker
> 5083 NameNode
> 5761 HRegionServer
> 5658 HMaster
> 6015 Jps
> 5613 HQuorumPeer
> 5171 DataNode
> 5327 JobTracker
> 5262 SecondaryNameNode
>
> However, when I tried ./hbase shell I got the following error:
> Trace/BPT trap: 5
>
> Looking at the log from the master server I found:
> 2012-09-13 18:33:46,842 DEBUG
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
>
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,981 DEBUG
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
>
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,982 DEBUG
>
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> locateRegionInMeta parentTable=-ROOT-,
> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
> port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
> because: HRegionInfo was null or empty in -ROOT-,
> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>
> I don't quite understand what this error is and how to fix it. Any
> suggestions?  Thanks!
>
> Here are my config files:
>
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>      <name>hbase.master</name>
>      <value>localhost:60000</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>      <name>fs.default.name</name>
>      <value>localhost:9000</value>
>   </property>
>   <property>
>      <name>dfs.replication</name>
>      <value>1</value>
>   </property>
>   <property>
>      <name>dfs.namenode.name.dir</name>
>      <value>/Users/jasonhuang/hdfs/name</value>
>   </property>
>   <property>
>      <name>dfs.datanode.data.dir</name>
>      <value>/Users/jasonhuang/hdfs/data</value>
>   </property>
>   <property>
>      <name>dfs.datanode.max.xcievers</name>
>      <value>4096</value>
>   </property>
> </configuration>
>
> mapred-site.xml
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>localhost:9001</value>
>     </property>
>     <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx512m</value>
>     </property>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>hdfs://localhost:54311</value>
>     </property>
> </configuration>
>

Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
I've done several reinstallations and Hadoop seems to be fine. However, I
still get a similar error when I try to access the HBase shell.

$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode

$ ./bin/hbase shell
Trace/BPT trap: 5

I looked at the log file and found errors in the HMaster node logs:

2012-09-17 17:06:54,384 INFO
org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
memsize=360.0, into tmp file
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2
2012-09-17 17:06:54,389 WARN org.apache.hadoop.hdfs.DFSClient:
Exception while reading from blk_-8714444718437861427_1016 of
/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2 from
127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
        at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
        at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
        at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
        at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1457)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2172)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:582)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1364)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1869)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1637)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1286)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1294)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:137)
        at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:533)
        at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:563)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1252)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:516)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:606)
        at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1590)
        at org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:769)
        at org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:108)
        at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2204)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1429)
        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:2685)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:535)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3682)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3630)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)

2012-09-17 17:06:54,389 INFO org.apache.hadoop.hdfs.DFSClient: Could
not obtain block blk_-8714444718437861427_1016 from any node:
java.io.IOException: No live nodes contain current block. Will get new
block locations from namenode and retry...

I checked the file system using fsck and that seems to be healthy:
$ ./bin/hadoop fsck / -files
Warning: $HADOOP_HOME is deprecated.

FSCK started by jasonhuang from /192.168.1.124 for path / at Mon Sep 17 17:24:46 EDT 2012
/ <dir>
/hbase <dir>
/hbase/-ROOT- <dir>
/hbase/-ROOT-/.tableinfo.0000000001 727 bytes, 1 block(s):  OK
/hbase/-ROOT-/.tmp <dir>
/hbase/-ROOT-/70236052 <dir>
/hbase/-ROOT-/70236052/.logs <dir>
/hbase/-ROOT-/70236052/.logs/hlog.1347915355095 309 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.oldlogs <dir>
/hbase/-ROOT-/70236052/.regioninfo 109 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/.tmp <dir>
/hbase/-ROOT-/70236052/.tmp/2f094a87dd314072b1eb464761639c0c 859 bytes, 1 block(s):  OK
/hbase/-ROOT-/70236052/info <dir>
/hbase/-ROOT-/70236052/recovered.edits <dir>
/hbase/-ROOT-/70236052/recovered.edits/0000000000000000002 310 bytes, 1 block(s):  OK
/hbase/.META. <dir>
/hbase/.META./1028785192 <dir>
/hbase/.META./1028785192/.logs <dir>
/hbase/.META./1028785192/.logs/hlog.1347915355190 134 bytes, 1 block(s):  OK
/hbase/.META./1028785192/.oldlogs <dir>
/hbase/.META./1028785192/.regioninfo 111 bytes, 1 block(s):  OK
/hbase/.META./1028785192/info <dir>
/hbase/.corrupt <dir>
/hbase/.logs <dir>
/hbase/.oldlogs <dir>
/hbase/.oldlogs/192.168.1.124%2C50887%2C1347915939955.1347915972194 134 bytes, 1 block(s):  OK
/hbase/.oldlogs/192.168.1.124%2C51177%2C1347916254506.1347916283458 134 bytes, 1 block(s):  OK
/hbase/hbase.id 38 bytes, 1 block(s):  OK
/hbase/hbase.version 3 bytes, 1 block(s):  OK
/hbase/splitlog <dir>
/test <dir>
/tmp <dir>
/tmp/hadoop-jasonhuang <dir>
/tmp/hadoop-jasonhuang/mapred <dir>
/tmp/hadoop-jasonhuang/mapred/system <dir>
/tmp/hadoop-jasonhuang/mapred/system/jobtracker.info 4 bytes, 1 block(s):  OK
Status: HEALTHY


However, the file mentioned in the error log
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/0212db15465842b38cc63eb9ef8b73d2
doesn't seem to exist in my fsck report. (Not sure if that matters).
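
One way to tie the failing block back to a file would be fsck's block
options (a sketch with standard Hadoop 1.x flags; the block id comes from
the DFSClient warning above):

$ ./bin/hadoop fsck /hbase/-ROOT-/70236052 -files -blocks -locations
# then search the output for blk_-8714444718437861427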

I have no idea where to go next. Any suggestions?

thanks!

Jason


On Fri, Sep 14, 2012 at 4:25 PM, Jason Huang <ja...@icare.com> wrote:

> Thanks Marcos.
>
> I applied the change you mentioned but it still gave me an error. I then
> stopped everything, restarted Hadoop, and tried to run a simple MapReduce
> job with the provided example jar (./bin/hadoop jar hadoop-examples-1.0.3.jar
> pi 10 100).
>
> That gave me an error of:
> 12/09/14 15:59:50 INFO mapred.JobClient: Task Id :
> attempt_201209141539_0001_m_000011_0, Status : FAILED
> Error initializing attempt_201209141539_0001_m_000011_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>
> I think there is something wrong with my Hadoop setup. I will do more
> research and see if I can find out why.
>
> thanks,
>
> Jason
>
> On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz <ml...@uci.cu> wrote:
>
>>
>> Regards, Jason.
>> Answers in line
>>
>>
>> On 09/13/2012 06:42 PM, Jason Huang wrote:
>>
>> Hello,
>>
>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
>> I was able to install Hadoop and HBase and start the nodes.
>>
>> $ jps
>> 5417 TaskTracker
>> 5083 NameNode
>> 5761 HRegionServer
>> 5658 HMaster
>> 6015 Jps
>> 5613 HQuorumPeer
>> 5171 DataNode
>> 5327 JobTracker
>> 5262 SecondaryNameNode
>>
>> However, when I tried ./hbase shell I got the following error:
>> Trace/BPT trap: 5
>>
>> Looking at the log from the master server I found:
>> 2012-09-13 18:33:46,842 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Looked up root region location,
>> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
>> serverName=192.168.1.124,60020,1347575067207
>> 2012-09-13 18:34:18,981 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Looked up root region location,
>> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
>> serverName=192.168.1.124,60020,1347575067207
>> 2012-09-13 18:34:18,982 DEBUG
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> locateRegionInMeta parentTable=-ROOT-,
>> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
>> port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
>> because: HRegionInfo was null or empty in -ROOT-,
>> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
>> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>>
>> I don't quite understand what this error is and how to fix it. Any
>> suggestions?  Thanks!
>>
>> Here are my config files:
>>
>> <configuration>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://localhost:9000/hbase</value>
>>   </property>
>>   <property>
>>     <name>hbase.zookeeper.quorum</name>
>>     <value>localhost</value>
>>   </property>
>>   <property>
>>     <name>hbase.cluster.distributed</name>
>>     <value>true</value>
>>
>>  If you want to use HBase in pseudo-distributed mode, you should not put
>> this property here, because the HMaster thinks that the cluster is in
>> fully distributed mode and tries to find the region servers; this error
>> comes to light because in pseudo-distributed mode you don't have to
>> include it.
>>
>> So, remove the hbase.cluster.distributed property, and restart all
>> daemons.
>>
>> Another thing: for pseudo-distributed mode, you don't need a running
>> ZooKeeper cluster; you need that for a fully distributed cluster.
>>
>>   </property>
>>   <property>
>>     <name>dfs.replication</name>
>>     <value>1</value>
>>   </property>
>>   <property>
>>      <name>hbase.master</name>
>>      <value>localhost:60000</value>
>>   </property>
>>   <property>
>>     <name>dfs.support.append</name>
>>     <value>true</value>
>>   </property>
>> </configuration>
>>
>> hdfs-site.xml
>> <configuration>
>>   <property>
>>      <name>fs.default.name</name>
>>      <value>localhost:9000</value>
>>   </property>
>>   <property>
>>      <name>dfs.replication</name>
>>      <value>1</value>
>>   </property>
>>   <property>
>>      <name>dfs.namenode.name.dir</name>
>>      <value>/Users/jasonhuang/hdfs/name</value>
>>   </property>
>>   <property>
>>      <name>dfs.datanode.data.dir</name>
>>      <value>/Users/jasonhuang/hdfs/data</value>
>>   </property>
>>   <property>
>>      <name>dfs.datanode.max.xcievers</name>
>>      <value>4096</value>
>>   </property>
>> </configuration>
>>
>> mapred-site.xml
>> <configuration>
>>     <property>
>>         <name>mapred.job.tracker</name>
>>         <value>localhost:9001</value>
>>     </property>
>>     <property>
>>         <name>mapred.child.java.opts</name>
>>         <value>-Xmx512m</value>
>>     </property>
>>     <property>
>>         <name>mapred.job.tracker</name>
>>         <value>hdfs://localhost:54311</value>
>>     </property>
>> </configuration>
>>
>>
>>
>> --
>>
>> Marcos Luis Ortíz Valmaseda
>> *Data Engineer && Sr. System Administrator at UCI*
>> about.me/marcosortiz
>> My Blog <http://marcosluis2186.posterous.com>
>> Tumblr's blog <http://marcosortiz.tumblr.com/>
>> @marcosluis2186 <http://twitter.com/marcosluis2186>
>>
>

Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
Thanks Marcos.

I applied the change you mentioned but it still gave me an error. I then
stopped everything, restarted Hadoop, and tried to run a simple MapReduce
job with the provided example jar (./bin/hadoop jar hadoop-examples-1.0.3.jar
pi 10 100).

That gave me an error of:
12/09/14 15:59:50 INFO mapred.JobClient: Task Id :
attempt_201209141539_0001_m_000011_0, Status : FAILED
Error initializing attempt_201209141539_0001_m_000011_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))

I think there is something wrong with my Hadoop setup. I will do more
research and see if I can find out why.

thanks,

Jason

On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz <ml...@uci.cu> wrote:

>
> Regards, Jason.
> Answers in line
>
>
> On 09/13/2012 06:42 PM, Jason Huang wrote:
>
> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I was able to install Hadoop and HBase and start the nodes.
>
> $ jps
> 5417 TaskTracker
> 5083 NameNode
> 5761 HRegionServer
> 5658 HMaster
> 6015 Jps
> 5613 HQuorumPeer
> 5171 DataNode
> 5327 JobTracker
> 5262 SecondaryNameNode
>
> However, when I tried ./hbase shell I got the following error:
> Trace/BPT trap: 5
>
> Looking at the log from the master server I found:
> 2012-09-13 18:33:46,842 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,981 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,982 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> locateRegionInMeta parentTable=-ROOT-,
> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
> port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
> because: HRegionInfo was null or empty in -ROOT-,
> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>
> I don't quite understand what this error is and how to fix it. Any
> suggestions?  Thanks!
>
> Here are my config files:
>
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>
>  If you want to use HBase in pseudo-distributed mode, you should not put this
> property here, because the HMaster thinks that the cluster is in fully
> distributed mode and tries to find the region servers; this error comes to
> light because in pseudo-distributed mode you don't have to include it.
>
> So, remove the hbase.cluster.distributed property, and restart all daemons.
>
> Another thing: for pseudo-distributed mode, you don't need a running
> ZooKeeper cluster; you need that for a fully distributed cluster.
>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>      <name>hbase.master</name>
>      <value>localhost:60000</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>      <name>fs.default.name</name>
>      <value>localhost:9000</value>
>   </property>
>   <property>
>      <name>dfs.replication</name>
>      <value>1</value>
>   </property>
>   <property>
>      <name>dfs.namenode.name.dir</name>
>      <value>/Users/jasonhuang/hdfs/name</value>
>   </property>
>   <property>
>      <name>dfs.datanode.data.dir</name>
>      <value>/Users/jasonhuang/hdfs/data</value>
>   </property>
>   <property>
>      <name>dfs.datanode.max.xcievers</name>
>      <value>4096</value>
>   </property>
> </configuration>
>
> mapred-site.xml
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>localhost:9001</value>
>     </property>
>     <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx512m</value>
>     </property>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>hdfs://localhost:54311</value>
>     </property>
> </configuration>
>
>
>
> --
>
> Marcos Luis Ortíz Valmaseda
> *Data Engineer && Sr. System Administrator at UCI*
> about.me/marcosortiz
> My Blog <http://marcosluis2186.posterous.com>
> Tumblr's blog <http://marcosortiz.tumblr.com/>
> @marcosluis2186 <http://twitter.com/marcosluis2186>
>
>
>

Re: Can not access HBase Shell.

Posted by Marcos Ortiz <ml...@uci.cu>.
Regards, Jason.
Answers in line

On 09/13/2012 06:42 PM, Jason Huang wrote:
> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I was able to install Hadoop and HBase and start the nodes.
>
> $ jps
> 5417 TaskTracker
> 5083 NameNode
> 5761 HRegionServer
> 5658 HMaster
> 6015 Jps
> 5613 HQuorumPeer
> 5171 DataNode
> 5327 JobTracker
> 5262 SecondaryNameNode
>
> However, when I tried ./hbase shell I got the following error:
> Trace/BPT trap: 5
>
> Looking at the log from the master server I found:
> 2012-09-13 18:33:46,842 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,981 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,982 DEBUG
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> locateRegionInMeta parentTable=-ROOT-,
> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124,
> port=60020}, attempt=14 of 100 failed; retrying after sleep of 32044
> because: HRegionInfo was null or empty in -ROOT-,
> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>
> I don't quite understand what this error is and how to fix it. Any
> suggestions?  Thanks!
>
> Here are my config files:
>
> <configuration>
>    <property>
>      <name>hbase.rootdir</name>
>      <value>hdfs://localhost:9000/hbase</value>
>    </property>
>    <property>
>      <name>hbase.zookeeper.quorum</name>
>      <value>localhost</value>
>    </property>
>    <property>
>      <name>hbase.cluster.distributed</name>
>      <value>true</value>
If you want to use HBase in pseudo-distributed mode, you should not put
this property here, because the HMaster thinks that the cluster is in
fully distributed mode and tries to find the region servers; this error
comes to light because in pseudo-distributed mode you don't have to
include it.

So, remove the hbase.cluster.distributed property, and restart all daemons.

Another thing: for pseudo-distributed mode, you don't need a running
ZooKeeper cluster; you need that for a fully distributed cluster.
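
A minimal hbase-site.xml along those lines would look something like this
(a sketch only, reusing the rootdir and quorum from your mail):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
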
>    </property>
>    <property>
>      <name>dfs.replication</name>
>      <value>1</value>
>    </property>
>    <property>
>       <name>hbase.master</name>
>       <value>localhost:60000</value>
>    </property>
>    <property>
>      <name>dfs.support.append</name>
>      <value>true</value>
>    </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>    <property>
>       <name>fs.default.name</name>
>       <value>localhost:9000</value>
>    </property>
>    <property>
>       <name>dfs.replication</name>
>       <value>1</value>
>    </property>
>    <property>
>       <name>dfs.namenode.name.dir</name>
>       <value>/Users/jasonhuang/hdfs/name</value>
>    </property>
>    <property>
>       <name>dfs.datanode.data.dir</name>
>       <value>/Users/jasonhuang/hdfs/data</value>
>    </property>
>    <property>
>       <name>dfs.datanode.max.xcievers</name>
>       <value>4096</value>
>    </property>
> </configuration>
>
> mapred-site.xml
> <configuration>
>      <property>
>          <name>mapred.job.tracker</name>
>          <value>localhost:9001</value>
>      </property>
>      <property>
>          <name>mapred.child.java.opts</name>
>          <value>-Xmx512m</value>
>      </property>
>      <property>
>          <name>mapred.job.tracker</name>
>          <value>hdfs://localhost:54311</value>
>      </property>
> </configuration>
>

-- 

Marcos Luis Ortíz Valmaseda
*Data Engineer && Sr. System Administrator at UCI*
about.me/marcosortiz <http://about.me/marcosortiz>
My Blog <http://marcosluis2186.posterous.com>
Tumblr's blog <http://marcosortiz.tumblr.com/>
@marcosluis2186 <http://twitter.com/marcosluis2186>






Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
Just to report back (in case someone else ran into similar issues
during install) - I noticed that one of my friends uses Sun's JDK (and
I was using OpenJDK). I then replaced my JDK and started a new install
with Hadoop 1.0.3 + HBase 0.94.1. Now it works on my MacBook!
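
For anyone hitting the same "Trace/BPT trap: 5", a sketch of how to check
and switch JDKs on OS X (assuming the stock java_home helper):

$ java -version                 # what the shell currently resolves to
$ /usr/libexec/java_home -V     # list every installed JDK
# then point HBase at the chosen one in conf/hbase-env.sh:
export JAVA_HOME=$(/usr/libexec/java_home -v 1.6)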

Jason

On Wed, Sep 19, 2012 at 11:41 AM, Jason Huang <ja...@icare.com> wrote:
> Thanks JD and Shumin for the responses.
>
> I realized that this chain is getting longer and longer and I've tried
> many different things in between. I will clean out every previous
> install and start a fresh one with the newest version, following your
> instructions step by step. Hopefully that will work.
>
> thanks again for all your time,
>
> Jason
>
> On Tue, Sep 18, 2012 at 1:34 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
>> On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang <ja...@icare.com> wrote:
>>> I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
>>> but that had already been updated (someone else pointed that out)
>>> before I ran this test today.
>>
>> I see. In the future it'd be best if you specify which version you're
>> running when you need help, in this case both Hadoop and HBase, and
>> also point out all the changes that you made off-list.
>>
>> Regarding that 3c6caf495a1743eca405a5f59edaef13 file, it's coming from
>> a flush after the -ROOT- region gets created. It could be helpful to
>> see the Datanode log at the time of that operation, maybe it has more
>> info for us.
>>
>> J-D

Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
Thanks JD and Shumin for the responses.

I realized that this chain is getting longer and longer and I've tried
many different things in between. I will clean out every previous
install and start a fresh one with the newest version, following your
instructions step by step. Hopefully that will work.
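
For the record, a minimal sketch of what "cleaning out" could look like
with the stock 1.x scripts (paths taken from the configs earlier in the
thread; destructive, so only on a scratch setup):

$ bin/stop-hbase.sh && bin/stop-all.sh
$ rm -rf /Users/jasonhuang/hdfs/name /Users/jasonhuang/hdfs/data
$ bin/hadoop namenode -format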

thanks again for all your time,

Jason

On Tue, Sep 18, 2012 at 1:34 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang <ja...@icare.com> wrote:
>> I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
>> but that had already been updated (someone else pointed that out)
>> before I ran this test today.
>
> I see. In the future it'd be best if you specify which version you're
> running when you need help, in this case both Hadoop and HBase, and
> also point out all the changes that you made off-list.
>
> Regarding that 3c6caf495a1743eca405a5f59edaef13 file, it's coming from
> a flush after the -ROOT- region gets created. It could be helpful to
> see the Datanode log at the time of that operation, maybe it has more
> info for us.
>
> J-D

Re: Can not access HBase Shell.

Posted by Jean-Daniel Cryans <jd...@apache.org>.
On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang <ja...@icare.com> wrote:
> I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
> but that had already been updated (someone else pointed that out)
> before I ran this test today.

I see. In the future it'd be best if you specify which version you're
running when you need help, in this case both Hadoop and HBase, and
also point out all the changes that you made off-list.

Regarding that 3c6caf495a1743eca405a5f59edaef13 file, it's coming from
a flush after the -ROOT- region gets created. It could be helpful to
see the Datanode log at the time of that operation, maybe it has more
info for us.
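
A sketch of how to pull that out, assuming the stock Hadoop 1.x log naming
under $HADOOP_HOME/logs:

$ grep blk_8430779885801230139 logs/hadoop-*-datanode-*.log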

J-D

Re: Can not access HBase Shell.

Posted by Shumin Wu <sh...@gmail.com>.
Hi Jason,

In a pseudo-distributed environment, you should start ZooKeeper and the
HBase region server. I don't see them in your process list (a sketch of
how to start them follows the quoted list).

"$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode

$ ./bin/hbase shell
Trace/BPT trap: 5
"
Shumin

On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang <ja...@icare.com> wrote:

> Hi J-D,
>
> I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
> but that had already been updated (someone else pointed that out)
> before I ran this test today.
>
> thanks,
>
> Jason
>
> On Tue, Sep 18, 2012 at 1:05 PM, Jean-Daniel Cryans <jd...@apache.org>
> wrote:
> > Which Hadoop version are you using exactly? I see you are setting
> > dfs.datanode.data.dir which is a post 1.0 setting (from what I can
> > tell by googling, since I didn't recognize it), but you are using a
> > "hadoop-examples-1.0.3.jar" file that seems to imply you are on 1.0.3
> > which would probably not pick up dfs.datanode.data.dir
> >
> > J-D
> >
> > On Tue, Sep 18, 2012 at 9:21 AM, Jason Huang <ja...@icare.com> wrote:
> >> I've done some more research but still can't start the HMaster node
> >> (with similar error). Here is what I found in the Master Server log:
> >>
> >> Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
> >> core file size          (blocks, -c) 0
> >> data seg size           (kbytes, -d) unlimited
> >> file size               (blocks, -f) unlimited
> >> max locked memory       (kbytes, -l) unlimited
> >> max memory size         (kbytes, -m) unlimited
> >> open files                      (-n) 65536
> >> pipe size            (512 bytes, -p) 1
> >> stack size              (kbytes, -s) 8192
> >> cpu time               (seconds, -t) unlimited
> >> max user processes              (-u) 1064
> >> virtual memory          (kbytes, -v) unlimited
> >>
> >>
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> HBase 0.94.0
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r
> >> 1332822
> >> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> >> Compiled by jenkins on Tue May  1 21:43:54 UTC 2012
> >> 2012-09-18 11:50:23,395 INFO
> >> org.apache.zookeeper.server.ZooKeeperServer: Server
> >> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
> >> GMT
> >>
> >> ........
> >>
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Updates disabled for
> >> region -ROOT-,,0.70236052
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush
> >> for -ROOT-,,0.70236052, current region memstore size 360.0
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting
> >> -ROOT-,,0.70236052, commencing wait for mvcc, flushsize=360
> >> 2012-09-18 11:50:56,671 DEBUG
> >> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting,
> >> commencing flushing stores
> >> 2012-09-18 11:50:56,684 DEBUG org.apache.hadoop.hbase.util.FSUtils:
> >> Creating file:hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13with
> >> permission:rwxrwxrwx
> >> 2012-09-18 11:50:56,692 DEBUG
> >> org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with
> >> CacheConfig:enabled [cacheDataOnRead=false] [cacheDataOnWrite=false]
> >> [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false]
> >> [cacheEvictOnClose=false] [cacheCompressed=false]
> >> 2012-09-18 11:50:56,694 INFO
> >> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
> >> filter type for
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13:
> >> CompoundBloomFilterWriter
> >> 2012-09-18 11:50:56,703 INFO
> >> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
> >> NO DeleteFamily was added to HFile
> >> (hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13)
> >> 2012-09-18 11:50:56,703 INFO
> >> org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
> >> memsize=360.0, into tmp file
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> 2012-09-18 11:50:56,716 WARN org.apache.hadoop.hdfs.DFSClient:
> >> Exception while reading from blk_8430779885801230139_1008 of
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> >> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> >> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
> >>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
> >>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
> >>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
> >>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
> >>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
> >>
> >>
> >> Since my colleagues can follow the same setup instructions and install it
> >> on another machine (non-Mac), I think this might be an issue with my
> >> MacBook Pro?
> >>
> >> One thing I am not sure about is whether the system settings (max open
> >> files / max user proc) need to be adjusted. I've increased the max open
> >> files limit to 65536 already (as you can see from the beginning of the log).
> >>
> >>
> >> The other thing I am not sure about is why/how the file
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> is created. After the failure to start HMaster, I checked that file with
> >> dfs cat and got the same error:
> >>
> >> $ ./bin/hadoop dfs -cat
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >> Warning: $HADOOP_HOME is deprecated.
> >> 12/09/18 12:01:59 WARN hdfs.DFSClient: Exception while reading from
> >> blk_8430779885801230139_1008 of
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> >> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> >> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
> >>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
> >>
> >>
> >> And this file definitely exists:
> >> $ ./bin/hadoop dfs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/
> >> Warning: $HADOOP_HOME is deprecated.
> >> Found 1 items
> >> -rw-r--r--   1 jasonhuang supergroup        848 2012-09-18 11:50
> >> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> >>
> >>
> >> Also, when I look at some other dfs files they seem to be OK:
> >> $ ./bin/hadoop dfs -cat hdfs://localhost:54310/hbase/-ROOT-/70236052/.regioninfo
> >> Warning: $HADOOP_HOME is deprecated.
> >>
> >>         -ROOT-,,0-ROOT-?Y??
> >>
> >> {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}
> >>
> >>
> >> $ ./bin/hadoop dfs -cat
> >> hdfs://localhost:54310/hbase/-ROOT-/70236052/.logs/hlog.1347983456546
> >> Warning: $HADOOP_HOME is deprecated.
> >> SEQ0org.apache.hadoop.hbase.regionserver.wal.HLogKey0org.apache.hadoop.hbase.regionserver.wal.WALEditversion1g둣?????%???bV?"70236052-ROOT-9?9?????M#" .META.,,1inforegioninfo9?9? .META.,,1.META.+???$ .META.,,1infov9?9?
> >>
> >> Sorry for the lengthy email. Any help will be greatly appreciated!
> >>
> >> Jason
> >>
> >> On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang <ja...@icare.com> wrote:
> >>> Hello,
> >>>
> >>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> >>> I was able to install Hadoop and HBase and start the nodes.
> >>>
> >>> $ jps
> >>> 5417 TaskTracker
> >>> 5083 NameNode
> >>> 5761 HRegionServer
> >>> 5658 HMaster
> >>> 6015 Jps
> >>> 5613 HQuorumPeer
> >>> 5171 DataNode
> >>> 5327 JobTracker
> >>> 5262 SecondaryNameNode
> >>>
> >>> However, when I tried ./hbase shell I got the following error:
> >>> Trace/BPT trap: 5
> >>>
>

Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
Hi J-D,

I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
but that had already been updated (someone else pointed that out)
before I ran this test today.

thanks,

Jason

On Tue, Sep 18, 2012 at 1:05 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> Which Hadoop version are you using exactly? I see you are setting
> dfs.datanode.data.dir which is a post 1.0 setting (from what I can
> tell by googling, since I didn't recognize it), but you are using a
> "hadoop-examples-1.0.3.jar" file that seems to imply you are on 1.0.3
> which would probably not pick up dfs.datanode.data.dir
>
> J-D
>
> On Tue, Sep 18, 2012 at 9:21 AM, Jason Huang <ja...@icare.com> wrote:
>> I've done some more research but still can't start the HMaster node
>> (with similar error). Here is what I found in the Master Server log:
>>
>> Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> file size               (blocks, -f) unlimited
>> max locked memory       (kbytes, -l) unlimited
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 65536
>> pipe size            (512 bytes, -p) 1
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 1064
>> virtual memory          (kbytes, -v) unlimited
>>
>>
>> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> HBase 0.94.0
>> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r
>> 1332822
>> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Compiled by jenkins on Tue May  1 21:43:54 UTC 2012
>> 2012-09-18 11:50:23,395 INFO
>> org.apache.zookeeper.server.ZooKeeperServer: Server
>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
>> GMT
>>
>> ........
>>
>> 2012-09-18 11:50:56,671 DEBUG
>> org.apache.hadoop.hbase.regionserver.HRegion: Updates disabled for
>> region -ROOT-,,0.70236052
>> 2012-09-18 11:50:56,671 DEBUG
>> org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush
>> for -ROOT-,,0.70236052, current region memstore size 360.0
>> 2012-09-18 11:50:56,671 DEBUG
>> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting
>> -ROOT-,,0.70236052, commencing wait for mvcc, flushsize=360
>> 2012-09-18 11:50:56,671 DEBUG
>> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting,
>> commencing flushing stores
>> 2012-09-18 11:50:56,684 DEBUG org.apache.hadoop.hbase.util.FSUtils:
>> Creating file:hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13with
>> permission:rwxrwxrwx
>> 2012-09-18 11:50:56,692 DEBUG
>> org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with
>> CacheConfig:enabled [cacheDataOnRead=false] [cacheDataOnWrite=false]
>> [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false]
>> [cacheEvictOnClose=false] [cacheCompressed=false]
>> 2012-09-18 11:50:56,694 INFO
>> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
>> filter type for
>> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13:
>> CompoundBloomFilterWriter
>> 2012-09-18 11:50:56,703 INFO
>> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
>> NO DeleteFamily was added to HFile
>> (hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13)
>> 2012-09-18 11:50:56,703 INFO
>> org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
>> memsize=360.0, into tmp file
>> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
>> 2012-09-18 11:50:56,716 WARN org.apache.hadoop.hdfs.DFSClient:
>> Exception while reading from blk_8430779885801230139_1008 of
>> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
>> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
>> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
>>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>
>>
>> Since my colleagues can follow the same setup instructions and install it
>> on another machine (non-Mac), I think this might be an issue with my
>> MacBook Pro?
>>
>> One thing I am not sure about is whether the system settings (max open
>> files / max user proc) need to be adjusted. I've increased the max open
>> files limit to 65536 already (as you can see from the beginning of the log).
>>
>>
>> The other thing I am not sure about is why/how the file
>> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
>> is created. After the failure to start HMaster, I checked that file with
>> dfs cat and got the same error:
>>
>> $ ./bin/hadoop dfs -cat
>> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
>> Warning: $HADOOP_HOME is deprecated.
>> 12/09/18 12:01:59 WARN hdfs.DFSClient: Exception while reading from
>> blk_8430779885801230139_1008 of
>> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
>> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
>> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
>>
>>
>> And this file definitely exists:
>> ./bin/hadoop dfs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/
>> Warning: $HADOOP_HOME is deprecated.
>> Found 1 items
>> -rw-r--r--   1 jasonhuang supergroup        848 2012-09-18 11:50
>> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
>>
>>
>> Also, when I look at some other dfs files they seem to be OK:
>>  ./bin/hadoop dfs -cat hdfs://localhost:54310/hbase/-ROOT-/70236052/.regioninfo
>> Warning: $HADOOP_HOME is deprecated.
>>
>>         -ROOT-,,0-ROOT-?Y??
>>
>> {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}
>>
>>
>>
>> $ ./bin/hadoop dfs -cat
>> hdfs://localhost:54310/hbase/-ROOT-/70236052/.logs/hlog.1347983456546
>> Warning: $HADOOP_HOME is deprecated.
>> SEQ0org.apache.hadoop.hbase.regionserver.wal.HLogKey0org.apache.hadoop.hbase.regionserver.wal.WALEditversion1g둣?????%???bV?"70236052-ROOT-9?9?????M#"   .META.,,1inforegioninfo9?9?     .META.,,1.META.+???$    .META.,,1infov9?9?
>>
>>
>> Sorry for the lengthy email. Any help will be greatly appreciated!
>>
>> Jason
>>
>> On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang <ja...@icare.com> wrote:
>>> Hello,
>>>
>>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
>>> I was able to install Hadoop and HBase and start the nodes.
>>>
>>> $ jps
>>> 5417 TaskTracker
>>> 5083 NameNode
>>> 5761 HRegionServer
>>> 5658 HMaster
>>> 6015 Jps
>>> 5613 HQuorumPeer
>>> 5171 DataNode
>>> 5327 JobTracker
>>> 5262 SecondaryNameNode
>>>
>>> However, when I tried ./hbase shell I got the following error:
>>> Trace/BPT trap: 5
>>>

Re: Can not access HBase Shell.

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Which Hadoop version are you using exactly? I see you are setting
dfs.datanode.data.dir which is a post 1.0 setting (from what I can
tell by googling, since I didn't recognize it), but you are using a
"hadoop-examples-1.0.3.jar" file that seems to imply you are on 1.0.3
which would probably not pick up dfs.datanode.data.dir
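
A quick way to check the release, plus the 1.x spelling of that key (a
sketch; in 1.x the datanode directory key is dfs.data.dir, with
dfs.datanode.data.dir being the 2.x name):

$ hadoop version    # prints the exact release, e.g. "Hadoop 1.0.3"

<property>
  <name>dfs.data.dir</name>
  <value>/Users/jasonhuang/hdfs/data</value>
</property>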

J-D

On Tue, Sep 18, 2012 at 9:21 AM, Jason Huang <ja...@icare.com> wrote:
> I've done some more research but still can't start the HMaster node
> (with similar error). Here is what I found in the Master Server log:
>
> Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> file size               (blocks, -f) unlimited
> max locked memory       (kbytes, -l) unlimited
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 65536
> pipe size            (512 bytes, -p) 1
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 1064
> virtual memory          (kbytes, -v) unlimited
>
>
> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.0
> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r
> 1332822
> 2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Tue May  1 21:43:54 UTC 2012
> 2012-09-18 11:50:23,395 INFO
> org.apache.zookeeper.server.ZooKeeperServer: Server
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
> GMT
>
> ........
>
> 2012-09-18 11:50:56,671 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegion: Updates disabled for
> region -ROOT-,,0.70236052
> 2012-09-18 11:50:56,671 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush
> for -ROOT-,,0.70236052, current region memstore size 360.0
> 2012-09-18 11:50:56,671 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting
> -ROOT-,,0.70236052, commencing wait for mvcc, flushsize=360
> 2012-09-18 11:50:56,671 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting,
> commencing flushing stores
> 2012-09-18 11:50:56,684 DEBUG org.apache.hadoop.hbase.util.FSUtils:
> Creating file:hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13with
> permission:rwxrwxrwx
> 2012-09-18 11:50:56,692 DEBUG
> org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with
> CacheConfig:enabled [cacheDataOnRead=false] [cacheDataOnWrite=false]
> [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false]
> [cacheEvictOnClose=false] [cacheCompressed=false]
> 2012-09-18 11:50:56,694 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
> filter type for
> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13:
> CompoundBloomFilterWriter
> 2012-09-18 11:50:56,703 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
> NO DeleteFamily was added to HFile
> (hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13)
> 2012-09-18 11:50:56,703 INFO
> org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
> memsize=360.0, into tmp file
> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> 2012-09-18 11:50:56,716 WARN org.apache.hadoop.hdfs.DFSClient:
> Exception while reading from blk_8430779885801230139_1008 of
> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>
>
> Since my colleagues can follow the same setup instructions and install it
> on another machine (non-Mac), I think this might be an issue with my
> MacBook Pro?
>
> One thing I am not sure about is whether the system settings (max open
> files / max user proc) need to be adjusted. I've increased the max open
> files limit to 65536 already (as you can see from the beginning of the log).
>
>
> The other thing I am not sure about is why/how the file
> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> is created. After the failure to start HMaster, I checked that file with
> dfs cat and got the same error:
>
> $ ./bin/hadoop dfs -cat
> hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
> Warning: $HADOOP_HOME is deprecated.
> 12/09/18 12:01:59 WARN hdfs.DFSClient: Exception while reading from
> blk_8430779885801230139_1008 of
> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
> 127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
> header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
>
>
> And this file definitely exists:
> ./bin/hadoop dfs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/
> Warning: $HADOOP_HOME is deprecated.
> Found 1 items
> -rw-r--r--   1 jasonhuang supergroup        848 2012-09-18 11:50
> /hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
>
>
> Also, when I look at some other dfs files they seem to be OK:
>  ./bin/hadoop dfs -cat hdfs://localhost:54310/hbase/-ROOT-/70236052/.regioninfo
> Warning: $HADOOP_HOME is deprecated.
>
>         -ROOT-,,0-ROOT-?Y??
>
> {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}
>
>
>
> $ ./bin/hadoop dfs -cat
> hdfs://localhost:54310/hbase/-ROOT-/70236052/.logs/hlog.1347983456546
> Warning: $HADOOP_HOME is deprecated.
> SEQ0org.apache.hadoop.hbase.regionserver.wal.HLogKey0org.apache.hadoop.hbase.regionserver.wal.WALEditversion1g둣?????%???bV?"70236052-ROOT-9?9?????M#"   .META.,,1inforegioninfo9?9?     .META.,,1.META.+???$    .META.,,1infov9?9?
>
>
> Sorry for the lengthy email. Any help will be greatly appreciated!
>
> Jason
>
> On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang <ja...@icare.com> wrote:
>> Hello,
>>
>> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
>> I was able to install Hadoop and HBase and start the nodes.
>>
>> $ jps
>> 5417 TaskTracker
>> 5083 NameNode
>> 5761 HRegionServer
>> 5658 HMaster
>> 6015 Jps
>> 5613 HQuorumPeer
>> 5171 DataNode
>> 5327 JobTracker
>> 5262 SecondaryNameNode
>>
>> However, when I tried ./hbase shell I got the following error:
>> Trace/BPT trap: 5
>>

Re: Can not access HBase Shell.

Posted by Jason Huang <ja...@icare.com>.
I've done some more research but still can't start the HMaster node
(with a similar error). Here is what I found in the master server log:

Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1064
virtual memory          (kbytes, -v) unlimited


2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
HBase 0.94.0
2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r
1332822
2012-09-18 11:50:23,306 INFO org.apache.hadoop.hbase.util.VersionInfo:
Compiled by jenkins on Tue May  1 21:43:54 UTC 2012
2012-09-18 11:50:23,395 INFO
org.apache.zookeeper.server.ZooKeeperServer: Server
environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
GMT

........

2012-09-18 11:50:56,671 DEBUG
org.apache.hadoop.hbase.regionserver.HRegion: Updates disabled for
region -ROOT-,,0.70236052
2012-09-18 11:50:56,671 DEBUG
org.apache.hadoop.hbase.regionserver.HRegion: Started memstore flush
for -ROOT-,,0.70236052, current region memstore size 360.0
2012-09-18 11:50:56,671 DEBUG
org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting
-ROOT-,,0.70236052, commencing wait for mvcc, flushsize=360
2012-09-18 11:50:56,671 DEBUG
org.apache.hadoop.hbase.regionserver.HRegion: Finished snapshotting,
commencing flushing stores
2012-09-18 11:50:56,684 DEBUG org.apache.hadoop.hbase.util.FSUtils:
Creating file:hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13with
permission:rwxrwxrwx
2012-09-18 11:50:56,692 DEBUG
org.apache.hadoop.hbase.io.hfile.HFileWriterV2: Initialized with
CacheConfig:enabled [cacheDataOnRead=false] [cacheDataOnWrite=false]
[cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false]
[cacheEvictOnClose=false] [cacheCompressed=false]
2012-09-18 11:50:56,694 INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
filter type for
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13:
CompoundBloomFilterWriter
2012-09-18 11:50:56,703 INFO
org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
NO DeleteFamily was added to HFile
(hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13)
2012-09-18 11:50:56,703 INFO
org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=2,
memsize=360.0, into tmp file
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
2012-09-18 11:50:56,716 WARN org.apache.hadoop.hdfs.DFSClient:
Exception while reading from blk_8430779885801230139_1008 of
/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)


Since my colleagues can follow the same setup instructions and install it
on another machine (non-Mac), I think this might be an issue with my
MacBook Pro?

One thing I am not sure about is whether the system settings (max open
files / max user proc) need to be adjusted. I've increased the max open
files limit to 65536 already (as you can see from the beginning of the log).
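
One way to double-check both limits on OS X (a sketch using the stock
tools):

$ ulimit -a          # per-shell limits; matches the log header above
$ launchctl limit    # system-wide limits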


The other thing I am not sure about is why/how the file
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
is created. After the failure to start HMaster, I checked that file with
dfs cat and got the same error:

$ ./bin/hadoop dfs -cat
hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13
Warning: $HADOOP_HOME is deprecated.
12/09/18 12:01:59 WARN hdfs.DFSClient: Exception while reading from
blk_8430779885801230139_1008 of
/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13 from
127.0.0.1:50010: java.io.IOException: BlockReader: error in packet
header(chunkOffset : 512, dataLen : 0, seqno : 0 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1577)


And this file definitely exists:
./bin/hadoop dfs -ls hdfs://localhost:54310/hbase/-ROOT-/70236052/.tmp/
Warning: $HADOOP_HOME is deprecated.
Found 1 items
-rw-r--r--   1 jasonhuang supergroup        848 2012-09-18 11:50
/hbase/-ROOT-/70236052/.tmp/3c6caf495a1743eca405a5f59edaef13


Also, when I look at some other dfs files they seem to be OK:
$ ./bin/hadoop dfs -cat hdfs://localhost:54310/hbase/-ROOT-/70236052/.regioninfo
Warning: $HADOOP_HOME is deprecated.

	-ROOT-,,0-ROOT-?Y??

{NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,}



$ ./bin/hadoop dfs -cat
hdfs://localhost:54310/hbase/-ROOT-/70236052/.logs/hlog.1347983456546
Warning: $HADOOP_HOME is deprecated.
SEQ0org.apache.hadoop.hbase.regionserver.wal.HLogKey0org.apache.hadoop.hbase.regionserver.wal.WALEditversion1g둣?????%???bV?"70236052-ROOT-9?9?????M#"	.META.,,1inforegioninfo9?9?	.META.,,1.META.+???$	.META.,,1infov9?9?


Sorry for the lengthy email. Any help will be greatly appreciated!

Jason

On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang <ja...@icare.com> wrote:
> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I was able to install Hadoop and HBase and start the nodes.
>
> $ jps
> 5417 TaskTracker
> 5083 NameNode
> 5761 HRegionServer
> 5658 HMaster
> 6015 Jps
> 5613 HQuorumPeer
> 5171 DataNode
> 5327 JobTracker
> 5262 SecondaryNameNode
>
> However, when I tried ./hbase shell I got the following error:
> Trace/BPT trap: 5
>