Posted to dev@hbase.apache.org by Umesh Chaudhary <um...@jci.com> on 2014/03/13 12:13:37 UTC

FW: Cannot find row in .META. for table

Hi,
I am able to scan HBase tables with the Thrift API, but I am only getting 1015 rows in 4 seconds.
I am using Hadoop 1.2.1 with HBase 0.94. I have 4 region servers (data nodes) and 1 HMaster (namenode), all with 4 GB RAM. I have configured the HBase side as below:

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<property>
  <name>hbase.zookeeper.distributed</name>
  <value>true</value>
</property>

<property>
  <name>zookeeper.session.timeout</name>
  <value>1200000</value>
</property>

<property>
  <name>hbase.zookeeper.property.tickTime</name>
  <value>6000</value>
</property>

<property>
  <name>hbase.client.scanner.caching</name>
  <value>500</value>
  <description>Number of rows that will be fetched when calling next
  on a scanner if it is not served from (local, client) memory. Higher
  caching values will enable faster scanners but will eat up more memory
  and some calls of next may take longer and longer times when the cache is empty.
  Do not set this value such that the time between invocations is greater
  than the scanner timeout; i.e. hbase.regionserver.lease.period
  </description>
</property>

<property>
  <name>hbase.storescanner.parallel.seek.enable</name>
  <value>true</value>
</property>

<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>

<property>
  <name>hbase.regionserver.handler.count</name>
  <value>100</value>
</property>

<property>
  <name>hfile.min.blocksize.size</name>
  <value>65536</value>
</property>

<property>
  <name>ipc.server.tcpnodelay</name>
  <value>true</value>
</property>

<property>
  <name>hfile.block.cache.size</name>
  <value>0.4</value>
</property>

<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
  <description>Maximum size of all memstores in a region server before new
  updates are blocked and flushes are forced. Defaults to 40% of heap.
  Updates are blocked and flushes are forced until size of all memstores
  in a region server hits hbase.regionserver.global.memstore.lowerLimit.
  </description>
</property>

<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>30</value>
</property>

<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>

<property>
  <name>hbase.block.cache.size</name>
  <value>0</value>
</property>
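
For context on the caching value above: with hbase.client.scanner.caching = 500, a full scan of N rows costs roughly ceil(N / 500) scanner next() round trips to the region server. A minimal sketch of that arithmetic (the helper is purely illustrative, not an HBase API):

```java
public class ScannerRpcEstimate {
    // Illustrative helper: estimated number of scanner next() round trips
    // for a scan of totalRows rows with the given caching value.
    static int estimatedRoundTrips(long totalRows, int caching) {
        return (int) ((totalRows + caching - 1) / caching);
    }

    public static void main(String[] args) {
        // The 1015-row scan mentioned above, at caching = 500:
        System.out.println(estimatedRoundTrips(1015, 500)); // 3
    }
}
```

So at this caching value a 1015-row scan is only about 3 round trips; if each batch still takes seconds, the bottleneck is likely elsewhere (serialization, the Thrift gateway, or per-row work) rather than RPC count.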

Please help me to get better performance.



From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Wednesday, March 12, 2014 10:18 PM
To: Umesh Chaudhary
Cc: user@hbase.apache.org
Subject: Re: Cannot find row in .META. for table

Looking at src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java :

    filterHashMap.put("SingleColumnValueFilter", ParseConstants.FILTER_PACKAGE + "." +
                      "SingleColumnValueFilter");

SingleColumnValueFilter should be supported.

Can you show the complete stack trace ?
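
For reference, ParseFilter's filter-string grammar is FilterName('arg', 'arg', op, 'comparatorType:value'). A small, hypothetical builder (not part of HBase or Thrift; the 'cf' family name is a placeholder assumption — a real, non-empty column family rather than a bare space) sketching a well-formed string:

```java
// Hypothetical helper for assembling a ParseFilter-style filter string;
// not an HBase API, just an illustration of the expected syntax.
public class FilterStringBuilder {
    static String singleColumnValueFilter(String family, String qualifier,
                                          String op, String comparator) {
        // ParseFilter expects: Name('family', 'qualifier', op, 'comparatorType:value')
        return String.format("SingleColumnValueFilter('%s', '%s', %s, '%s')",
                family, qualifier, op, comparator);
    }

    public static void main(String[] args) {
        // 'cf' is a placeholder family name; substitute your real one.
        System.out.println(singleColumnValueFilter("cf", "COND_P", "=", "binary:0.0"));
    }
}
```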

On Tue, Mar 11, 2014 at 11:04 PM, Umesh Chaudhary <um...@jci.com> wrote:
Hi Ted,
Now I can successfully scan tables from HBase, but when I use a filter like:

scanFilter.FilterString = GetBytesFromString("SingleColumnValueFilter(' ', 'COND_P', = , 'binary:0.0')"); (I have given <space> as the column family)

I am getting this error:
"java.lang.IllegalArgumentException: Filter Name SingleColumnValueFilter not supported", but I need SingleColumnValueFilter for my requirement.

Please advise how I can use filters with scannerOpen.



From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Wednesday, March 12, 2014 8:37 AM
To: Umesh Chaudhary; user@hbase.apache.org

Subject: Re: Cannot find row in .META. for table

Adding back user@

Can you look at the example in this post and compose startRow accordingly ?

http://stackoverflow.com/questions/18040012/what-is-the-equivalent-of-javas-bytebuffer-wrap-in-c
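
On the start-row question: to scan from the beginning of a table, HBase uses a zero-length byte array (HConstants.EMPTY_START_ROW), which is not the same thing as the 16 zero-valued bytes that C#'s Guid.Empty.ToByteArray() produces. A minimal Java sketch of the difference:

```java
public class StartRowSketch {
    public static void main(String[] args) {
        // HBase's "scan from the first row" marker is a zero-length byte array.
        byte[] emptyStartRow = new byte[0];

        // C#'s Guid.Empty.ToByteArray() yields 16 zero-valued bytes instead --
        // a real (and different) 16-byte key, not the empty marker.
        byte[] guidEmptyEquivalent = new byte[16];

        System.out.println(emptyStartRow.length);       // 0
        System.out.println(guidEmptyEquivalent.length); // 16
    }
}
```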

On Tue, Mar 11, 2014 at 7:19 PM, Umesh Chaudhary <um...@jci.com> wrote:
Hi Ted,
By passing null for attributes, the scanner is working now; thanks for the idea.
But when I give Guid.Empty.ToByteArray() as the ByteBuffer startRow parameter's value, I get no rows in scannerGet_result.
Please let me know what value I should pass for the startRow parameter.



From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Tuesday, March 11, 2014 10:15 PM
To: Umesh Chaudhary

Subject: Re: Cannot find row in .META. for table

Have you seen this ?

http://stackoverflow.com/questions/10078348/byte-collection-based-similar-with-bytebuffer-from-java

Looking at src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java :

  private static void addAttributes(OperationWithAttributes op,
    Map<ByteBuffer, ByteBuffer> attributes) {
    if (attributes == null || attributes.size() == 0) {
      return;

You can pass C# equivalent of null for attributes.
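
To illustrate, a minimal sketch (mirroring the guard quoted above, not the actual server code path) of why a null or empty attributes map is harmless:

```java
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class AttributesGuard {
    // Mirrors the null/empty check in addAttributes(): when attributes is
    // null or empty, the server simply returns without touching the operation.
    static boolean wouldApplyAttributes(Map<ByteBuffer, ByteBuffer> attributes) {
        if (attributes == null || attributes.size() == 0) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(wouldApplyAttributes(null));            // false
        System.out.println(wouldApplyAttributes(new HashMap<>())); // false
        Map<ByteBuffer, ByteBuffer> attrs = Collections.singletonMap(
                ByteBuffer.wrap(new byte[]{1}), ByteBuffer.wrap(new byte[]{2}));
        System.out.println(wouldApplyAttributes(attrs));           // true
    }
}
```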

Cheers
On Tue, Mar 11, 2014 at 8:19 AM, Umesh Chaudhary <um...@jci.com> wrote:
Thanks for the reply, Ted. I am using the Hbase-sharp dll, which was ported with a rather old scannerOpen() method that has no Map<ByteBuffer,ByteBuffer> attributes parameter.
I have also generated C# code from the new Thrift server, in which I get the 4 arguments you listed.
Now my concern is what values I should give for the {ByteBuffer startRow} and {Map<ByteBuffer,ByteBuffer> attributes} parameters, because I want to get all rows from the specified table.


-----Original Message-----
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Tuesday, March 11, 2014 8:21 PM
To: user@hbase.apache.org
Subject: Re: Cannot find row in .META. for table
In src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java , I found the following scannerOpen() methods:

    public int scannerOpen(ByteBuffer tableName, ByteBuffer startRow, List<ByteBuffer> columns, Map<ByteBuffer,ByteBuffer> attributes) throws IOError, org.apache.thrift.TException;
    public void scannerOpen(ByteBuffer tableName, ByteBuffer startRow, List<ByteBuffer> columns, Map<ByteBuffer,ByteBuffer> attributes, org.apache.thrift.async.AsyncMethodCallback<AsyncClient.scannerOpen_call> resultHandler) throws org.apache.thrift.TException;
    public int scannerOpen(ByteBuffer tableName, ByteBuffer startRow, List<ByteBuffer> columns, Map<ByteBuffer,ByteBuffer> attributes) throws IOError, org.apache.thrift.TException
    public void scannerOpen(ByteBuffer tableName, ByteBuffer startRow, List<ByteBuffer> columns, Map<ByteBuffer,ByteBuffer> attributes, org.apache.thrift.async.AsyncMethodCallback<scannerOpen_call> resultHandler) throws org.apache.thrift.TException {

None of the above takes 3 parameters.


On Tue, Mar 11, 2014 at 6:05 AM, Umesh Chaudhary <um...@jci.com> wrote:

> I am getting the below message while running hbck with/without parameters:
>
>   Number of regions: 7
>     Deployed on:  jci0.jci.com,60020,1394472660266 jci1.jci.com,60020,1394472671945 jci2.jci.com,60020,1394472679477 jci3.jci.com,60020,1394472703951
> 0 inconsistencies detected.
>
> If there are 0 inconsistencies, then why am I facing this issue?
> Please check my code:
>
> var rows = _hbase.getRow(table_name, BitConverter.GetBytes("Asset")); ---> where "Asset" is my column family.
>
> OR
>
> var scanner = _hbase.scannerOpen(table_name, BitConverter.GetBytes(1), columnsListinByteArray);
>
> Because I am a newbie to the Thrift API for C#, please suggest how I can provide arguments for the same.
>
>
> -----Original Message-----
> From: Jean-Marc Spaggiari [mailto:jean-marc@spaggiari.org]
> Sent: Tuesday, March 11, 2014 5:13 PM
> To: user
> Subject: Re: Cannot find row in .META. for table
>
> Before using -repair or any other parameter, I recommend running it
> without any parameters first, to get a sense of what hbck will find.
>
> JM
>
>
> 2014-03-11 7:36 GMT-04:00 divye sheth <di...@gmail.com>:
>
> > You can use the hbck utility to repair these kinds of problems.
> >
> > $ hbase hbck -repair
> > OR
> > $ hbase hbck -fixMeta
> >
> > Thanks
> > Divye Sheth
> >
> >
> > On Tue, Mar 11, 2014 at 4:55 PM, Umesh Chaudhary <um...@jci.com> wrote:
> >
> > >
> > > Hi,
> > > I am using HBase 0.94.1 with Hadoop 1.2.1, and the Thrift API to
> > > access tables stored in HBase from my C# application. I am able to
> > > connect to the server, but when I perform any operation from the
> > > client, it gives the following error in the CLI log:
> > >
> > > 14/03/11 12:18:53 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
> > > org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: tblAssetsView, row=t\x00\x00\x00b\x00\x00\x00l\x00\x00\x00A\x00\x00\x00s\x00\x00\x00s\x00\x00\x00e\x00\x00\x00t\x00\x00\x00s\x00\x00\x00V\x00\x00\x00i\x00\x00\x00e\x00\x00\x00w\x00\x00\x00,,99999999999999
> > >     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1059)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1121)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
> > >     at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
> > >     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:155)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getTable(ThriftServerRunner.java:458)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getTable(ThriftServerRunner.java:464)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getRowWithColumnsTs(ThriftServerRunner.java:766)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getRow(ThriftServerRunner.java:739)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > >     at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:65)
> > >     at com.sun.proxy.$Proxy6.getRow(Unknown Source)
> > >     at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$getRow.getResult(Hbase.java:3906)
> > >     at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$getRow.getResult(Hbase.java:3894)
> > >     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
> > >     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
> > >     at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:287)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >     at java.lang.Thread.run(Thread.java:744)
> > > 14/03/11 12:18:53 WARN thrift.ThriftServerRunner$HBaseHandler: tblAssetsView
> > > org.apache.hadoop.hbase.TableNotFoundException: tblAssetsView
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1139)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
> > >     at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
> > >     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:155)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getTable(ThriftServerRunner.java:458)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getTable(ThriftServerRunner.java:464)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getRowWithColumnsTs(ThriftServerRunner.java:766)
> > >     at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.getRow(ThriftServerRunner.java:739)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > >     at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:65)
> > >     at com.sun.proxy.$Proxy6.getRow(Unknown Source)
> > >     at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$getRow.getResult(Hbase.java:3906)
> > >     at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$getRow.getResult(Hbase.java:3894)
> > >     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
> > >     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
> > >     at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:287)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >     at java.lang.Thread.run(Thread.java:744)
> > >
> > >
> > >
> > > But I can access my table from the HBase shell with all shell
> > > operations. I am totally stuck here; please suggest some way to
> > > overcome this issue.
> > >
> > >
> > > Umesh Chaudhary
> > >
> >
>
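
A note on the row key in the TableNotFoundException above: each character of tblAssetsView is followed by three NUL bytes, which is what a little-endian UTF-32 encoding produces (e.g. C#'s Encoding.UTF32), rather than the raw ASCII/UTF-8 bytes HBase expects for a table name. A minimal Java sketch of the mismatch, assuming the table name should be plain UTF-8 bytes:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class TableNameBytes {
    public static void main(String[] args) {
        String table = "tblAssetsView";

        // What HBase expects: the raw ASCII/UTF-8 bytes of the table name.
        byte[] utf8 = table.getBytes(StandardCharsets.UTF_8);

        // What the failing row key looks like: every character padded with
        // three NUL bytes, i.e. a UTF-32LE encoding of the same string.
        byte[] utf32le = table.getBytes(Charset.forName("UTF-32LE"));

        System.out.println(utf8.length);    // 13
        System.out.println(utf32le.length); // 52: t\x00\x00\x00b\x00\x00\x00...
    }
}
```

If the C# client encodes the table name (or row keys) with a wide encoding like this, the Thrift server will look up a name that does not exist in .META., which would explain the error even though the table is visible from the shell.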