Posted to user@hbase.apache.org by Peter Haidinyak <ph...@local.com> on 2011/03/09 20:27:55 UTC

Table Flush and then Close

HBase Version 	0.89.20100924+28
Hadoop Version	0.20.2+737
Java Version 	1.6.22

I am inserting data from a SQL Server table into an HBase table via a Java client. Everything is working well, but I discovered that if I call table.flush() and then immediately call table.close(), the row count in HBase ends up being around 4,900 short of 390,000 rows. But if I call table.flush() and then, microseconds later, call table.close(), all of the rows are written. The number of missing rows is constant across many tries.

-Pete
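
For reference, a minimal sketch of the insert loop being described, written against the 0.89/0.90-era HTable client API (flushCommits()/close(); the table name, column family, and row-key scheme below are assumptions for illustration -- the poster's "table.flush()" presumably refers to HTable.flushCommits(), since HTable has no flush() method in that API):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkInsert {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Table and column names are placeholders, not from the original post.
        HTable table = new HTable(conf, "mytable");
        table.setAutoFlush(false); // buffer puts client-side for bulk-load throughput

        for (int i = 0; i < 390000; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(i));
            table.put(put); // queued in the client-side write buffer
        }

        table.flushCommits(); // drain the write buffer to the region servers
        table.close();        // then release client resources
    }
}
```

With autoFlush disabled, any puts still sitting in the client write buffer are only guaranteed to reach the servers once flushCommits() completes, which is why the timing between the flush and the close matters in the behavior reported above.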

Re: Table Flush and then Close

Posted by Stack <st...@duboce.net>.
Sounds like a bug, Pete.  Can you dig in more to try and figure out what's happening?
Thank you,
St.Ack

On Wed, Mar 9, 2011 at 11:27 AM, Peter Haidinyak <ph...@local.com> wrote:
> HBase Version   0.89.20100924+28
> Hadoop Version  0.20.2+737
> Java Version    1.6.22
>
> I am inserting data from a SQL Server table into an HBase table via a Java client. Everything is working well, but I discovered that if I call table.flush() and then immediately call table.close(), the row count in HBase ends up being around 4,900 short of 390,000 rows. But if I call table.flush() and then, microseconds later, call table.close(), all of the rows are written. The number of missing rows is constant across many tries.
>
> -Pete
>