Posted to user@hbase.apache.org by Rohit Kelkar <ro...@gmail.com> on 2013/06/19 19:05:46 UTC

Problem with HFile lexical comparison

Here is a problem I am facing while creating an HFile outside of an MR
job.
My column family is "sd".
For a given rowKey=10011-2-0000000000000000703, this is the sequence in
which I am writing KeyValue pairs to the HFile:
key=sd:dt, value="dummy value 1"
key=sd:dth, value="dummy value 2"

When I print the keys, it prints the following for the above 2 entries:
\x00\x1B10011-2-0000000000000000703\x02sddt\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04
\x00\x1B10011-2-0000000000000000703\x02sddth\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04

When I write the KV to HFile it throws an IOException:
Added a key not lexically larger than previous
key=\x00\x1B10011-2-0000000000000000703\x02sddth\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04,
lastkey=\x00\x1B10011-2-0000000000000000703\x02sddt\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04

Lexically, "sd:dt" is smaller than "sd:dth", so why does HFile
(AbstractHFileWriter.checkKey) complain? The checkKey method is using the
Hadoop RawComparator.
Am I missing something?
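One way to see what a raw comparator sees is to find the first byte where the two printed keys differ. A minimal self-contained sketch (the key layout is simplified from the printed bytes above, with Long.MAX_VALUE standing in for the timestamp; no HBase classes involved):

```java
public class FirstDiff {
    // Simplified serialized key: row + family "sd" + qualifier + 8-byte
    // Long.MAX_VALUE timestamp (0x7F followed by seven 0xFF bytes).
    static byte[] key(String qualifier) {
        byte[] prefix = ("10011-2-0000000000000000703sd" + qualifier).getBytes();
        byte[] k = java.util.Arrays.copyOf(prefix, prefix.length + 8);
        k[prefix.length] = 0x7F;
        for (int i = 1; i < 8; i++) k[prefix.length + i] = (byte) 0xFF;
        return k;
    }

    public static void main(String[] args) {
        byte[] dt = key("dt");   // ...sddt 7F FF FF FF FF FF FF FF
        byte[] dth = key("dth"); // ...sddth 7F FF FF FF FF FF FF FF
        int i = 0;
        while (dt[i] == dth[i]) i++;  // find the first differing byte
        // After the shared "...sddt" prefix, the sd:dt key continues with the
        // timestamp byte 0x7F while the sd:dth key continues with 'h' (0x68),
        // so unsigned byte comparison puts sd:dth BEFORE sd:dt.
        System.out.printf("0x%02X vs 0x%02X%n", dt[i], dth[i]); // 0x7F vs 0x68
    }
}
```

So in raw byte order the serialized sd:dth key really is smaller than the serialized sd:dt key, which is the order the writer is complaining about.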

- R

Re: Problem with HFile lexical comparison

Posted by Stack <st...@duboce.net>.
On Thu, Jun 20, 2013 at 3:28 PM, Rohit Kelkar <ro...@gmail.com> wrote:

>
> Out of curiosity, for my learning, why does LATEST_TIMESTAMP make the table
> not see the actual rows?
>


Because these values will be in the future relative to the host.

When you send a RS an edit w/ LATEST_TIMESTAMP, it will replace
LATEST_TIMESTAMP w/ currentTimeMillis before persisting it. You bypassed
this facility when you wrote the hfiles yourself.
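A toy sketch of that substitution (resolve is an illustrative stand-in, not the actual RegionServer code; the constant matches HBase's HConstants.LATEST_TIMESTAMP, which is Long.MAX_VALUE):

```java
public class LatestTimestamp {
    // HBase's LATEST_TIMESTAMP marker is Long.MAX_VALUE.
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

    // Illustrative stand-in for the server-side step: an edit carrying the
    // marker gets the server's current time before being persisted.
    // Hfiles written directly, outside the RegionServer, skip this step,
    // so the marker survives into the file.
    static long resolve(long ts, long now) {
        return ts == LATEST_TIMESTAMP ? now : ts;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(LATEST_TIMESTAMP > now);               // true: far in the future
        System.out.println(resolve(LATEST_TIMESTAMP, now) == now); // true
        System.out.println(resolve(1371661546000L, now) == 1371661546000L); // true
    }
}
```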

St.Ack

Re: Problem with HFile lexical comparison

Posted by Rohit Kelkar <ro...@gmail.com>.
I got this sorted out. Earlier, I was writing the KeyValues to the HFile
without a timestamp, like so:
KeyValue kv = new KeyValue(rowBytes, "CF".getBytes(), key,
value.getBytes());
So when I printed the KeyValues of the HFile using the command:
bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -p -f
hdfs://localhost:9000/ROOT_DIR/TABLE_NAME/REGION_NAME/CF_NAME/HFILE

It gave the following output
K: row1/d:c1/LATEST_TIMESTAMP/Put/vlen=2/ts=0 V: v1
K: row2/d:c2/LATEST_TIMESTAMP/Put/vlen=2/ts=0 V: v2
and the count 'mytable' returned zero rows.

Once I started writing the KeyValues with System.currentTimeMillis(),
like this:
KeyValue kv = new KeyValue(rowBytes, "CF".getBytes(), key,
System.currentTimeMillis(), value.getBytes());
I could see the actual timestamps in the KeyValues of the HFile, and
count 'mytable' returned the correct number of rows.

Out of curiosity, for my learning, why does LATEST_TIMESTAMP make the table
not see the actual rows?

- R





Re: Problem with HFile lexical comparison

Posted by Rohit Kelkar <ro...@gmail.com>.
Ok. So I was able to write the HFile on HDFS, but when I try loading it
into an existing HTable, the code completes without failing, yet when I do
a count on the HTable from the hbase shell it still shows a zero count.
This is the command I am using:

hbase-0.94.2/bin/hbase
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
hdfs://localhost:9000/path/to/myhfiles mytablename

Also, once the code completes, the file on hdfs gets deleted. I guess this
is the expected behaviour, but I am not sure why the table is still empty.

Alternatively, I ran the ImportTsv example and it correctly put entries in
my HTable. But ImportTsv is an MR job, and in my use case the process that
is generating my data is not map-reducible, so I cannot use ImportTsv or
any other MR job to bulk load into the HTable. What I could do is make the
process write to a tmp TSV file and then use ImportTsv, but given the
volume of data I am inclined to save that extra IO operation.

- R



Re: Problem with HFile lexical comparison

Posted by Rohit Kelkar <ro...@gmail.com>.
Perfect. That worked. Thanks.

- R



Re: Problem with HFile lexical comparison

Posted by Jeff Kolesky <je...@opower.com>.
Last time I wrote directly to an HFile, I instantiated an HFile.Writer
using this statement:

        HFile.Writer writer = HFile.getWriterFactory(config)
            .createWriter(fs, hfilePath,
                    (bytesPerBlock * 1024),
                    Compression.Algorithm.GZ,
                    KeyValue.KEY_COMPARATOR);

Perhaps you need the declaration of the comparator in the create statement
for the writer.

Jeff






-- 
*Jeff Kolesky*
Chief Software Architect
*Opower*

Re: Problem with HFile lexical comparison

Posted by Rohit Kelkar <ro...@gmail.com>.
Thanks for the replies. I tried the KeyValue.KVComparator but still no
luck, so I commented out the comparator and played around with the
sequence of writing the qualifiers to the HFile (see the code here:
https://gist.github.com/anonymous/5819254).

If I set the variable String[] starr = new String[]{"a", "d", "dt", "dth"}
then the code breaks while writing the qualifier "dt" to the HFile.
If I set the variable String[] starr = new String[]{"a", "dth", "dt", "d"}
then the code runs successfully.
If I set the variable String[] starr = new String[]{"dth", "dt", "d", "a"}
then the code breaks while writing "a" to the HFile
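The three outcomes above line up with a plain unsigned byte comparison of the serialized keys, because the Long.MAX_VALUE timestamp begins with 0x7F, which sorts after every lowercase letter. A self-contained sketch (simplified key layout, no HBase classes) sorting the qualifiers by raw key order:

```java
import java.util.Arrays;

public class QualifierOrder {
    // Unsigned lexicographic byte comparison, as a raw comparator sees it.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Simplified serialized key: row + family "sd" + qualifier + 8-byte
    // Long.MAX_VALUE timestamp (0x7F followed by seven 0xFF bytes).
    static byte[] key(String qualifier) {
        byte[] prefix = ("10011-2-0000000000000000703sd" + qualifier).getBytes();
        byte[] k = Arrays.copyOf(prefix, prefix.length + 8);
        k[prefix.length] = 0x7F;
        for (int i = 1; i < 8; i++) k[prefix.length + i] = (byte) 0xFF;
        return k;
    }

    public static void main(String[] args) {
        String[] quals = {"a", "d", "dt", "dth"};
        // Sort qualifiers by the raw byte order of their serialized keys.
        Arrays.sort(quals, (x, y) -> compareBytes(key(x), key(y)));
        System.out.println(Arrays.toString(quals)); // [a, dth, dt, d]
    }
}
```

That sorted order, {"a", "dth", "dt", "d"}, is exactly the one sequence that ran successfully: once the 0x7F timestamp byte is appended, a qualifier that is a prefix of another sorts after it in raw byte order.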

Does this mean that if the qualifiers start with the same character, then
the longest qualifier should be written first, and otherwise the usual
lexical order is honoured?

The code throws the following stack trace:
Added a key not lexically larger than previous
key=\x00\x1B10011-2-0000000000000000703\x02sddt\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04,
lastkey=\x00\x1B10011-2-0000000000000000703\x02sdd\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04
at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:207)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:317)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:282)
at com.mycompany.hbase.process.myprocess.myFunction(MyClass.java:1492)

I am using hbase-0.94.2

- Rohit Kelkar



Re: Problem with HFile lexical comparison

Posted by Jeff Kolesky <je...@opower.com>.
I believe you need to use the KVComparator:

https://github.com/apache/hbase/blob/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java#L88

Jeff




-- 
*Jeff Kolesky*
Chief Software Architect
*Opower*

Re: Problem with HFile lexical comparison

Posted by Rohit Kelkar <ro...@gmail.com>.
Here is the code - https://gist.github.com/anonymous/5816180

I guess the issue is with my use of the comparator function.

- R

Re: Problem with HFile lexical comparison

Posted by Stack <st...@duboce.net>.
RawComparator just does raw bytes? You need a Comparator that understands
the KV format; see the KV class. Otherwise, post your code. It seems like
you are skirting the checks that the Store hosting the hfile in hbase does.
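A hedged sketch of the difference (toy key layout, no HBase classes; compareByField only gestures at what a KV-aware comparator does, namely compare the qualifier field on its own before ever looking at the timestamp bytes):

```java
public class ComparatorContrast {
    // Unsigned lexicographic comparison of the whole serialized key,
    // as a plain raw-bytes comparator would do.
    static int compareBytes(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Simplified serialized key: row + family "sd" + qualifier + 8-byte
    // Long.MAX_VALUE timestamp (the LATEST_TIMESTAMP marker).
    static byte[] rawKey(String qualifier) {
        byte[] prefix = ("10011-2-0000000000000000703sd" + qualifier).getBytes();
        byte[] k = java.util.Arrays.copyOf(prefix, prefix.length + 8);
        k[prefix.length] = 0x7F;
        for (int i = 1; i < 8; i++) k[prefix.length + i] = (byte) 0xFF;
        return k;
    }

    // Field-aware comparison: the qualifier field by itself, never mixed
    // with the timestamp bytes that follow it in the serialized key.
    static int compareByField(String qualA, String qualB) {
        return compareBytes(qualA.getBytes(), qualB.getBytes());
    }

    public static void main(String[] args) {
        // Raw bytes: sd:dth sorts BEFORE sd:dt ('h' 0x68 < timestamp 0x7F).
        System.out.println(compareBytes(rawKey("dt"), rawKey("dth")) > 0); // true
        // Field-aware: sd:dt sorts before sd:dth, the intuitive order.
        System.out.println(compareByField("dt", "dth") < 0);               // true
    }
}
```

The two comparators disagree on these keys, which is why a writer configured with a raw-bytes comparator rejects an insertion order that looks correct qualifier-by-qualifier.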

St.Ack

