Posted to dev@hbase.apache.org by "Jonathan Gray (JIRA)" <ji...@apache.org> on 2009/03/10 20:04:50 UTC

[jira] Created: (HBASE-1252) Make atomic increment perform a binary increment

Make atomic increment perform a binary increment
------------------------------------------------

                 Key: HBASE-1252
                 URL: https://issues.apache.org/jira/browse/HBASE-1252
             Project: Hadoop HBase
          Issue Type: Improvement
    Affects Versions: 0.19.0
            Reporter: Jonathan Gray
            Assignee: Jonathan Gray
            Priority: Minor
             Fix For: 0.19.1, 0.20.0


A few issues related to the recently committed HBASE-803:

- The HTable API still takes an integer amount rather than a long, mismatching HRI.
- Binary increments are 10 times faster for small amounts than going Bytes.toLong, += amount, Bytes.toBytes, and twice as fast for large amounts (the binary incrementer just loops a bunch of single increments, though there is plenty of room for optimization in my current implementation; see the sketch below for the general idea).
- Using a binary increment means we don't have to worry about the size of the value.  If someone wants a 16-byte value they can have it; they just have to initialize it as such.  If no existing value exists, it will default to a long (8 bytes).  The only odd behavior is what happens when you are at the max value; currently the value will just stay at all 1s in binary.  We could actually grow the byte[], but then we can't do things in place.  I'm okay with leaving it like that; I'm not exactly sure what the current implementation would do, throw an exception or wrap?

- Using binary incrementing, we can directly manipulate values in the memcache rather than sending updates with the same timestamp.  I think we should hold off on doing this until HBASE-1234 goes in; we'll then have to deal directly with the hlog.  (This issue is not going to address that.)
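
For reference, a minimal sketch of the general byte-wise increment idea (illustrative only, not the code in the attached patches; note this sketch simply wraps on overflow, whereas the overflow behavior above is left as an open question):

{code}
// Illustrative sketch only, not the patch's implementation.
// Adds a non-negative 'amount' to a big-endian byte[] counter in place,
// working on the bytes directly so the counter can be any width.
public class BinaryIncrementSketch {

  public static byte[] binaryIncrement(byte[] value, long amount) {
    for (int i = 0; i < value.length && amount != 0; i++) {
      int idx = value.length - 1 - i;          // least-significant byte first
      long cur = value[idx] & 0xFF;            // treat the byte as unsigned
      long sum = cur + (amount & 0xFF);        // add the low byte of the amount
      value[idx] = (byte) sum;                 // keep the low 8 bits
      amount = (amount >>> 8) + (sum >>> 8);   // remaining amount plus carry
    }
    return value;
  }

  public static void main(String[] args) {
    byte[] counter = new byte[8];              // default: long-sized, 8 bytes
    binaryIncrement(counter, 1000);
    System.out.println(java.nio.ByteBuffer.wrap(counter).getLong());  // prints 1000
  }
}
{code}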

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1252:
---------------------------------

    Attachment: hbase-1252-v2.patch



[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1252:
---------------------------------

    Status: Patch Available  (was: Open)

Applies to the 0.19 branch and 0.20 trunk.



[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1252:
---------------------------------

    Attachment: hbase-1252-v2.patch

Removed an unused variable.



[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-1252:
-------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to branch and trunk.  Thanks for the patch, Jon.



[jira] Commented: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680604#action_12680604 ] 

Jonathan Gray commented on HBASE-1252:
--------------------------------------

Okay.  I've got a far more optimized binary increment written now.  It beats the long method across the board.  I used two methods of benchmarking: one just runs a bunch of tests in succession; the other includes my attempts at preventing JVM optimizations from skewing the results (especially by starting a new JVM for each test).  The order I ran the tests in made no difference to the timings.

A 4-10x performance improvement across the board now.

Doing my best to keep optimizations out of the picture:

{code}
[hbase@mb0 src]$ java IncrementTest 1000000 1
incrementAsLong : Incremented 1000000 times by 1 in 209 ms (current value is 1000000)
incrementAsBytes : Incremened 1000000 times by 1 in 20 ms (current value is 1000000)
[hbase@mb0 src]$ java IncrementTest 1000000 10
incrementAsLong : Incremented 1000000 times by 10 in 210 ms (current value is 10000000)
incrementAsBytes : Incremened 1000000 times by 10 in 20 ms (current value is 10000000)
[hbase@mb0 src]$ java IncrementTest 1000000 100
incrementAsLong : Incremented 1000000 times by 100 in 210 ms (current value is 100000000)
incrementAsBytes : Incremened 1000000 times by 100 in 26 ms (current value is 100000000)
[hbase@mb0 src]$ java IncrementTest 1000000 1000
incrementAsLong : Incremented 1000000 times by 1000 in 209 ms (current value is 1000000000)
incrementAsBytes : Incremened 1000000 times by 1000 in 30 ms (current value is 1000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 10000
incrementAsLong : Incremented 1000000 times by 10000 in 211 ms (current value is 10000000000)
incrementAsBytes : Incremened 1000000 times by 10000 in 31 ms (current value is 10000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 100000
incrementAsLong : Incremented 1000000 times by 100000 in 211 ms (current value is 100000000000)
incrementAsBytes : Incremened 1000000 times by 100000 in 41 ms (current value is 100000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 1000000
incrementAsLong : Incremented 1000000 times by 1000000 in 209 ms (current value is 1000000000000)
incrementAsBytes : Incremened 1000000 times by 1000000 in 37 ms (current value is 1000000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 10000000
incrementAsLong : Incremented 1000000 times by 10000000 in 213 ms (current value is 10000000000000)
incrementAsBytes : Incremened 1000000 times by 10000000 in 46 ms (current value is 10000000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 100000000
incrementAsLong : Incremented 1000000 times by 100000000 in 213 ms (current value is 100000000000000)
incrementAsBytes : Incremened 1000000 times by 100000000 in 43 ms (current value is 100000000000000)
[hbase@mb0 src]$ java IncrementTest 1000000 1000000000
incrementAsLong : Incremented 1000000 times by 1000000000 in 211 ms (current value is 1000000000000000)
incrementAsBytes : Incremened 1000000 times by 1000000000 in 55 ms (current value is 1000000000000000)
{code}

Just a big sequence of tests in a single method:

{code}
incrementAsLong : Incremented 100000 times by 1 in 91 ms (current value is 100000)
incrementAsBytes : Incremened 100000 times by 1 in 11 ms (current value is 100000)
incrementAsLong : Incremented 100000 times by 10 in 26 ms (current value is 1000000)
incrementAsBytes : Incremened 100000 times by 10 in 1 ms (current value is 1000000)
incrementAsLong : Incremented 100000 times by 100 in 19 ms (current value is 10000000)
incrementAsBytes : Incremened 100000 times by 100 in 1 ms (current value is 10000000)
incrementAsLong : Incremented 100000 times by 1000 in 19 ms (current value is 100000000)
incrementAsBytes : Incremened 100000 times by 1000 in 1 ms (current value is 100000000)
incrementAsLong : Incremented 100000 times by 10000 in 20 ms (current value is 1000000000)
incrementAsBytes : Incremened 100000 times by 10000 in 1 ms (current value is 1000000000)
incrementAsLong : Incremented 100000 times by 100000 in 9 ms (current value is 10000000000)
incrementAsBytes : Incremened 100000 times by 100000 in 2 ms (current value is 10000000000)
incrementAsLong : Incremented 100000 times by 1000000 in 9 ms (current value is 100000000000)
incrementAsBytes : Incremened 100000 times by 1000000 in 2 ms (current value is 100000000000)
incrementAsLong : Incremented 100000 times by 10000000 in 9 ms (current value is 1000000000000)
incrementAsBytes : Incremened 100000 times by 10000000 in 4 ms (current value is 1000000000000)
incrementAsLong : Incremented 100000 times by 100000000 in 9 ms (current value is 10000000000000)
incrementAsBytes : Incremened 100000 times by 100000000 in 3 ms (current value is 10000000000000)
incrementAsLong : Incremented 100000 times by 1000000000 in 10 ms (current value is 100000000000000)
incrementAsBytes : Incremened 100000 times by 1000000000 in 4 ms (current value is 100000000000000)
{code}
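
For context, a minimal sketch of what such a comparison harness might look like (hypothetical; this is not the actual IncrementTest class, and incrementAsLong/incrementAsBytes below are stand-ins for the two approaches being compared):

{code}
// Hypothetical harness, roughly in the spirit of the IncrementTest runs above.
// Compares the toLong/+=/toBytes path against the byte-wise path from the
// sketch under the issue description.
import java.nio.ByteBuffer;

public class IncrementBenchSketch {

  // The "long" path: decode to a long, add, re-encode.
  static byte[] incrementAsLong(byte[] value, long amount) {
    long v = ByteBuffer.wrap(value).getLong() + amount;
    return ByteBuffer.allocate(8).putLong(v).array();
  }

  // The "bytes" path: carry-based, in place.
  static byte[] incrementAsBytes(byte[] value, long amount) {
    for (int i = 0; i < value.length && amount != 0; i++) {
      int idx = value.length - 1 - i;
      long sum = (value[idx] & 0xFF) + (amount & 0xFF);
      value[idx] = (byte) sum;
      amount = (amount >>> 8) + (sum >>> 8);
    }
    return value;
  }

  public static void main(String[] args) {
    int iterations = Integer.parseInt(args[0]);   // e.g. 1000000
    long amount = Long.parseLong(args[1]);        // e.g. 1, 10, 100 ...

    byte[] a = new byte[8];
    long start = System.currentTimeMillis();
    for (int i = 0; i < iterations; i++) a = incrementAsLong(a, amount);
    System.out.println("incrementAsLong : " + (System.currentTimeMillis() - start)
        + " ms (current value is " + ByteBuffer.wrap(a).getLong() + ")");

    byte[] b = new byte[8];
    start = System.currentTimeMillis();
    for (int i = 0; i < iterations; i++) incrementAsBytes(b, amount);
    System.out.println("incrementAsBytes : " + (System.currentTimeMillis() - start)
        + " ms (current value is " + ByteBuffer.wrap(b).getLong() + ")");
  }
}
{code}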



[jira] Commented: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680589#action_12680589 ] 

Jonathan Gray commented on HBASE-1252:
--------------------------------------

Benchmarking results:

{code}
incrementAsLong : Incremented 100000 times by 1 in 99 ms (current value is 100000)
incrementAsBytes : Incremened 100000 times by 1 in 9 ms (current value is 200000)

incrementAsLong : Incremented 100000 times by 100 in 100 ms (current value is 10000000)
incrementAsBytes : Incremened 100000 times by 100 in 46 ms (current value is 20000000)
{code}

When the amount gets over 500 or so, the byte increment gets slower.  I'm going to work on an optimization to try to make the two at least approximately the same for large increment amounts.
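
To illustrate why large amounts are the slow case: if "loops a bunch of single increments" is read as repeated +1 steps (one possible reading, sketched below purely for illustration and not the patch code), the work grows linearly with the amount, whereas the carry-based increment's work grows only with the value's width in bytes.

{code}
// Hypothetical naive sketch: apply 'amount' as repeated +1 steps on a
// big-endian byte[].  Cost is proportional to the amount, which would
// explain the slowdown for amounts beyond a few hundred.
static void incrementByOnes(byte[] value, long amount) {
  for (long n = 0; n < amount; n++) {
    for (int idx = value.length - 1; idx >= 0; idx--) {
      if (++value[idx] != 0) break;  // no carry out of this byte, done
      // the byte rolled over to 0, so carry into the next byte up
    }
  }
}
{code}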



[jira] Commented: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680591#action_12680591 ] 

Jonathan Gray commented on HBASE-1252:
--------------------------------------

I'm trying my best to keep compiler optimizations out of the way.  Anyway, the binary increment means we can do things in place down the road, and also that we don't have to care about the size of the existing value.

Current results, with optimizations for larger increment amounts:

{code}
incrementAsLong : Incremented 100000 times by 1 in 92 ms (current value is 100000)
incrementAsBytes : Incremened 100000 times by 1 in 11 ms (current value is 100000)

incrementAsLong : Incremented 100000 times by 10 in 90 ms (current value is 1000000)
incrementAsBytes : Incremened 100000 times by 10 in 15 ms (current value is 1000000)

incrementAsLong : Incremented 100000 times by 100 in 91 ms (current value is 10000000)
incrementAsBytes : Incremened 100000 times by 100 in 47 ms (current value is 10000000)

incrementAsLong : Incremented 100000 times by 1000 in 91 ms (current value is 100000000)
incrementAsBytes : Incremened 100000 times by 1000 in 98 ms (current value is 100000000)

incrementAsLong : Incremented 100000 times by 10000 in 90 ms (current value is 1000000000)
incrementAsBytes : Incremened 100000 times by 10000 in 35 ms (current value is 1000000000)

incrementAsLong : Incremented 100000 times by 100000 in 91 ms (current value is 10000000000)
incrementAsBytes : Incremened 100000 times by 100000 in 136 ms (current value is 10000000000)

incrementAsLong : Incremented 100000 times by 1000000 in 90 ms (current value is 100000000000)
incrementAsBytes : Incremened 100000 times by 1000000 in 75 ms (current value is 100000000000)
{code}
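
Because the increment works on the bytes directly, the same code handles counters wider than a long.  A small usage sketch (hypothetical, assuming the BinaryIncrementSketch class from the sketch under the issue description):

{code}
// Hypothetical usage: a 16-byte counter initialized by the client.
// Increments that would overflow an 8-byte long simply carry into the
// upper 8 bytes of the wider value.
byte[] wide = new byte[16];
for (int i = 0; i < 4; i++) {
  BinaryIncrementSketch.binaryIncrement(wide, Long.MAX_VALUE);
}
// 4 * Long.MAX_VALUE does not fit in 8 bytes, but it fits in 16.
{code}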



[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1252:
---------------------------------

    Attachment: hbase-1252-v1.patch

Changes HTable to take a long.  Also changes the implementation of increment to work on the bytes directly rather than creating a new long and incrementing it.  You probably won't see a performance improvement from the client's point of view, because the actual increment is orders of magnitude faster than the network latency.  I just wanted to move to binary incrementing so we can do it in place down the road, and so it works on any size of column value rather than just 8-byte longs.
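
For reference, a hypothetical client-side usage sketch.  The method name and signature below (an incrementColumnValue-style call taking a long amount) are assumptions for illustration; consult the attached patch for the exact HTable API.

{code}
// Hypothetical usage sketch -- the exact method name and signature come
// from the patch, not from this example.  Assumes the usual imports
// (org.apache.hadoop.hbase.client.HTable, org.apache.hadoop.hbase.util.Bytes).
HTable table = new HTable("counters");
long newValue = table.incrementColumnValue(
    Bytes.toBytes("row1"),          // row key
    Bytes.toBytes("hits:total"),    // column (family:qualifier)
    1L);                            // amount is now a long, not an int
System.out.println("counter is now " + newValue);
{code}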



[jira] Updated: (HBASE-1252) Make atomic increment perform a binary increment

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1252:
---------------------------------

    Attachment:     (was: hbase-1252-v2.patch)
