Posted to issues@hbase.apache.org by "Istvan Toth (Jira)" <ji...@apache.org> on 2021/12/01 13:48:00 UTC

[jira] [Updated] (HBASE-26527) ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()

     [ https://issues.apache.org/jira/browse/HBASE-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth updated HBASE-26527:
--------------------------------
    Description: 
While investigating a Phoenix crash, I've found a possible problem in KeyValueUtil.

When using Phoenix, we need to configure (at least for older versions) org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as a WAL codec in HBase.

This codec will eventually serialize standard (not Phoenix-specific) WAL entries to the WAL file, and internally converts the Cell objects to KeyValue objects by building a new byte[].

This fails with an ArrayIndexOutOfBoundsException, because we allocate a byte[] of the size reported by Cell.getSerializedSize(), and it seems that we are processing a Cell that does not actually serialize the column family and later fields.
However, we are building a traditional KeyValue object for serialization, which does serialize them, so we run out of bytes.

I think that since we are writing a KeyValue, we should not rely on the getSerializedSize() method of the source cell, but rather calculate the backing array size based on how KeyValue expects its data to be serialized.
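
An illustrative sketch of that approach (not an actual patch; it assumes the HBase 2.x Cell/KeyValue/KeyValueUtil API, and the class and method names here are made up for the example):

{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.util.Bytes;

public final class KeyValueCopySketch {

  /**
   * Copies an arbitrary Cell into a freshly allocated KeyValue, sizing the
   * backing array from the KeyValue wire layout (4-byte key length, 4-byte
   * value length, key, value, optional 2-byte tags length + tags) instead of
   * trusting cell.getSerializedSize(), which may describe a different
   * serialization (e.g. a key-only cell).
   */
  public static KeyValue copyCellToKeyValue(Cell cell) {
    int keyLength = KeyValueUtil.keyLength(cell); // row + family + qualifier + timestamp + type
    int valueLength = cell.getValueLength();
    int tagsLength = cell.getTagsLength();

    int size = 2 * Bytes.SIZEOF_INT + keyLength + valueLength;
    if (tagsLength > 0) {
      size += Bytes.SIZEOF_SHORT + tagsLength;
    }

    byte[] backing = new byte[size];
    KeyValueUtil.appendToByteArray(cell, backing, 0, true); // serialize in KeyValue format, with tags
    return new KeyValue(backing, 0, backing.length);
  }

  private KeyValueCopySketch() {
  }
}
{code}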

The stack trace for reference:

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 9787
        at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
        at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
        at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
        at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
        at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
        at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
        at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
        at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:65)
        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:931)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1075)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:964)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:873)
        at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
        at java.lang.Thread.run(Thread.java:748)
{noformat}

Note that I am still not sure exactly what triggers this bug; one possibility is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue.


  was:
While investigating a Phoenix crash, I've found a possible problem in KeyValueUtil.

When using Phoenix, we need to configure (at least for older versions) org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as a WAL codec in HBase.

This codec will eventually serialize standard (not Phoenix-specific) WAL entries to the WAL file, and internally converts the Cell objects to KeyValue objects by building a new byte[].

This fails with an ArrayIndexOutOfBoundsException, because we allocate a byte[] of the size reported by Cell.getSerializedSize(), and it seems that we are processing a Cell that does not actually serialize the column family and later fields.
However, we are building a traditional KeyValue object for serialization, which does serialize them, so we run out of bytes.

I think that since we are writing a KeyValue, we should not rely on the getSerializedSize() method of the source cell, but rather calculate the backing array size based on how KeyValue expects its data to be serialized.

The stack trace for reference:

{noformat}
2021-11-21 23:05:08,388 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: Append sequenceId=1017038501, requesting roll of WAL
java.lang.ArrayIndexOutOfBoundsException: 9787
at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
{noformat}

Note that I am still not sure exactly what triggers this bug; one possibility is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue.



> ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
> ------------------------------------------------------------------
>
>                 Key: HBASE-26527
>                 URL: https://issues.apache.org/jira/browse/HBASE-26527
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.2.7, 3.0.0-alpha-2
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Major
>
> While investigating a Phoenix crash, I've found a possible problem in KeyValueUtil.
> When using Phoenix, we need to configure (at least for older versions) org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as a WAL codec in HBase.
> This codec will eventually serialize standard (not Phoenix-specific) WAL entries to the WAL file, and internally converts the Cell objects to KeyValue objects by building a new byte[].
> This fails with an ArrayIndexOutOfBoundsException, because we allocate a byte[] of the size reported by Cell.getSerializedSize(), and it seems that we are processing a Cell that does not actually serialize the column family and later fields.
> However, we are building a traditional KeyValue object for serialization, which does serialize them, so we run out of bytes.
> I think that since we are writing a KeyValue, we should not rely on the getSerializedSize() method of the source cell, but rather calculate the backing array size based on how KeyValue expects its data to be serialized.
> The stack trace for reference:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 9787
>         at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
>         at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
>         at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
>         at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
>         at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
>         at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
>         at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
>         at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:65)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:931)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1075)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:964)
>         at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:873)
>         at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Note that I am still not sure exactly what triggers this bug; one possibility is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)