Posted to dev@hbase.apache.org by "Istvan Toth (Jira)" <ji...@apache.org> on 2021/12/01 12:54:00 UTC

[jira] [Created] (HBASE-26527) ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()

Istvan Toth created HBASE-26527:
-----------------------------------

             Summary: ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
                 Key: HBASE-26527
                 URL: https://issues.apache.org/jira/browse/HBASE-26527
             Project: HBase
          Issue Type: Bug
          Components: wal
    Affects Versions: 2.2.7, 3.0.0-alpha-2
            Reporter: Istvan Toth
            Assignee: Istvan Toth


While investigating a Phoenix crash, I've found a possible problem in KeyValueUtil.

When using Phoenix, we need to configure (at least for older Phoenix versions) org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as the WAL codec in HBase.
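
For context, the setting goes into hbase-site.xml via the standard hbase.regionserver.wal.codec property:

{code:xml}
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
{code}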

This codec eventually also serializes standard (non-Phoenix-specific) WAL entries to the WAL file, and internally converts the Cell objects to KeyValue objects by building a new byte[].

This fails with an ArrayIndexOutOfBoundsException, because we allocate a byte[] of Cell.getSerializedSize() bytes, and it seems that we are processing a Cell that does not actually serialize the column family and the later fields.
However, we are building a traditional KeyValue object for serialization, which does serialize them, so we run out of bytes.
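
A simplified sketch of what happens on the copy path (the variable names are mine, but the calls match the stack trace below):

{code:java}
// What KeyValueUtil.copyToNewByteArray() effectively does:
int size = cell.getSerializedSize();   // sized for the *source* cell's own wire format
byte[] backing = new byte[size];
// appendToByteArray() then writes the full KeyValue layout:
//   4B key length | 4B value length | key (row/family/qualifier/ts/type) | value
// If the source cell serializes fewer sections than a KeyValue does, these
// writes run past backing.length and throw ArrayIndexOutOfBoundsException.
KeyValueUtil.appendToByteArray(cell, backing, 0, true);
{code}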

I think that since we are writing a KeyValue, we should not rely on the getSerializedSize() method of the source cell, but rather calculate the backing array size based on how KeyValue expects its data to be serialized.
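
One possible shape of the fix, computed directly from the KeyValue wire layout (this is just an illustration of the idea, not a patch; the helper name is made up):

{code:java}
// Sketch: size the backing array from the KeyValue layout instead of the
// source cell's own serialized form.
static int keyValueSerializedSize(Cell cell, boolean withTags) {
  int keyLen = 2 /* row length */ + cell.getRowLength()
      + 1 /* family length */ + cell.getFamilyLength()
      + cell.getQualifierLength()
      + 8 /* timestamp */ + 1 /* type */;
  int size = 4 /* key length */ + 4 /* value length */
      + keyLen + cell.getValueLength();
  if (withTags && cell.getTagsLength() > 0) {
    size += 2 /* tags length */ + cell.getTagsLength();
  }
  return size;
}
{code}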

The stack trace for reference:

{noformat}
2021-11-21 23:05:08,388 WARN org.apache.hadoop.hbase.regionserver.wal.FSHLog: Append sequenceId=1017038501, requesting roll of WAL
java.lang.ArrayIndexOutOfBoundsException: 9787
at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
{noformat}

Note that I am still not sure exactly what triggers this bug; one possibility is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue.
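
If the ByteBufferKeyOnlyKeyValue theory is right, a reproduction might look like the following (untested; it assumes that getSerializedSize() on such a key-only cell reports only the raw key bytes):

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyOnlyCopyRepro {
  public static void main(String[] args) {
    KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("value"));
    // Wrap just the key portion of the KeyValue, similar to what HFile
    // index blocks hold.
    ByteBuffer keyBuf = ByteBuffer.wrap(kv.getBuffer(), kv.getKeyOffset(),
        kv.getKeyLength()).slice();
    ByteBufferKeyOnlyKeyValue keyOnly =
        new ByteBufferKeyOnlyKeyValue(keyBuf, 0, kv.getKeyLength());
    // If getSerializedSize() returns only the key length, copyToNewKeyValue()
    // allocates too small an array, and writing the 8-byte KeyValue header
    // plus the key overflows it, reproducing the exception above.
    KeyValueUtil.copyToNewKeyValue(keyOnly);
  }
}
{code}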




--
This message was sent by Atlassian Jira
(v8.20.1#820001)