Posted to common-issues@hadoop.apache.org by "Kai Zheng (JIRA)" <ji...@apache.org> on 2015/05/08 12:57:00 UTC

[jira] [Updated] (HADOOP-11938) Fix ByteBuffer version encode/decode API of raw erasure coder

     [ https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kai Zheng updated HADOOP-11938:
-------------------------------
    Attachment: HADOOP-11938-HDFS-7285-workaround.patch

Uploaded a workaround patch to fix the failing test case.

> Fix ByteBuffer version encode/decode API of raw erasure coder
> -------------------------------------------------------------
>
>                 Key: HADOOP-11938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11938
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: io
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HADOOP-11938-HDFS-7285-workaround.patch
>
>
> While investigating a test failure in {{TestRecoverStripedFile}}, an issue was found in the raw erasure coder: a bad optimization in the code below. It assumes that the heap buffer backed by the byte array available for reading or writing always starts at position zero and occupies the whole array.
> {code}
>   protected static byte[][] toArrays(ByteBuffer[] buffers) {
>     byte[][] bytesArr = new byte[buffers.length][];
>     ByteBuffer buffer;
>     for (int i = 0; i < buffers.length; i++) {
>       buffer = buffers[i];
>       if (buffer == null) {
>         bytesArr[i] = null;
>         continue;
>       }
>       if (buffer.hasArray()) {
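>         // Bad optimization: array() exposes the entire backing array,
>         // ignoring the buffer's current position and limit.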
>         bytesArr[i] = buffer.array();
>       } else {
>         throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
>             "expecting heap buffer");
>       }
>     }
>     return bytesArr;
>   }
> {code} 
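> For illustration only (this is not the attached patch; the helper name is reused from the snippet above), a minimal sketch of a conversion that honors each buffer's position and remaining bytes, at the cost of a copy that defeats the zero-copy intent of the original optimization:
> {code}
>   protected static byte[][] toArrays(ByteBuffer[] buffers) {
>     byte[][] bytesArr = new byte[buffers.length][];
>     for (int i = 0; i < buffers.length; i++) {
>       ByteBuffer buffer = buffers[i];
>       if (buffer == null) {
>         bytesArr[i] = null;
>         continue;
>       }
>       // Copy only the readable region [position, limit) instead of
>       // exposing the whole backing array; works for heap and direct buffers.
>       byte[] bytes = new byte[buffer.remaining()];
>       buffer.duplicate().get(bytes); // duplicate() leaves the caller's position untouched
>       bytesArr[i] = bytes;
>     }
>     return bytesArr;
>   }
> {code}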



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)