Posted to issues@hbase.apache.org by "Wei-Chiu Chuang (JIRA)" <ji...@apache.org> on 2018/06/13 00:23:00 UTC

[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

    [ https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16510447#comment-16510447 ] 

Wei-Chiu Chuang commented on HBASE-20403:
-----------------------------------------

This bug seems to have more than one facet. Besides the IllegalArgumentException in the description, prefetch can also hit a NegativeArraySizeException and then fall back to HDFS checksum verification:

{noformat}
2018-06-12 10:35:45,613 WARN org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180606004324/1fc1713dc72a0d3351b3214add699e59/meta/543bace1c8f545a5a34d8f1dc6be6063, offset=0, end=153035
java.lang.NegativeArraySizeException
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1768)
        at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2018-06-12 10:35:45,614 WARN org.apache.hadoop.hbase.io.hfile.HFile: HBase checksum verification failed for file hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180606004324/1fc1713dc72a0d3351b3214add699e59/meta/543bace1c8f545a5a34d8f1dc6be6063 at offset 0 filesize 157916. Retrying read with HDFS checksums turned on...
2018-06-12 10:35:45,626 WARN org.apache.hadoop.hbase.io.hfile.HFile: HDFS checksum verification succeeded for file hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180606004324/1fc1713dc72a0d3351b3214add699e59/meta/543bace1c8f545a5a34d8f1dc6be6063 at offset 0 filesize 157916
{noformat}
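
The NegativeArraySizeException is consistent with the block's on-disk size being decoded from the wrong bytes. A minimal, self-contained illustration of that failure mode (hypothetical names, not the actual HFileBlock code): if the prefetch thread's read position has drifted through the CryptoInputStream, the 4-byte size field read from the header can decode to a negative int, and the buffer allocation throws.

{code}
import java.nio.ByteBuffer;

public class NegativeSizeDemo {
  // Hypothetical stand-in for the size handling in readBlockDataInternal:
  // the on-disk size is a 4-byte int in the block header; garbage bytes can
  // decode to a negative value, and the array allocation then throws.
  static byte[] allocateBlockBuffer(ByteBuffer header) {
    int onDiskSizeWithHeader = header.getInt(); // may be negative if the header bytes are wrong
    return new byte[onDiskSizeWithHeader];      // java.lang.NegativeArraySizeException
  }

  public static void main(String[] args) {
    byte[] bogusHeader = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xF0}; // decodes to -16
    allocateBlockBuffer(ByteBuffer.wrap(bogusHeader));
  }
}
{code}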

> Prefetch sometimes doesn't work with encrypted file system
> ----------------------------------------------------------
>
>                 Key: HBASE-20403
>                 URL: https://issues.apache.org/jira/browse/HBASE-20403
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.0.0-beta-2
>            Reporter: Umesh Agashe
>            Assignee: Umesh Agashe
>            Priority: Major
>             Fix For: 3.0.0
>
>
> The log from a long-running test shows the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988, offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
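> For reference, the IllegalArgumentException comes from java.nio.Buffer#limit(int), which rejects a limit that is negative or larger than the buffer's capacity; a minimal illustration of that mechanism (not the DFSInputStream/ByteBufferStrategy code):
> {code}
> import java.nio.ByteBuffer;
>
> public class BufferLimitDemo {
>   public static void main(String[] args) {
>     ByteBuffer buf = ByteBuffer.allocate(64);
>     // A limit beyond the capacity (or below zero) is rejected with
>     // java.lang.IllegalArgumentException, as in Buffer.limit(Buffer.java:275).
>     buf.limit(128);
>   }
> }
> {code}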
> Size-on-disk calculations seem to get messed up due to encryption. Possible fixes:
> * check whether the file is encrypted via FileStatus#isEncrypted() and, if so, do not prefetch (see the sketch after this list).
> * document that hbase.rs.prefetchblocksonopen cannot be set to true if the file is encrypted.
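> A minimal sketch of the first option, assuming a hypothetical helper around the prefetch scheduling (only FileStatus#isEncrypted() is an existing Hadoop API; the rest of the wiring is illustrative):
> {code}
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class PrefetchGuard {
>   /** Returns true if scheduling prefetch for this HFile looks safe. */
>   static boolean shouldPrefetch(FileSystem fs, Path hfilePath) throws IOException {
>     FileStatus status = fs.getFileStatus(hfilePath);
>     // Files in an HDFS encryption zone trip the size-on-disk math above,
>     // so skip the prefetch thread for them.
>     return !status.isEncrypted();
>   }
> }
> {code}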



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)