Posted to issues@ozone.apache.org by "Arpit Agarwal (Jira)" <ji...@apache.org> on 2023/05/30 18:36:00 UTC

[jira] [Resolved] (HDDS-7278) [ozone-LR] ArrayIndexOutOfBoundsException in EC Buckets with fault injections

     [ https://issues.apache.org/jira/browse/HDDS-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal resolved HDDS-7278.
---------------------------------
    Resolution: Done

> [ozone-LR] ArrayIndexOutOfBoundsException in EC Buckets with fault injections
> -----------------------------------------------------------------------------
>
>                 Key: HDDS-7278
>                 URL: https://issues.apache.org/jira/browse/HDDS-7278
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 1.2.0
>            Reporter: Jyotirmoy Sinha
>            Priority: Critical
>
> Steps:
>  # Execute the LR test cases on an LR cluster with EC enabled. Policy: RS-6-3-1024k (see the layout sketch after this list).
>  # Inject faults into the setup.
>  # Observe that the HiveWrite operation fails on the cluster.
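>
> (For context on the policy in step 1: a small standalone sketch, not taken from the issue, of what RS-6-3-1024k implies for key layout. The 50 MiB key size below is an assumption used purely for illustration.)
> {code:java}
> // Standalone illustration of the RS-6-3-1024k layout used by this test:
> // each stripe carries 6 data chunks of 1 MiB ("1024k") plus 3 parity chunks,
> // so a reader can still reconstruct a stripe as long as no more than 3 of the
> // 9 replica indexes are unavailable.
> public class EcLayoutSketch {
>   public static void main(String[] args) {
>     final int dataChunks = 6, parityChunks = 3;
>     final long chunkSize = 1024L * 1024;                 // 1024k chunk size
>     final long stripeDataSize = dataChunks * chunkSize;  // 6 MiB of user data per stripe
>
>     long keySize = 50L * 1024 * 1024;                    // illustrative 50 MiB key
>     long fullStripes = keySize / stripeDataSize;
>     long tailBytes = keySize % stripeDataSize;
>     System.out.printf("stripes=%d, tail bytes=%d, tolerated lost indexes=%d%n",
>         fullStripes, tailBytes, parityChunks);
>   }
> }
> {code}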
> Faults injected (at 30-minute intervals):
>  # datanode disk-, process-, and network-level faults (2 nodes at a time)
>  # SCM disk-, process-, and network-level faults (SCM leader)
>  # OM process- and network-level faults (OM leader)
> Exception:
> {code:java}
> ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1664191599114_0376_1026_00, diagnostics=[Task failed, taskId=task_1664191599114_0376_1026_00_000147, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1664191599114_0376_1026_00_000147_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:298)
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:252)
>         at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>         at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:75)
>         at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:62)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>         at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:62)
>         at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:38)
>         at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>   Caused by: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
>         at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
>         at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
>         at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
>         at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:156)
>         at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:82)
>         at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:703)
>         at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:662)
>         at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
>         at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:543)
>         at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:189)
>         at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:268)
>         ... 16 more
> Caused by: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
>         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
>         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
>         at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:434)
>         at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
>         ... 27 more
>   Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedStripeInputStream.assignBuffers(ECBlockReconstructedStripeInputStream.java:289)
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedStripeInputStream.read(ECBlockReconstructedStripeInputStream.java:360)
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedStripeInputStream.readStripe(ECBlockReconstructedStripeInputStream.java:345)
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedInputStream.readStripe(ECBlockReconstructedInputStream.java:214)
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedInputStream.readAndSeekStripe(ECBlockReconstructedInputStream.java:198)
>         at org.apache.hadoop.ozone.client.io.ECBlockReconstructedInputStream.seek(ECBlockReconstructedInputStream.java:192)
>         at org.apache.hadoop.ozone.client.io.ECBlockInputStreamProxy.seek(ECBlockInputStreamProxy.java:224)
>         at org.apache.hadoop.ozone.client.io.KeyInputStream.seek(KeyInputStream.java:346)
>         at org.apache.hadoop.fs.ozone.OzoneFSInputStream.seek(OzoneFSInputStream.java:78)
>         at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:85)
>         at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:124)
>         at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:116)
>         at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readStripeFooter(RecordReaderUtils.java:273)
>         at org.apache.orc.impl.RecordReaderImpl.readStripeFooter(RecordReaderImpl.java:308)
>         at org.apache.orc.impl.RecordReaderImpl.beginReadStripe(RecordReaderImpl.java:1131)
>         at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1093)
>         at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1261)
>         at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1296)
>         at org.apache.orc.impl.RecordReaderImpl.<init>(RecordReaderImpl.java:284)
>         at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:68)
>         at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:83)
>         at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcAcidRowBatchReader.<init>(VectorizedOrcAcidRowBatchReader.java:170)
>         at org.apache.hadoop.hive.ql.io.orc.VectorizedOrcAcidRowBatchReader.<init>(VectorizedOrcAcidRowBatchReader.java:160)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1973)
>         at org.apache.hadoop.hive.ql.io.RecordReaderWrapper.create(RecordReaderWrapper.java:72)
>         at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:431)
>         ... 28 more {code}
> Fault injected when the exception occurred: read corruption on a datanode's hdds.datanode.dirs volume.
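>
> For reference, below is a minimal sketch of the client-side read pattern the ORC reader exercises in the trace above: a positional readFully against a key in an EC bucket, which is the call that ends up in ECBlockReconstructedStripeInputStream once reconstruction is needed. The ofs URI, file path, and offset are assumptions for illustration only, not values taken from the LR cluster.
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class EcReadSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Hypothetical ofs endpoint; the real OM service id is not given in the issue.
>     FileSystem fs = FileSystem.get(URI.create("ofs://ozone-om/"), conf);
>     // Hypothetical ORC file stored in an EC (RS-6-3-1024k) bucket.
>     Path orcFile = new Path("/vol/ec-bucket/warehouse/tbl/000000_0");
>
>     byte[] footer = new byte[16 * 1024];
>     try (FSDataInputStream in = fs.open(orcFile)) {
>       // Positional read, like RecordReaderUtils.readStripeFooter in the trace;
>       // internally this becomes seek() + read() on OzoneFSInputStream.
>       in.readFully(64L * 1024 * 1024, footer, 0, footer.length); // illustrative offset
>     }
>   }
> }
> {code}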



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
