Posted to issues@drill.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2021/03/09 10:32:00 UTC

[jira] [Commented] (DRILL-7854) When writing to S3 "WriteOperationHelper.newUploadPartRequest" throws "NoSuchMethodError"

    [ https://issues.apache.org/jira/browse/DRILL-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297997#comment-17297997 ] 

Steve Loughran commented on DRILL-7854:
---------------------------------------

The version of Guava on the runtime classpath is older than the version Hadoop was compiled against; it lacks the overloaded checkArgument(boolean, String, Object, long) method named in the stack trace.
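
A minimal sketch of the failure mode, assuming a call shaped like the one in WriteOperationHelper (the method and argument names here are illustrative, not Hadoop's actual code). Compiled against a newer Guava, javac binds the call to the exact non-varargs overload matching the argument types; an older Guava on the runtime classpath ships only the varargs form, so the JVM throws NoSuchMethodError at link time even though a "compatible" method exists:

{code:java}
import com.google.common.base.Preconditions;

public class GuavaOverloadDemo {

  // Hypothetical stand-in for the failing call site in
  // org.apache.hadoop.fs.s3a.WriteOperationHelper.newUploadPartRequest().
  static void newUploadPartRequest(String destKey, long partNumber) {
    // With (boolean, String, String, long) arguments, javac selects the
    // exact overload checkArgument(boolean, String, Object, long) -- the
    // (ZLjava/lang/String;Ljava/lang/Object;J)V descriptor in the stack
    // trace below. Older Guava releases only have the varargs
    // checkArgument(boolean, String, Object...), so the call fails to
    // link at runtime with NoSuchMethodError.
    Preconditions.checkArgument(partNumber > 0 && partNumber <= 10000,
        "Invalid part number for %s: %s", destKey, partNumber);
  }

  public static void main(String[] args) {
    newUploadPartRequest("s3a://bucket/output.parquet", 1);
    System.out.println("linked and validated OK");
  }
}
{code}

Keeping a single Guava jar on the classpath, at least as new as the one Hadoop 3.2.1 was built against, avoids the mismatch.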

Google's games with overloading the Preconditions assertions are one of the key reasons why Hadoop trunk has moved to a shaded version of Guava and is going to reimplement its own set of the checks entirely. We don't need this suffering, and neither do you. Sorry.
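
For reference, a sketch of what the shaded approach looks like from the application side, assuming the relocated package name used by the hadoop-thirdparty artifacts:

{code:java}
// Sketch: Hadoop trunk imports Guava classes relocated into the
// org.apache.hadoop.thirdparty namespace (bundled via the
// hadoop-shaded-guava artifact) instead of com.google.common directly.
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

public class ShadedGuavaDemo {
  public static void main(String[] args) {
    // The relocated classes travel with Hadoop itself, so an older
    // com.google.guava:guava on the application classpath -- the
    // situation in this Drill report -- can no longer break Hadoop's
    // linkage against Preconditions.
    Preconditions.checkArgument(args.length >= 0, "unreachable");
    System.out.println("shaded Preconditions linked, independent of app Guava");
  }
}
{code}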

> When writing to S3 "WriteOperationHelper.newUploadPartRequest" throws "NoSuchMethodError"
> -----------------------------------------------------------------------------------------
>
>                 Key: DRILL-7854
>                 URL: https://issues.apache.org/jira/browse/DRILL-7854
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Other
>    Affects Versions: 1.18.0
>         Environment: Standalone Drill running in a Docker environment on x86_64 (CentOS 7, Alpine 3, Debian Buster)
>  
> Drill 1.18.0 was downloaded from an official mirror
>            Reporter: Joshua Pedrick
>            Priority: Major
>             Fix For: 1.19.0
>
>
> When writing to S3 (hosted on a local Minio cluster) I am getting a "NoSuchMethodError". It appears to be related to the Guava version included with hadoop-3.2.1.
>  
> {code:java}
> 2021-02-01 21:39:17,753 [1fe78bd6-0525-196e-b3d2-25c294a604e2:frag:1:13] ERROR o.a.d.e.p.i.p.ProjectRecordBatch - ProjectRecordBatch[projector=Projector[vector2=null, selectionVectorMode=NONE], hasRemainder=false, remainderIndex=0, recordCount=0, container=org.apache.drill.exec.record.VectorContainer@3efe0be4[recordCount = 27884, schemaChanged = false, schema = BatchSchema [, ...]]
> 2021-02-01 21:39:17,754 [1fe78bd6-0525-196e-b3d2-25c294a604e2:frag:1:13] ERROR o.a.d.e.physical.impl.BaseRootExec - Batch dump completed.
> 2021-02-01 21:39:17,754 [1fe78bd6-0525-196e-b3d2-25c294a604e2:frag:1:13] INFO  o.a.d.e.w.fragment.FragmentExecutor - 1fe78bd6-0525-196e-b3d2-25c294a604e2:1:13: State change requested CANCELLATION_REQUESTED --> FAILED
> 2021-02-01 21:39:25,199 [1fe78bd6-0525-196e-b3d2-25c294a604e2:frag:1:13] ERROR o.a.d.exec.server.BootStrapContext - org.apache.drill.exec.work.WorkManager$WorkerBee$1.run() leaked an exception.
> java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;J)V (loaded from <Unknown> by sun.misc.Launcher$AppClassLoader@3156fd60) called from class org.apache.hadoop.fs.s3a.WriteOperationHelper (loaded from file:/opt/drill/jars/3rdparty/hadoop-aws-3.2.1.jar by sun.misc.Launcher$AppClassLoader@3156fd60).
>     at org.apache.hadoop.fs.s3a.WriteOperationHelper.newUploadPartRequest(WriteOperationHelper.java:397)
>     at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.uploadBlockAsync(S3ABlockOutputStream.java:584)
>     at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$000(S3ABlockOutputStream.java:521)
>     at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.uploadCurrentBlock(S3ABlockOutputStream.java:314)
>     at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.write(S3ABlockOutputStream.java:292)
>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>     at java.io.DataOutputStream.write(DataOutputStream.java:107)
>     at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>     at org.apache.parquet.hadoop.util.HadoopPositionOutputStream.write(HadoopPositionOutputStream.java:45)
>     at org.apache.parquet.bytes.CapacityByteArrayOutputStream.writeToOutput(CapacityByteArrayOutputStream.java:234)
>     at org.apache.parquet.bytes.CapacityByteArrayOutputStream.writeTo(CapacityByteArrayOutputStream.java:247)
>     at org.apache.parquet.bytes.BytesInput$CapacityBAOSBytesInput.writeAllTo(BytesInput.java:421)
>     at org.apache.parquet.hadoop.ParquetFileWriter.writeColumnChunk(ParquetFileWriter.java:620)
>     at org.apache.parquet.hadoop.ParquetColumnChunkPageWriteStore$ColumnChunkPageWriter.writeToFileWriter(ParquetColumnChunkPageWriteStore.java:268)
>     at org.apache.parquet.hadoop.ParquetColumnChunkPageWriteStore.flushToFileWriter(ParquetColumnChunkPageWriteStore.java:89)
>     at org.apache.drill.exec.store.parquet.ParquetRecordWriter.flushParquetFileWriter(ParquetRecordWriter.java:737)
>     at org.apache.drill.exec.store.parquet.ParquetRecordWriter.flush(ParquetRecordWriter.java:435)
>     at org.apache.drill.exec.store.parquet.ParquetRecordWriter.cleanup(ParquetRecordWriter.java:703)
>     at org.apache.drill.exec.physical.impl.WriterRecordBatch.closeWriter(WriterRecordBatch.java:203)
>     at org.apache.drill.exec.physical.impl.WriterRecordBatch.close(WriterRecordBatch.java:221)
>     at org.apache.drill.common.DeferredException.suppressingClose(DeferredException.java:159)
>     at org.apache.drill.exec.physical.impl.BaseRootExec.close(BaseRootExec.java:169)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:408)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:239)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:360)
>     at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:823)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)