Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/05/03 12:45:07 UTC

[GitHub] [hudi] parisni commented on issue #5482: [SUPPORT] metadata index fail with MOR tables

parisni commented on issue #5482:
URL: https://github.com/apache/hudi/issues/5482#issuecomment-1116057329

   I cannot really share the whole code, only parts of it.
   
   > Also, do the timeouts prevent the ingestion from proceeding?
   
   Yes: only 5 commits complete, even though I am attempting 6 operations.
   
   I then added a concurrency lock provider based on DynamoDB.
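   
   In case it helps, the lock configuration is along these lines (a minimal sketch following the Hudi concurrency-control docs; the table name, partition key, and region below are placeholders, not the real values):
   
   ```python
   # Sketch of the multi-writer / DynamoDB lock options passed to the Hudi writer.
   # Keys follow the Hudi concurrency-control documentation; values are placeholders.
   hudi_lock_options = {
       "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
       "hoodie.cleaner.policy.failed.writes": "LAZY",
       "hoodie.write.lock.provider": "org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider",
       "hoodie.write.lock.dynamodb.table": "hudi_locks",        # placeholder
       "hoodie.write.lock.dynamodb.partition_key": "table",     # placeholder
       "hoodie.write.lock.dynamodb.region": "us-east-1",        # placeholder
       "hoodie.write.lock.dynamodb.billing_mode": "PAY_PER_REQUEST",
   }
   
   # Merged with the other Hudi options on the writer, e.g.:
   # df.write.format("hudi").options(**hudi_lock_options).mode("append").save(base_path)
   ```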
   
   Here are the logs after the timeout (about a 2-minute wait); this is a MinIO-based local S3 provider. I checked, and the log file it tries to open does exist.
   ```
   org.apache.hudi.exception.HoodieIOException: IOException when reading logfile HoodieLogFile{pathStr='s3a://test-bucket/s3_path/table/.hoodie/metadata/column_stats/.col-stats-0001_00000000000000.log.4_0-128-201', fileLen=-1}
   	at org.apache.hudi.common.table.log.HoodieLogFileReader.hasNext(HoodieLogFileReader.java:352)
   	at org.apache.hudi.common.table.log.HoodieLogFormatReader.hasNext(HoodieLogFormatReader.java:99)
   	at org.apache.hudi.common.table.log.HoodieLogFormatReader.hasNext(HoodieLogFormatReader.java:116)
   	at org.apache.hudi.common.table.log.AbstractHoodieLogRecordReader.scanInternal(AbstractHoodieLogRecordReader.java:223)
   	at org.apache.hudi.common.table.log.AbstractHoodieLogRecordReader.scan(AbstractHoodieLogRecordReader.java:192)
   	at org.apache.hudi.common.table.log.HoodieMergedLogRecordScanner.performScan(HoodieMergedLogRecordScanner.java:110)
   	at org.apache.hudi.common.table.log.HoodieMergedLogRecordScanner.<init>(HoodieMergedLogRecordScanner.java:103)
   	at org.apache.hudi.common.table.log.HoodieMergedLogRecordScanner$Builder.build(HoodieMergedLogRecordScanner.java:324)
   	at org.apache.hudi.table.action.compact.HoodieCompactor.compact(HoodieCompactor.java:198)
   	at org.apache.hudi.table.action.compact.HoodieCompactor.lambda$compact$57154431$1(HoodieCompactor.java:138)
   	at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1040)
   	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
   	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
   	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
   	at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:221)
   	at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:349)
   	at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1182)
   	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
   	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
   	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
   	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
   	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
   	at org.apache.spark.scheduler.Task.run(Task.scala:123)
   	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:414)
   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:417)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:748)
   Caused by: java.io.InterruptedIOException: Reopen at position 0 on s3a://test-bucket/s3_path/table/.hoodie/metadata/column_stats/.col-stats-0001_00000000000000.log.4_0-128-201: com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
   	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:125)
   	at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:155)
   	at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:281)
   	at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:364)
   	at java.io.DataInputStream.read(DataInputStream.java:149)
   	at java.io.DataInputStream.readFully(DataInputStream.java:195)
   	at org.apache.hudi.common.table.log.HoodieLogFileReader.hasNextMagic(HoodieLogFileReader.java:379)
   	at org.apache.hudi.common.table.log.HoodieLogFileReader.readMagic(HoodieLogFileReader.java:365)
   	at org.apache.hudi.common.table.log.HoodieLogFileReader.hasNext(HoodieLogFileReader.java:350)
   	... 32 more
   Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1175)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1121)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
   	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
   	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
   	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4926)
   	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4872)
   	at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1472)
   	at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:148)
   	... 39 more
   Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
   	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:286)
   	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:263)
   	at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
   	at com.amazonaws.http.conn.$Proxy46.get(Unknown Source)
   	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:190)
   	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
   	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
   	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
   	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
   	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1297)
   	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
   	... 50 more
   ```
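   
   For reference, the s3a side of this setup (the local MinIO endpoint) looks roughly like the sketch below; the endpoint and all values are placeholders, not the actual configuration. The `fs.s3a.connection.maximum` / `fs.s3a.threads.max` settings are the knobs related to the "Timeout waiting for connection from pool" error above.
   
   ```python
   # Sketch of a Spark session pointed at a MinIO-backed s3a endpoint.
   # Property names are standard hadoop-aws settings; values are illustrative only.
   from pyspark.sql import SparkSession
   
   spark = (
       SparkSession.builder
       .appName("hudi-mor-metadata-index")
       # Local MinIO endpoint (placeholder URL); path-style access is needed for MinIO.
       .config("spark.hadoop.fs.s3a.endpoint", "http://127.0.0.1:9000")
       .config("spark.hadoop.fs.s3a.path.style.access", "true")
       # Pool/thread limits involved in the "Timeout waiting for connection from pool" error;
       # the values below are illustrative, not verified against this setup.
       .config("spark.hadoop.fs.s3a.connection.maximum", "200")
       .config("spark.hadoop.fs.s3a.threads.max", "64")
       .getOrCreate()
   )
   ```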

