Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/01/10 22:11:52 UTC

[GitHub] [hudi] nsivabalan commented on issue #2429: [SUPPORT] S3 throws ConnectionPoolTimeoutException: Timeout waiting for connection from pool when metadata table is turned on

nsivabalan commented on issue #2429:
URL: https://github.com/apache/hudi/issues/2429#issuecomment-757552669


   Sure.
   I ran a long-running test-suite job on a cluster. Without metadata listing enabled, things were fine. But with the metadata table enabled, I ran into "too many open files" errors after about 20 iterations (each iteration runs inserts, updates, deletes, and a Spark datasource read).
   
   ```
   21/01/10 06:42:04 WARN TaskSetManager: Lost task 0.0 in stage 3909.0 (TID 455057, agent6922-abc.com, executor 1): java.io.UncheckedIOException: java.net.SocketException: Too many open files
           at org.apache.hudi.integ.testsuite.generator.DeltaGenerator.lambda$writeRecords$5e8e5895$1(DeltaGenerator.java:123)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: java.net.SocketException: Too many open files
           at sun.nio.ch.Net.socket0(Native Method)
           at sun.nio.ch.Net.socket(Net.java:411)
           at sun.nio.ch.Net.socket(Net.java:404)
           at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:105)
           at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)
           at java.nio.channels.SocketChannel.open(SocketChannel.java:145)
           at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:62)
           at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1531)
           at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
           at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
           at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
   
   21/01/10 06:42:05 WARN TaskSetManager: Lost task 3.0 in stage 3911.0 (TID 455068, agent6922-abc.com, executor 1): java.io.FileNotFoundException: /opt/hudi/shared/yarn/cache/data/nm/usercache/sivabala/appcache/application_1609962097310_747119/blockmgr-7f5d0680-3597-426f-8a2b-29f7411e130b/0c/temp_shuffle_a70e65a8-3e98-4d5a-a714-ef3c44c2fe4e (Too many open files)
           at java.io.FileOutputStream.open0(Native Method)
           at java.io.FileOutputStream.open(FileOutputStream.java:270)
           at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
           at org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:103)
           at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
           at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
           at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
           at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
   21/01/10 06:42:05 WARN TaskSetManager: Lost task 1.0 in stage 3911.0 (TID 455066, agent6922-abc.com, executor 1): java.io.FileNotFoundException: /opt/hudi/shared/yarn/cache/data/nm/usercache/sivabala/appcache/application_1609962097310_747119/blockmgr-7f5d0680-3597-426f-8a2b-29f7411e130b/03/temp_shuffle_f362ca47-2b91-457a-b3ca-33570de418bd (Too many open files)
           at java.io.FileOutputStream.open0(Native Method)
           at java.io.FileOutputStream.open(FileOutputStream.java:270)
           at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
           at org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:103)
           at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
           at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
           at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
           at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
           at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
   ```
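    For reference, a minimal sketch (not the actual test-suite job) of how the metadata-table listing gets toggled on a Spark datasource write; the table name, path, and field names below are hypothetical, and it assumes a spark-shell session with the Hudi bundle on the classpath:
    
    ```
    import org.apache.spark.sql.SaveMode
    
    // Hypothetical input batch for one iteration of the job
    val df = spark.read.json("/tmp/hudi_test/source_batch")
    
    df.write.format("hudi").
      option("hoodie.table.name", "test_table").
      option("hoodie.datasource.write.recordkey.field", "key").
      option("hoodie.datasource.write.partitionpath.field", "partition").
      option("hoodie.datasource.write.precombine.field", "ts").
      // set to "false" to run the same job without metadata-based listing
      option("hoodie.metadata.enable", "true").
      mode(SaveMode.Append).
      save("/tmp/hudi_test/test_table")
    ```
    
    The only difference between the two runs described above is the value of `hoodie.metadata.enable`.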
   

