Posted to user@spark.apache.org by Ofer Eliassaf <of...@gmail.com> on 2021/11/23 16:38:29 UTC

S3a directory committer thread out of memory

Hi all,

Would really like to get some help here.

We are using Spark 3.1.2 on a standalone cluster.


Since we started using the s3a directory committer, our Spark jobs'
stability and performance have improved significantly!



Lately, however, we have spent days troubleshooting an s3a directory
committer issue that has us completely baffled,

and we wonder if you have any idea what's going on.



Our Spark jobs fail with a Java OOM (or rather, process-limit) error:


An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
    at java.base/java.lang.Thread.start0(Native Method)
    at java.base/java.lang.Thread.start(Thread.java:803)
    at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:937)
    at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1343)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
    at java.base/java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:714)
    at org.apache.spark.rpc.netty.DedicatedMessageLoop.$anonfun$new$1(MessageLoop.scala:174)
    at org.apache.spark.rpc.netty.DedicatedMessageLoop.$anonfun$new$1$adapted(MessageLoop.scala:173)
    at scala.collection.immutable.Range.foreach(Range.scala:158)
    at org.apache.spark.rpc.netty.DedicatedMessageLoop.<init>(MessageLoop.scala:173)
    at org.apache.spark.rpc.netty.Dispatcher.liftedTree1$1(Dispatcher.scala:75)
    at org.apache.spark.rpc.netty.Dispatcher.registerRpcEndpoint(Dispatcher.scala:72)
    at org.apache.spark.rpc.netty.NettyRpcEnv.setupEndpoint(NettyRpcEnv.scala:136)
    at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:231)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:394)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:189)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:458)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)




The Spark thread dump shows over 5000 committer threads on the Spark driver!




[image: image.png — screenshot of the driver thread dump]






This is despite our settings, which should not allow more than 100 threads…

Or else we are misunderstanding something…
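
(For reference, here is a rough sketch of how we count the driver's threads from a
jstack dump. Grouping by name prefix, and stripping a trailing numeric suffix to
form the group key, are our own assumptions that happen to match the pool-style
names in our dumps; the prefixes to look for are whatever appears in the screenshot
above.)

    import re
    import subprocess
    import sys
    from collections import Counter

    # Sketch: group the driver JVM's threads by name prefix from a jstack dump.
    # Usage: python count_threads.py <driver-jvm-pid>
    pid = sys.argv[1]
    dump = subprocess.run(["jstack", pid], capture_output=True, text=True,
                          check=True).stdout

    # Thread dump entries start with the quoted thread name.
    names = re.findall(r'^"([^"]+)"', dump, flags=re.MULTILINE)
    # Strip a trailing "-N"/"#N" counter so pool workers collapse into one group.
    groups = Counter(re.sub(r"[-#]?\d+$", "", name) for name in names)

    for prefix, count in groups.most_common(20):
        print(f"{count:6d}  {prefix}")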



fs.s3a.threads.max                            | 100       | The total number of threads available in the filesystem for data uploads or any other queued filesystem operation.
fs.s3a.connection.maximum                     | 1000      | Controls the maximum number of simultaneous connections to S3.
fs.s3a.committer.threads                      | 16        | Number of threads in committers for parallel operations on files (upload, commit, abort, delete...).
fs.s3a.max.total.tasks                        | 5         |
fs.s3a.committer.name                         | directory |
fs.s3a.fast.upload.buffer                     | disk      |
io.file.buffer.size                           | 1048576   |
mapreduce.outputcommitter.factory.scheme.s3a  | org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory |
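
For completeness, this is roughly how we apply the settings above when building the
session (a sketch only; the app name is a placeholder and the values simply mirror
the table):

    from pyspark.sql import SparkSession

    # Sketch: values mirror the table above; "committer-example" is a placeholder.
    # Hadoop properties are passed through Spark with the "spark.hadoop." prefix.
    spark = (
        SparkSession.builder
        .appName("committer-example")
        .config("spark.hadoop.fs.s3a.threads.max", "100")
        .config("spark.hadoop.fs.s3a.connection.maximum", "1000")
        .config("spark.hadoop.fs.s3a.committer.threads", "16")
        .config("spark.hadoop.fs.s3a.max.total.tasks", "5")
        .config("spark.hadoop.fs.s3a.committer.name", "directory")
        .config("spark.hadoop.fs.s3a.fast.upload.buffer", "disk")
        .config("spark.hadoop.io.file.buffer.size", "1048576")
        .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
                "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
        .getOrCreate()
    )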



We have tried different versions of the spark-hadoop-cloud library, but the
issue is consistently the same. The builds we tried are listed below; a sketch
of pulling one in via spark.jars.packages follows the list.



https://repository.cloudera.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.0-cdh6.3.2/spark-hadoop-cloud_2.11-2.4.0-cdh6.3.2.jar

https://repository.cloudera.com/artifactory/libs-release-local/org/apache/spark/spark-hadoop-cloud_2.11/2.4.0.7.0.3.0-79/spark-hadoop-cloud_2.11-2.4.0.7.0.3.0-79.jar

https://repo1.maven.org/maven2/org/apache/spark/spark-hadoop-cloud_2.12/3.2.0/spark-hadoop-cloud_2.12-3.2.0.jar

https://repository.cloudera.com/artifactory/libs-release-local/org/apache/spark/spark-hadoop-cloud_2.12/3.1.2.7.2.12.0-291/spark-hadoop-cloud_2.12-3.1.2.7.2.12.0-291.jar
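
(For completeness, a minimal sketch of pulling one of these builds in through
spark.jars.packages instead of a hand-downloaded jar; the coordinate matches the
Maven Central 3.2.0 link above and is only illustrative, since the Cloudera builds
need their own repository and version:)

    from pyspark.sql import SparkSession

    # Sketch: resolve spark-hadoop-cloud from Maven Central rather than shipping
    # the jar by hand. Adjust the coordinate to match your Spark build.
    spark = (
        SparkSession.builder
        .appName("spark-hadoop-cloud-via-packages")  # placeholder app name
        .config("spark.jars.packages",
                "org.apache.spark:spark-hadoop-cloud_2.12:3.2.0")
        .getOrCreate()
    )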



We'd really appreciate it if you could point us in the right direction 😊



Thank you for your time!

-- 
Regards,
Ofer Eliassaf

Re: S3a directory committer thread out of memory

Posted by Gourav Sengupta <go...@gmail.com>.
Hi,

I am not sure about this, but have you checked with AWS that you are not issuing
too many S3 requests from your end?

If you are on AWS, then running open-source Spark instead of Glue or EMR is a
choice you might want to reconsider.

If you are reading from and writing to S3 remotely, there are again API call
limits, remote data-transfer failures and latencies, security issues, etc.,
all of which you can avoid by first writing to local HDFS or disk and then
syncing the output up to S3 with other tools; a rough sketch of that pattern
follows.
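
(A minimal sketch of that pattern, assuming HDFS is available and distcp is on the
path; the DataFrame, paths and bucket name are placeholders:)

    import subprocess
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("local-write-then-sync").getOrCreate()
    df = spark.range(1000)  # placeholder DataFrame

    # 1. Write to HDFS (or local disk) with the ordinary file output committer.
    hdfs_dir = "hdfs:///tmp/output/parquet"  # placeholder path
    df.write.mode("overwrite").parquet(hdfs_dir)

    # 2. Copy the finished output to S3 afterwards, e.g. with distcp.
    #    The bucket and prefix are placeholders.
    subprocess.run(
        ["hadoop", "distcp", hdfs_dir, "s3a://my-bucket/output/parquet"],
        check=True,
    )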


Regards,
Gourav Sengupta

On Tue, Nov 23, 2021 at 6:53 PM Ofer Eliassaf <of...@gmail.com>
wrote:

> Hi,
> Thanks for the reply. I am not using EMR. We are using open-source Spark with
> the directory committer.
>

Re: S3a directory committer thread out of memory

Posted by Ofer Eliassaf <of...@gmail.com>.
Hi,
Thanks for the reply. I am not using EMR. We are using open-source Spark with
the directory committer.

On Tue, Nov 23, 2021 at 7:38 PM Gourav Sengupta <go...@gmail.com>
wrote:

> Hi,
> Are you using EMR? If you are, I think EMRFS should be used; s3a was, if I am
> not mistaken, deprecated there almost 4 years back.
>
> Regards,
> Gourav

-- 
Regards,
Ofer Eliassaf

Re: S3a directory committer thread out of memory

Posted by Gourav Sengupta <go...@gmail.com>.
Hi,
Are you using EMR? If you are, I think EMRFS should be used; s3a was, if I am
not mistaken, deprecated there almost 4 years back.

Regards,
Gourav

