Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/08 05:42:14 UTC
[jira] [Resolved] (SPARK-24273) Failure while using .checkpoint method to private S3 store via S3A connector
[ https://issues.apache.org/jira/browse/SPARK-24273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-24273.
----------------------------------
Resolution: Incomplete
> Failure while using .checkpoint method to private S3 store via S3A connector
> ----------------------------------------------------------------------------
>
> Key: SPARK-24273
> URL: https://issues.apache.org/jira/browse/SPARK-24273
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell
> Affects Versions: 2.3.0
> Reporter: Jami Malikzade
> Priority: Major
> Labels: bulk-closed
>
> We are getting the following error:
> {code}
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS Service: Amazon S3, AWS Request ID: tx000000000000000014126-005ae9bfd9-9ed9ac2-default, AWS Error Code: InvalidRange, AWS Error Message: null, S3 Extended Request ID: 9ed9ac2-default-default
> {code}
> when we use the checkpoint method as below:
> {code}
> val streamBucketDF = streamPacketDeltaDF
>   .filter('timeDelta > maxGap && 'timeDelta <= 30000)
>   .withColumn("bucket",
>     when('timeDelta <= mediumGap, "medium").otherwise("large"))
>   .checkpoint()
> {code}
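> For context, Dataset.checkpoint() materializes the data to the directory set via SparkContext.setCheckpointDir and then reads it back, so with a checkpoint directory on an s3a:// path the read-back is where the 416 InvalidRange response surfaces. One workaround we could try (a sketch only, not a confirmed fix; the bucket path and the `spark` session name below are placeholders) is to truncate the lineage by explicitly writing to and re-reading from Parquet instead of calling .checkpoint(), which avoids the RDD checkpoint read path entirely:
> {code}
> // Hypothetical sketch: materialize to Parquet instead of .checkpoint().
> // tmpPath is a placeholder; any writable s3a:// or HDFS path would do.
> val tmpPath = "s3a://some-bucket/tmp/streamBucketDF"
> streamPacketDeltaDF
>   .filter('timeDelta > maxGap && 'timeDelta <= 30000)
>   .withColumn("bucket",
>     when('timeDelta <= mediumGap, "medium").otherwise("large"))
>   .write.mode("overwrite").parquet(tmpPath)
> // Re-read to get a DataFrame with truncated lineage, analogous to
> // what checkpoint() returns.
> val streamBucketDF = spark.read.parquet(tmpPath)
> {code}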
> Do you have an idea how to prevent the invalid range header from being sent, or how this can be worked around or fixed?
> Thanks.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org