Posted to issues@spark.apache.org by "Takeshi Yamamuro (Jira)" <ji...@apache.org> on 2020/10/08 01:12:00 UTC

[jira] [Commented] (SPARK-31913) StackOverflowError in FileScanRDD

    [ https://issues.apache.org/jira/browse/SPARK-31913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209946#comment-17209946 ] 

Takeshi Yamamuro commented on SPARK-31913:
------------------------------------------

Since this issue looks env-dependent and the PR was automatically closed, I will close this.

> StackOverflowError in FileScanRDD
> ---------------------------------
>
>                 Key: SPARK-31913
>                 URL: https://issues.apache.org/jira/browse/SPARK-31913
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.5, 3.0.0
>            Reporter: Genmao Yu
>            Priority: Minor
>
> Reading from FileScanRDD may fail with a StackOverflowError in my environment:
> - There is a mass of empty files in the table partition.
> - `spark.sql.files.maxPartitionBytes` is set to a large value: 1024MB
> A quick workaround is to set `spark.sql.files.maxPartitionBytes` to a small value, such as the default 128MB.
> A better fix is to resolve the recursive calls in FileScanRDD.
> {code}
> java.lang.StackOverflowError
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.getSubject(Subject.java:297)
> 	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:648)
> 	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2828)
> 	at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2818)
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2684)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> 	at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
> 	at org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:640)
> 	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:148)
> 	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:143)
> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:326)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
> 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
> {code}
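The trace above shows FileScanRDD's hasNext and nextIterator calling each other once per file: an empty file yields no rows, so hasNext falls through to nextIterator for the next file, adding two stack frames per empty file. A minimal sketch of that shape and of the loop-based alternative the report suggests (a hypothetical simplification, not Spark's actual code; names and types are illustrative only):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical model: each int[] stands for one file's rows; a
// zero-length array models an empty file.
public class FileScanSketch {

    // Recursive shape matching the stack trace: hasNext -> nextIterator
    // -> hasNext -> ... One pair of frames per empty file, so a partition
    // packed with empty files (large spark.sql.files.maxPartitionBytes)
    // can overflow the stack.
    static boolean recursiveHasNext(List<int[]> files, int i) {
        if (i >= files.size()) return false;        // no more files
        if (files.get(i).length > 0) return true;   // current file has rows
        return nextIterator(files, i + 1);          // empty file: recurse
    }

    static boolean nextIterator(List<int[]> files, int i) {
        // Mutual recursion; the JVM does not eliminate these frames.
        return recursiveHasNext(files, i);
    }

    // The fix direction suggested in the ticket: skip empty files with a
    // loop, which uses constant stack no matter how many empty files
    // precede real data.
    static boolean iterativeHasNext(List<int[]> files, int i) {
        while (i < files.size() && files.get(i).length == 0) i++;
        return i < files.size();
    }

    public static void main(String[] args) {
        // A million empty files followed by one file with a row.
        List<int[]> mostlyEmpty =
            new ArrayList<>(Collections.nCopies(1_000_000, new int[0]));
        mostlyEmpty.add(new int[]{42});
        System.out.println(iterativeHasNext(mostlyEmpty, 0)); // prints "true"
    }
}
```

The reporter's workaround attacks the same quantity from the other side: lowering `spark.sql.files.maxPartitionBytes` (e.g. back to the 128MB default) packs far fewer files into each partition, keeping the recursion depth small.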



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org