Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2018/08/15 20:36:00 UTC

[jira] [Assigned] (SPARK-25126) OrcFileOperator.getFileReader: avoid creating OrcFile.Reader for all orc files

     [ https://issues.apache.org/jira/browse/SPARK-25126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-25126:
------------------------------------

    Assignee: Apache Spark

> OrcFileOperator.getFileReader: avoid creating OrcFile.Reader for all orc files
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-25126
>                 URL: https://issues.apache.org/jira/browse/SPARK-25126
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.3.1
>            Reporter: Rao Fu
>            Assignee: Apache Spark
>            Priority: Major
>
> https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileOperator.scala#L73
> When the `basePath` passed to getFileReader is a directory, an OrcFile.Reader is created for every file under the directory, even though only the first one with a non-empty schema is returned. This consumes a lot of memory when the directory contains many files, because each file's metadata is loaded into memory when its Reader is created.
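> For illustration, a minimal sketch of the pattern described above. It uses the standalone org.apache.orc API rather than Spark's internal Hive-based reader, and the surrounding class and method are hypothetical (readers are also left unclosed for brevity):
>
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.orc.OrcFile;
> import org.apache.orc.Reader;
>
> public class OrcReaderPerFile {
>     // Eagerly opens a Reader for every file in the directory, mirroring
>     // the behavior described above.
>     static Reader firstReaderWithSchema(String dir, Configuration conf) throws IOException {
>         FileSystem fs = new Path(dir).getFileSystem(conf);
>         List<Reader> readers = new ArrayList<>();
>         for (FileStatus status : fs.listStatus(new Path(dir))) {
>             // Each createReader call parses the file's footer (schema,
>             // stripe metadata) into memory, which adds up in large directories.
>             readers.add(OrcFile.createReader(status.getPath(), OrcFile.readerOptions(conf)));
>         }
>         for (Reader reader : readers) {
>             if (!reader.getSchema().getFieldNames().isEmpty()) {
>                 return reader; // only the first non-empty schema is ever used
>             }
>         }
>         return null;
>     }
> }
>
> Opening the readers lazily and returning at the first non-empty schema would avoid parsing every file's footer up front.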
> I tried the following workaround and the OOM issue went away (a combined sketch follows the two steps):
> 1) Create a Dataset<Row> from a single ORC file:
> Dataset<Row> rowsForFirstFile = spark.read().format("orc").load(oneFile);
> 2) When creating the Dataset<Row> from all files under the directory, reuse the schema from the previous Dataset:
> Dataset<Row> rows = spark.read().schema(rowsForFirstFile.schema()).format("orc").load(path);
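> Putting the two steps together, a minimal self-contained sketch (the paths and application name are hypothetical):
>
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SparkSession;
>
> public class OrcSchemaWorkaround {
>     public static void main(String[] args) {
>         SparkSession spark = SparkSession.builder()
>                 .appName("orc-schema-workaround")
>                 .getOrCreate();
>
>         String oneFile = "/data/events/part-00000.orc"; // hypothetical single file
>         String path = "/data/events";                   // hypothetical directory
>
>         // 1) Infer the schema from a single file only.
>         Dataset<Row> rowsForFirstFile = spark.read().format("orc").load(oneFile);
>
>         // 2) Supply that schema explicitly, so schema inference (and the
>         //    per-file Reader creation) is skipped for the full directory.
>         Dataset<Row> rows = spark.read()
>                 .schema(rowsForFirstFile.schema())
>                 .format("orc")
>                 .load(path);
>
>         rows.show();
>         spark.stop();
>     }
> }
>
> Supplying the schema up front means Spark no longer opens every ORC file under the directory just to infer it.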
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
