Posted to issues@spark.apache.org by "Zhan Zhang (JIRA)" <ji...@apache.org> on 2014/10/08 23:45:36 UTC
[jira] [Comment Edited] (SPARK-3720) support ORC in spark sql
[ https://issues.apache.org/jira/browse/SPARK-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164222#comment-14164222 ]
Zhan Zhang edited comment on SPARK-3720 at 10/8/14 9:45 PM:
------------------------------------------------------------
There is another JIRA, SPARK-2883, opened on 06/Aug/14, which I am actively working on. This issue should be marked as a duplicate of that one. To avoid redundant work, I don't think it is right to open a JIRA just before sending out a PR unless the JIRA is trivial. In addition, I do think this PR is premature: as pointed out in the review comments, it actually introduces a dependency between the SQL core and Hive modules.
was (Author: zzhan):
There is another JIRA, SPARK-2883, opened on 06/Aug/14, which I am actively working on. This issue should be marked as a duplicate of that one. To avoid redundant work, I don't think it is right to open a JIRA just before sending out a PR unless the JIRA is trivial. In addition, I don't think this PR is premature: as pointed out in the review comments, it actually introduces a dependency between the SQL core and Hive modules.
> support ORC in spark sql
> ------------------------
>
> Key: SPARK-3720
> URL: https://issues.apache.org/jira/browse/SPARK-3720
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Affects Versions: 1.1.0
> Reporter: wangfei
>
> The Optimized Row Columnar (ORC) file format provides a highly efficient way to store data on HDFS. The ORC file format has many advantages, such as:
> 1. a single file as the output of each task, which reduces the NameNode's load
> 2. Hive type support including datetime, decimal, and the complex types (struct, list, map, and union)
> 3. light-weight indexes stored within the file, which allow the reader to (see the sketch after this list):
>    - skip row groups that don't pass predicate filtering
>    - seek to a given row
> 4. block-mode compression based on data type:
>    - run-length encoding for integer columns
>    - dictionary encoding for string columns
> 5. concurrent reads of the same file using separate RecordReaders
> 6. ability to split files without scanning for markers
> 7. bound the amount of memory needed for reading or writing
> 8. metadata stored using Protocol Buffers, which allows addition and removal of fields
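>
> A minimal sketch of how advantage 3 can already be exercised through plain HiveQL in a HiveContext, before any new Spark SQL API lands. The table name and query are made up for illustration; hive.optimize.index.filter is the Hive setting that, to my understanding, turns on ORC predicate pushdown so that row groups failing the WHERE clause are skipped:
>
> {code:scala}
> import org.apache.spark.SparkContext
> import org.apache.spark.sql.hive.HiveContext
>
> object OrcViaHiveQL {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext("local", "orc-via-hiveql")
>     val hive = new HiveContext(sc)
>
>     // STORED AS ORC is plain HiveQL (Hive 0.11+), so an ORC-backed table
>     // is reachable today through the Hive metastore path.
>     hive.sql("CREATE TABLE IF NOT EXISTS events (id INT, name STRING) STORED AS ORC")
>
>     // Ask Hive's ORC reader to evaluate predicates against the light-weight
>     // indexes, skipping row groups that cannot match the filter.
>     hive.sql("SET hive.optimize.index.filter=true")
>     hive.sql("SELECT name FROM events WHERE id > 100").collect().foreach(println)
>
>     sc.stop()
>   }
> }
> {code}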
> Spark SQL already supports Parquet; supporting ORC as well would give people more options, as sketched below.
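>
> Spark SQL 1.1 exposes Parquet through parquetFile and saveAsParquetFile; below is a hypothetical sketch of what analogous ORC entry points could look like. The names orcFile and saveAsOrcFile are assumptions that simply mirror the Parquet methods; they are not an existing API, which is why they are left commented out:
>
> {code:scala}
> import org.apache.spark.SparkContext
> import org.apache.spark.sql.hive.HiveContext
>
> object OrcApiSketch {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext("local", "orc-api-sketch")
>     val hive = new HiveContext(sc)
>
>     // What exists today for Parquet in Spark SQL 1.1:
>     val parquetEvents = hive.parquetFile("/data/events.parquet")
>     parquetEvents.registerTempTable("parquet_events")
>     parquetEvents.saveAsParquetFile("/data/events-copy.parquet")
>
>     // What an ORC analogue might look like (hypothetical method names):
>     // val orcEvents = hive.orcFile("/data/events.orc")
>     // orcEvents.registerTempTable("orc_events")
>     // hive.sql("SELECT COUNT(*) FROM orc_events").collect()
>     // orcEvents.saveAsOrcFile("/data/events-copy.orc")
>
>     sc.stop()
>   }
> }
> {code}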