Posted to issues@spark.apache.org by "Kazuaki Ishizaki (JIRA)" <ji...@apache.org> on 2017/03/21 19:22:41 UTC
[jira] [Updated] (SPARK-20046) Facilitate loop optimizations in a JIT compiler regarding sqlContext.read.parquet()
[ https://issues.apache.org/jira/browse/SPARK-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Kazuaki Ishizaki updated SPARK-20046:
-------------------------------------
Issue Type: Improvement (was: Bug)
> Facilitate loop optimizations in a JIT compiler regarding sqlContext.read.parquet()
> -----------------------------------------------------------------------------------
>
> Key: SPARK-20046
> URL: https://issues.apache.org/jira/browse/SPARK-20046
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: Kazuaki Ishizaki
>
> [This article|https://databricks.com/blog/2017/02/16/processing-trillion-rows-per-second-single-machine-can-nested-loop-joins-fast.html] suggests that better generated code can improve performance by facilitating compiler optimizations.
> This JIRA changes the generated code for {{sqlContext.read.parquet("file")}} to facilitate loop optimizations in a JIT compiler, for better performance. In particular, [this stackoverflow entry|http://stackoverflow.com/questions/40629435/fast-parquet-row-count-in-spark] suggests improving the performance of {{sqlContext.read.parquet("file").count}}.
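> For reference, the operation in question can be reproduced with a minimal sketch like the following (assuming a local Spark 2.x session; "file" is a placeholder for an actual Parquet path, not a real file from this issue):
> {code}
> import org.apache.spark.sql.SparkSession
>
> object ParquetCountSketch {
>   def main(args: Array[String]): Unit = {
>     // Assumed local-mode session for illustration only.
>     val spark = SparkSession.builder()
>       .appName("ParquetCountSketch")
>       .master("local[*]")
>       .getOrCreate()
>
>     // sqlContext mirrors the API used above; spark.read.parquet(...)
>     // is equivalent in Spark 2.x. Scanning Parquet and counting rows
>     // exercises the whole-stage generated code whose loops this JIRA
>     // aims to make easier for the JIT compiler to optimize.
>     val count = spark.sqlContext.read.parquet("file").count()
>     println(count)
>
>     spark.stop()
>   }
> }
> {code}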
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org