Posted to issues@spark.apache.org by "Anton Okolnychyi (JIRA)" <ji...@apache.org> on 2019/02/13 10:24:00 UTC

[jira] [Updated] (SPARK-26204) Optimize InSet expression

     [ https://issues.apache.org/jira/browse/SPARK-26204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anton Okolnychyi updated SPARK-26204:
-------------------------------------
    Description: 
The {{InSet}} expression was introduced in SPARK-3711 to avoid O(n) time complexity in the {{In}} expression. As {{InSet}} relies on Scala {{immutable.Set}}, it introduces expensive autoboxing. As a consequence, {{InSet}} can be significantly slower than {{In}} even for 100+ values.

We need to find an approach to optimizing {{InSet}} expressions that avoids the cost of autoboxing.
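
For illustration, here is a minimal sketch (a simplification, not Spark's actual implementation) of where the boxing comes from: the values end up in a {{Set[Any]}}, so every membership test has to wrap the primitive before hashing it.

{code:scala}
// Hypothetical simplification of the InSet lookup path: the set holds
// boxed values, so contains() must box the primitive on every row.
val hset: Set[Any] = Set(1L, 2L, 3L)

def evalInSet(value: Long): Boolean =
  hset.contains(value) // value is autoboxed to java.lang.Long here
{code}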

There are a few approaches that we can use (sketched below):
 * Collections for primitive values (e.g., FastUtil, HPPC)
 * Type specialization in Scala (e.g., {{OpenHashSet}} in Spark)
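
As a sketch of the first option (assuming a FastUtil dependency, {{it.unimi.dsi.fastutil}}), a primitive set exposes a {{contains(long)}} overload, so lookups allocate no wrapper objects:

{code:scala}
import it.unimi.dsi.fastutil.longs.LongOpenHashSet

// LongOpenHashSet stores unboxed longs in a flat array.
val primSet = new LongOpenHashSet()
primSet.add(1L)
primSet.add(2L)

def evalInSetPrimitive(value: Long): Boolean =
  primSet.contains(value) // primitive overload, no autoboxing
{code}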

According to my local benchmarks, {{OpenHashSet}}, which is already available in Spark and uses type specialization, can significantly reduce the memory footprint. However, it slows down the computation, even compared to the built-in Scala sets. On the other hand, FastUtil and HPPC worked well and gave a substantial performance improvement. So, it makes sense to evaluate primitive collections.
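
For comparison, a rough sketch of the specialized variant (note that {{OpenHashSet}} lives in {{org.apache.spark.util.collection}} and is {{private[spark]}}, so it is usable only from within the Spark namespace):

{code:scala}
import org.apache.spark.util.collection.OpenHashSet

// OpenHashSet is @specialized for Long/Int keys, so they are stored
// unboxed, which matches the reduced heap usage in the benchmarks.
val openSet = new OpenHashSet[Long]()
openSet.add(1L)
openSet.add(2L)

def evalInSetSpecialized(value: Long): Boolean =
  openSet.contains(value)
{code}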

See the attached screenshot of what I experienced while testing.

  was:
The {{InSet}} expression was introduced in SPARK-3711 to avoid O(n) time complexity in the {{In}} expression. As {{InSet}} relies on Scala {{immutable.Set}}, it introduces expensive autoboxing. As a consequence, {{InSet}} can be significantly slower than {{In}} even for 100+ values.

We need to find an approach to optimizing {{InSet}} expressions that avoids the cost of autoboxing.

There are a few approaches that we can use:
 * Collections for primitive values (e.g., FastUtil, HPPC)
 * Type specialization in Scala (would it even work for code gen in Spark?)

I tried to use {{OpenHashSet}}, which is already available in Spark and uses type specialization. However, I did not manage to avoid autoboxing. On the other hand, FastUtil did work, and I saw a substantial improvement in performance.

See the attached screenshot of what I experienced while testing.
 


> Optimize InSet expression
> -------------------------
>
>                 Key: SPARK-26204
>                 URL: https://issues.apache.org/jira/browse/SPARK-26204
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Anton Okolnychyi
>            Priority: Major
>         Attachments: heap size.png
>
>
> The {{InSet}} expression was introduced in SPARK-3711 to avoid O(n) time complexity in the {{In}} expression. As {{InSet}} relies on Scala {{immutable.Set}}, it introduces expensive autoboxing. As a consequence, {{InSet}} can be significantly slower than {{In}} even for 100+ values.
> We need to find an approach to optimizing {{InSet}} expressions that avoids the cost of autoboxing.
> There are a few approaches that we can use:
> * Collections for primitive values (e.g., FastUtil, HPPC)
> * Type specialization in Scala (e.g., OpenHashSet in Spark)
> According to my local benchmarks, {{OpenHashSet}}, which is already available in Spark and uses type specialization, can significantly reduce the memory footprint. However, it slows down the computation, even compared to the built-in Scala sets. On the other hand, FastUtil and HPPC worked well and gave a substantial performance improvement. So, it makes sense to evaluate primitive collections.
> See the attached screenshot of what I experienced while testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org