Posted to issues@spark.apache.org by "mathieu longtin (Jira)" <ji...@apache.org> on 2021/11/01 20:02:00 UTC

[jira] [Created] (SPARK-37185) DataFrame.take() only uses one worker

mathieu longtin created SPARK-37185:
---------------------------------------

             Summary: DataFrame.take() only uses one worker
                 Key: SPARK-37185
                 URL: https://issues.apache.org/jira/browse/SPARK-37185
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.2.0, 3.1.1
         Environment: CentOS 7
            Reporter: mathieu longtin


Say you have a query:
{code:python}
>>> df = spark.sql("select * from mytable where x = 99"){code}
Now, out of billions of rows, there are only ten rows where x is 99.

If I do:
{code:python}
>>> df.limit(10).collect()
[Stage 1:>      (0 + 1) / 1]{code}
It only uses one worker. This takes a really long time, since a single CPU ends up reading through the billions of rows.
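For reference, the physical plan for this path can be inspected with explain() (standard DataFrame API); a quick check, assuming the same df as above:
{code:python}
>>> # Show the physical plan Spark picks for the DataFrame limit/collect path
>>> df.limit(10).explain(){code}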

However, if I do this:
{code:python}
>>> df.limit(10).rdd.collect()
[Stage 1:>      (0 + 10) / 22]{code}
All the workers are running.
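A self-contained sketch of the comparison, with a synthetic stand-in for mytable (the table name, row counts, and partition count here are made up for illustration):
{code:python}
# Synthetic stand-in for "mytable": many rows over many partitions,
# only a handful of which satisfy x = 99.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
big = (spark.range(0, 100_000_000, 1, numPartitions=22)
       .withColumn("x", F.when(F.col("id") % 10_000_000 == 0, F.lit(99)).otherwise(F.lit(0))))
big.createOrReplaceTempView("mytable")

df = spark.sql("select * from mytable where x = 99")

df.limit(10).collect()      # DataFrame path: runs as a single task in the report above
df.limit(10).rdd.collect()  # RDD path: tasks are scheduled on all partitions{code}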

I think there's an optimization issue with DataFrame.take(...).

This did not use to be an issue, but I'm not sure whether the last version where it worked correctly was 3.0 or 2.4.
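If it helps triage: the take()/collect-with-limit path appears to fetch partitions incrementally, and there is a config, spark.sql.limit.scaleUpFactor, that controls how quickly it ramps up when the first attempt doesn't return enough rows. Raising it is only a guess at a mitigation, not a fix:
{code:python}
>>> # Guess at a mitigation (not a fix): ramp up the incremental take more aggressively
>>> spark.conf.set("spark.sql.limit.scaleUpFactor", 16)
>>> df.limit(10).collect(){code}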


