Posted to issues@spark.apache.org by "Boyang Jerry Peng (Jira)" <ji...@apache.org> on 2022/01/20 19:20:00 UTC

[jira] [Created] (SPARK-37973) Directly call super.getDefaultReadLimit when Scala issue 12523 is fixed

Boyang Jerry Peng created SPARK-37973:
-----------------------------------------

             Summary: Directly call super.getDefaultReadLimit when Scala issue 12523 is fixed
                 Key: SPARK-37973
                 URL: https://issues.apache.org/jira/browse/SPARK-37973
             Project: Spark
          Issue Type: Task
          Components: Spark Core
    Affects Versions: 3.2.0
            Reporter: Boyang Jerry Peng


Regarding [https://github.com/apache/spark/pull/35238], and more specifically these lines:

[https://github.com/apache/spark/pull/35238/files#diff-c9def1b07e12775929ebc58e107291495a48461f516db75acc4af4b6ce8b4dc7R106]

[https://github.com/apache/spark/pull/35238/files#diff-1ab03f750a3cbd95b7a84bb0c6bb6c6600c79dcb20ea2b43dd825d9aedab9656R140]

```scala
maxOffsetsPerTrigger.map(ReadLimit.maxRows).getOrElse(super.getDefaultReadLimit)
```

 

needed to be changed to:



```scala
maxOffsetsPerTrigger.map(ReadLimit.maxRows).getOrElse(ReadLimit.allAvailable())
```

 

This change is necessary because of a bug in the Scala compiler, documented here:

[https://github.com/scala/bug/issues/12523]

Once that bug is fixed, this change can be reverted, i.e. we can go back to calling `super.getDefaultReadLimit`.
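
For reference, a minimal, self-contained sketch of the workaround shape (with hypothetical, simplified stand-ins for `ReadLimit` and `SupportsAdmissionControl`; in Spark the latter is a Java interface whose default method is what reportedly trips the compiler bug):

```scala
// Hypothetical, simplified stand-ins for illustration only.
trait ReadLimit
object ReadLimit {
  def allAvailable(): ReadLimit = new ReadLimit {}
  def maxRows(rows: Long): ReadLimit = new ReadLimit {}
}

// In Spark this is a Java interface whose default getDefaultReadLimit
// returns ReadLimit.allAvailable().
trait SupportsAdmissionControl {
  def getDefaultReadLimit: ReadLimit = ReadLimit.allAvailable()
}

class ExampleMicroBatchStream(maxOffsetsPerTrigger: Option[Long])
    extends SupportsAdmissionControl {
  // Workaround: inline the default read limit instead of calling
  // super.getDefaultReadLimit, which runs into scala/bug#12523.
  override def getDefaultReadLimit: ReadLimit =
    maxOffsetsPerTrigger.map(ReadLimit.maxRows).getOrElse(ReadLimit.allAvailable())
}
```

Once the compiler fix lands, the override can simply delegate to `super.getDefaultReadLimit` again.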


