Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2022/03/29 00:45:00 UTC

[jira] [Commented] (SPARK-38679) Expose the number partitions in a stage to TaskContext

    [ https://issues.apache.org/jira/browse/SPARK-38679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17513742#comment-17513742 ] 

Apache Spark commented on SPARK-38679:
--------------------------------------

User 'vkorukanti' has created a pull request for this issue:
https://github.com/apache/spark/pull/35995

> Expose the number partitions in a stage to TaskContext
> ------------------------------------------------------
>
>                 Key: SPARK-38679
>                 URL: https://issues.apache.org/jira/browse/SPARK-38679
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.2.1
>            Reporter: Venki Korukanti
>            Priority: Major
>
> Add a new API that exposes, via TaskContext, the total number of partitions in the stage the task belongs to, so that a task knows what fraction of the computation it is performing.
> With this extra information, users can also generate unique 32-bit int IDs as shown below, rather than using `monotonically_increasing_id`, which generates 64-bit long IDs.
>  
> {code:scala}
> rdd.mapPartitions { rowsIter =>
>   val partitionId = TaskContext.get().partitionId()
>   val numPartitions = TaskContext.get().numPartitions()
>   var i = 0
>   rowsIter.map { row =>
>     val rowId = partitionId + i * numPartitions
>     i += 1
>     (rowId, row)
>   }
> }
> {code}
>  
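For reference, below is a minimal, self-contained sketch (not part of the original issue or PR) that runs the ID scheme above end to end and checks that the generated IDs are unique. It assumes a Spark build that includes this change, i.e. one where TaskContext.get().numPartitions() is available; the object name, partition count, and data are made up for illustration.

{code:scala}
import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

object NumPartitionsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[4]")
      .appName("numPartitions-demo")
      .getOrCreate()

    // Spread 1000 rows across 8 partitions.
    val rdd = spark.sparkContext.parallelize(1 to 1000, 8)

    // Same scheme as in the issue description: interleave per-partition
    // counters using the stage's total partition count.
    val withIds = rdd.mapPartitions { rowsIter =>
      val partitionId = TaskContext.get().partitionId()
      val numPartitions = TaskContext.get().numPartitions()  // assumes this change is present
      var i = 0
      rowsIter.map { row =>
        val rowId = partitionId + i * numPartitions  // unique while total rows fit in an Int
        i += 1
        (rowId, row)
      }
    }

    // Sanity check: every generated 32-bit ID is distinct across partitions.
    val ids = withIds.keys.collect()
    assert(ids.distinct.length == ids.length, "row IDs should be unique")

    spark.stop()
  }
}
{code}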



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org