Posted to commits@spark.apache.org by do...@apache.org on 2019/07/18 20:16:42 UTC
[spark] branch branch-2.3 updated: [SPARK-28430][UI] Fix stage table rendering when some tasks' metrics are missing
This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-2.3 by this push:
new e466662 [SPARK-28430][UI] Fix stage table rendering when some tasks' metrics are missing
e466662 is described below
commit e466662d5c08e2030b727468001561ab4c21078b
Author: Josh Rosen <ro...@gmail.com>
AuthorDate: Thu Jul 18 13:15:39 2019 -0700
[SPARK-28430][UI] Fix stage table rendering when some tasks' metrics are missing
## What changes were proposed in this pull request?
The Spark UI's stages table misrenders the input/output metrics columns when some tasks are missing input metrics. See the screenshot below for an example of the problem:
![image](https://user-images.githubusercontent.com/50748/61420042-a3abc100-a8b5-11e9-8a92-7986563ee712.png)
This happens because those columns are defined as
```scala
{if (hasInput(stage)) {
  metricInfo(task) { m =>
    ...
    <td>....</td>
  }
}}
```
where `metricInfo` renders the node returned by the closure when metrics are defined and returns `Nil` when they are not. If a task's metrics are undefined, no `<td></td>` tag is emitted at all, so the later cells in that row shift left and the columns become misaligned, as shown in the screenshot.
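The effect can be sketched without the UI machinery. The snippet below is a hypothetical, simplified model (the names `oldCell`/`newCell` and the string-based cells are illustrative, not Spark code); it shows why a cell whose `<td>` lives inside the metrics closure disappears entirely when metrics are absent, while wrapping the closure in an unconditional `<td>` keeps the row's cell count stable:

```scala
// Simplified stand-in for the real metricInfo: apply the closure when
// metrics exist, otherwise render nothing (Nil).
def metricInfo[T](metrics: Option[T])(f: T => String): Seq[String] =
  metrics.map(f).toSeq

// Old approach: the <td> is produced inside the closure, so a task
// without metrics contributes no cell and later columns shift left.
def oldCell(bytesRead: Option[Long]): Seq[String] =
  metricInfo(bytesRead) { m => s"<td>$m</td>" }

// Fixed approach: the <td> is emitted unconditionally; only its
// contents come from the closure, so the cell is empty but present.
def newCell(bytesRead: Option[Long]): String =
  s"<td>${metricInfo(bytesRead)(_.toString).mkString}</td>"

oldCell(None) // Nil: the row is one cell short
newCell(None) // "<td></td>": an empty cell preserves alignment
```

In the real patch the same shape is achieved with Scala XML literals and `Unparsed`, as the diff below shows.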
To fix this, this patch changes this to
```scala
{if (hasInput(stage)) {
  <td>{
    metricInfo(task) { m =>
      ...
      Unparsed(...)
    }
  }</td>
}}
```
which is the idiom already used for the shuffle read / write columns.
## How was this patch tested?
No new tests were added. I'm arguing for correctness because the modifications are consistent with the rendering methods that already work correctly for other columns.
Closes #25183 from JoshRosen/joshrosen/fix-task-table-with-partial-io-metrics.
Authored-by: Josh Rosen <ro...@gmail.com>
Signed-off-by: Dongjoon Hyun <dh...@apple.com>
(cherry picked from commit 3776fbdfdeac07d191f231b29cf906cabdc6de3f)
Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
.../scala/org/apache/spark/ui/jobs/StagePage.scala | 24 +++++++++++++---------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
index 73e78aa..d962b96 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
@@ -863,18 +863,22 @@ private[ui] class TaskPagedTable(
<td>{accumulatorsInfo(task)}</td>
}}
{if (hasInput(stage)) {
- metricInfo(task) { m =>
- val bytesRead = Utils.bytesToString(m.inputMetrics.bytesRead)
- val records = m.inputMetrics.recordsRead
- <td>{bytesRead} / {records}</td>
- }
+ <td>{
+ metricInfo(task) { m =>
+ val bytesRead = Utils.bytesToString(m.inputMetrics.bytesRead)
+ val records = m.inputMetrics.recordsRead
+ Unparsed(s"$bytesRead / $records")
+ }
+ }</td>
}}
{if (hasOutput(stage)) {
- metricInfo(task) { m =>
- val bytesWritten = Utils.bytesToString(m.outputMetrics.bytesWritten)
- val records = m.outputMetrics.recordsWritten
- <td>{bytesWritten} / {records}</td>
- }
+ <td>{
+ metricInfo(task) { m =>
+ val bytesWritten = Utils.bytesToString(m.outputMetrics.bytesWritten)
+ val records = m.outputMetrics.recordsWritten
+ Unparsed(s"$bytesWritten / $records")
+ }
+ }</td>
}}
{if (hasShuffleRead(stage)) {
<td class={TaskDetailsClassNames.SHUFFLE_READ_BLOCKED_TIME}>