Posted to commits@spark.apache.org by an...@apache.org on 2016/02/26 02:04:47 UTC
spark git commit: [SPARK-13501] Remove use of Guava Stopwatch
Repository: spark
Updated Branches:
refs/heads/master 7a6ee8a8f -> f2cfafdfe
[SPARK-13501] Remove use of Guava Stopwatch
Our nightly doc snapshot builds are failing due to some issue involving the Guava Stopwatch constructor:
```
[error] /home/jenkins/workspace/spark-master-docs/spark/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala:496: constructor Stopwatch in class Stopwatch cannot be accessed in class CoarseMesosSchedulerBackend
[error] val stopwatch = new Stopwatch()
[error] ^
```
This Stopwatch constructor was deprecated in newer versions of Guava (https://github.com/google/guava/commit/fd0cbc2c5c90e85fb22c8e86ea19630032090943), and a classpath issue affecting Unidoc may be pulling in one of those newer versions, triggering these compilation failures.
To work around this issue, this patch removes this use of Stopwatch, which is not used anywhere else in the Spark codebase.
Author: Josh Rosen <jo...@databricks.com>
Closes #11376 from JoshRosen/remove-stopwatch.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f2cfafdf
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f2cfafdf
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f2cfafdf
Branch: refs/heads/master
Commit: f2cfafdfe0f4b18f31bc63969e2abced1a66e896
Parents: 7a6ee8a
Author: Josh Rosen <jo...@databricks.com>
Authored: Thu Feb 25 17:04:43 2016 -0800
Committer: Andrew Or <an...@databricks.com>
Committed: Thu Feb 25 17:04:43 2016 -0800
----------------------------------------------------------------------
.../scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/f2cfafdf/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
index f803cc7..622f361 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
@@ -19,14 +19,12 @@ package org.apache.spark.scheduler.cluster.mesos
import java.io.File
import java.util.{Collections, List => JList}
-import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantLock
import scala.collection.JavaConverters._
import scala.collection.mutable
import scala.collection.mutable.{Buffer, HashMap, HashSet}
-import com.google.common.base.Stopwatch
import org.apache.mesos.{Scheduler => MScheduler, SchedulerDriver}
import org.apache.mesos.Protos.{TaskInfo => MesosTaskInfo, _}
@@ -493,12 +491,11 @@ private[spark] class CoarseMesosSchedulerBackend(
// Wait for executors to report done, or else mesosDriver.stop() will forcefully kill them.
// See SPARK-12330
- val stopwatch = new Stopwatch()
- stopwatch.start()
+ val startTime = System.nanoTime()
// slaveIdsWithExecutors has no memory barrier, so this is eventually consistent
while (numExecutors() > 0 &&
- stopwatch.elapsed(TimeUnit.MILLISECONDS) < shutdownTimeoutMS) {
+ System.nanoTime() - startTime < shutdownTimeoutMS * 1000L * 1000L) {
Thread.sleep(100)
}
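The nanoTime-based timeout loop introduced by the patch can be sketched in isolation. This is a minimal illustration, not code from the Spark tree: `waitUntilDone` and its parameters are hypothetical names, and the helper uses `TimeUnit.MILLISECONDS.toNanos` for the same millisecond-to-nanosecond conversion the patch writes as `shutdownTimeoutMS * 1000L * 1000L`:

```scala
import java.util.concurrent.TimeUnit

object TimeoutLoopSketch {
  // Illustrative helper: poll `stillBusy` every `pollMs` milliseconds until it
  // returns false or `timeoutMs` milliseconds elapse. System.nanoTime() is a
  // monotonic clock, so the difference of two readings is safe to compare
  // against a duration even if the wall clock is adjusted mid-wait.
  // Returns true if the condition cleared before the deadline.
  def waitUntilDone(stillBusy: () => Boolean,
                    timeoutMs: Long,
                    pollMs: Long = 100L): Boolean = {
    val startTime = System.nanoTime()
    val timeoutNanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs)
    while (stillBusy() && System.nanoTime() - startTime < timeoutNanos) {
      Thread.sleep(pollMs)
    }
    !stillBusy()
  }

  def main(args: Array[String]): Unit = {
    // A flag that clears after a short delay, standing in for numExecutors() > 0.
    @volatile var executorsRunning = true
    val worker = new Thread(() => { Thread.sleep(50); executorsRunning = false })
    worker.start()
    val finished = waitUntilDone(() => executorsRunning, timeoutMs = 2000L, pollMs = 10L)
    println(s"finished before timeout: $finished")
    worker.join()
  }
}
```

Unlike Guava's Stopwatch, this needs no third-party dependency, which sidesteps the classpath problem entirely.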
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org