Posted to commits@spark.apache.org by rx...@apache.org on 2013/11/22 03:12:23 UTC

[1/2] git commit: TimeTrackingOutputStream should pass on calls to close() and flush().

Updated Branches:
  refs/heads/master 2fead510f -> f20093c3a


TimeTrackingOutputStream should pass on calls to close() and flush().

Without this fix you get a huge number of open files when running
shuffles.


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/53b94ef2
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/53b94ef2
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/53b94ef2

Branch: refs/heads/master
Commit: 53b94ef2f5179bdbebe70883b2593b569518e77e
Parents: 4ba3267
Author: Patrick Wendell <pw...@gmail.com>
Authored: Thu Nov 21 17:17:06 2013 -0800
Committer: Patrick Wendell <pw...@gmail.com>
Committed: Thu Nov 21 17:20:15 2013 -0800

----------------------------------------------------------------------
 .../main/scala/org/apache/spark/storage/BlockObjectWriter.scala    | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/53b94ef2/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala b/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala
index 32d2dd0..0a32df7 100644
--- a/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala
+++ b/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala
@@ -101,6 +101,8 @@ class DiskBlockObjectWriter(
     def write(i: Int): Unit = callWithTiming(out.write(i))
     override def write(b: Array[Byte]) = callWithTiming(out.write(b))
     override def write(b: Array[Byte], off: Int, len: Int) = callWithTiming(out.write(b, off, len))
+    override def close() = out.close()
+    override def flush() = out.flush()
   }
 
   private val syncWrites = System.getProperty("spark.shuffle.sync", "false").toBoolean
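The bug fixed above is a general pitfall with `java.io.OutputStream` wrappers: the base class provides no-op defaults for `close()` and `flush()`, so a wrapper that only overrides the `write` methods silently leaks the underlying file handle. A hedged, standalone sketch of the pattern (class and field names here are illustrative, not Spark's actual `TimeTrackingOutputStream`):

```scala
import java.io.{ByteArrayOutputStream, OutputStream}

// Minimal sketch of a time-tracking OutputStream wrapper.
// Every call, including close() and flush(), must be forwarded to the
// wrapped stream; otherwise the underlying handle is never released.
class TimingWrapper(out: OutputStream) extends OutputStream {
  var writeTimeNanos: Long = 0L

  // Measure the wall-clock time spent inside the wrapped call.
  private def timed[A](f: => A): A = {
    val start = System.nanoTime()
    val result = f
    writeTimeNanos += System.nanoTime() - start
    result
  }

  override def write(i: Int): Unit = timed(out.write(i))
  override def write(b: Array[Byte]): Unit = timed(out.write(b))
  override def write(b: Array[Byte], off: Int, len: Int): Unit =
    timed(out.write(b, off, len))

  // The fix demonstrated in the diff: delegate close() and flush()
  // instead of inheriting OutputStream's no-op defaults.
  override def close(): Unit = out.close()
  override def flush(): Unit = out.flush()
}
```

Without the last two overrides, `wrapper.close()` would return without closing the wrapped stream, which is exactly how the open-file buildup during shuffles arose.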


[2/2] git commit: Merge pull request #196 from pwendell/master

Merge pull request #196 from pwendell/master

TimeTrackingOutputStream should pass on calls to close() and flush().

Without this fix you get a huge number of open files when running shuffles.


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/f20093c3
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/f20093c3
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/f20093c3

Branch: refs/heads/master
Commit: f20093c3afa68439b1c9010de189d497df787c2a
Parents: 2fead51 53b94ef
Author: Reynold Xin <rx...@apache.org>
Authored: Fri Nov 22 10:12:13 2013 +0800
Committer: Reynold Xin <rx...@apache.org>
Committed: Fri Nov 22 10:12:13 2013 +0800

----------------------------------------------------------------------
 .../main/scala/org/apache/spark/storage/BlockObjectWriter.scala    | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/f20093c3/core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala
----------------------------------------------------------------------