Posted to commits@uniffle.apache.org by ro...@apache.org on 2022/11/28 03:55:02 UTC

[incubator-uniffle] branch branch-0.6 updated (1a7a3201 -> 22c780e1)

This is an automated email from the ASF dual-hosted git repository.

roryqi pushed a change to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


    from 1a7a3201 [BUG] Fix incorrect spark metrics (#324)
     new 52c727f3 [ISSUE-364] Fix `indexWriter` don't close if exception thrown when close dataWriter (#349)
     new 22c780e1 [ISSUE-228] Fix the problem of protobuf-java incorrect dependency at compile time (#362)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 client-spark/spark3/pom.xml                        |  5 +++++
 .../handler/impl/HdfsShuffleWriteHandler.java      | 24 +++++++---------------
 2 files changed, 12 insertions(+), 17 deletions(-)


[incubator-uniffle] 02/02: [ISSUE-228] Fix the problem of protobuf-java incorrect dependency at compile time (#362)


roryqi pushed a commit to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 22c780e1518f26eea4b78c680f7d03bbd54956ae
Author: yanli <ha...@msn.com>
AuthorDate: Mon Nov 28 11:13:41 2022 +0800

    [ISSUE-228] Fix the problem of protobuf-java incorrect dependency at compile time (#362)
    
    ### What changes were proposed in this pull request?
    We specified the version of `protobuf-java` explicitly in the `client-spark/spark3/pom.xml` file.
    
    ### Why are the changes needed?
    This makes the `protobuf-java` version resolved at compile time explicit and correct. See https://github.com/apache/incubator-uniffle/issues/228
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    We compiled the project in the test environment and verified the correctness of the resulting `rss-client-spark3-0.6.0-shaded.jar` package.
---
 client-spark/spark3/pom.xml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/client-spark/spark3/pom.xml b/client-spark/spark3/pom.xml
index 858d2968..fa7e6365 100644
--- a/client-spark/spark3/pom.xml
+++ b/client-spark/spark3/pom.xml
@@ -34,6 +34,11 @@
     <name>Apache Uniffle Client (Spark 3)</name>
 
     <dependencies>
+        <dependency>
+            <groupId>com.google.protobuf</groupId>
+            <artifactId>protobuf-java</artifactId>
+            <version>${protobuf.version}</version>
+        </dependency>
         <dependency>
             <groupId>org.apache.spark</groupId>
             <artifactId>spark-core_${scala.binary.version}</artifactId>

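For context, the patch pins the version directly in the module's `<dependencies>` section. A hedged alternative sketch (not what the patch does, and assuming `protobuf.version` is defined in the parent POM) is to manage the version centrally in the parent's `dependencyManagement`, so every child module inherits the same pin:

```xml
<!-- Hypothetical parent-POM fragment: children can then declare
     protobuf-java without a <version> and still resolve consistently. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>${protobuf.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```

Either way, the version Maven actually resolves can be confirmed with `mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java`.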

[incubator-uniffle] 01/02: [ISSUE-364] Fix `indexWriter` don't close if exception thrown when close dataWriter (#349)


roryqi pushed a commit to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 52c727f38be9cf9fc7f1e530b5ead9e0206f12e6
Author: xianjingfeng <58...@qq.com>
AuthorDate: Tue Nov 22 23:28:29 2022 +0800

    [ISSUE-364] Fix `indexWriter` don't close if exception thrown when close dataWriter (#349)
    
    ### What changes were proposed in this pull request?
    Fix `indexWriter` not being closed if an exception is thrown while closing `dataWriter`.
    
    ### Why are the changes needed?
    It fixes a resource leak: in the old `finally` block, `dataWriter` was closed first, so if `dataWriter.close()` threw, `indexWriter` was never closed.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    No new test was added; the change is a straightforward switch from manual cleanup in a `finally` block to try-with-resources.
---
 .../handler/impl/HdfsShuffleWriteHandler.java      | 24 +++++++---------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/storage/src/main/java/org/apache/uniffle/storage/handler/impl/HdfsShuffleWriteHandler.java b/storage/src/main/java/org/apache/uniffle/storage/handler/impl/HdfsShuffleWriteHandler.java
index af59ecbd..36240cae 100644
--- a/storage/src/main/java/org/apache/uniffle/storage/handler/impl/HdfsShuffleWriteHandler.java
+++ b/storage/src/main/java/org/apache/uniffle/storage/handler/impl/HdfsShuffleWriteHandler.java
@@ -104,18 +104,15 @@ public class HdfsShuffleWriteHandler implements ShuffleWriteHandler {
   public void write(
       List<ShufflePartitionedBlock> shuffleBlocks) throws IOException, IllegalStateException {
     final long start = System.currentTimeMillis();
-    HdfsFileWriter dataWriter = null;
-    HdfsFileWriter indexWriter = null;
     writeLock.lock();
     try {
-      try {
-        final long ss = System.currentTimeMillis();
-        // Write to HDFS will be failed with lease problem, and can't write the same file again
-        // change the prefix of file name if write failed before
-        String dataFileName = ShuffleStorageUtils.generateDataFileName(fileNamePrefix + "_" + failTimes);
-        String indexFileName = ShuffleStorageUtils.generateIndexFileName(fileNamePrefix + "_" + failTimes);
-        dataWriter = createWriter(dataFileName);
-        indexWriter = createWriter(indexFileName);
+      final long ss = System.currentTimeMillis();
+      // Write to HDFS will be failed with lease problem, and can't write the same file again
+      // change the prefix of file name if write failed before
+      String dataFileName = ShuffleStorageUtils.generateDataFileName(fileNamePrefix + "_" + failTimes);
+      String indexFileName = ShuffleStorageUtils.generateIndexFileName(fileNamePrefix + "_" + failTimes);
+      try (HdfsFileWriter dataWriter = createWriter(dataFileName);
+           HdfsFileWriter indexWriter = createWriter(indexFileName)) {
         for (ShufflePartitionedBlock block : shuffleBlocks) {
           long blockId = block.getBlockId();
           long crc = block.getCrc();
@@ -134,13 +131,6 @@ public class HdfsShuffleWriteHandler implements ShuffleWriteHandler {
         LOG.warn("Write failed with " + shuffleBlocks.size() + " blocks for " + fileNamePrefix + "_" + failTimes, e);
         failTimes++;
         throw new RuntimeException(e);
-      } finally {
-        if (dataWriter != null) {
-          dataWriter.close();
-        }
-        if (indexWriter != null) {
-          indexWriter.close();
-        }
       }
     } finally {
       writeLock.unlock();
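The guarantee the patch relies on can be illustrated with a minimal, self-contained sketch (the class and names below are hypothetical stand-ins, not Uniffle's real classes): try-with-resources closes resources in reverse declaration order and still closes the remaining ones when an earlier `close()` throws, which is exactly what the removed manual `finally` block failed to do.

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    // Records the order in which resources were closed.
    static final List<String> closed = new ArrayList<>();

    // Stand-in for HdfsFileWriter: optionally fails when closed.
    static class Writer implements AutoCloseable {
        final String name;
        final boolean failOnClose;

        Writer(String name, boolean failOnClose) {
            this.name = name;
            this.failOnClose = failOnClose;
        }

        @Override
        public void close() {
            closed.add(name);
            if (failOnClose) {
                throw new RuntimeException(name + " close failed");
            }
        }
    }

    public static void main(String[] args) {
        try (Writer dataWriter = new Writer("dataWriter", true);
             Writer indexWriter = new Writer("indexWriter", false)) {
            // ... write shuffle blocks ...
        } catch (RuntimeException e) {
            // dataWriter's close failure still propagates here,
            // but indexWriter was already closed (reverse order).
        }
        System.out.println(closed); // [indexWriter, dataWriter]
    }
}
```

With the old hand-rolled `finally`, the equivalent of `dataWriter.close()` throwing would have skipped `indexWriter.close()` entirely; try-with-resources closes both and attaches any secondary close failure as a suppressed exception.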