Posted to common-dev@hadoop.apache.org by "Runping Qi (JIRA)" <ji...@apache.org> on 2008/10/28 17:37:44 UTC

[jira] Created: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible
--------------------------------------------------------------------------------

                 Key: HADOOP-4533
                 URL: https://issues.apache.org/jira/browse/HADOOP-4533
             Project: Hadoop Core
          Issue Type: Bug
    Affects Versions: 0.18.1
            Reporter: Runping Qi



Not sure whether this is considered a bug or expected behavior, but here are the details.

I have a cluster using a build from the hadoop 0.18 branch.
When I tried to use the hadoop 0.18.1 dfs client to load files into it, I got the following exceptions:

hadoop --config ~/test dfs -copyFromLocal gridmix-env /tmp/.
08/10/28 16:23:00 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
08/10/28 16:23:00 INFO dfs.DFSClient: Abandoning block blk_-439926292663595928_1002
08/10/28 16:23:06 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
08/10/28 16:23:06 INFO dfs.DFSClient: Abandoning block blk_5160335053668168134_1002
08/10/28 16:23:12 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
08/10/28 16:23:12 INFO dfs.DFSClient: Abandoning block blk_4168253465442802441_1002
08/10/28 16:23:18 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
08/10/28 16:23:18 INFO dfs.DFSClient: Abandoning block blk_-2631672044886706846_1002
08/10/28 16:23:24 WARN dfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2349)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1735)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1912)

08/10/28 16:23:24 WARN dfs.DFSClient: Error Recovery for block blk_-2631672044886706846_1002 bad datanode[0]
copyFromLocal: Could not get block locations. Aborting...
Exception closing file /tmp/gridmix-env
java.io.IOException: Could not get block locations. Aborting...
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)

This problem has a severe impact on Pig 2.0, since it is pre-packaged with hadoop 0.18.1 and will use the
hadoop 0.18.1 dfs client in its interactions with the hadoop cluster.
That means Pig 2.0 will not work with the to-be-released hadoop 0.18.2.
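The generic "Could not read from stream" above suggests the 0.18.1 client hits end-of-stream when the 0.18-branch datanode refuses a data-transfer request it no longer understands. A minimal sketch of that kind of failure mode (illustrative Java only — the class name, wire format, and version numbers here are invented, not Hadoop source):

```java
import java.io.*;

// Hypothetical sketch (not Hadoop source): how a data-transfer protocol
// version bump can surface as "Could not read from stream" on old clients.
public class VersionMismatchSketch {
    static final int SERVER_VERSION = 15; // assumed server-side wire version

    // Server side: drop requests whose version field does not match.
    static byte[] serve(byte[] request) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(request));
        int clientVersion = in.readShort();
        if (clientVersion != SERVER_VERSION) {
            return new byte[0]; // connection dropped: client sees bare EOF
        }
        ByteArrayOutputStream reply = new ByteArrayOutputStream();
        new DataOutputStream(reply).writeShort(0); // success-style ack
        return reply.toByteArray();
    }

    // Client side: the empty reply reads as EOF, reported only as an I/O error.
    static String createBlock(int clientVersion) {
        try {
            ByteArrayOutputStream req = new ByteArrayOutputStream();
            new DataOutputStream(req).writeShort(clientVersion);
            DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(serve(req.toByteArray())));
            in.readShort(); // throws EOFException on the empty reply
            return "block created";
        } catch (IOException e) {
            return "Could not read from stream";
        }
    }

    public static void main(String[] args) {
        System.out.println(createBlock(14)); // old client: read fails
        System.out.println(createBlock(15)); // matching client: succeeds
    }
}
```

The point is that the client cannot distinguish a version rejection from any other dropped connection — all it observes is a failed read.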




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643385#action_12643385 ] 

Hairong Kuang commented on HADOOP-4533:
---------------------------------------

Konstantin, the junit tests have passed. I will test this patch on a real dfs cluster. For 0.19 and 0.20, we keep the patch to HADOOP-4116. We should open a jira on the "Could not read from stream" problem for 0.19 or 0.20.




[jira] Updated: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-4533:
----------------------------------

    Attachment: balancerRM-b18.patch

Ok, I reverted HADOOP-4116 in branch 18. This patch removes the incompatible change from the HADOOP-4116 patch but keeps the critical code that prevents the Balancer from overusing network bandwidth and avoids the deadlock problem described in HADOOP-4116.
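For context, the bandwidth limiting being kept is a pacing loop of roughly this shape: senders report bytes transferred and sleep once a per-period budget is spent. This is a hypothetical sketch, not the actual balancer throttler — the class name, field names, and 500 ms refill period are invented:

```java
// Illustrative balancer-style bandwidth throttle (hypothetical names, not
// Hadoop's actual class). Callers invoke throttle() after each transfer;
// once the current period's byte budget is exhausted, they block until
// the next period opens and the budget is refilled.
public class BandwidthThrottler {
    private final long bytesPerPeriod; // byte budget per refill period
    private final long periodMillis = 500; // assumed refill period
    private long curPeriodStart = System.currentTimeMillis();
    private long bytesLeft;

    public BandwidthThrottler(long bytesPerSecond) {
        this.bytesPerPeriod = bytesPerSecond * periodMillis / 1000;
        this.bytesLeft = bytesPerPeriod;
    }

    public synchronized void throttle(long numBytes) {
        bytesLeft -= numBytes;
        while (bytesLeft <= 0) {
            long now = System.currentTimeMillis();
            long periodEnd = curPeriodStart + periodMillis;
            if (now < periodEnd) {
                try {
                    wait(periodEnd - now); // stall until the period ends
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            curPeriodStart = System.currentTimeMillis();
            bytesLeft += bytesPerPeriod; // refill the budget
        }
    }

    public static void main(String[] args) {
        BandwidthThrottler t = new BandwidthThrottler(1024); // 1 KB/s cap
        long start = System.currentTimeMillis();
        t.throttle(1024); // one second's worth of bytes
        System.out.println("throttled for ~" + (System.currentTimeMillis() - start) + " ms");
    }
}
```

A short refill period keeps transfers paced smoothly rather than bursting a whole second's budget at once; the key property is that the throttle itself carries no protocol change, so it can be kept without breaking wire compatibility.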




[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643293#action_12643293 ] 

Konstantin Shvachko commented on HADOOP-4533:
---------------------------------------------

It looks like we need 2 patches here:
- for 0.18, the incompatible data transfer protocol change should be removed;
- for 0.19, we need to provide a clear message saying that the data transfer protocols are incompatible, rather than "Could not read from stream".
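The clearer message suggested here amounts to validating the version field up front and raising a descriptive error instead of letting a later read fail generically. A hypothetical sketch (class name, message text, and version numbers are invented, not the actual patch):

```java
import java.io.*;

// Sketch of an up-front version check (illustrative, not the HADOOP-4533
// patch): fail with an explicit incompatibility message instead of a
// bare "Could not read from stream".
public class VersionCheck {
    static final int DATA_TRANSFER_VERSION = 14; // assumed local wire version

    static void checkVersion(DataInputStream in) throws IOException {
        int remote = in.readShort();
        if (remote != DATA_TRANSFER_VERSION) {
            throw new IOException("Data transfer protocol mismatch: "
                + "expected version " + DATA_TRANSFER_VERSION
                + " but peer speaks version " + remote
                + "; client and server releases are incompatible");
        }
    }

    // Helper: run the check against a simulated peer and report the outcome.
    static String describeFailure(int remoteVersion) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            new DataOutputStream(out).writeShort(remoteVersion);
            checkVersion(new DataInputStream(
                new ByteArrayInputStream(out.toByteArray())));
            return "ok";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(describeFailure(15)); // mismatched peer
        System.out.println(describeFailure(14)); // matching peer
    }
}
```

With a check like this, an operator sees immediately that the releases disagree on the wire version, rather than having to infer it from repeated abandoned blocks.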




[jira] Resolved: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang resolved HADOOP-4533.
-----------------------------------

      Resolution: Fixed
    Hadoop Flags: [Reviewed]

I've committed this.

> HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-4533
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4533
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Runping Qi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.18.2
>
>         Attachments: balancerRM_br18.patch
>



[jira] Updated: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Runping Qi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Runping Qi updated HADOOP-4533:
-------------------------------

    Component/s: dfs




[jira] Assigned: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley reassigned HADOOP-4533:
-------------------------------------

    Assignee: Hairong Kuang




[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643613#action_12643613 ] 

Hairong Kuang commented on HADOOP-4533:
---------------------------------------

Unit test passed and here is the ant test-patch result:
     [exec] +1 overall.

     [exec]     +1 @author.  The patch does not contain any @author tags.

     [exec]     +1 tests included.  The patch appears to include 3 new or modified tests.

     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.

     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.



> HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-4533
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4533
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Runping Qi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.18.2
>
>         Attachments: balancerRM_br18.patch
>
>
> Not sure whether this is considered as a bug or is an expected case.
> But here are the details.
> I have a cluster using a build from hadoop 0.18 branch.
> When I tried to use hadoop 0.18.1 dfs client to load files to it, I got the following exceptions:
> hadoop --config ~/test dfs -copyFromLocal gridmix-env /tmp/.
> 08/10/28 16:23:00 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:00 INFO dfs.DFSClient: Abandoning block blk_-439926292663595928_1002
> 08/10/28 16:23:06 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:06 INFO dfs.DFSClient: Abandoning block blk_5160335053668168134_1002
> 08/10/28 16:23:12 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:12 INFO dfs.DFSClient: Abandoning block blk_4168253465442802441_1002
> 08/10/28 16:23:18 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:18 INFO dfs.DFSClient: Abandoning block blk_-2631672044886706846_1002
> 08/10/28 16:23:24 WARN dfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2349)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1735)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1912)
> 08/10/28 16:23:24 WARN dfs.DFSClient: Error Recovery for block blk_-2631672044886706846_1002 bad datanode[0]
> copyFromLocal: Could not get block locations. Aborting...
> Exception closing file /tmp/gridmix-env
> java.io.IOException: Could not get block locations. Aborting...
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
> This problem has a severe impact on Pig 2.0, since it is pre-packaged with hadoop 0.18.1 and uses
> the hadoop 0.18.1 dfs client in its interactions with the hadoop cluster.
> That means that Pig 2.0 will not work with the to-be-released hadoop 0.18.2.
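The repeated "Could not read from stream" messages are consistent with the datanode dropping the connection during the data-transfer handshake after a protocol version bump. A minimal Python sketch of that failure mode (all names and version numbers here are illustrative, not Hadoop's actual code; the real constant is DATA_TRANSFER_VERSION in the Java sources):

```python
import socket
import struct
import threading

# Hypothetical version numbers, chosen only to illustrate a mismatch.
SERVER_VERSION = 14  # e.g. a datanode built from the 0.18 branch
CLIENT_VERSION = 13  # e.g. a 0.18.1 DFSClient

def datanode(srv):
    """Accept one connection and enforce the protocol version."""
    conn, _ = srv.accept()
    (version,) = struct.unpack(">H", conn.recv(2))
    if version != SERVER_VERSION:
        conn.close()          # mismatch: hang up without replying
        return
    conn.sendall(b"\x00")     # success ack
    conn.close()

def create_block_output_stream(client_version):
    """Toy model of the client side of createBlockOutputStream."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=datanode, args=(srv,))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    cli.sendall(struct.pack(">H", client_version))
    ack = cli.recv(1)         # EOF (b"") if the server hung up on us
    t.join()
    cli.close()
    srv.close()
    if not ack:
        raise IOError("Could not read from stream")
    return ack
```

With mismatched versions the server hangs up without replying, so the client's first read hits EOF and surfaces as the generic "Could not read from stream" seen in the log above.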

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Robert Chansler (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler updated HADOOP-4533:
------------------------------------

         Priority: Blocker  (was: Major)
    Fix Version/s: 0.18.2



[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643284#action_12643284 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-4533:
------------------------------------------------

> ...  based on Nicholas' testing.

To reproduce this:
- start a 0.18.1 cluster
- write a file with a 0.18.2 client, e.g. {{hadoop fs -put src dst}}
- it will fail with error messages similar to those shown in the description

It does not fail if the patch from HADOOP-4116 is reverted.



[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643279#action_12643279 ] 

Owen O'Malley commented on HADOOP-4533:
---------------------------------------

This seems to have been caused by HADOOP-4116, based on Nicholas' testing.



[jira] Commented: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643380#action_12643380 ] 

Konstantin Shvachko commented on HADOOP-4533:
---------------------------------------------

+1
This looks reasonable for 0.18. It fixes the semaphore contention problem and keeps the data transfer protocol compatible across the 0.18 releases.
We need to run tests with this patch.
For 0.19 and 0.20 it is better to open another jira.



[jira] Updated: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-4533:
----------------------------------

    Attachment:     (was: balancerRM-b18.patch)



[jira] Updated: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-4533:
----------------------------------

    Attachment: balancerRM_br18.patch



[jira] Issue Comment Edited: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643293#action_12643293 ] 

shv edited comment on HADOOP-4533 at 10/28/08 11:23 AM:
------------------------------------------------------------------------

It looks like we need 2 patches here:
- for 0.18, the incompatible change to the data transfer protocol should be removed.
- for 0.19, we need to provide a clear message saying that the data transfer protocols are incompatible, rather than "Could not read from stream".

      was (Author: shv):
    It looks like we need 2 patches here:
- for 0.18 the incompatible data transfer protocol should be removed.
- for 0.19 we need to provide a clear message saying that data transfer protocols are incompatible rather than "Could not read from stream"
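The clearer message proposed for 0.19 might look like this Python sketch (illustrative only; the actual fix would live in the Java DFSClient, and the function name here is hypothetical):

```python
import socket

def read_handshake_ack(conn, client_version):
    """Sketch: translate a bare EOF during the data-transfer handshake
    into an actionable diagnostic instead of a generic read failure."""
    data = conn.recv(1)
    if not data:
        raise IOError(
            "datanode closed the connection during the handshake; this "
            "usually means its data transfer protocol version differs "
            "from this client's (version %d); check that client and "
            "server releases are compatible" % client_version)
    return data
```

The point is that an EOF at this exact step is a strong signal of a version mismatch, so the error text can say so directly.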
  