Posted to common-dev@hadoop.apache.org by "Amar Kamat (JIRA)" <ji...@apache.org> on 2007/12/17 19:00:47 UTC

[jira] Issue Comment Edited: (HADOOP-2419) HADOOP-1965 breaks nutch

    [ https://issues.apache.org/jira/browse/HADOOP-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12552325 ] 

amar_kamat edited comment on HADOOP-2419 at 12/17/07 9:59 AM:
--------------------------------------------------------------

Please check the new patch [https://issues.apache.org/jira/secure/attachment/12371797/HADOOP-2419.patch]. Earlier, the calls to {{MapTask.collect()}} were not thread-safe; the patch makes the call to {{collect()}} thread-safe. Please let us know whether this patch works for you.
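
As a rough illustration of the kind of change involved (this is only a sketch; the class and method names below are hypothetical and the real change is in the attached patch), serializing concurrent writers around the collector could look like this:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch only: a toy collector showing how concurrent collect() calls can be
 * serialized with a single lock so that a background spill never observes a
 * half-written record. Names are hypothetical, not taken from the Hadoop tree.
 */
public class ThreadSafeCollectorSketch<K, V> {

  // In-memory buffer shared by the map threads and the spill thread.
  private final List<Object[]> buffer = new ArrayList<Object[]>();

  /** Map threads may call this concurrently; the lock keeps each record atomic. */
  public synchronized void collect(K key, V value) throws IOException {
    buffer.add(new Object[] { key, value });
  }

  /** The spill thread drains the buffer under the same lock. */
  public synchronized List<Object[]> drainForSpill() {
    List<Object[]> snapshot = new ArrayList<Object[]>(buffer);
    buffer.clear();
    return snapshot;
  }
}
{code}

A single coarse lock like this keeps every record atomic with respect to the spill; finer-grained schemes are possible, but they need to give the same guarantee.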

      was (Author: amar_kamat):
    Please check the new patch [https://issues.apache.org/jira/secure/attachment/12371775/HADOOP-2419.patch]. Earlier, the calls to {{MapTask.collect()}} were not thread-safe; the patch makes the call to {{collect()}} thread-safe. Please let us know whether this patch works for you.
  
> HADOOP-1965 breaks nutch
> ------------------------
>
>                 Key: HADOOP-2419
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2419
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.0
>            Reporter: Paul Saab
>            Assignee: Amar Kamat
>         Attachments: jobtasks.jsp.html, MapRunnableTest.java
>
>
> When running nutch on trunk, the fetch step cannot complete and the following exceptions are raised:
> java.io.EOFException
>         at java.io.DataInputStream.readFully(DataInputStream.java:180)
>         at org.apache.nutch.protocol.Content.readFields(Content.java:158)
>         at org.apache.nutch.util.GenericWritableConfigurable.readFields(GenericWritableConfigurable.java:38)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spill(MapTask.java:536)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpillToDisk(MapTask.java:474)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$100(MapTask.java:248)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$1.run(MapTask.java:413)
> Exception in thread "SortSpillThread" java.lang.NegativeArraySizeException
>      at org.apache.hadoop.io.Text.readString(Text.java:388)
>      at org.apache.nutch.metadata.Metadata.readFields(Metadata.java:243)
>      at org.apache.nutch.protocol.Content.readFields(Content.java:151)
>      at org.apache.nutch.util.GenericWritableConfigurable.readFields(GenericWritableConfigurable.java:38)
>      at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spill(MapTask.java:536)
>      at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpillToDisk(MapTask.java:474)
>      at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$100(MapTask.java:248)
>      at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$1.run(MapTask.java:413)
> After reverting HADOOP-1965, nutch works fine.
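
For context, the pattern that exercises this path is several threads writing through one shared output collector, as nutch's multithreaded fetcher does (and as the attached MapRunnableTest.java presumably demonstrates). The driver below is only a hypothetical sketch of that calling pattern, reusing the toy ThreadSafeCollectorSketch from the comment above; it is not the attached test:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical driver (not the attached MapRunnableTest.java): ten worker
 * threads all write through one shared collector, the same calling pattern a
 * multi-threaded map runner such as nutch's fetcher uses.
 */
public class ConcurrentCollectSketch {

  public static void main(String[] args) throws InterruptedException {
    final ThreadSafeCollectorSketch<String, String> out =
        new ThreadSafeCollectorSketch<String, String>();

    ExecutorService pool = Executors.newFixedThreadPool(10);
    for (int t = 0; t < 10; t++) {
      final int id = t;
      pool.submit(new Runnable() {
        public void run() {
          try {
            for (int i = 0; i < 1000; i++) {
              // Every thread hits the same collector, just as fetcher threads
              // all call output.collect(...) on one shared OutputCollector.
              out.collect("url-" + id + "-" + i, "content");
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
    System.out.println("records collected: " + out.drainForSpill().size());
  }
}
{code}

With an unsynchronized collector, interleaved writes can leave partial records in the spill buffer, which would surface as deserialization failures such as the EOFException and NegativeArraySizeException in the traces above.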

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.