Posted to dev@hive.apache.org by "binlijin (Created) (JIRA)" <ji...@apache.org> on 2011/12/26 15:30:30 UTC

[jira] [Created] (HIVE-2680) In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.

In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.
------------------------------------------------------------------------------------------------------------

                 Key: HIVE-2680
                 URL: https://issues.apache.org/jira/browse/HIVE-2680
             Project: Hive
          Issue Type: Improvement
            Reporter: binlijin
             Fix For: 0.9.0


During a dynamic-partition insert, if many partitions are created, many files are opened at once, and with a large input the DataNode's xceiverCount easily exceeds the limit on concurrent xceivers (default 1024). RecordWriter.write(recordValue) then throws an IOException ("Could not read from stream"). An hour later, when the file leases time out, the NameNode receives many commitBlockSynchronization requests and its load becomes very high, so abortWriters should be called to close the writers immediately.
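The proposed fix can be sketched as follows. This is a minimal illustration, not Hive's actual code: the RecordWriter interface here is a simplified stand-in for Hive's FileSinkOperator.RecordWriter, and processRow/abortWriters are hypothetical names for the write path and the cleanup helper the issue asks for.

```java
import java.io.IOException;
import java.util.List;

public class FileSinkSketch {
    // Simplified stand-in for Hive's FileSinkOperator.RecordWriter.
    interface RecordWriter {
        void write(Object value) throws IOException;
        void close(boolean abort) throws IOException;
    }

    // On a write failure, close every open writer with abort=true so the
    // DataNode connections (xceivers) and file leases are released right
    // away instead of being held until the lease timeout an hour later.
    static void abortWriters(List<RecordWriter> writers) {
        for (RecordWriter w : writers) {
            try {
                w.close(true);
            } catch (IOException e) {
                // Best effort: keep aborting the remaining writers.
            }
        }
    }

    static void processRow(List<RecordWriter> writers, Object row) throws IOException {
        try {
            for (RecordWriter w : writers) {
                w.write(row);
            }
        } catch (IOException e) {
            // Release resources before propagating the failure.
            abortWriters(writers);
            throw e;
        }
    }
}
```

The point is the catch block: without it, a failed task leaves its open files to the HDFS lease-recovery path, which is what floods the NameNode with commitBlockSynchronization calls.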

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (HIVE-2680) In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.

Posted by "Ashutosh Chauhan (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2680:
-----------------------------------

    Affects Version/s: 0.9.0
        Fix Version/s:     (was: 0.9.0)

Unlinking from 0.9 
                
> In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-2680
>                 URL: https://issues.apache.org/jira/browse/HIVE-2680
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 0.9.0
>            Reporter: binlijin
>         Attachments: HIVE-2680.patch
>
>
> During a dynamic-partition insert, if many partitions are created, many files are opened at once, and with a large input the DataNode's xceiverCount easily exceeds the limit on concurrent xceivers (default 1024). RecordWriter.write(recordValue) then throws an IOException ("Could not read from stream"). An hour later, when the file leases time out, the NameNode receives many commitBlockSynchronization requests and its load becomes very high, so abortWriters should be called to close the writers immediately.


        

[jira] [Updated] (HIVE-2680) In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.

Posted by "binlijin (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HIVE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

binlijin updated HIVE-2680:
---------------------------

    Attachment: HIVE-2680.patch
    
> In FileSinkOperator, if RecordWriter.write throws an IOException, we should call the RecordWriter's close method.
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-2680
>                 URL: https://issues.apache.org/jira/browse/HIVE-2680
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: binlijin
>             Fix For: 0.9.0
>
>         Attachments: HIVE-2680.patch
>
>
> During a dynamic-partition insert, if many partitions are created, many files are opened at once, and with a large input the DataNode's xceiverCount easily exceeds the limit on concurrent xceivers (default 1024). RecordWriter.write(recordValue) then throws an IOException ("Could not read from stream"). An hour later, when the file leases time out, the NameNode receives many commitBlockSynchronization requests and its load becomes very high, so abortWriters should be called to close the writers immediately.
