Posted to mapreduce-issues@hadoop.apache.org by "zhangjianlin (JIRA)" <ji...@apache.org> on 2012/10/31 09:45:12 UTC

[jira] [Created] (MAPREDUCE-4759) java.io.IOException: File too large

zhangjianlin created MAPREDUCE-4759:
---------------------------------------

             Summary: java.io.IOException: File too large
                 Key: MAPREDUCE-4759
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
             Project: Hadoop Map/Reduce
          Issue Type: Bug
          Components: task-controller
    Affects Versions: 0.20.2
            Reporter: zhangjianlin
            Priority: Critical
             Fix For: 0.20.2


When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#
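Editor's note (not part of the original report): "File too large" corresponds to errno EFBIG, which a write() raises when it would exceed either the process file-size limit (RLIMIT_FSIZE, shown by `ulimit -f`) or the maximum file size of the filesystem holding the log directory. A sketch of checks one might run on the affected TaskTracker node; the pid 20204 and the log path are taken from the crash header above, everything else is generic:

```shell
# Per-process file size limit for the current shell, in 512-byte blocks.
# "unlimited" is the usual default; a finite value here can produce EFBIG.
ulimit -f

# Limits of the running TaskTracker process (pid from the crash log).
grep "Max file size" /proc/20204/limits

# Filesystem type and usage of the log directory; older filesystems
# (e.g. ext3 with small block sizes) cap the maximum single-file size.
df -T /usr/local/hadoop-0.20.2/logs
```

If `ulimit -f` reports a finite value, the DailyRollingFileAppender in the stack trace above would hit EFBIG as soon as the day's log file reached that size, which matches the "Not A Problem" resolution below.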



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Description: 
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#



  was:
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#



    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>
> When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
> See hadoop-root-tasktracker-xxx.out:
> java.io.IOException: File too large
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
>         at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
>         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:324)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
>         at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
> [thread 1097128256 also had an error]
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
> #
> # JRE version: 6.0_18-b07
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
> # Problematic frame:
> # V  [libjvm.so+0x62263c]
> #
> # An error report file with more information is saved as:
> # /usr/local/hadoop-0.20.2/hs_err_pid20204.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
> #


[jira] [Resolved] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin resolved MAPREDUCE-4759.
-------------------------------------

    Resolution: Not A Problem
    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>         Attachments: hadoop-root-tasktracker-t0928.log, hs_err_pid20204.log
>
>
> When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
> See hadoop-root-tasktracker-xxx.out:
> java.io.IOException: File too large
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
>         at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
>   
> See the attached files hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details.


[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Attachment: hs_err_pid20204.log
    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>         Attachments: hs_err_pid20204.log
>
>
> When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
> See hadoop-root-tasktracker-xxx.out:
> java.io.IOException: File too large
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
>         at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
>         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:324)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
>         at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
> [thread 1097128256 also had an error]
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
> #
> # JRE version: 6.0_18-b07
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
> # Problematic frame:
> # V  [libjvm.so+0x62263c]
> #
> # An error report file with more information is saved as:
> # /usr/local/hadoop-0.20.2/hs_err_pid20204.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
> #
> See /usr/local/hadoop-0.20.2/hs_err_pid20204.log:
> VM Arguments:
> jvm_args: -Xmx5000m -Dhadoop.log.dir=/usr/local/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-tasktracker-t0928.log -Dhadoop.home.dir=/usr/local/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=/usr/local/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml 
> java_command: org.apache.hadoop.mapred.TaskTracker
> Launcher Type: SUN_STANDARD
> Environment Variables:
> JAVA_HOME=/usr/java
> CLASSPATH=/usr/local/hadoop-0.20.2/bin/../conf:/usr/java/lib/tools.jar:/usr/local/hadoop-0.20.2/bin/..:/usr/local/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-codec-1.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-io-1.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/core-3.1.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3-tests.jar:/usr/local/hadoop-0.20.2/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-util-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/junit-3.8.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop-0.20.2/bin/../lib/mail.jar:/usr/local/hadoop-0.20.2/bin/../lib/mockito-all-1.8.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/mysql-connector-java-5.1.6-bin.jar:/usr/local/hadoop-0.20.2/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop-0.20.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop-0.20.2/bin/../lib/zookeeper-3.3.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-api-2.1.jar
> PATH=/usr/kerberos/sbin:/usr/kerberos/bin::/usr/java/bin:/usr/java/jre/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop-0.20.2/bin:/usr/local/mysql/bin:/usr/local/bin:/usr/local/zookeeper-3.3.3/bin:/usr/local/hbase-0.90.3/bin:/usr/local/hive-0.7.1/bin:/home/root1/bin
> LD_LIBRARY_PATH=/usr/java/jre/lib/amd64/server:/usr/java/jre/lib/amd64:/usr/java/jre/../lib/amd64
> SHELL=/bin/bash
> Signal Handlers:
> SIGSEGV: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGBUS: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGFPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGPIPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGXFSZ: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGILL: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGUSR2: [libjvm.so+0x5da790], sa_mask[0]=0x00000000, sa_flags=0x10000004
> SIGHUP: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGTERM: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGQUIT: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> ---------------  S Y S T E M  ---------------
> OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)
> uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
> libc:glibc 2.5 NPTL 2.5 
> rlimit: STACK 10240k, CORE 0k, NPROC 73728, NOFILE 8192, AS infinity
> load average:6.77 4.99 2.31
> CPU:total 8 (8 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht
> Memory: 4k page, physical 8165800k(1082892k free), swap 20972848k(20454204k free)
> vm_info: Java HotSpot(TM) 64-Bit Server VM (16.0-b13) for linux-amd64 JRE (1.6.0_18-b07), built on Dec 17 2009 13:42:22 by "java_re" with gcc 3.2.2 (SuSE Linux)
> time: Wed Oct 31 10:58:41 2012
> elapsed time: 80860 seconds
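Editor's note (not part of the original report): the rlimit line in the quoted crash log lists STACK, CORE, NPROC, NOFILE, and AS, but not the file-size limit, so the EFBIG cause cannot be confirmed from the dump alone. The failure mode is easy to reproduce under a deliberately tiny file-size limit; a minimal sketch, assuming a POSIX shell with `ulimit` and `dd` available, using a throwaway subshell so the limit does not affect the parent shell:

```shell
# Reproduce "File too large" (EFBIG) in a subshell with a 1-block (512-byte)
# file size limit. SIGXFSZ is ignored so the write fails with EFBIG instead
# of killing the process with the default fatal signal.
(
  ulimit -f 1
  trap '' XFSZ
  # dd should fail once the file reaches 512 bytes, reporting "File too large".
  dd if=/dev/zero of=/tmp/efbig-demo bs=1024 count=4 2>&1 || true
  rm -f /tmp/efbig-demo
)
```

This is the same error path the log4j appender hit in the stack trace: the JVM ignores SIGXFSZ (it installs its own handlers, visible in the Signal Handlers section above), so an oversized write surfaces as an IOException rather than a process-killing signal.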


[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Description: 
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#


See /usr/local/hadoop-0.20.2/hs_err_pid20204.log:
VM Arguments:
jvm_args: -Xmx5000m -Dhadoop.log.dir=/usr/local/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-tasktracker-t0928.log -Dhadoop.home.dir=/usr/local/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=/usr/local/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml 
java_command: org.apache.hadoop.mapred.TaskTracker
Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/java
CLASSPATH=/usr/local/hadoop-0.20.2/bin/../conf:/usr/java/lib/tools.jar:/usr/local/hadoop-0.20.2/bin/..:/usr/local/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-codec-1.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-io-1.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/core-3.1.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3-tests.jar:/usr/local/hadoop-0.20.2/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-util-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/junit-3.8.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop-0.20.2/bin/../lib/mail.jar:/usr/local/hadoop-0.20.2/bin/../lib/mockito-all-1.8.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/mysql-connector-java-5.1.6-bin.jar:/usr/local/hadoop-0.20.2/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop-0.20.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop-0.20.2/bin/../lib/zookeeper-3.3.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-api-2.1.jar
PATH=/usr/kerberos/sbin:/usr/kerberos/bin::/usr/java/bin:/usr/java/jre/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop-0.20.2/bin:/usr/local/mysql/bin:/usr/local/bin:/usr/local/zookeeper-3.3.3/bin:/usr/local/hbase-0.90.3/bin:/usr/local/hive-0.7.1/bin:/home/root1/bin
LD_LIBRARY_PATH=/usr/java/jre/lib/amd64/server:/usr/java/jre/lib/amd64:/usr/java/jre/../lib/amd64
SHELL=/bin/bash

Signal Handlers:
SIGSEGV: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGBUS: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGFPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGPIPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGXFSZ: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGILL: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x5da790], sa_mask[0]=0x00000000, sa_flags=0x10000004
SIGHUP: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGTERM: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGQUIT: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004


---------------  S Y S T E M  ---------------

OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)

uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
libc:glibc 2.5 NPTL 2.5 
rlimit: STACK 10240k, CORE 0k, NPROC 73728, NOFILE 8192, AS infinity
load average:6.77 4.99 2.31

CPU:total 8 (8 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht

Memory: 4k page, physical 8165800k(1082892k free), swap 20972848k(20454204k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (16.0-b13) for linux-amd64 JRE (1.6.0_18-b07), built on Dec 17 2009 13:42:22 by "java_re" with gcc 3.2.2 (SuSE Linux)

time: Wed Oct 31 10:58:41 2012
elapsed time: 80860 seconds
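The "File too large" IOException above is what Java surfaces when the kernel rejects a write with EFBIG, i.e. the write would push the file past a size limit (a filesystem cap, or the process's RLIMIT_FSIZE shown in the rlimit line of the crash log). Below is a minimal sketch that reproduces the same error under an artificial 4 KiB cap; the cap and the temp file are hypothetical, chosen only for the demonstration, and are not taken from the report:

```python
import errno
import os
import resource
import signal
import tempfile

# Cap this process's maximum file size at 4 KiB (hypothetical; it stands in
# for whatever limit the TaskTracker's log file actually hit), then write
# past the cap so the kernel refuses the write.
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)  # default action would kill the process
resource.setrlimit(resource.RLIMIT_FSIZE, (4096, hard))

caught = None
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"x" * 8192)  # 8 KiB into a 4 KiB-capped file
        f.flush()
except OSError as e:
    caught = e  # errno is EFBIG ("File too large")
finally:
    os.remove(path)
    resource.setrlimit(resource.RLIMIT_FSIZE, (soft, hard))

print(caught.strerror)
```

The same EFBIG can come from the filesystem itself (e.g. a per-file size limit) rather than an rlimit, so checking `ulimit -f` and the filesystem of the log directory are both worth doing when diagnosing this.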

  was:
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#



    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>
> When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
> See hadoop-root-tasktracker-xxx.out:
> java.io.IOException: File too large
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
>         at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
>         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:324)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
>         at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
> [thread 1097128256 also had an error]
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
> #
> # JRE version: 6.0_18-b07
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
> # Problematic frame:
> # V  [libjvm.so+0x62263c]
> #
> # An error report file with more information is saved as:
> # /usr/local/hadoop-0.20.2/hs_err_pid20204.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
> #
> See /usr/local/hadoop-0.20.2/hs_err_pid20204.log:
> VM Arguments:
> jvm_args: -Xmx5000m -Dhadoop.log.dir=/usr/local/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-tasktracker-t0928.log -Dhadoop.home.dir=/usr/local/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=/usr/local/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml 
> java_command: org.apache.hadoop.mapred.TaskTracker
> Launcher Type: SUN_STANDARD
> Environment Variables:
> JAVA_HOME=/usr/java
> CLASSPATH=/usr/local/hadoop-0.20.2/bin/../conf:/usr/java/lib/tools.jar:/usr/local/hadoop-0.20.2/bin/..:/usr/local/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-codec-1.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-io-1.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/core-3.1.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3-tests.jar:/usr/local/hadoop-0.20.2/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-util-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/junit-3.8.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop-0.20.2/bin/../lib/mail.jar:/usr/local/hadoop-0.20.2/bin/../lib/mockito-all-1.8.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/mysql-connector-java-5.1.6-bin.jar:/usr/local/hadoop-0.20.2/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop-0.20.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop-0.20.2/bin/../lib/zookeeper-3.3.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-api-2.1.jar
> PATH=/usr/kerberos/sbin:/usr/kerberos/bin::/usr/java/bin:/usr/java/jre/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop-0.20.2/bin:/usr/local/mysql/bin:/usr/local/bin:/usr/local/zookeeper-3.3.3/bin:/usr/local/hbase-0.90.3/bin:/usr/local/hive-0.7.1/bin:/home/root1/bin
> LD_LIBRARY_PATH=/usr/java/jre/lib/amd64/server:/usr/java/jre/lib/amd64:/usr/java/jre/../lib/amd64
> SHELL=/bin/bash
> Signal Handlers:
> SIGSEGV: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGBUS: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGFPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGPIPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGXFSZ: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGILL: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGUSR2: [libjvm.so+0x5da790], sa_mask[0]=0x00000000, sa_flags=0x10000004
> SIGHUP: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
> SIGTERM: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> SIGQUIT: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
> ---------------  S Y S T E M  ---------------
> OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)
> uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
> libc:glibc 2.5 NPTL 2.5 
> rlimit: STACK 10240k, CORE 0k, NPROC 73728, NOFILE 8192, AS infinity
> load average:6.77 4.99 2.31
> CPU:total 8 (8 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht
> Memory: 4k page, physical 8165800k(1082892k free), swap 20972848k(20454204k free)
> vm_info: Java HotSpot(TM) 64-Bit Server VM (16.0-b13) for linux-amd64 JRE (1.6.0_18-b07), built on Dec 17 2009 13:42:22 by "java_re" with gcc 3.2.2 (SuSE Linux)
> time: Wed Oct 31 10:58:41 2012
> elapsed time: 80860 seconds

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Description: 
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.

See hadoop-root-tasktracker-xxx.out:

java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
  
See the attached hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details.
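Worth noting: DailyRollingFileAppender (the DRFA appender configured via -Dhadoop.root.logger=INFO,DRFA in the jvm_args) rolls by date only, never by size, so one very chatty day can grow a single log file until writes start failing. A size-bounded rolling policy avoids that; on the log4j side that is RollingFileAppender with MaxFileSize and MaxBackupIndex. Purely to illustrate the idea (this is not the Hadoop configuration itself, and all names and limits below are made up for the demo), the same size-bounded rotation using Python's standard library:

```python
import logging
import logging.handlers
import os
import tempfile

# Size-bounded rotation: roll the file at 4 KiB and keep at most 3 old
# copies, so no single file can grow without bound. Names and limits are
# illustrative only.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "tasktracker-demo.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=4096, backupCount=3)
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(500):  # roughly 39 KiB of records, forcing several rollovers
    logger.info("message %d: %s", i, "x" * 64)

files = sorted(os.listdir(log_dir))
print(files)  # the live file plus at most 3 rotated backups
```

With this policy the total disk footprint is bounded by (backupCount + 1) x maxBytes, which is the property the DRFA setup above lacks.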

  was:
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
  
See the attached hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details.

    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>         Attachments: hadoop-root-tasktracker-t0928.log, hs_err_pid20204.log
>
>
> When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
> See hadoop-root-tasktracker-xxx.out:
> java.io.IOException: File too large
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
>         at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
>   
> See the attached hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details.


[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Description: 
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
  
See the attached hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details.

  was:
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

see files 

    


[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Description: 
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

see files hs_err_pid20204.log and hadoop-root-tasktracker-t0928.log for more details
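The trace above shows log4j's DailyRollingFileAppender flushing when the IOException is thrown. That appender rolls only on a date boundary, so a single day's log can grow until a write hits the filesystem's or process's file-size limit. A size-capped RollingFileAppender avoids unbounded growth; a minimal log4j.properties sketch (the appender name, sizes, and pattern below are illustrative, not the cluster's actual configuration):

```properties
# Illustrative size-capped appender; the default DRFA rolls daily only.
log4j.rootLogger=INFO, RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

If the stock conf/log4j.properties defines a similar RFA section, the daemon can select it by starting with -Dhadoop.root.logger=INFO,RFA instead of the INFO,DRFA seen in the jvm_args below.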

  was:
When running an MR job, one of the cluster's TaskTrackers is sometimes lost.
See hadoop-root-tasktracker-xxx.out:
java.io.IOException: File too large
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:359)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:133)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2972)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
[thread 1097128256 also had an error]
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002adafa2df63c, pid=20204, tid=1101338944
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (16.0-b13 mixed mode linux-amd64 )
# Problematic frame:
# V  [libjvm.so+0x62263c]
#
# An error report file with more information is saved as:
# /usr/local/hadoop-0.20.2/hs_err_pid20204.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#


see /usr/local/hadoop-0.20.2/hs_err_pid20204.log:
VM Arguments:
jvm_args: -Xmx5000m -Dhadoop.log.dir=/usr/local/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-tasktracker-t0928.log -Dhadoop.home.dir=/usr/local/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=/usr/local/hadoop-0.20.2/bin/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml 
java_command: org.apache.hadoop.mapred.TaskTracker
Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/java
CLASSPATH=/usr/local/hadoop-0.20.2/bin/../conf:/usr/java/lib/tools.jar:/usr/local/hadoop-0.20.2/bin/..:/usr/local/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-codec-1.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-io-1.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-0.20.2/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/core-3.1.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/hbase-0.90.3-tests.jar:/usr/local/hadoop-0.20.2/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-0.20.2/bin/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/jetty-util-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/junit-3.8.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/kfs-0.2.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop-0.20.2/bin/../lib/mail.jar:/usr/local/hadoop-0.20.2/bin/../lib/mockito-all-1.8.0.jar:/usr/local/hadoop-0.20.2/bin/../lib/mysql-connector-java-5.1.6-bin.jar:/usr/local/hadoop-0.20.2/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop-0.20.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-0.20.2/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop-0.20.2/bin/../lib/zookeeper-3.3.2.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-0.20.2/bin/../lib/jsp-2.1/jsp-api-2.1.jar
PATH=/usr/kerberos/sbin:/usr/kerberos/bin::/usr/java/bin:/usr/java/jre/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/hadoop-0.20.2/bin:/usr/local/mysql/bin:/usr/local/bin:/usr/local/zookeeper-3.3.3/bin:/usr/local/hbase-0.90.3/bin:/usr/local/hive-0.7.1/bin:/home/root1/bin
LD_LIBRARY_PATH=/usr/java/jre/lib/amd64/server:/usr/java/jre/lib/amd64:/usr/java/jre/../lib/amd64
SHELL=/bin/bash

Signal Handlers:
SIGSEGV: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGBUS: [libjvm.so+0x70f1a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGFPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGPIPE: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGXFSZ: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGILL: [libjvm.so+0x5d7f70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x5da790], sa_mask[0]=0x00000000, sa_flags=0x10000004
SIGHUP: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGTERM: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGQUIT: [libjvm.so+0x5da4e0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004


---------------  S Y S T E M  ---------------

OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)

uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
libc:glibc 2.5 NPTL 2.5 
rlimit: STACK 10240k, CORE 0k, NPROC 73728, NOFILE 8192, AS infinity
load average:6.77 4.99 2.31

CPU:total 8 (8 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht

Memory: 4k page, physical 8165800k(1082892k free), swap 20972848k(20454204k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (16.0-b13) for linux-amd64 JRE (1.6.0_18-b07), built on Dec 17 2009 13:42:22 by "java_re" with gcc 3.2.2 (SuSE Linux)

time: Wed Oct 31 10:58:41 2012
elapsed time: 80860 seconds
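A note on the mechanism (editor's sketch, not part of the original report): the signal-handler table above shows the JVM installing its own SIGXFSZ handler. With SIGXFSZ not at its default action (which would terminate the process), a write(2) that would push a file past RLIMIT_FSIZE fails with EFBIG, which the JDK surfaces as "java.io.IOException: File too large". A small Python sketch of that mechanism (the 4 KiB cap is arbitrary):

```python
import errno
import os
import resource
import signal
import tempfile

# Ignore SIGXFSZ so the oversized write returns EFBIG instead of terminating
# the process; the HotSpot JVM similarly installs a SIGXFSZ handler.
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)

soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (4096, hard))  # arbitrary 4 KiB cap

fd, path = tempfile.mkstemp()
err = None
try:
    os.write(fd, b"x" * 4096)   # fills the file exactly to the cap
    os.write(fd, b"one more")   # any byte past the cap fails with EFBIG
except OSError as e:
    err = e.errno
finally:
    os.close(fd)
    os.remove(path)
    resource.setrlimit(resource.RLIMIT_FSIZE, (soft, hard))

print(os.strerror(err))  # -> "File too large"
```

This is the same error string the JDK wraps into the IOException in the stack trace; the daily-rolled TaskTracker log simply grew past whatever file-size limit the node imposed.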

    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>         Attachments: hs_err_pid20204.log
>
>


[jira] [Updated] (MAPREDUCE-4759) java.io.IOException: File too large

Posted by "zhangjianlin (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/MAPREDUCE-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhangjianlin updated MAPREDUCE-4759:
------------------------------------

    Attachment: hadoop-root-tasktracker-t0928.log
    
> java.io.IOException: File too large
> -----------------------------------
>
>                 Key: MAPREDUCE-4759
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4759
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller
>    Affects Versions: 0.20.2
>            Reporter: zhangjianlin
>            Priority: Critical
>              Labels: hadoop
>             Fix For: 0.20.2
>
>         Attachments: hadoop-root-tasktracker-t0928.log, hs_err_pid20204.log
>
>
