Posted to user@flume.apache.org by Raymond Ng <ra...@gmail.com> on 2012/10/18 18:32:22 UTC

flume restart exception

Hi

I'm getting the following exception when restarting flume after it was
killed manually; please advise.


2012-10-18 17:04:33,889  INFO [conf-file-poller-0]
DefaultLogicalNodeManager.java - Starting Channel fileChannel4
2012-10-18 17:04:33,914  INFO [lifecycleSupervisor-1-4]
DirectMemoryUtils.java - Unable to get maxDirectMemory from VM:
NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
2012-10-18 17:04:33,916  INFO [lifecycleSupervisor-1-4]
DirectMemoryUtils.java - Direct Memory Allocation:  Allocation = 1048576,
Allocated = 0, MaxDirectMemorySize = 1908932608, Remaining = 1908932608
2012-10-18 17:04:34,058  INFO [lifecycleSupervisor-1-0]
MonitoredCounterGroup.java - Component type: CHANNEL, name: fileChannel1
started
2012-10-18 17:04:34,058  INFO [lifecycleSupervisor-1-2]
MonitoredCounterGroup.java - Component type: CHANNEL, name: fileChannel2
started
2012-10-18 17:04:40,835 ERROR [lifecycleSupervisor-1-1] Log.java - Failed
to initialize Log
java.io.IOException: Header 80808080 not expected value: deadbeef
    at
org.apache.flume.channel.file.TransactionEventRecord.fromDataInput(TransactionEventRecord.java:136)
    at
org.apache.flume.channel.file.LogFile$SequentialReader.next(LogFile.java:378)
    at
org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:88)
    at org.apache.flume.channel.file.Log.replay(Log.java:251)
    at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
    at
org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:237)
    at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2012-10-18 17:04:40,836 ERROR [lifecycleSupervisor-1-1] FileChannel.java -
Failed to start the file channel
java.io.IOException: Header 80808080 not expected value: deadbeef
    at
org.apache.flume.channel.file.TransactionEventRecord.fromDataInput(TransactionEventRecord.java:136)
    at
org.apache.flume.channel.file.LogFile$SequentialReader.next(LogFile.java:378)
    at
org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:88)
    at org.apache.flume.channel.file.Log.replay(Log.java:251)
    at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
    at
org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:237)
    at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2012-10-18 17:04:53,740  INFO [lifecycleSupervisor-1-4]
MonitoredCounterGroup.java - Component type: CHANNEL, name: fileChannel4
started
2012-10-18 17:04:53,740  INFO [conf-file-poller-0]
DefaultLogicalNodeManager.java - Starting Sink hdfsSink4
2012-10-18 17:04:53,741  INFO [conf-file-poller-0]
DefaultLogicalNodeManager.java - Starting Sink hdfsSink2

.....
.....
2012-10-18 17:10:36,789 ERROR
[SinkRunner-PollingRunner-DefaultSinkProcessor] SinkRunner.java - Unable to
deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=fileChannel3]
    at
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
    at
org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:266)
    at
org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:118)
    at com.test.CustomHDFSSink.process(CustomHDFSSink.java:428)
    at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:662)
2012-10-18 17:10:41,790 ERROR
[SinkRunner-PollingRunner-DefaultSinkProcessor] SinkRunner.java - Unable to
deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=fileChannel3]
    at
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
    at
org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:266)
    at
org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:118)
    at com.test.CustomHDFSSink.process(CustomHDFSSink.java:428)
    at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:662)



-- 
Rgds
Ray

Re: flume restart exception

Posted by Hari Shreedharan <hs...@cloudera.com>.
Raymond, 

If you mean custom components that you wrote for Flume-1.2.0, they will work without any issues in Flume-1.3.0 as well. If you applied custom patches, you might need to rework them depending on where the underlying changes landed.

The issue you are hitting should be reason enough to upgrade: we improved the FileChannel a *lot* between Flume-1.2.0 and Flume-1.3.0.
 

Thanks,
Hari

-- 
Hari Shreedharan


On Friday, October 19, 2012 at 8:15 AM, Brock Noland wrote:

> The problem is that 1.2 writes first the marker for the event and then the event. So it happens that sometimes the marker is written without the event. The patch is small and in that jira Hari mentioned.


Re: flume restart exception

Posted by Brock Noland <br...@cloudera.com>.
The problem is that 1.2 first writes the marker for the event and then the
event itself, so sometimes the marker is written without the event. The
patch is small and is in the JIRA Hari mentioned.
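As an illustration only, the kill-between-writes scenario can be simulated on a throwaway file. This is not Flume's actual code or on-disk format; the 0x7F marker byte is an assumption borrowed from the hexdump discussion elsewhere in this thread.

```shell
# Simulation of the FLUME-1380 write ordering, NOT Flume's real code or
# on-disk format; the marker byte (0x7f) is an assumption taken from the
# hexdump discussion in this thread.
f=$(mktemp)

printf 'event-1-bytes' >> "$f"   # a complete, committed record (simplified)
printf '\x7f' >> "$f"            # marker for the NEXT event is written first...
# ...and the process is killed here, so the event body never follows.

# Replay now finds a marker with no event behind it -- the kind of state
# behind the "Header ... not expected value" error in the log above.
tail -c 1 "$f" | od -An -tx1     # the orphan 7f at the end of the file
rm -f "$f"
```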

On Fri, Oct 19, 2012 at 4:19 AM, Raymond Ng <ra...@gmail.com> wrote:

> thanks for the replies
>
> I'm using Flume 1.2, and I'll look into getting 1.3 after assessing how
> much rework is needed to port the local customised changes from 1.2 to 1.3
>
> also does the problem with the trailing 7F always happen when flume is
> killed ungracefully?
>
> Ray



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/

Re: flume restart exception

Posted by Raymond Ng <ra...@gmail.com>.
Thanks for the replies.

I'm using Flume 1.2, and I'll look into moving to 1.3 after assessing how
much rework is needed to port our local customised changes from 1.2 to 1.3.

Also, does the problem with the trailing 7F always happen when Flume is
killed ungracefully?

Ray

On Thu, Oct 18, 2012 at 5:54 PM, Brock Noland <br...@cloudera.com> wrote:

> Hari is correct, but that won't fix that log file. To do that, you
> need to truncate the log file. If you do a hexdump, there should be a
> trailing 7F. That is a 7F with the rest of the file being 80.
>
> somedata....7F808080...
>
> That trailing 7F is the problem.  Be sure to back up the log file
> before truncating it.
>
> Brock
>



-- 
Rgds
Ray

Re: flume restart exception

Posted by Brock Noland <br...@cloudera.com>.
Hari is correct, but that won't fix the existing log file. For that, you
need to truncate the log file. If you do a hexdump, there should be a
trailing 7F, that is, a 7F byte with the rest of the file being 80s:

somedata....7F808080...

That trailing 7F is the problem. Be sure to back up the log file
before truncating it.

Brock
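The procedure above can be sketched as a shell session. Everything here runs on a scratch file created with mktemp, because on a real agent the channel's data file path is specific to your configuration and the truncation offset must come from your own hexdump, not from this example.

```shell
# Sketch of the recovery procedure on a scratch file. On a real agent the
# file would be the channel's data file (path varies by configuration), and
# the offset MUST be read off your own hexdump -- both are simulated here.
f=$(mktemp)
printf 'somedata' > "$f"             # the intact portion of the log
printf '\x7f\x80\x80\x80' >> "$f"    # the bad trailer: stray 7F, then 80s

cp "$f" "$f.bak"                     # always back up before truncating

# Byte offset of the first 0x7f (grep -b reports byte offsets):
off=$(grep -abo $'\x7f' "$f" | head -n1 | cut -d: -f1)

truncate -s "$off" "$f"              # cut the file just before the 7F
cat "$f"                             # only the intact data remains
rm -f "$f" "$f.bak"
```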

On Thu, Oct 18, 2012 at 11:48 AM, Hari Shreedharan
<hs...@cloudera.com> wrote:
> Raymond,
>
> This was an issue which we fixed (FLUME-1380) and will be part of
> Flume-1.3.0. If you need it immediately, you could clone trunk/flume-1.3.0
> branch and build it. You should not see this error once you do that. Or you
> can wait for the next release - Flume-1.3.0, which should be released in the
> next few weeks or so.
>
>
> Thanks
> Hari
>
> --
> Hari Shreedharan
>
> On Thursday, October 18, 2012 at 9:32 AM, Raymond Ng wrote:
> [original log message trimmed]



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/

Re: flume restart exception

Posted by Hari Shreedharan <hs...@cloudera.com>.
Raymond, 

This is an issue we fixed (FLUME-1380); the fix will be part of Flume 1.3.0. If you need it immediately, you can clone trunk or the flume-1.3.0 branch and build it yourself; you should not see this error once you do. Otherwise you can wait for the next release, Flume 1.3.0, which should be out in the next few weeks.


Thanks
Hari

-- 
Hari Shreedharan


On Thursday, October 18, 2012 at 9:32 AM, Raymond Ng wrote:

> [original log message trimmed]


Re: flume restart exception

Posted by Brock Noland <br...@cloudera.com>.
Hi,

What version of flume are you running?

Brock

On Thu, Oct 18, 2012 at 11:32 AM, Raymond Ng <ra...@gmail.com> wrote:
> [original log message trimmed]



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/