Posted to user@flume.apache.org by Sanjay Ramanathan <sa...@lucidworks.com> on 2014/07/16 00:17:56 UTC

"Hit max consecutive under-replication rotations" Error

Hi all,


While trying to send data from Flume to an HDFS sink, I'm getting this error:


"

[ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:566)] Hit max consecutive under-replication rotations (30); will not continue rolling files under this path due to under-replication

"



I looked up the error online, and the advice was to make the modification below (dfs.replication). I did that, but the problem still persists.

My hadoop configuration hdfs-site.xml has the property

"
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
"

I also get this message 30 times before the above error message:
"Block Under-replication detected. Rotating file."

My flume conf file has the configuration:
a1.sinks.k1.channel = c1
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://xx.xx.xx.xx:8020/input1/event/%y-%m-%d/%H%M
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
#a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.rollCount = 1000
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollInterval = 30
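
For what it's worth, my understanding of how the escapes and rounding interact: with round = true, roundValue = 10, and roundUnit = minute, an event timestamped 2014-07-15 12:43 should land under

hdfs://xx.xx.xx.xx:8020/input1/event/14-07-15/1240

because the (local) timestamp is rounded down to the nearest 10 minutes before the %y-%m-%d/%H%M escapes are expanded.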

Kindly let me know what it is that I'm doing wrong.


Sincerely,
Sanjay Ramanathan


RE: "Hit max consecutive under-replication rotations" Error

Posted by Sanjay Ramanathan <sa...@lucidworks.com>.
Hey Jonathan,


Thanks a lot for the advice. Changing minBlockReplicas did the trick, but I'll work on the replication configuration with more care.

(Currently, I'm just testing it out in a test/lab environment, with just one node.)


Thanks,

Sanjay Ramanathan


________________________________
From: Jonathan Natkins <na...@streamsets.com>
Sent: Tuesday, July 15, 2014 3:35 PM
To: user@flume.apache.org
Subject: Re: "Hit max consecutive under-replication rotations" Error

Hi Sanjay,

Is this just a single-node test cluster? Playing with replication configs is probably a little bit dangerous, since it means that your blocks will have no replicas, and if you lose a disk, you're going to end up with no way to recover the blocks. If this is a cluster you actually care about, I'd strongly recommend setting your dfs.replication to 3, and looking at HDFS to determine why your blocks are not getting replicated in the first place.
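
A couple of commands that may help with that diagnosis (assuming the standard Hadoop CLI on a node in the cluster):

hdfs dfsadmin -report   # shows how many DataNodes are live
hdfs fsck /             # flags under-replicated blocks, if any

With only one live DataNode, HDFS has nowhere to place a second or third replica, so a replication factor above 1 can never be satisfied.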

My guess is that what is happening is that your Flume agent has a mismatch in configs. The HDFS sink has a parameter called minBlockReplicas, which tells it how many block replicas are required; if it's not specified, the sink pulls that value from the default HDFS configuration files. My guess is that, somehow, it's getting a different value for dfs.replication or for dfs.namenode.replication.min.

You can probably circumvent this error by modifying your Flume config with this:

a1.sinks.k1.hdfs.minBlockReplicas = 1
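
To see which values the sink is actually resolving, you can run these on the Flume host (again assuming the standard Hadoop CLI, which reads the same configuration files Flume should see on its classpath):

hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.namenode.replication.min

If either prints something other than 1, that machine is picking up a different hdfs-site.xml than the one you edited.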

Thanks,
Natty


On Tue, Jul 15, 2014 at 3:17 PM, Sanjay Ramanathan <sa...@lucidworks.com> wrote:

Hi all,


While trying to send data from Flume to an HDFS sink, I'm getting this error:


"

[ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:566)] Hit max consecutive under-replication rotations (30); will not continue rolling files under this path due to under-replication

"



I looked up the error online, and the advice was to make the modification below (dfs.replication). I did that, but the problem still persists.

My hadoop configuration hdfs-site.xml has the property

"
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
"

I also get this message 30 times before the above error message:
"Block Under-replication detected. Rotating file."

My flume conf file has the configuration:
a1.sinks.k1.channel = c1
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://xx.xx.xx.xx:8020/input1/event/%y-%m-%d/%H%M
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
#a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.rollCount = 1000
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollInterval = 30

Kindly let me know what it is that I'm doing wrong.


Sincerely,
Sanjay Ramanathan



Re: "Hit max consecutive under-replication rotations" Error

Posted by Jonathan Natkins <na...@streamsets.com>.
Hi Sanjay,

Is this just a single-node test cluster? Playing with replication configs
is probably a little bit dangerous, since it means that your blocks will
have no replicas, and if you lose a disk, you're going to end up with no
way to recover the blocks. If this is a cluster you actually care about,
I'd strongly recommend setting your dfs.replication to 3, and looking at
HDFS to determine why your blocks are not getting replicated in the first
place.

My guess is that what is happening is that your Flume agent has a
mismatch in configs. The HDFS sink has a parameter called
minBlockReplicas, which tells it how many block replicas are required;
if it's not specified, the sink pulls that value from the default HDFS
configuration files. My guess is that, somehow, it's getting a
different value for dfs.replication or for dfs.namenode.replication.min.

You can probably circumvent this error by modifying your Flume config with
this:

a1.sinks.k1.hdfs.minBlockReplicas = 1

Thanks,
Natty


On Tue, Jul 15, 2014 at 3:17 PM, Sanjay Ramanathan <
sanjay.ramanathan@lucidworks.com> wrote:

>  Hi all,
>
>
>  While trying to send data from Flume to an HDFS sink, I'm getting this
> error:
>
>
>  "
>  [ERROR -
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:566)] Hit
> max consecutive under-replication rotations (30); will not continue rolling
> files under this path due to under-replication
>
> "
>
>
>
>  I looked up the error online, and the advice was to make the
> modification below (dfs.replication). I did that, but the problem still persists.
>
> My hadoop configuration hdfs-site.xml has the property
>  "
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> "
>
>  I also get this message 30 times before the above error message:
>  "Block Under-replication detected. Rotating file."
>
>  My flume conf file has the configuration:
>  a1.sinks.k1.channel = c1
> a1.sinks.k1.type = hdfs
> a1.sinks.k1.hdfs.path = hdfs://xx.xx.xx.xx:8020/input1/event/%y-%m-%d/%H%M
> a1.sinks.k1.hdfs.useLocalTimeStamp = true
> a1.sinks.k1.hdfs.round = true
> a1.sinks.k1.hdfs.roundValue = 10
> a1.sinks.k1.hdfs.roundUnit = minute
> a1.sinks.k1.hdfs.writeFormat = Text
> a1.sinks.k1.hdfs.fileType = DataStream
> #a1.sinks.k1.hdfs.filePrefix = events-
> a1.sinks.k1.hdfs.rollCount = 1000
> a1.sinks.k1.hdfs.batchSize = 10000
> a1.sinks.k1.hdfs.rollSize = 0
>  a1.sinks.k1.hdfs.rollInterval = 30
>
>  Kindly let me know what it is that I'm doing wrong.
>
>
>  Sincerely,
>  Sanjay Ramanathan
>
>
>