Posted to user@flume.apache.org by Gary Malouf <ma...@gmail.com> on 2013/05/18 01:46:28 UTC
Flume-ng 1.3.x HDFSSink - override Hadoop default size for only one of my sinks
Is there a way I can set the block size for files originating from a
specific sink? My use case is that I have a number of different protobuf
messages that each get written to their own directories in HDFS.
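To make the use case concrete, the multi-sink layout described might look roughly like the following Flume agent properties (the agent, sink, channel, and path names here are illustrative, not taken from the thread):

```properties
# Sketch of one agent with two HDFS sinks, one per protobuf message type.
# All names (agent1, sinkA/sinkB, channels, paths) are hypothetical examples.
agent1.sinks = sinkA sinkB
agent1.sinks.sinkA.type = hdfs
agent1.sinks.sinkA.channel = channelA
agent1.sinks.sinkA.hdfs.path = hdfs://namenode:8020/events/typeA
agent1.sinks.sinkB.type = hdfs
agent1.sinks.sinkB.channel = channelB
agent1.sinks.sinkB.hdfs.path = hdfs://namenode:8020/events/typeB
```

The question is whether a block-size override can be attached to just one of these sinks rather than applying cluster-wide.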
Re: Flume-ng 1.3.x HDFSSink - override Hadoop default size for only one of my sinks
Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi,
The HDFS sink writes back to HDFS, and the block size for newly written files is defined by your cluster. If you use the HDFS sink, you should have an hdfs-site.xml that defines the block size (dfs.blocksize). So no, there is no way.
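The cluster-side setting referred to above is the dfs.blocksize property in hdfs-site.xml; a minimal sketch, with 128 MB chosen only as an illustrative value:

```xml
<!-- hdfs-site.xml: default block size for newly created files -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value> <!-- 128 MB in bytes -->
</property>
```

Note this applies to all files written by clients picking up that configuration, not to a single Flume sink.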
- Alex
On May 18, 2013, at 1:48 AM, Gary Malouf <ma...@gmail.com> wrote:
> If it is not clear, I meant to type default block size.
>
>
> On Fri, May 17, 2013 at 7:46 PM, Gary Malouf <ma...@gmail.com> wrote:
> Is there a way I can set the block size for files originating from a specific sink? My use case is that I have a number of different protobuf messages that each get written to their own directories in HDFS.
>
--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF
Re: Flume-ng 1.3.x HDFSSink - override Hadoop default size for only one of my sinks
Posted by Gary Malouf <ma...@gmail.com>.
If it is not clear, I meant to type default block size.
On Fri, May 17, 2013 at 7:46 PM, Gary Malouf <ma...@gmail.com> wrote:
> Is there a way I can set the block size for files originating from a
> specific sink? My use case is that I have a number of different protobuf
> messages that each get written to their own directories in HDFS.
>