Posted to user@hadoop.apache.org by td...@126.com on 2014/04/28 05:22:18 UTC

hdfs write partially

Hello everyone,

 

The default dfs.client-write-packet-size is 64K, and it can’t be set larger than 16M.

So if I write more than 16M at a time, how can I make sure it doesn’t write partially?

 

Does anyone know how to fix this?

 

Thanks a lot.

 

-- 

Ken Huang
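
[A minimal sketch, for reference, of how the property named above is set on
the client side. The class name and value are made up, and as the replies
below explain, this knob only tunes how the client chunks data for transfer;
it is not a way to control write atomicity.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class PacketSizeConfig {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Default is 64K (65536 bytes); illustrative override to 1M.
            // The client caps this property, so it cannot be raised far
            // enough to cover a whole multi-megabyte file.
            conf.setInt("dfs.client-write-packet-size", 1024 * 1024);
            FileSystem fs = FileSystem.get(conf);
            System.out.println("Connected to " + fs.getUri());
        }
    }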


Re: Reply: hdfs write partially

Posted by Harsh J <ha...@cloudera.com>.
You do not need to alter the packet size to write files - why do you
think you need larger packets than the default one?

On Mon, Apr 28, 2014 at 4:04 PM,  <td...@126.com> wrote:
> Hi Harsh,
>
>
>
> I’m using the HDFS client to write GZIP-compressed files. I want to write
> each file in a single call, so I don’t have to uncompress it afterwards. So
> every write must complete fully, otherwise the file will be corrupted.
>
> I’m raising the client’s write packet size to avoid partial writes. But it
> doesn’t work, since the packet size can’t be bigger than 16M (and the file
> is larger than 16M).
>
> That’s my problem.
>
>
>
> Thanks a lot for replying.
>
>
>
> Regards,
>
> Ken Huang
>
>
>
> From: user-return-15182-tdhkx=126.com@hadoop.apache.org
> [mailto:user-return-15182-tdhkx=126.com@hadoop.apache.org] on behalf of Harsh J
> Sent: April 28, 2014 13:30
> To: <us...@hadoop.apache.org>
> Subject: Re: hdfs write partially
>
>
>
> Packets are chunks of the input you try to pass to the HDFS writer. What
> problem exactly are you facing (or, why are you trying to raise the
> client's write packet size)?
>
>
>
> On Mon, Apr 28, 2014 at 8:52 AM, <td...@126.com> wrote:
>
> Hello everyone,
>
>
>
> The default dfs.client-write-packet-size is 64K, and it can’t be set larger
> than 16M.
>
> So if I write more than 16M at a time, how can I make sure it doesn’t write
> partially?
>
>
>
> Does anyone know how to fix this?
>
>
>
> Thanks a lot.
>
>
>
> --
>
> Ken Huang
>
>
>
>
>
> --
> Harsh J



-- 
Harsh J
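
[A minimal sketch of the point above, with illustrative sizes and paths: a
single write() call can be arbitrarily larger than the packet size, because
the client splits the data into packets internally. An incomplete write
surfaces as an IOException from write() or close(), so checking those is how
you detect a failed file, not the packet size.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LargeSingleWrite {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            byte[] data = new byte[32 * 1024 * 1024]; // 32M, far above any packet size
            // One write() call; the client packetizes it transparently.
            try (FSDataOutputStream out = fs.create(new Path("/tmp/example.bin"))) {
                out.write(data);
            } // close() completes the file or throws
        }
    }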

Reply: hdfs write partially

Posted by td...@126.com.
Hi Harsh,

 

I’m using the HDFS client to write GZIP-compressed files. I want to write
each file in a single call, so I don’t have to uncompress it afterwards. So
every write must complete fully, otherwise the file will be corrupted.

I’m raising the client’s write packet size to avoid partial writes. But it
doesn’t work, since the packet size can’t be bigger than 16M (and the file
is larger than 16M).

That’s my problem.

 

Thanks a lot for replying.

 

Regards,

Ken Huang

 

From: user-return-15182-tdhkx=126.com@hadoop.apache.org
[mailto:user-return-15182-tdhkx=126.com@hadoop.apache.org] on behalf of Harsh J
Sent: April 28, 2014 13:30
To: <us...@hadoop.apache.org>
Subject: Re: hdfs write partially

 

Packets are chunks of the input you try to pass to the HDFS writer. What
problem exactly are you facing (or, why are you trying to raise the
client's write packet size)?

 

On Mon, Apr 28, 2014 at 8:52 AM, <td...@126.com> wrote:

Hello everyone,

 

The default dfs.client-write-packet-size is 64K, and it can’t be set larger
than 16M.

So if I write more than 16M at a time, how can I make sure it doesn’t write
partially?

 

Does anyone know how to fix this?

 

Thanks a lot.

 

-- 

Ken Huang





 

-- 
Harsh J 
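
[One common way to get the all-or-nothing behavior described above, sketched
with made-up paths: write the GZIP file to a temporary location and rename it
into place only after close() succeeds. Rename is atomic in HDFS, so readers
never see a partially written file; a failed write leaves only the temporary
file behind.]

    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AtomicGzipPublish {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path tmp = new Path("/data/.incoming/file.gz.tmp"); // hypothetical
            Path dst = new Path("/data/file.gz");               // hypothetical
            try (GZIPOutputStream gz = new GZIPOutputStream(fs.create(tmp))) {
                gz.write("payload".getBytes("UTF-8"));          // illustrative payload
            } // close() finishes the file or throws; nothing partial is published
            if (!fs.rename(tmp, dst)) { // atomic within HDFS
                throw new IOException("rename failed: " + tmp + " -> " + dst);
            }
        }
    }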


Re: hdfs write partially

Posted by Harsh J <ha...@cloudera.com>.
Packets are chunks of the input you try to pass to the HDFS writer. What
problem exactly are you facing (or, why are you trying to raise the
client's write packet size)?


On Mon, Apr 28, 2014 at 8:52 AM, <td...@126.com> wrote:

> Hello everyone,
>
>
>
> The default dfs.client-write-packet-size is 64K, and it can't be set
> larger than 16M.
>
> So if I write more than 16M at a time, how can I make sure it doesn't
> write partially?
>
>
>
> Does anyone know how to fix this?
>
>
>
> Thanks a lot.
>
>
>
> --
>
> Ken Huang
>



-- 
Harsh J
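
[A related sketch, with illustrative sizes, going beyond what the thread
itself states: if the concern is knowing that data has actually reached the
datanodes before declaring a write successful, hsync() forces the
outstanding packets out, and any transfer failure shows up as an exception.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DurableWrite {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out = fs.create(new Path("/tmp/out.bin"))) {
                out.write(new byte[8 * 1024 * 1024]); // 8M, illustrative
                out.hsync(); // flush and sync to the datanodes before relying on the data
            }
        }
    }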
