Posted to common-user@hadoop.apache.org by Karim Awara <ka...@kaust.edu.sa> on 2013/11/03 06:04:48 UTC

modify the write policy in HDFS

Hi,

I understand how file upload works on HDFS: the client asks the namenode to
allocate each block (64 MB by default) and then writes the file's chunks
through a pipeline of datanodes.
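
To make sure I have the baseline right, this is the standard client-side
write path I mean, using the plain Java API (the file path is just an
example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            // fs.defaultFS in the Configuration points at the namenode
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // create() contacts the namenode; the returned stream then
            // pushes packets through the datanode pipeline, asking the
            // namenode for a new block each time one (64 MB) fills up
            FSDataOutputStream out = fs.create(new Path("/user/karim/sample.txt"));
            out.writeUTF("hello hdfs");
            out.close();  // flushes the last packet and completes the block
            fs.close();
        }
    }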

I want to change the HDFS source code so that the datanode can have multiple
pipelines open in parallel, where I push data into a pipeline based on its
content.
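
To make the routing idea concrete, here is an application-level sketch of
what I mean (the paths and the hash-based routing rule are placeholders I
made up; the real change would live inside HDFS, not in client code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ContentRoutedWriter {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // one open stream per "pipe"; each stream has its own
            // datanode pipeline underneath
            FSDataOutputStream[] outs = new FSDataOutputStream[4];
            for (int i = 0; i < outs.length; i++) {
                outs[i] = fs.create(new Path("/user/karim/part-" + i));
            }

            // route each record to a stream based on its content
            for (String record : new String[] {"apple", "banana", "cherry"}) {
                int bucket = (record.hashCode() & Integer.MAX_VALUE) % outs.length;
                outs[bucket].writeUTF(record);
            }

            for (FSDataOutputStream out : outs) {
                out.close();
            }
            fs.close();
        }
    }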

So my questions are:

1- Is it possible? If yes, which classes are responsible for that part?
2- How can I trace which classes/functions execute a command? For example,
when executing an HDFS put command, how do I trace the function calls
between the namenode and the datanode? (One idea is sketched below.)
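
For question 2, one idea I had is to raise the log level and watch the RPC
traffic during a put; I assume something along these lines would work
(log4j-based logging as in the stock Hadoop distribution):

    # log everything to the console for a single command
    HADOOP_ROOT_LOGGER=DEBUG,console hadoop fs -put localfile.txt /user/karim/

    # or raise specific packages in conf/log4j.properties
    log4j.logger.org.apache.hadoop.hdfs=DEBUG
    log4j.logger.org.apache.hadoop.ipc=DEBUG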


Thanks.

--
Best Regards,
Karim Ahmed Awara
