Posted to user@hadoop.apache.org by Daegyu Han <hd...@gmail.com> on 2019/07/06 15:20:51 UTC

Why does MapReduce read and process the input split line by line? Can't I load the entire file into memory and process it there?

Hi all,

Why does MapReduce handle input split files one line at a time?

Or is it not fast enough to read the whole input split (e.g., 128 MB) into
memory and then process the data (strings) there?
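For reference, here is a minimal sketch of what I mean by "one line at a
time", based on the standard Mapper API and the default TextInputFormat
convention (byte-offset key, line-of-text value); the class name is just
illustrative, not anything from my actual job:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // With the default TextInputFormat, the framework calls map() once per
    // line of the split: the key is the byte offset of the line and the
    // value is the line's text.
    public class LinePerCallMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // The mapper only ever sees one record (line) per call here;
            // it never sees the 128 MB split as a whole.
            StringTokenizer tokens = new StringTokenizer(line.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }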

If the framework processes one row at a time, as in the current approach,
isn't the Java application issuing read system calls to the operating
system line by line?
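In other words, I am asking whether the reads end up looking like the plain
Java pattern below, where a buffered reader fills a large buffer with one
read() call and then serves many readLine() calls from it. This is not
Hadoop's actual LineRecordReader, just a sketch of the pattern; the file
name and the 64 KB buffer size are arbitrary examples:

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class BufferedLineRead {
        public static void main(String[] args) throws IOException {
            // Hypothetical local file standing in for one input split.
            String path = args.length > 0 ? args[0] : "part-00000.txt";

            // With a 64 KB buffer, the OS read() call is issued roughly once
            // per 64 KB, not once per line; readLine() is answered from the
            // in-memory buffer.
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream(path),
                            StandardCharsets.UTF_8),
                    64 * 1024)) {
                long lines = 0;
                String line;
                while ((line = reader.readLine()) != null) {
                    lines++;   // process the line here
                }
                System.out.println("lines read: " + lines);
            }
        }
    }

So is the per-line cost mostly in user space (object creation, string
handling), or does it also translate into per-line system calls?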

Best Regards,
Daegyu
