Posted to common-user@hadoop.apache.org by Ashish Dobhal <do...@gmail.com> on 2014/07/18 18:49:45 UTC

MR JOB

Do Hadoop's normal operations, such as uploading a file to or downloading a
file from HDFS, run as MapReduce jobs?
If so, why can't I see the job running on my TaskTracker and JobTracker?
Thank you.

Re: MR JOB

Posted by Ashish Dobhal <do...@gmail.com>.
Thanks.


On Fri, Jul 18, 2014 at 10:41 PM, Rich Haase <rd...@gmail.com> wrote:

> HDFS handles the splitting of files into multiple blocks.  It's a file
> system operation that is transparent to the user.
>
>
> On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal <dobhalashish772@gmail.com
> > wrote:
>
>> Thanks, Rich.
>> But if the copy operations do not occur as a MapReduce job, then how does
>> the splitting of a file into several blocks take place?
>>
>>
>> On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase <rd...@gmail.com> wrote:
>>
>>> File copy operations do not run as MapReduce jobs.  All hadoop fs
>>> commands run as operations against HDFS and do not use MapReduce.
>>>
>>>
>>> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <
>>> dobhalashish772@gmail.com> wrote:
>>>
>>>> Do Hadoop's normal operations, such as uploading a file to or downloading
>>>> a file from HDFS, run as MapReduce jobs?
>>>> If so, why can't I see the job running on my TaskTracker and JobTracker?
>>>> Thank you.
>>>>
>>>
>>>
>>>
>>> --
>>> *Kernighan's Law*
>>> "Debugging is twice as hard as writing the code in the first place.
>>> Therefore, if you write the code as cleverly as possible, you are, by
>>> definition, not smart enough to debug it."
>>>
>>
>>
>
>
> --
> *Kernighan's Law*
> "Debugging is twice as hard as writing the code in the first place.
> Therefore, if you write the code as cleverly as possible, you are, by
> definition, not smart enough to debug it."
>

Re: MR JOB

Posted by Rich Haase <rd...@gmail.com>.
HDFS handles the splitting of files into multiple blocks.  It's a file
system operation that is transparent to the user.


On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal <do...@gmail.com>
wrote:

> Thanks, Rich.
> But if the copy operations do not occur as a MapReduce job, then how does
> the splitting of a file into several blocks take place?
>
>
> On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase <rd...@gmail.com> wrote:
>
>> File copy operations do not run as MapReduce jobs.  All hadoop fs
>> commands run as operations against HDFS and do not use MapReduce.
>>
>>
>> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <
>> dobhalashish772@gmail.com> wrote:
>>
>>> Do Hadoop's normal operations, such as uploading a file to or downloading
>>> a file from HDFS, run as MapReduce jobs?
>>> If so, why can't I see the job running on my TaskTracker and JobTracker?
>>> Thank you.
>>>
>>
>>
>>
>> --
>> *Kernighan's Law*
>> "Debugging is twice as hard as writing the code in the first place.
>> Therefore, if you write the code as cleverly as possible, you are, by
>> definition, not smart enough to debug it."
>>
>
>


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."
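[Editor's note: the client-side splitting described above can be sketched as a toy model. The function below is illustrative only, not the real DFSClient code; the block size and names are assumptions, and the actual client streams packets to DataNodes rather than buffering whole blocks in memory.]

```python
def split_into_blocks(data: bytes, block_size: int) -> list:
    """Toy model of HDFS client-side splitting: the byte stream being
    written is chopped into fixed-size blocks, and only the final block
    may be shorter than the configured block size."""
    if block_size <= 0:
        raise ValueError("block_size must be positive")
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

# With a toy 4-byte block size, a 10-byte "file" yields blocks of
# 4, 4, and 2 bytes.  HDFS defaults are far larger (e.g. 128 MB),
# but the last-block-is-short behaviour is the same.
blocks = split_into_blocks(b"0123456789", block_size=4)
print([len(b) for b in blocks])  # → [4, 4, 2]
```

No MapReduce machinery appears anywhere in this path, which is why nothing shows up on the JobTracker.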

Re: MR JOB

Posted by Ashish Dobhal <do...@gmail.com>.
Thanks, Rich.
But if the copy operations do not occur as a MapReduce job, then how does the
splitting of a file into several blocks take place?


On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase <rd...@gmail.com> wrote:

> File copy operations do not run as MapReduce jobs.  All hadoop fs
> commands run as operations against HDFS and do not use MapReduce.
>
>
> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <dobhalashish772@gmail.com
> > wrote:
>
>> Do Hadoop's normal operations, such as uploading a file to or downloading a
>> file from HDFS, run as MapReduce jobs?
>> If so, why can't I see the job running on my TaskTracker and JobTracker?
>> Thank you.
>>
>
>
>
> --
> *Kernighan's Law*
> "Debugging is twice as hard as writing the code in the first place.
> Therefore, if you write the code as cleverly as possible, you are, by
> definition, not smart enough to debug it."
>

Re: MR JOB

Posted by Rich Haase <rd...@gmail.com>.
File copy operations do not run as MapReduce jobs.  All hadoop fs commands
run as operations against HDFS and do not use MapReduce.


On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <do...@gmail.com>
wrote:

> Do Hadoop's normal operations, such as uploading a file to or downloading a
> file from HDFS, run as MapReduce jobs?
> If so, why can't I see the job running on my TaskTracker and JobTracker?
> Thank you.
>



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."
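[Editor's note: one way to see the distinction in practice is the commands below; the paths and NameNode address are hypothetical. A plain `hadoop fs -put` is a client-side write that never touches the JobTracker, whereas a tool such as DistCp performs its copy *as* a MapReduce job, which does appear there.]

```shell
# Plain HDFS write: the client streams blocks to DataNodes directly.
# No job appears on the JobTracker.
hadoop fs -put localfile.dat /data/file.dat

# Inspect the blocks HDFS created for the file (still no MapReduce):
hdfs fsck /data/file.dat -files -blocks

# By contrast, DistCp submits a MapReduce job to do the copy, and that
# job is visible on the JobTracker while it runs:
hadoop distcp hdfs://nn1:8020/data /backup/data
```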
