Posted to common-user@hadoop.apache.org by Rui Shi <sh...@yahoo.com> on 2008/01/29 00:03:45 UTC

Task was killed due to running over 600 sec

Hi,

Some of my map tasks are killed by the tracker and give the error "Task task_200801251420_0007_m_000006_0 failed to report status for 601 seconds. Killing!"

My map task basically copies a large file, so I don't have much to report during the process. How can I prevent the task tracker from killing my task?

Thanks,

Rui




Re: Task was killed due to running over 600 sec

Posted by Arun C Murthy <ac...@yahoo-inc.com>.
On Jan 28, 2008, at 3:03 PM, Rui Shi wrote:

> Hi,
>
> Some of my map tasks are killed by the tracker and give the error
> "Task task_200801251420_0007_m_000006_0 failed to report status for
> 601 seconds. Killing!"
>
> My map task basically copies a large file, so I don't have much to
> report during the process. How can I prevent the task tracker from
> killing my task?
>

http://hadoop.apache.org/core/docs/r0.15.3/api/org/apache/hadoop/mapred/Mapper.html#map(K1,%20V1,%20org.apache.hadoop.mapred.OutputCollector,%20org.apache.hadoop.mapred.Reporter)

Arun

> Thanks,
>
> Rui
>
>
>


Re: Task was killed due to running over 600 sec

Posted by Arun C Murthy <ac...@yahoo-inc.com>.
On Jan 28, 2008, at 11:12 PM, ChaoChun Liang wrote:

>
>
> lohit.vijayarenu wrote:
>>
>> You could try setting the value of mapred.task.timeout to a higher
>> value.
>> Thanks,
>> Lohit
>>
>
> Could I set different timeout values for the mapper and reducer
> separately?
> In my case, the execution time for the mapper is shorter than the
> reducer.
>

No. There isn't a way to do that.

However, it really _is_ better to send progress/status updates to the 
TaskTracker than to work around it... in fact, it is as simple as 
calling *reporter.progress()* periodically, or reporter.setStatus() on 
the Reporter passed to the map/reduce method. It helps debugging too...
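As a sketch of the pattern Arun describes -- using a stand-in ProgressReporter interface, since the real org.apache.hadoop.mapred.Reporter needs the Hadoop jars; the names and buffer size here are illustrative -- a long copy only needs to ping progress() once per buffer to keep the 600-second timeout from firing:

```java
import java.io.*;

// Stand-in for org.apache.hadoop.mapred.Reporter, for illustration only.
interface ProgressReporter {
    void progress();
}

public class CopyWithProgress {
    // Copy in 64 KB chunks, pinging the reporter after each chunk so the
    // TaskTracker's task timeout is reset throughout a long copy.
    static long copy(InputStream in, OutputStream out, ProgressReporter reporter)
            throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
            reporter.progress();  // heartbeat to the TaskTracker
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[200 * 1024];  // stand-in for a large file
        final long[] pings = {0};
        long copied = copy(new ByteArrayInputStream(data),
                           new ByteArrayOutputStream(),
                           () -> pings[0]++);
        System.out.println(copied + " bytes copied, "
                           + pings[0] + " progress calls");
    }
}
```

In a real map task one would call reporter.progress() (or reporter.setStatus(...)) inside the copy loop in the same way, rather than using a stand-in.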

Arun
> Thanks.
> ChaoChun
>
> -- 
> View this message in context: http://www.nabble.com/Task-was-killed-due-to-running-over-600-sec-tp15148129p15153682.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>


Re: Low complexity way to write a file to hdfs?

Posted by Doug Cutting <cu...@apache.org>.
Ted Dunning wrote:
> Don't know.
> 
> I just built a simple patch against the trunk and cloned the UGI stuff from
> the doGet method.
> 
> Will that work?

It should.  You'll have to specify the user & groups in the query 
string.  Looking at the code, it looks like this should be something 
like "&ugi=user,group1,group2;".
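As a sketch of what that request ends up looking like (the hostname, port, user, and groups below are made up; for a lone parameter the `&` becomes `?`):

```shell
# Illustrative values only -- substitute your own namenode and identity.
NAMENODE="namenode.example.com:50070"
FILE="/user/ted/part-00000"
UGI="ted,engineering"          # user,group1[,group2...]

URL="http://${NAMENODE}/data${FILE}?ugi=${UGI}"
echo "$URL"
# then e.g.:  curl -s "$URL" > part-00000
```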

Doug

Re: Low complexity way to write a file to hdfs?

Posted by Ted Dunning <td...@veoh.com>.


On 1/30/08 3:04 PM, "Arun C Murthy" <ac...@yahoo-inc.com> wrote:

> 
> On Jan 30, 2008, at 1:30 PM, Ted Dunning wrote:
> 
>> 
>> Am I missing something?
>> 
> 
> Umm... how does it interact with HDFS permissions coming in 0.16.0?

Don't know.

I just built a simple patch against the trunk and cloned the UGI stuff from
the doGet method.

Will that work?



Re: Low complexity way to write a file to hdfs?

Posted by Arun C Murthy <ac...@yahoo-inc.com>.
On Jan 30, 2008, at 1:30 PM, Ted Dunning wrote:

>
> I am looking for a way for scripts to write data to HDFS without  
> having to
> install anything.
>
> The /data and /listPaths URL's on the nameserver are ideal for reading
> files, but I can't find anything comparable to write files.
>
> Am I missing something?
>
> If not, I think I will file a JIRA and make /data accept POST  
> events.  If
> anybody has an opinion about that, please let me know (or put a  
> comment on
> the JIRA, if and when).
>

Umm... how does it interact with HDFS permissions coming in 0.16.0?

Arun

>


Re: Low complexity way to write a file to hdfs?

Posted by Jason Venner <ja...@attributor.com>.
As of the last version of this great tool that I have loaded, file 
write was not yet enabled.
Read works well.

Raghu Angadi wrote:
>
> You could take a look at the fuse plugin for HDFS. Then the client 
> does not even need a web browser.
>
> Raghu.
>
> Ted Dunning wrote:
>> I am looking for a way for scripts to write data to HDFS without 
>> having to
>> install anything.
>>
>> The /data and /listPaths URL's on the nameserver are ideal for reading
>> files, but I can't find anything comparable to write files.
>>
>> Am I missing something?
>>
>> If not, I think I will file a JIRA and make /data accept POST 
>> events.  If
>> anybody has an opinion about that, please let me know (or put a 
>> comment on
>> the JIRA, if and when).
>>
>>
>

-- 
Jason Venner
Attributor - Publish with Confidence <http://www.attributor.com/>
Attributor is hiring Hadoop Wranglers, contact if interested

Re: Low complexity way to write a file to hdfs?

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
You could take a look at the fuse plugin for HDFS. Then the client does 
not even need a web browser.

Raghu.

Ted Dunning wrote:
> I am looking for a way for scripts to write data to HDFS without having to
> install anything.
> 
> The /data and /listPaths URL's on the nameserver are ideal for reading
> files, but I can't find anything comparable to write files.
> 
> Am I missing something?
> 
> If not, I think I will file a JIRA and make /data accept POST events.  If
> anybody has an opinion about that, please let me know (or put a comment on
> the JIRA, if and when).
> 
> 


Re: Low complexity way to write a file to hdfs?

Posted by Michael Bieniosek <mi...@powerset.com>.
There is a webdav servlet in HADOOP-496 that works for read/write/delete.  I've only tested it with the Mac OSX Finder client though.

-Michael

On 1/30/08 2:18 PM, "Ted Dunning" <td...@veoh.com> wrote:



Might work.


On 1/30/08 2:00 PM, "Vadim Zaliva" <kr...@gmail.com> wrote:

> On Jan 30, 2008, at 13:57, Jason Venner wrote:
>
> I think somebody mentioned WebDAV support. That would work for me,
> so I can PUT files.
>
> Vadim
>
>> I suppose we could add a feature to the hdfs web ui to allow
>> uploading files.
>>
>> Ted Dunning wrote:
>>> I am looking for a way for scripts to write data to HDFS without
>>> having to
>>> install anything.
>>>
>>> The /data and /listPaths URL's on the nameserver are ideal for
>>> reading
>>> files, but I can't find anything comparable to write files.
>>>
>>> Am I missing something?
>>>
>>> If not, I think I will file a JIRA and make /data accept POST
>>> events.  If
>>> anybody has an opinion about that, please let me know (or put a
>>> comment on
>>> the JIRA, if and when).
>>>
>>>
>>>
>>
>> --
>> Jason Venner
>> Attributor - Publish with Confidence <http://www.attributor.com/>
>> Attributor is hiring Hadoop Wranglers, contact if interested
>




Re: Low complexity way to write a file to hdfs?

Posted by Ted Dunning <td...@veoh.com>.
Might work.


On 1/30/08 2:00 PM, "Vadim Zaliva" <kr...@gmail.com> wrote:

> On Jan 30, 2008, at 13:57, Jason Venner wrote:
> 
> I think somebody mentioned WebDAV support. That would work for me,
> so I can PUT files.
> 
> Vadim
> 
>> I suppose we could add a feature to the hdfs web ui to allow
>> uploading files.
>> 
>> Ted Dunning wrote:
>>> I am looking for a way for scripts to write data to HDFS without
>>> having to
>>> install anything.
>>> 
>>> The /data and /listPaths URL's on the nameserver are ideal for
>>> reading
>>> files, but I can't find anything comparable to write files.
>>> 
>>> Am I missing something?
>>> 
>>> If not, I think I will file a JIRA and make /data accept POST
>>> events.  If
>>> anybody has an opinion about that, please let me know (or put a
>>> comment on
>>> the JIRA, if and when).
>>> 
>>> 
>>> 
>> 
>> -- 
>> Jason Venner
>> Attributor - Publish with Confidence <http://www.attributor.com/>
>> Attributor is hiring Hadoop Wranglers, contact if interested
> 


Re: Low complexity way to write a file to hdfs?

Posted by Vadim Zaliva <kr...@gmail.com>.
On Jan 30, 2008, at 13:57, Jason Venner wrote:

I think somebody mentioned WebDAV support. That would work for me,
so I can PUT files.

Vadim

> I suppose we could add a feature to the hdfs web ui to allow  
> uploading files.
>
> Ted Dunning wrote:
>> I am looking for a way for scripts to write data to HDFS without  
>> having to
>> install anything.
>>
>> The /data and /listPaths URL's on the nameserver are ideal for  
>> reading
>> files, but I can't find anything comparable to write files.
>>
>> Am I missing something?
>>
>> If not, I think I will file a JIRA and make /data accept POST  
>> events.  If
>> anybody has an opinion about that, please let me know (or put a  
>> comment on
>> the JIRA, if and when).
>>
>>
>>
>
> -- 
> Jason Venner
> Attributor - Publish with Confidence <http://www.attributor.com/>
> Attributor is hiring Hadoop Wranglers, contact if interested


Re: Low complexity way to write a file to hdfs?

Posted by Ted Dunning <td...@veoh.com>.
That's what I am about to do.


On 1/30/08 1:57 PM, "Jason Venner" <ja...@attributor.com> wrote:

> I suppose we could add a feature to the hdfs web ui to allow uploading
> files.
> 
> Ted Dunning wrote:
>> I am looking for a way for scripts to write data to HDFS without having to
>> install anything.
>> 
>> The /data and /listPaths URL's on the nameserver are ideal for reading
>> files, but I can't find anything comparable to write files.
>> 
>> Am I missing something?
>> 
>> If not, I think I will file a JIRA and make /data accept POST events.  If
>> anybody has an opinion about that, please let me know (or put a comment on
>> the JIRA, if and when).
>> 
>> 
>>   


Re: Low complexity way to write a file to hdfs?

Posted by Jason Venner <ja...@attributor.com>.
I suppose we could add a feature to the hdfs web ui to allow uploading 
files.

Ted Dunning wrote:
> I am looking for a way for scripts to write data to HDFS without having to
> install anything.
>
> The /data and /listPaths URL's on the nameserver are ideal for reading
> files, but I can't find anything comparable to write files.
>
> Am I missing something?
>
> If not, I think I will file a JIRA and make /data accept POST events.  If
> anybody has an opinion about that, please let me know (or put a comment on
> the JIRA, if and when).
>
>
>   

-- 
Jason Venner
Attributor - Publish with Confidence <http://www.attributor.com/>
Attributor is hiring Hadoop Wranglers, contact if interested

Re: Low complexity way to write a file to hdfs?

Posted by Ted Dunning <td...@veoh.com>.
http://<namenode-and-port>/data/<file-path-in-hadoop>

I also have code written to allow posting to the same URL for file creation,
but haven't had time to get it to actually work (the posting to the URL
doesn't call doPost for some reason).

If somebody else has time to track down the (probably obvious to anybody but
me) issue, I would be happy to file a Jira and post a patch against 15.1 and
trunk for their reference.
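For reference, the read side Ted describes needs nothing beyond a stock HTTP client; the names below are hypothetical:

```shell
# Hypothetical namenode address and HDFS path.
NAMENODE="namenode.example.com:50070"
SRC="/user/ted/logs/part-00000"

# The /data servlet streams the file body back over plain HTTP:
READ_URL="http://${NAMENODE}/data${SRC}"
echo "$READ_URL"
# e.g.:  wget -q -O part-00000 "$READ_URL"
```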


On 2/6/08 9:52 AM, "C G" <pa...@yahoo.com> wrote:

> Ted:
>    
>   I am curious about how you read files without installing anything.  Can you
> share your wisdom?
>    
>   Thanks,
>   C G
> 
> Ted Dunning <td...@veoh.com> wrote:
>   
> I am looking for a way for scripts to write data to HDFS without having to
> install anything.
> 
> The /data and /listPaths URL's on the nameserver are ideal for reading
> files, but I can't find anything comparable to write files.
> 
> Am I missing something?
> 
> If not, I think I will file a JIRA and make /data accept POST events. If
> anybody has an opinion about that, please let me know (or put a comment on
> the JIRA, if and when).
> 
> 
> 
> 


Re: Low complexity way to write a file to hdfs?

Posted by C G <pa...@yahoo.com>.
Ted:
   
  I am curious about how you read files without installing anything.  Can you share your wisdom?
   
  Thanks,
  C G

Ted Dunning <td...@veoh.com> wrote:
  
I am looking for a way for scripts to write data to HDFS without having to
install anything.

The /data and /listPaths URL's on the nameserver are ideal for reading
files, but I can't find anything comparable to write files.

Am I missing something?

If not, I think I will file a JIRA and make /data accept POST events. If
anybody has an opinion about that, please let me know (or put a comment on
the JIRA, if and when).




       

Low complexity way to write a file to hdfs?

Posted by Ted Dunning <td...@veoh.com>.
I am looking for a way for scripts to write data to HDFS without having to
install anything.

The /data and /listPaths URL's on the nameserver are ideal for reading
files, but I can't find anything comparable to write files.

Am I missing something?

If not, I think I will file a JIRA and make /data accept POST events.  If
anybody has an opinion about that, please let me know (or put a comment on
the JIRA, if and when).



Re: Task was killed due to running over 600 sec

Posted by ChaoChun Liang <cc...@gmail.com>.

lohit.vijayarenu wrote:
> 
> You could try setting the value of mapred.task.timeout to a higher value. 
> Thanks,
> Lohit
> 
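As a sketch of Lohit's suggestion (the value below is illustrative; mapred.task.timeout is in milliseconds, 600000 -- ten minutes -- being the default):

```xml
<!-- hadoop-site.xml fragment (or set the same property on the JobConf):
     raise the task timeout to 30 minutes. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>
```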

Could I set different timeout values for the mapper and reducer
separately?
In my case, the execution time for the mapper is shorter than the reducer.

Thanks.
ChaoChun

-- 
View this message in context: http://www.nabble.com/Task-was-killed-due-to-running-over-600-sec-tp15148129p15153682.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.