Posted to mapreduce-dev@hadoop.apache.org by "Yuanbo Liu (JIRA)" <ji...@apache.org> on 2019/06/17 03:32:00 UTC
[jira] [Created] (MAPREDUCE-7220) Mapreduce jobhistory summary error if job name is very long
Yuanbo Liu created MAPREDUCE-7220:
-------------------------------------
Summary: Mapreduce jobhistory summary error if job name is very long
Key: MAPREDUCE-7220
URL: https://issues.apache.org/jira/browse/MAPREDUCE-7220
Project: Hadoop Map/Reduce
Issue Type: Improvement
Reporter: Yuanbo Liu
In JobHistoryEventHandler.java, we can see that MapReduce uses writeUTF to write the summary done file to HDFS:
{quote}summaryFileOut = doneDirFS.create(qualifiedSummaryDoneFile, true);
summaryFileOut.writeUTF(mi.getJobSummary().getJobSummaryString());
summaryFileOut.close();
{quote}
writeUTF uses the first two bytes to record the string length, so the encoded summary string cannot exceed 65535 bytes. But in the case of a Hive job, the SQL string is part of the job name, and it is quite common for the SQL to be longer than 65535 bytes. When that happens, the summary done file cannot be written successfully, and the Hive client sometimes concludes that the MapReduce job ended in a failed state.
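The limit can be reproduced outside of Hadoop with a plain DataOutputStream. This is a minimal sketch (the 70000-character string standing in for a Hive job name embedding a long SQL query is an invented example, not taken from the JIRA):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.UTFDataFormatException;

public class WriteUtfLimitDemo {
    public static void main(String[] args) throws Exception {
        // Simulate a job summary string longer than 65535 bytes,
        // e.g. a Hive job name that embeds a very large SQL statement.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 70000; i++) {
            sb.append('a');
        }
        String jobSummary = sb.toString();

        DataOutputStream out =
            new DataOutputStream(new ByteArrayOutputStream());
        try {
            // writeUTF prefixes the payload with a two-byte length field,
            // so it rejects any string whose modified-UTF-8 encoding
            // exceeds 65535 bytes.
            out.writeUTF(jobSummary);
            System.out.println("written");
        } catch (UTFDataFormatException e) {
            System.out.println("UTFDataFormatException: summary too long");
        }
    }
}
```

One possible direction, not specified in this JIRA, would be to write the raw UTF-8 bytes (e.g. via write with String.getBytes(StandardCharsets.UTF_8)) instead of writeUTF, which has no two-byte length prefix and hence no 65535-byte cap.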
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)