Posted to issues@hive.apache.org by "Rui Li (JIRA)" <ji...@apache.org> on 2016/12/16 01:57:58 UTC

[jira] [Comment Edited] (HIVE-13278) Avoid FileNotFoundException when map/reduce.xml is not available

    [ https://issues.apache.org/jira/browse/HIVE-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753143#comment-15753143 ] 

Rui Li edited comment on HIVE-13278 at 12/16/16 1:57 AM:
---------------------------------------------------------

Hi [~csun], sorry, maybe I was being misleading. What I have in mind is something like this:
{code}
  // In Utilities::setMapWork
  public static Path setMapWork(Configuration conf, MapWork w, Path hiveScratchDir, boolean useCache) {
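    // Record in the job conf that a MapWork plan has been serialized, so readers know map.xml exists.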
    conf.setBoolean(HAS_MAP_WORK, true);
    return setBaseWork(conf, w, hiveScratchDir, MAP_PLAN_NAME, useCache);
  }

  // In Utilities::getMapWork
  public static MapWork getMapWork(Configuration conf) {
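    // No MapWork was set for this job, so skip the file lookup that would otherwise log FileNotFoundException.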
    if (!conf.getBoolean(HAS_MAP_WORK, false)) {
      return null;
    }
    ....
{code}
Similarly for set/get ReduceWork, as in the sketch below. So if the set method hasn't been called, getting the work will just return null. Do you think it makes sense?
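The reduce side would be the mirror image, roughly like this (just a sketch; HAS_REDUCE_WORK would be a new flag alongside HAS_MAP_WORK, with the existing REDUCE_PLAN_NAME and setBaseWork/getBaseWork underneath):
{code}
  // In Utilities::setReduceWork
  public static Path setReduceWork(Configuration conf, ReduceWork w, Path hiveScratchDir, boolean useCache) {
    conf.setBoolean(HAS_REDUCE_WORK, true);
    return setBaseWork(conf, w, hiveScratchDir, REDUCE_PLAN_NAME, useCache);
  }

  // In Utilities::getReduceWork
  public static ReduceWork getReduceWork(Configuration conf) {
    if (!conf.getBoolean(HAS_REDUCE_WORK, false)) {
      // e.g. a map-only job never wrote reduce.xml, so don't go looking for it
      return null;
    }
    ....
{code}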


was (Author: lirui):
Hi [~csun], sorry maybe I was being misleading. What I have in mind is something like this:
{code}
  // In Utilities::setMapWork
  public static Path setMapWork(Configuration conf, MapWork w, Path hiveScratchDir, boolean useCache) {
    conf.setBoolean(HAS_REDUCE_WORK, true);
    return setBaseWork(conf, w, hiveScratchDir, MAP_PLAN_NAME, useCache);
  }

  // In Utilities::getMapWork
  public static MapWork getMapWork(Configuration conf) {
    if (!conf.getBoolean(HAS_MAP_WORK, false)) {
      return null;
    }
    ....
{code}
Similar for set/get ReduceWork. So if we haven't called set work, we'll just get null when getting the work. Do you think it makes sense?

> Avoid FileNotFoundException when map/reduce.xml is not available
> ----------------------------------------------------------------
>
>                 Key: HIVE-13278
>                 URL: https://issues.apache.org/jira/browse/HIVE-13278
>             Project: Hive
>          Issue Type: Bug
>         Environment: Hive on Spark engine
> Found based on:
> Apache Hive 2.0.0
> Apache Spark 1.6.0
>            Reporter: Xin Hao
>            Assignee: Chao Sun
>            Priority: Minor
>         Attachments: HIVE-13278.1.patch, HIVE-13278.2.patch, HIVE-13278.3.patch, HIVE-13278.4.patch
>
>
> Many redundant 'File not found' messages appear in the container logs during query execution with Hive on Spark.
> They don't prevent the query from running successfully, so the issue is marked as Minor for now.
> Error message example:
> {noformat}
> 16/03/14 01:45:06 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/hadoop/2d378538-f5d3-493c-9276-c62dd6634fb4/hive_2016-03-14_01-44-16_835_623058724409492515-6/-mr-10010/0a6d0cae-1eb3-448c-883b-590b3b198a73/reduce.xml
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1932)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1873)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1853)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1825)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:565)
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {noformat}


