Posted to issues@hive.apache.org by "Dapeng Sun (JIRA)" <ji...@apache.org> on 2019/03/20 11:57:00 UTC
[jira] [Assigned] (HIVE-21483) HoS would fail when scratch dir is using remote HDFS
[ https://issues.apache.org/jira/browse/HIVE-21483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dapeng Sun reassigned HIVE-21483:
---------------------------------
> HoS would fail when scratch dir is using remote HDFS
> ----------------------------------------------------
>
> Key: HIVE-21483
> URL: https://issues.apache.org/jira/browse/HIVE-21483
> Project: Hive
> Issue Type: Bug
> Reporter: Dapeng Sun
> Assignee: Dapeng Sun
> Priority: Major
>
> HoS would fail when the scratch dir is on a remote HDFS, because the FileSystem is resolved from the default configuration instead of from the remote path:
>
>   public static URI uploadToHDFS(URI source, HiveConf conf) throws IOException {
>     Path localFile = new Path(source.getPath());
>     Path remoteFile = new Path(SessionState.get().getSparkSession().getHDFSSessionDir(),
>         getFileName(source));
> -   FileSystem fileSystem = FileSystem.get(conf);
> +   FileSystem fileSystem = remoteFile.getFileSystem(conf);
>     // Overwrite if the remote file already exists. Whether the file can be added
>     // on executor is up to spark, i.e. spark.files.overwrite
>     fileSystem.copyFromLocalFile(false, true, localFile, remoteFile);
>     Path fullPath = fileSystem.getFileStatus(remoteFile).getPath();
>     return fullPath.toUri();
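For context, the diff above swaps FileSystem.get(conf), which always resolves against fs.defaultFS, for remoteFile.getFileSystem(conf), which resolves against the scheme and authority of the path itself. The following is a minimal, self-contained sketch of that resolution rule; it is not Hive or Hadoop code, and the class and method names are illustrative only:

```java
import java.net.URI;

// Simplified illustration of the resolution rule the patch relies on:
// FileSystem.get(conf) always picks the default filesystem, while
// path.getFileSystem(conf) picks the filesystem named by the path's own
// scheme/authority, falling back to the default only for unqualified paths.
public class ScratchDirResolution {

    // Hypothetical helper mirroring the per-path resolution behavior.
    static URI resolveFileSystem(URI defaultFs, URI path) {
        if (path.getScheme() != null) {
            // The path carries its own scheme/authority (e.g. a remote HDFS),
            // so the copy must go through that filesystem, not the default one.
            return URI.create(path.getScheme() + "://" + path.getAuthority());
        }
        return defaultFs; // Unqualified paths fall back to fs.defaultFS.
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://local-cluster:8020");
        URI remoteScratch = URI.create("hdfs://remote-cluster:8020/tmp/hive/scratch");

        // Resolving from the default config would target local-cluster, the
        // wrong cluster for a remote scratch dir; per-path resolution yields:
        System.out.println(resolveFileSystem(defaultFs, remoteScratch));
        // -> hdfs://remote-cluster:8020
    }
}
```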
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)