Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2017/08/12 17:28:00 UTC

[jira] [Commented] (HADOOP-14766) Add an object store high performance dfs put command

    [ https://issues.apache.org/jira/browse/HADOOP-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16124648#comment-16124648 ] 

Steve Loughran commented on HADOOP-14766:
-----------------------------------------

If you are trying to reduce the risk of throttling, you may even want to look at file size before choosing which files to upload, and try to mix big files with little ones. You'd want to sort them and start the largest few off first so they don't become a bottleneck, then go for the rest.

Without store throttling and bandwidth limitations, the optimal schedule would seem to be simply to queue the largest files for upload first and let them sort themselves out, finishing off with all the small files afterwards. But those small files take up lots of HTTP requests (3x HEAD, 1 PUT, and 1 DELETE with depth(file) entries), and, with SSE-KMS, one createKey call each. Mixing them in with the larger uploads, as well as mixing paths, would appear to be less prone to throttling.
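
To make that ordering concrete, here is a rough, purely illustrative sketch (nothing that exists in the codebase): sort by size, front-load the largest few so they don't become the long pole, then shuffle the remainder so big and small files, and different paths, end up mixed. The {{HEAD_START}} constant and the plain {{java.io.File}} listing are assumptions for the example only.

{code:java}
// Sketch only: one possible ordering of the upload queue.
// HEAD_START (how many of the biggest files to front-load) is an arbitrary choice.
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class UploadOrdering {
  static final int HEAD_START = 4;   // assumed: number of big files to start first

  static List<File> orderForUpload(List<File> files) {
    List<File> sorted = new ArrayList<>(files);
    // largest first, so the big uploads start early and don't become the long pole
    sorted.sort(Comparator.comparingLong(File::length).reversed());
    int head = Math.min(HEAD_START, sorted.size());
    List<File> ordered = new ArrayList<>(sorted.subList(0, head));
    // shuffle the rest so big and small files (and different paths) are interleaved,
    // spreading the per-file HEAD/PUT/DELETE traffic rather than bunching it up
    List<File> rest = new ArrayList<>(sorted.subList(head, sorted.size()));
    Collections.shuffle(rest);
    ordered.addAll(rest);
    return ordered;
  }
}
{code}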

> Add an object store high performance dfs put command
> ----------------------------------------------------
>
>                 Key: HADOOP-14766
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14766
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs, fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>
> {{hdfs put local s3a://path}} is suboptimal: it treewalks down the source tree and then, sequentially, copies up each file by reading its contents (opened as a stream) into a buffer, writing that to the dest file, and repeating.
> For S3A that hurts because
> * it's doing the upload inefficiently: the file could be uploaded just by handing the pathname to the AWS transfer manager
> * it is doing it sequentially, when a parallelised upload would work better.
> * as the ordering of the files to upload is a recursive treewalk, it doesn't spread the upload across multiple shards. 
> Better:
> * build the list of files to upload
> * upload in parallel, picking entries from the list at random and spreading across a pool of uploaders
> * upload straight from the local file (copyFromLocalFile())
> * track IO load (files created/second) to estimate risk of throttling.
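To illustrate the proposal in the list above (this is a sketch, not the actual patch), the whole flow can be expressed with the public {{FileSystem}} API: build the file list, shuffle it, then run {{copyFromLocalFile()}} across a fixed thread pool. The bucket name, thread count and flattened destination layout are assumptions; a real implementation would preserve relative paths and also track files created/second to estimate throttling risk.

{code:java}
// Illustrative sketch of the steps above, not the actual patch. Takes the local
// source directory as args[0]; UPLOAD_THREADS and the s3a://bucket/dest target
// are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

import java.net.URI;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPutSketch {
  static final int UPLOAD_THREADS = 16;   // assumed pool size

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem local = FileSystem.getLocal(conf);
    FileSystem dest = FileSystem.get(new URI("s3a://bucket/dest"), conf);

    // 1. build the list of files to upload
    List<LocatedFileStatus> files = new ArrayList<>();
    RemoteIterator<LocatedFileStatus> it = local.listFiles(new Path(args[0]), true);
    while (it.hasNext()) {
      files.add(it.next());
    }

    // 2. randomise the ordering so uploads spread across shards/prefixes
    Collections.shuffle(files);

    // 3. upload in parallel, straight from the local file
    ExecutorService pool = Executors.newFixedThreadPool(UPLOAD_THREADS);
    List<Future<?>> pending = new ArrayList<>();
    for (LocatedFileStatus f : files) {
      pending.add(pool.submit(() -> {
        // flattens the tree for brevity; a real tool would keep relative paths
        Path target = new Path("s3a://bucket/dest/" + f.getPath().getName());
        dest.copyFromLocalFile(false, true, f.getPath(), target);
        return null;
      }));
    }
    for (Future<?> p : pending) {
      p.get();   // surface any upload failure
    }
    pool.shutdown();
  }
}
{code}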



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org