Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2017/11/02 21:00:00 UTC

[jira] [Commented] (HADOOP-15011) Getting file not found exception while using distcp with s3a

    [ https://issues.apache.org/jira/browse/HADOOP-15011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236593#comment-16236593 ] 

Steve Loughran commented on HADOOP-15011:
-----------------------------------------

This is a consistency issue, but not one you need S3Guard for. It looks more like HADOOP-13145; the stack is exactly the same as HADOOP-11487. Closing as a duplicate of those.

This was fixed a while back. What version of CDH are you using?

* Hadoop 2.8 and the recent HDP and CDH releases have the higher-performance (block/"fast") upload; a minimal example follows this list
* for configuration, see [https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_cloud-data-access/content/using-distcp.html]
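
As a minimal sketch of turning the block upload on for a single distcp run (bucket and paths are placeholders; buffering and sizing knobs are covered further down):

{code}
# Minimal sketch: enable the S3A fast/block upload for one distcp run,
# buffering blocks on local disk. Bucket and paths are placeholders.
hadoop distcp \
  -Dfs.s3a.fast.upload=true \
  -Dfs.s3a.fast.upload.buffer=disk \
  /src/ s3a://your-bucket/dest/
{code}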

Make sure you aren't trying to use -atomic or any of the -p (preserve) options.
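
For illustration (bucket and paths are placeholders), a copy that skips the atomic-commit and preserve machinery is just:

{code}
# Illustrative only: no -atomic (which stages to a temporary directory and
# then renames, expensive on an object store) and no -p flags (the preserve
# step does an extra getFileStatus on the freshly written object; that is
# the call raising the FileNotFoundException in the first stack trace
# quoted below).
hadoop distcp \
  -strategy dynamic \
  -numListstatusThreads 30 \
  /src/ s3a://your-bucket/dest/
{code}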

bq. I'm not seeing the throughput of 3gbps 

I'd be surprised if S3 gave you that. In any case, -bandwidth is a maximum per mapper, not a guarantee of actual bandwidth.

Are you trying to write to S3 from a physical cluster, or from inside EC2 itself?

250 GB in 1h30 works out at about 46 MB/s, roughly 370 Mbit/s. For a long-haul link, it's conceivable that's all the bandwidth there is. From inside EC2, it's pretty bad.

S3 throttles writes to specific buckets and paths quite aggressively. You may find you get better performance by cranking back on how aggressive each node is: reduce the bandwidth per mapper and the number of mappers. Try cutting them in half and see what happens; then do it again.
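
A sketch of that experiment, purely illustrative (bucket, paths and the specific numbers are placeholders; run it against a representative subset rather than the full dataset):

{code}
# Illustrative only: step the mapper count down and time each run, keeping
# whichever configuration gives the best wall-clock rate.
# -bandwidth is a per-map cap in MB/s; -m is the number of maps.
time hadoop distcp -m 220 -bandwidth 100 /src/subset s3a://your-bucket/test1
time hadoop distcp -m 110 -bandwidth 100 /src/subset s3a://your-bucket/test2
time hadoop distcp -m 55  -bandwidth 100 /src/subset s3a://your-bucket/test3
{code}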

bq. With fast upload option, I'm writing the files to S3 using threads. Could you please help me in providing some tuning options for this.

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_cloud-data-access/content/s3a-fast-upload.html
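
The command quoted below sets fs.s3a.threads.max=500, fs.s3a.fast.upload.active.blocks=50 and a 250 MB multipart size with the "array" buffer, which lets each output stream hold a very large amount of data in the JVM heap. A more conservative sketch (the values are illustrative starting points, not recommendations):

{code}
# Illustrative starting point: buffer blocks on local disk rather than in
# the heap, and keep the thread pool and queued-block counts modest. With
# array/bytebuffer buffering, each stream can hold up to
# active.blocks * multipart.size of data in memory.
hadoop distcp \
  -Dfs.s3a.fast.upload=true \
  -Dfs.s3a.fast.upload.buffer=disk \
  -Dfs.s3a.fast.upload.active.blocks=8 \
  -Dfs.s3a.multipart.size=67108864 \
  -Dfs.s3a.threads.max=64 \
  -Dfs.s3a.max.total.tasks=32 \
  /src/ s3a://your-bucket/dest/
{code}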

If you want to benchmark your upload speed, download and run [https://github.com/steveloughran/cloudup] for a bulk upload of local data. That takes the cluster out of the loop, so the only network traffic is the upload itself; it prioritises large files first and shuffles the filenames to reduce throttling at the back end. Your per-node distcp bandwidth will not be better than what that achieves.
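
If building and running cloudup isn't convenient, a much cruder single-host baseline (not the same tool, just an upper-bound sanity check; bucket and local path are placeholders) is to time a plain upload with the stock client:

{code}
# Crude substitute for cloudup, illustrative only: time a single-process
# upload of some local data straight to the bucket. Per-node distcp
# throughput from the same host won't beat the rate this shows.
time hadoop fs -Dfs.s3a.fast.upload=true \
  -put /tmp/sample-data s3a://your-bucket/benchmark/
{code}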

> Getting file not found exception while using distcp with s3a
> ------------------------------------------------------------
>
>                 Key: HADOOP-15011
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15011
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>            Reporter: Logesh Rangan
>
> I'm using the distcp option to copy the huge files from Hadoop to S3. Sometimes I'm getting the below error,
> *Command:* (Copying 378 GB data)
> hadoop distcp -D HADOOP_OPTS=-Xmx12g -D HADOOP_CLIENT_OPTS='-Xmx12g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled' -D 'mapreduce.map.memory.mb=12288' -D 'mapreduce.map.java.opts=-Xmx10g' -D 'mapreduce.reduce.memory.mb=12288' -D 'mapreduce.reduce.java.opts=-Xmx10g' '-Dfs.s3a.proxy.host=edhmgrn-prod.cloud.capitalone.com' '-Dfs.s3a.proxy.port=8088' '-Dfs.s3a.access.key=XXXXXXX' '-Dfs.s3a.secret.key=XXXXXXX' '-Dfs.s3a.connection.timeout=180000' '-Dfs.s3a.attempts.maximum=5' '-Dfs.s3a.fast.upload=true' '-Dfs.s3a.fast.upload.buffer=array' '-Dfs.s3a.fast.upload.active.blocks=50' '-Dfs.s3a.multipart.size=262144000' '-Dfs.s3a.threads.max=500' '-Dfs.s3a.threads.keepalivetime=600' '-Dfs.s3a.server-side-encryption-algorithm=AES256' -bandwidth 3072 -strategy dynamic -m 220 -numListstatusThreads 30 /src/ s3a://bucket/dest
> 17/11/01 12:23:27 INFO mapreduce.Job: Task Id : attempt_1497120915913_2792335_m_000165_0, Status : FAILED
> Error: java.io.FileNotFoundException: No such file or directory: s3a://bucketname/filename
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1132)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:78)
>         at org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:197)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:256)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1912)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 17/11/01 12:28:32 INFO mapreduce.Job: Task Id : attempt_1497120915913_2792335_m_000010_0, Status : FAILED
> Error: java.io.IOException: File copy failed: hdfs://nameservice1/filena --> s3a://cof-prod-lake-card/src/seam/acct_scores/acctmdlscore_card_cobna_anon_vldtd/instnc_id=20161023000000/000004_0_copy_6
>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252)
>         at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1912)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://nameservice1/filename to s3a://bucketname/filename
>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>         at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
>         ... 10 more
> Caused by: com.cloudera.com.amazonaws.AmazonClientException: Failed to parse XML document with handler class com.cloudera.com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
>         at com.cloudera.com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:164)
>         at com.cloudera.com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListBucketObjectsResponse(XmlResponsesSaxParser.java:299)
>         at com.cloudera.com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:77)
>         at com.cloudera.com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:74)
>         at com.cloudera.com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
>         at com.cloudera.com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
>         at com.cloudera.com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:1072)
>         at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:746)
>         at com.cloudera.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>         at com.cloudera.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>         at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
>         at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
>         at com.cloudera.com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:653)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1096)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:1279)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:1268)
>         at org.apache.hadoop.fs.s3a.S3AFastOutputStream.close(S3AFastOutputStream.java:257)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
>         at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:261)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:184)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:124)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>         ... 11 more
> Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 2; XML document structures must start and end within the same entity.
>         at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
>         at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
>         at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
>         at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
>         at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
>         at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
>         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.endEntity(Unknown Source)
>         at org.apache.xerces.impl.XMLDocumentScannerImpl.endEntity(Unknown Source)
>         at org.apache.xerces.impl.XMLEntityManager.endEntity(Unknown Source)
>         at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
>         at org.apache.xerces.impl.XMLEntityScanner.skipChar(Unknown Source)
>         at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
>         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
>         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
>         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
>         at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
>         at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
>         at com.cloudera.com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:151)
>         ... 35 more
> And also please help me in choosing the number of mappers and what should I do to copy the data faster to S3.


