Posted to issues@ozone.apache.org by "Ashish Kumar (Jira)" <ji...@apache.org> on 2023/12/05 08:45:00 UTC

[jira] [Comment Edited] (HDDS-9762) [FSO] Hadoop dfs s3a protocol does not work with FSO buckets

    [ https://issues.apache.org/jira/browse/HDDS-9762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793116#comment-17793116 ] 

Ashish Kumar edited comment on HDDS-9762 at 12/5/23 8:44 AM:
-------------------------------------------------------------

[~mladjangadzic] Could you please also attach the s3g and OM audit and process logs, so we can see the request and the response from Ozone?
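
To make that concrete, here is a minimal sketch of how those logs could be pulled from the compose cluster (the container names and audit-log file names are assumptions and may differ in your setup):
{code:bash}
# Process logs from the s3g and OM containers ("s3g"/"om" are assumed names;
# check "docker ps" for the actual ones)
docker logs s3g > s3g-process.log 2>&1
docker logs om  > om-process.log  2>&1

# Audit logs live inside the containers; the exact path depends on the
# audit log4j2 configuration, so locate them first
docker exec om  sh -c 'find / -name "om-audit*.log"  2>/dev/null'
docker exec s3g sh -c 'find / -name "s3g-audit*.log" 2>/dev/null'

# The request id from the first stack trace can be used to correlate entries
grep 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823 s3g-process.log om-process.log
{code}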

Another point: when we use hdfs dfs over s3a directly, even multiple times, it works normally with the command below.
{code:java}
hdfs dfs -Dfs.s3a.access.key=1 -Dfs.s3a.secret.key=1 \
  -Dfs.s3a.endpoint=http://localhost:9878 -Dfs.s3a.path.style.access=true \
  -put a.txt s3a://fso/s3-1GB/key2
{code}
So does the issue occur only when using freon?
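
One way to narrow that down without freon in the loop (a sketch; the AWS CLI invocation and path-style configuration are assumptions, reusing the endpoint and dummy credentials from the report) is to issue the same kind of bulk DeleteObjects request that S3A sends when it cleans up its fake directory markers:
{code:bash}
# S3A's deleteUnnecessaryFakeDirectories() deletes parent "directory marker"
# keys such as "s3-1GB/" after a write; this sends an equivalent request to s3g
aws configure set default.s3.addressing_style path
AWS_ACCESS_KEY_ID=1 AWS_SECRET_ACCESS_KEY=1 \
aws s3api delete-objects \
  --endpoint-url http://localhost:9878 \
  --bucket fso \
  --delete 'Objects=[{Key=s3-1GB/}],Quiet=false'
{code}
If that standalone delete of the {{s3-1GB/}} marker key also comes back with "Directory is not empty", the behaviour is on the s3g/OM side rather than something freon-specific.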



> [FSO] Hadoop dfs s3a protocol does not work with FSO buckets
> ------------------------------------------------------------
>
>                 Key: HDDS-9762
>                 URL: https://issues.apache.org/jira/browse/HDDS-9762
>             Project: Apache Ozone
>          Issue Type: Bug
>    Affects Versions: 1.4.0
>            Reporter: Mladjan Gadzic
>            Priority: Blocker
>         Attachments: 2023-12-02.png
>
>
> Trying to exercise freon dfsg over s3a results in an exception.
> Command:
>  
> {code:java}
> OZONE_CLASSPATH=/opt/hadoop/share/ozone/lib/aws-java-sdk-bundle-1.11.1026.jar:/opt/hadoop/share/ozone/lib/hadoop-aws-3.3.2.jar:$(ozone classpath ozone-common) ozone freon \
>   -Dfs.s3a.endpoint=http://host.docker.internal:9878 \
>   -Dfs.s3a.etag.checksum.enabled=false \
>   -Dfs.s3a.path.style.access=true \
>   -Dfs.s3a.change.detection.source=versionid \
>   -Dfs.s3a.change.detection.mode=client \
>   -Dfs.s3a.change.detection.version.required=false \
>   dfsg -s102400 -n10000 -t10 --path=s3a://fso/ --prefix="s3-1GB"
> {code}
>  
> Exception (first run of the command):
> {code:java}
> 2023-11-22 18:34:19,180 [s3a-transfer-fso-unbounded-pool4-t1] DEBUG impl.BulkDeleteRetryHandler: Retrying on error during bulk delete
> :org.apache.hadoop.fs.s3a.AWSS3IOException: delete: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 Extended Request ID: DwT29rWRhtYS; Proxy: null), S3 Extended Request ID: DwT29rWRhtYS:null: InternalError: s3-1GB/: Directory is not empty. Key:s3-1GB
> : One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 Extended Request ID: DwT29rWRhtYS; Proxy: null)
>         at org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteSupport.translateDeleteException(MultiObjectDeleteSupport.java:117)
>         at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:312)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:426)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:2775)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:3022)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3121)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3078)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:4498)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$finishedWrite$31(S3AFileSystem.java:4403)
>         at org.apache.hadoop.fs.s3a.impl.CallableSupplier.get(CallableSupplier.java:87)
>         at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 Extended Request ID: DwT29rWRhtYS; Proxy: null), S3 Extended Request ID: DwT29rWRhtYS
>         at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:2345)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$16(S3AFileSystem.java:2785)
>         at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>         at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
>         ... 11 more{code}
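>
> The failing call in this trace is not issued by freon itself: {{S3AFileSystem.deleteUnnecessaryFakeDirectories()}} sends a bulk DeleteObjects for the parent "directory marker" keys (here {{s3-1GB/}}) after each write, and on the FSO bucket that delete is rejected with "Directory is not empty". A sketch to compare bucket layouts (the second bucket name, the layout flag, and the debug-logging env var are assumptions):
> {code:bash}
> # Create a second bucket with the OBJECT_STORE layout next to the FSO one
> ozone sh bucket create --layout OBJECT_STORE /s3v/obs
>
> # Repeat the same single put against both buckets with S3A client debug
> # logging on; the marker cleanup should only be rejected on the FSO bucket
> HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs \
>   -Dfs.s3a.access.key=1 -Dfs.s3a.secret.key=1 \
>   -Dfs.s3a.endpoint=http://localhost:9878 -Dfs.s3a.path.style.access=true \
>   -put a.txt s3a://obs/s3-1GB/key1
> {code}
>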
> On a consecutive run (second run of the command), there is a different exception:
> {code:java}
> 2023-11-22 18:39:36,543 [pool-2-thread-9] ERROR freon.BaseFreonGenerator: Error on executing task 7
> :org.apache.hadoop.fs.FileAlreadyExistsException: s3a://fso/s3-1GB/7 is a directory
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.innerCreateFile(S3AFileSystem.java:1690)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$create$6(S3AFileSystem.java:1646)
>  at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>  at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>  at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:1645)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1233)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1210)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1091)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1078)
>  at org.apache.hadoop.ozone.freon.HadoopFsGenerator.lambda$createFile$0(HadoopFsGenerator.java:112)
>  at com.codahale.metrics.Timer.time(Timer.java:101)
>  at org.apache.hadoop.ozone.freon.HadoopFsGenerator.createFile(HadoopFsGenerator.java:111)
>  at org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:220)
>  at org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:200)
>  at org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:174)
>  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:829) {code}
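>
> The second-run failure is the follow-on effect: {{FileSystem.create()}} on {{s3a://fso/s3-1GB/7}} throws {{FileAlreadyExistsException}}, apparently because the first run left that path behind as a directory in the FSO bucket. A sketch to confirm what the first run left under the prefix (same s3a properties as in the report; the ofs URI and OM host name are assumptions):
> {code:bash}
> # Listing through s3a shows what freon's second run sees
> hdfs dfs -Dfs.s3a.access.key=1 -Dfs.s3a.secret.key=1 \
>   -Dfs.s3a.endpoint=http://localhost:9878 -Dfs.s3a.path.style.access=true \
>   -ls s3a://fso/s3-1GB/
>
> # Listing through ofs shows how OM actually recorded the keys in the FSO bucket
> ozone fs -ls ofs://om/s3v/fso/s3-1GB/
> {code}
>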
> Ozone SHA f34d347af1f7b9c1eb82cf27fbe8231c85493628.
> The s3a client libraries are from Hadoop 3.3.2.
> It is reproducible using an unsecured Ozone Docker cluster with 3 DataNodes.
> Steps to reproduce the issue (a consolidated sketch follows this list):
>  # bring up an unsecured Ozone Docker cluster
>  # exec into the OM container
>  # add the following environment variables:
> AWS_ACCESS_KEY_ID=random
> AWS_SECRET_KEY=random
> OZONE_ROOT_LOGGER=debug,console
>  # create a bucket named "fso" with the FSO layout
>  # run the command above (first run)
>  # check the output
>  # run the command above (second run)
>  # check the output
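>
> For convenience, the steps above roughly translate into the following (a sketch against the docker-compose cluster shipped with the Ozone dist; the compose path, container name, and bucket-creation flags are assumptions and may need adjusting):
> {code:bash}
> # 1. bring up an unsecured compose cluster with 3 datanodes
> cd compose/ozone && docker-compose up -d --scale datanode=3
>
> # 2./3. exec into the OM container and export the environment
> docker exec -it ozone_om_1 bash
> export AWS_ACCESS_KEY_ID=random
> export AWS_SECRET_KEY=random
> export OZONE_ROOT_LOGGER=debug,console
>
> # 4. create the FSO bucket that s3a://fso/ resolves to (s3 buckets live under /s3v)
> ozone sh bucket create --layout FILE_SYSTEM_OPTIMIZED /s3v/fso
>
> # 5.-8. run the freon command from the description twice and compare the output
> {code}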


