Posted to dev@drill.apache.org by khushalkj jain <cr...@gmail.com> on 2021/04/13 12:37:54 UTC

Issue with connecting Apache Drill to S3

Hi All,
Need your help in fixing the issue below.
I am running Drill locally on my Mac in embedded mode.

*Query:*

> use s3;

*Log with error info:*

org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR:
> AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error
> Code: 400 Bad Request; Request ID: G9TTDZNV531H5RS9; S3 Extended Request
> ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=)
> Please, refer to logs for more information.
> [Error Id: ab06e603-feea-40cc-933f-85904f731ed8 on 192.168.1.7:31010]
>   (org.apache.drill.exec.work.foreman.ForemanException) Unexpected
> exception during fragment initialization: Failed to create DrillFileSystem
> for proxy user: doesBucketExist on c360-archival:
> com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service:
> Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID:
> G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=),
> S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=:400
> Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code:
> 400 Bad Request; Request ID: G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=)
>     org.apache.drill.exec.work.foreman.Foreman.run():301
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1130
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():630
>     java.lang.Thread.run():832
>   Caused By (org.apache.drill.common.exceptions.DrillRuntimeException)
> Failed to create DrillFileSystem for proxy user: doesBucketExist on
> c360-archival: com.amazonaws.services.s3.model.AmazonS3Exception: Bad
> Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request;
> Request ID: G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=),
> S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=:400
> Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code:
> 400 Bad Request; Request ID: G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=)
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():220
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():205
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():84
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():72
>     org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():232
>     org.apache.calcite.jdbc.DynamicRootSchema.loadSchemaFactory():87
>     org.apache.calcite.jdbc.DynamicRootSchema.getImplicitSubSchema():72
>     org.apache.calcite.jdbc.CalciteSchema.getSubSchema():265
>     org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getSubSchema():684
>     org.apache.drill.exec.planner.sql.SchemaUtilites.searchSchemaTree():98
>     org.apache.drill.exec.planner.sql.SchemaUtilites.findSchema():51
>     org.apache.drill.exec.rpc.user.UserSession.setDefaultSchemaPath():225
>     org.apache.drill.exec.planner.sql.handlers.UseSchemaHandler.getPlan():43
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():283
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan():163
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan():128
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():93
>     org.apache.drill.exec.work.foreman.Foreman.runSQL():593
>     org.apache.drill.exec.work.foreman.Foreman.run():274
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1130
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():630
>     java.lang.Thread.run():832
>   Caused By (org.apache.hadoop.fs.s3a.AWSBadRequestException)
> doesBucketExist on c360-archival:
> com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service:
> Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID:
> G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=),
> S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=:400
> Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code:
> 400 Bad Request; Request ID: G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=)
>     org.apache.hadoop.fs.s3a.S3AUtils.translateException():224
>     org.apache.hadoop.fs.s3a.Invoker.once():111
>     org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3():265
>     org.apache.hadoop.fs.s3a.Invoker.retryUntranslated():322
>     org.apache.hadoop.fs.s3a.Invoker.retry():261
>     org.apache.hadoop.fs.s3a.Invoker.retry():236
>     org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists():380
>     org.apache.hadoop.fs.s3a.S3AFileSystem.initialize():314
>     org.apache.hadoop.fs.FileSystem.createFileSystem():3303
>     org.apache.hadoop.fs.FileSystem.get():476
>     org.apache.hadoop.fs.FileSystem.get():227
>     org.apache.drill.exec.store.dfs.DrillFileSystem.<init>():94
>     org.apache.drill.exec.util.ImpersonationUtil.lambda$createFileSystem$0():215
>     java.security.AccessController.doPrivileged():691
>     javax.security.auth.Subject.doAs():425
>     org.apache.hadoop.security.UserGroupInformation.doAs():1730
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():213
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():205
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():84
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():72
>     org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():232
>     org.apache.calcite.jdbc.DynamicRootSchema.loadSchemaFactory():87
>     org.apache.calcite.jdbc.DynamicRootSchema.getImplicitSubSchema():72
>     org.apache.calcite.jdbc.CalciteSchema.getSubSchema():265
>     org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getSubSchema():684
>     org.apache.drill.exec.planner.sql.SchemaUtilites.searchSchemaTree():98
>     org.apache.drill.exec.planner.sql.SchemaUtilites.findSchema():51
>     org.apache.drill.exec.rpc.user.UserSession.setDefaultSchemaPath():225
>     org.apache.drill.exec.planner.sql.handlers.UseSchemaHandler.getPlan():43
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():283
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan():163
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan():128
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():93
>     org.apache.drill.exec.work.foreman.Foreman.runSQL():593
>     org.apache.drill.exec.work.foreman.Foreman.run():274
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1130
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():630
>     java.lang.Thread.run():832
>   Caused By (com.amazonaws.services.s3.model.AmazonS3Exception) Bad
> Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request;
> Request ID: G9TTDZNV531H5RS9; S3 Extended Request ID:
> ihj+EsqMcF3qlP2EYHBwuarC5mOqiQ/PvVfgmu722WY8pL5VgRU69gbl4U1B3vpNqYYjcbiejGs=)
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse():1640
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest():1304
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper():1058
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute():743
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer():717
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute():699
>     com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500():667
>     com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute():649
>     com.amazonaws.http.AmazonHttpClient.execute():513
>     com.amazonaws.services.s3.AmazonS3Client.invoke():4368
>     com.amazonaws.services.s3.AmazonS3Client.invoke():4315
>     com.amazonaws.services.s3.AmazonS3Client.headBucket():1344
>     com.amazonaws.services.s3.AmazonS3Client.doesBucketExist():1284
>     org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1():381
>     org.apache.hadoop.fs.s3a.Invoker.once():109
>     org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3():265
>     org.apache.hadoop.fs.s3a.Invoker.retryUntranslated():322
>     org.apache.hadoop.fs.s3a.Invoker.retry():261
>     org.apache.hadoop.fs.s3a.Invoker.retry():236
>     org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists():380
>     org.apache.hadoop.fs.s3a.S3AFileSystem.initialize():314
>     org.apache.hadoop.fs.FileSystem.createFileSystem():3303
>     org.apache.hadoop.fs.FileSystem.get():476
>     org.apache.hadoop.fs.FileSystem.get():227
>     org.apache.drill.exec.store.dfs.DrillFileSystem.<init>():94
>     org.apache.drill.exec.util.ImpersonationUtil.lambda$createFileSystem$0():215
>     java.security.AccessController.doPrivileged():691
>     javax.security.auth.Subject.doAs():425
>     org.apache.hadoop.security.UserGroupInformation.doAs():1730
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():213
>     org.apache.drill.exec.util.ImpersonationUtil.createFileSystem():205
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():84
>     org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():72
>     org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():232
>     org.apache.calcite.jdbc.DynamicRootSchema.loadSchemaFactory():87
>     org.apache.calcite.jdbc.DynamicRootSchema.getImplicitSubSchema():72
>     org.apache.calcite.jdbc.CalciteSchema.getSubSchema():265
>     org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getSubSchema():684
>     org.apache.drill.exec.planner.sql.SchemaUtilites.searchSchemaTree():98
>     org.apache.drill.exec.planner.sql.SchemaUtilites.findSchema():51
>     org.apache.drill.exec.rpc.user.UserSession.setDefaultSchemaPath():225
>     org.apache.drill.exec.planner.sql.handlers.UseSchemaHandler.getPlan():43
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():283
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan():163
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan():128
>     org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():93
>     org.apache.drill.exec.work.foreman.Foreman.runSQL():593
>     org.apache.drill.exec.work.foreman.Foreman.run():274
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1130
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():630
>     java.lang.Thread.run():832
>
> *I have configured my storage plugin for S3 via the UI as follows:*
> {
>   "type": "file",
>   "connection": "s3a://c360-archival/",
>   "config": {
>     "fs.s3a.secret.key": "*******",
>     "fs.s3a.access.key": "*************",
>     "fs.s3a.endpoint": "s3.us-west-2.amazonaws.com",
>     "fs.s3a.impl.disable.cache": "true"
>   },
>   "workspaces": {
>     "tmp": {
>       "location": "/tmp",
>       "writable": true,
>       "defaultInputFormat": null,
>       "allowAccessOutsideWorkspace": true
>     },
>     "root": {
>       "location": "/",
>       "writable": true,
>       "defaultInputFormat": null,
>       "allowAccessOutsideWorkspace": true
>     }
>   },
>   "formats": {
>     "parquet": {
>       "type": "parquet"
>     },
>     "avro": {
>       "type": "avro",
>       "extensions": [
>         "avro"
>       ]
>     },
>     "json": {
>       "type": "json",
>       "extensions": [
>         "json"
>       ]
>     },
>     "pcap": {
>       "type": "pcap",
>       "extensions": [
>         "pcap"
>       ]
>     },
>     "csvh": {
>       "type": "text",
>       "extensions": [
>         "csvh"
>       ],
>       "extractHeader": true
>     },
>     "sequencefile": {
>       "type": "sequencefile",
>       "extensions": [
>         "seq"
>       ]
>     },
>     "pcapng": {
>       "type": "pcapng",
>       "extensions": [
>         "pcapng"
>       ]
>     },
>     "psv": {
>       "type": "text",
>       "extensions": [
>         "tbl"
>       ],
>       "fieldDelimiter": "|"
>     },
>     "tsv": {
>       "type": "text",
>       "extensions": [
>         "tsv"
>       ],
>       "fieldDelimiter": "\t"
>     },
>     "csv": {
>       "type": "text",
>       "extensions": [
>         "csv"
>       ]
>     },
>     "spss": {
>       "type": "spss",
>       "extensions": [
>         "sav"
>       ]
>     },
>     "excel": {
>       "type": "excel",
>       "extensions": [
>         "xlsx"
>       ],
>       "lastRow": 1048576
>     },
>     "shp": {
>       "type": "shp",
>       "extensions": [
>         "shp"
>       ]
>     },
>     "hdf5": {
>       "type": "hdf5",
>       "extensions": [
>         "h5"
>       ],
>       "defaultPath": null
>     },
>     "syslog": {
>       "type": "syslog",
>       "extensions": [
>         "syslog"
>       ],
>       "maxErrors": 10
>     },
>     "ltsv": {
>       "type": "ltsv",
>       "extensions": [
>         "ltsv"
>       ]
>     }
>   },
>   "enabled": true
> }
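>
> *Stripped-down variant of the same plugin for isolating the failure* (a
> sketch; keys are placeholders). With S3A, a 400 Bad Request from the
> startup doesBucketExist check commonly points at an endpoint/region or
> request-signing mismatch, so it is worth confirming that c360-archival
> really is in us-west-2 and retrying both with and without the explicit
> fs.s3a.endpoint:
> {
>   "type": "file",
>   "connection": "s3a://c360-archival/",
>   "config": {
>     "fs.s3a.access.key": "YOUR_ACCESS_KEY",
>     "fs.s3a.secret.key": "YOUR_SECRET_KEY",
>     "fs.s3a.endpoint": "s3.us-west-2.amazonaws.com"
>   },
>   "workspaces": {
>     "root": {
>       "location": "/",
>       "writable": false,
>       "defaultInputFormat": null
>     }
>   },
>   "formats": {
>     "parquet": {
>       "type": "parquet"
>     }
>   },
>   "enabled": true
> }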

*core-site.xml*

>      <property>
>        <name>fs.s3a.aws.credentials.provider</name>
>        <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
>      </property>
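
For comparison, a fuller core-site.xml along the same lines (a sketch;
SimpleAWSCredentialsProvider resolves fs.s3a.access.key and
fs.s3a.secret.key from the Hadoop configuration, and the values below are
placeholders):

>      <property>
>        <name>fs.s3a.aws.credentials.provider</name>
>        <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
>      </property>
>      <property>
>        <name>fs.s3a.access.key</name>
>        <value>YOUR_ACCESS_KEY</value>
>      </property>
>      <property>
>        <name>fs.s3a.secret.key</name>
>        <value>YOUR_SECRET_KEY</value>
>      </property>
>      <property>
>        <name>fs.s3a.endpoint</name>
>        <value>s3.us-west-2.amazonaws.com</value>
>      </property>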

Re: Issue with connecting Apache Drill to S3

Posted by Ted Dunning <te...@gmail.com>.
What happens when you just run a query that references s3 data instead of
trying to do the "use s3" command?
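
For example, something along these lines directly against a file (a
sketch; the path and format are hypothetical, so point it at whatever
actually exists in the bucket):

    SELECT *
    FROM s3.`some/path/data.parquet`
    LIMIT 10;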


