Posted to user@drill.apache.org by Herman Tan <he...@redcubesg.com> on 2018/10/02 08:44:05 UTC

Re: ERROR in reading parquet data after create table

Hi Divya and everyone,

The problem has disappeared.
Drill was not restarted.
This appears to be intermittent.
Before I submitted the error report, I ran the script several times and it
failed all the time.
Today I ran it again and it succeeded.
I will restart and test again.

Regards,
Herman



On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <di...@gmail.com>
wrote:

> Hi Herman,
> Just to ensure that your parquet files are not corrupted, can you please
> query a single folder (like just 2011) or some of the files underneath it,
> instead of querying the whole data set at once?
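>
> For example, something along these lines (just a quick check, pointing at
> one of the year folders from your listing):
>
> select count(*)
> from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> ;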
>
> Thanks,
> Divya
>
> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
>
> > Hi Kunal,
> >
> > ----
> > That said, could you provide some details about the parquet data you've
> > created, like the schema, parquet version, and the tool used to generate it.
> > Usually, the schema (and meta) provides most of these details for any
> > parquet file.
> > ----
> >
> > 1. The schema is under dfs.tmp; the queries used to generate it are all
> > documented below.
> > 2. I don't know how to find the parquet version of the data file.
> > 3. The tool used to generate the parquet is Apache Drill.  The CTAS is
> > detailed below.
> >
> > Regards,
> > Herman
> > ____________
> >
> > *This is the Text data*
> >
> > These are the folders of the files
> > Total # of lines: about 50 million rows
> > ----------
> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > ;
> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> > ;
> > -----
> > sales_pos_detail
> >   \pos_details_20180825
> >     \2007
> >     \2008
> >     \2009
> >     \2010
> >     \2011
> >       \pos_details_0.csv
> >       \pos_details_1.csv
> >       \pos_details_2.csv
> >       \pos_details_3.csv
> >       \pos_details_4.csv
> >       \pos_details_5.csv
> >       \pos_details_6.csv
> >       \pos_details_7.csv
> >       \pos_details_8.csv
> >     \2012
> >     \2013
> >     \2014
> >     \2015
> >     \2016
> >     \2017
> >     \2018
> >     \others
> > -----
> >
> > *This is the view with the metadata defined:*
> >
> > create or replace view dfs.tmp.load_pos_sales_detail as
> > SELECT
> > -- dimension keys
> >  cast(dim_date_key as int) dim_date_key
> > ,cast(dim_site_key as int) dim_site_key
> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> > ,cast(dim_card_number_key as int) dim_card_number_key
> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> > ,cast(dim_product_key as int) dim_product_key
> > ,cast(dim_pos_employee_purchase_key as int) dim_pos_employee_purchase_key
> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> > ,cast(dim_campaign_key as int) dim_campaign_key
> > ,cast(dim_promo_key as int) dim_promo_key
> > ,cast(case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end as int) dim_site_lfl_key
> > -- derived from keys
> > ,dim_date_str
> > ,`year` as `trx_year`
> > -- Measures
> > ,Product_Sales_Qty
> > ,Product_Sales_Price
> > ,Product_Cost_Price
> > ,Product_Cost_Amt
> > ,Product_Sales_Gross_Amt
> > ,Product_Sales_Promo_Disc_Amt
> > ,Product_Sales_Add_Promo_Disc_Amt
> > ,Product_Sales_Total_Promo_Disc_Amt
> > ,Product_Sales_Retail_Promo_Amt
> > ,Product_Sales_Retail_Amt
> > ,Product_Sales_VAT_Amt
> > ,Product_Sales_Product_Margin_Amt
> > ,Product_Sales_Initial_Margin_Amt
> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > ;
> >
> >
> > *This is the CTAS that generates the parquet from the view above:*
> >
> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> > ;
> >
> > create table dfs.tmp.load_pos_sales_detail_tbl AS
> > SELECT
> > -- dimension keys
> >  dim_date_key
> > ,dim_site_key
> > ,dim_pos_header_key
> > ,dim_pos_cashier_key
> > ,dim_card_number_key
> > ,dim_hour_minute_key
> > ,dim_pos_clerk_key
> > ,dim_product_key
> > ,dim_pos_employee_purchase_key
> > ,dim_pos_terminal_key
> > ,dim_campaign_key
> > ,dim_promo_key
> > ,dim_site_lfl_key
> > -- derived from keys
> > ,dim_date_str
> > ,`trx_year`
> > -- Measures
> > ,Product_Sales_Qty Sales_Qty
> > ,Product_Sales_Price Sales_Price
> > ,Product_Cost_Price Cost_Price
> > ,Product_Cost_Amt Cost_Amt
> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> > ,Product_Sales_Retail_Amt Retail_Amt
> > ,Product_Sales_VAT_Amt VAT_Amt
> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> > from dfs.tmp.load_pos_sales_detail
> > ;
> >
> >
> > *This is the select query that generated the error:*
> >
> > select *
> > from dfs.tmp.load_pos_sales_detail_tbl
> > ;
> >
> > ----- ERROR ----------------------------
> >
> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> > parallelism 16.
> >
> >
> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org> wrote:
> >
> > > Hi Herman
> > >
> > > Assuming that you're doing analytics on your data, parquet format is
> > > the way to go.
> > >
> > > That said, could you provide some details about the parquet data you've
> > > created, like the schema, parquet version, and the tool used to generate it.
> > > Usually, the schema (and meta) provides most of these details for any
> > > parquet file.
> > >
> > > It'll be useful to know if there is a pattern in the failure, which
> > > might point to corruption occurring.
> > >
> > > Kunal
> > >
> > >
> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
> > > Hi Karthik,
> > >
> > > Thank you for pointing me to the mail archive in May 2018.
> > > That is exactly the same problem I am facing.
> > >
> > > I thought of using Drill as an ETL where I load the warehouse parquet
> > > tables from text source files.
> > > Then I query the parquet tables.
> > > It works on some parquet tables, but I am having problems with large
> > > ones that consist of several files (I think).
> > > Still investigating.
> > > Does anyone in the community have other experience?
> > > Should I work with all text files instead of parquet?
> > >
> > >
> > > Herman
> > >
> > >
> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan
> > > <kmanivannan@mapr.com> wrote:
> > >
> > > > Hi Herman,
> > > >
> > > > I am not sure what the exact problem here is, but can you check to see
> > > > if you are not hitting the problem described here:
> > > >
> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com%3E
> > > >
> > > > Thanks
> > > >
> > > > Karthik
> > > >
> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I encountered the following error.
> > > > > The steps I did are as follows:
> > > > > 1. Create a view to fix the data types of fields with casts
> > > > > 2. Create a table (parquet) using the view
> > > > > 3. Query select * from the table (querying a field also does not work)
> > > > >
> > > > > The error:
> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> > > > > parallelism 16.
> > > > >
> > > > > When I re-run this, the number of tasks will vary.
> > > > >
> > > > > What could be the problem?
> > > > >
> > > > > Regards,
> > > > > Herman Tan
> > > > >
> > > > > More info below:
> > > > >
> > > > > These are the folders of the files
> > > > > Total # of lines: 50 million
> > > > > ----------
> > > > > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > > > > ;
> > > > > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> > > > > ;
> > > > > -----
> > > > > sales_pos_detail
> > > > >   \pos_details_20180825
> > > > >     \2007
> > > > >     \2008
> > > > >     \2009
> > > > >     \2010
> > > > >     \2011
> > > > >       \pos_details_0.csv
> > > > >       \pos_details_1.csv
> > > > >       \pos_details_2.csv
> > > > >       \pos_details_3.csv
> > > > >       \pos_details_4.csv
> > > > >       \pos_details_5.csv
> > > > >       \pos_details_6.csv
> > > > >       \pos_details_7.csv
> > > > >       \pos_details_8.csv
> > > > >     \2012
> > > > >     \2013
> > > > >     \2014
> > > > >     \2015
> > > > >     \2016
> > > > >     \2017
> > > > >     \2018
> > > > >     \others
> > > > > -----
> > > > >
> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
> > > > > SELECT
> > > > > -- dimension keys
> > > > > cast(dim_date_key as int) dim_date_key
> > > > > ,cast(dim_site_key as int) dim_site_key
> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> > > > > ,cast(dim_product_key as int) dim_product_key
> > > > > ,cast(dim_pos_employee_purchase_key as int) dim_pos_employee_purchase_key
> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
> > > > > ,cast(dim_promo_key as int) dim_promo_key
> > > > > ,cast(case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end as int) dim_site_lfl_key
> > > > > -- derived from keys
> > > > > ,dim_date_str
> > > > > ,`year` as `trx_year`
> > > > > -- Measures
> > > > > ,Product_Sales_Qty
> > > > > ,Product_Sales_Price
> > > > > ,Product_Cost_Price
> > > > > ,Product_Cost_Amt
> > > > > ,Product_Sales_Gross_Amt
> > > > > ,Product_Sales_Promo_Disc_Amt
> > > > > ,Product_Sales_Add_Promo_Disc_Amt
> > > > > ,Product_Sales_Total_Promo_Disc_Amt
> > > > > ,Product_Sales_Retail_Promo_Amt
> > > > > ,Product_Sales_Retail_Amt
> > > > > ,Product_Sales_VAT_Amt
> > > > > ,Product_Sales_Product_Margin_Amt
> > > > > ,Product_Sales_Initial_Margin_Amt
> > > > > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > > > > ;
> > > > >
> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> > > > > ;
> > > > >
> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
> > > > > SELECT
> > > > > -- dimension keys
> > > > > dim_date_key
> > > > > ,dim_site_key
> > > > > ,dim_pos_header_key
> > > > > ,dim_pos_cashier_key
> > > > > ,dim_card_number_key
> > > > > ,dim_hour_minute_key
> > > > > ,dim_pos_clerk_key
> > > > > ,dim_product_key
> > > > > ,dim_pos_employee_purchase_key
> > > > > ,dim_pos_terminal_key
> > > > > ,dim_campaign_key
> > > > > ,dim_promo_key
> > > > > ,dim_site_lfl_key
> > > > > -- derived from keys
> > > > > ,dim_date_str
> > > > > ,`trx_year`
> > > > > -- Measures
> > > > > ,Product_Sales_Qty Sales_Qty
> > > > > ,Product_Sales_Price Sales_Price
> > > > > ,Product_Cost_Price Cost_Price
> > > > > ,Product_Cost_Amt Cost_Amt
> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> > > > > ,Product_Sales_Retail_Amt Retail_Amt
> > > > > ,Product_Sales_VAT_Amt VAT_Amt
> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> > > > > from dfs.tmp.load_pos_sales_detail
> > > > > ;
> > > > >
> > > > > select *
> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > > > > ;
> > > > >
> > > > > ----- ERROR ----------------------------
> > > > >
> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> > > > > parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
> > > > > parquet metadata' are complete. Total number of tasks 29, parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
> > > > > parquet metadata' are complete. Total number of tasks 29, parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
> > > > > parquet metadata' are complete. Total number of tasks 29, parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> > > > >
> > > > > ----------------------------------------
> > > > > From Drill log:
> > > > >
> > > > > 2018-09-20 08:58:12,035 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
> > > > > 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > > > >
> > > > > 2018-09-20 08:58:53,068 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms, but only
> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks
> > > > > 29, parallelism 16.
> > > > > java.util.concurrent.CancellationException: null
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > > > > 2018-09-20 08:58:53,080 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited for
> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete.
> > > > > Total number of tasks 29, parallelism 16. (null)
> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Waited
> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete.
> > > > > Total number of tasks 29, parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:253) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > > > > Caused by: java.util.concurrent.CancellationException: null
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > ... 33 common frames omitted
> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of rpc
> > > > > type 3 took longer than 500ms. Actual duration was 2042ms.
> > > > > 2018-09-20 09:02:10,608 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > > > >
> > > > > 2018-09-20 09:02:42,615 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms, but only
> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks
> > > > > 29, parallelism 16.
> > > > > java.util.concurrent.CancellationException: null
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > > > > 2018-09-20 09:02:42,625 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited for
> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete.
> > > > > Total number of tasks 29, parallelism 16. (null)
> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Waited
> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete.
> > > > > Total number of tasks 29, parallelism 16.
> > > > >
> > > > >
> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:253) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > > > > Caused by: java.util.concurrent.CancellationException: null
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) [drill-java-exec-1.14.0.jar:1.14.0]
> > > > > ... 33 common frames omitted
> > > > >
> > > > >
> > > > > ----------------------------------------
> > > > > ----------
> > > > >
> > > >
> > >
> >
>

Re: ERROR in reading parquet data after create table

Posted by Herman Tan <he...@redcubesg.com>.
Hi,

I am running into this problem again.
The solution is to define a view that is a "union all" of individual select
queries over single csv files.
Can I request that the hardcoded timeout values be parameterized, please?
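
Roughly, the workaround view looks like this (a minimal sketch only; the
view name is a placeholder, there is one select per csv file, and the real
view still applies the casts shown earlier in the thread):

create or replace view dfs.tmp.load_pos_sales_detail_union as
select * from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011\pos_details_0.csv`
union all
select * from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011\pos_details_1.csv`
-- union all ... one select per remaining csv file
;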

Herman

On Tue, Oct 2, 2018 at 7:29 PM Herman Tan <he...@redcubesg.com> wrote:

> Hi,
>
> I found the hard-coded parameter in the source code:
>
>
> https://github.com/apache/drill/blob/8edeb49873d1a1710cfe28e0b49364d07eb1aef4/exec/java-exec/src/main/java/org/apache/drill/exec/store/TimedCallable.java
>
> LINE 52  : private static long TIMEOUT_PER_RUNNABLE_IN_MSECS = 15000;
> LINE 210 : timeout = TIMEOUT_PER_RUNNABLE_IN_MSECS * ((tasks.size() - 1)/parallelism + 1);
>
> The parallelism param is also hardcoded:
>
> https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/metadata/Metadata.java
> LINE 343: 16
>
> In my case, task size is 29: 15000 * ((29-1)/16 + 1) = 15000 * 2 = 30000.
> 16 runnables execute round-robin on 29 tasks, and the run is given 30000 ms
> before it times out.
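>
> As a sanity check of that arithmetic (a throwaway query; note the integer
> division, just like the Java code):
>
> select 15000 * ((29 - 1) / 16 + 1) as computed_timeout_ms
> from (values(1));
> -- (29 - 1) / 16 = 1 with integers, so this returns 30000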
>
> This is the error message:
> Waited for 30000 ms, but only 11 tasks for 'Fetch parquet metadata' are
> complete. Total number of tasks 29, parallelism 16.
>
> TimedCallable.java:
> LINE 248: final String errMsg = String.format("Waited for %d ms, but only %d tasks for '%s' are complete." +
>           " Total number of tasks %d, parallelism %d.", timeout, futureMapper.count, activity, tasks.size(), parallelism);
>
> Shouldn't these be parameterized in "options" based on the infrastructure?
>
> Regards,
> Herman
>
>
>
>
> On Tue, Oct 2, 2018 at 6:47 PM Herman Tan <he...@redcubesg.com> wrote:
>
>> Hi,
>>
>> I have restarted drill and run the script again.
>>
>> select * from dfs.tmp.`load_pos_sales_detail_tbl`;
>> -- SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 11 tasks for
>> 'Fetch parquet metadata' are complete. Total number of tasks 29,
>> parallelism 16.
>>
>> The 29 tasks are related to the 29 parquet files in the folder.
>> To check whether any of the parquet files has an error, I ran the following
>> SQL on each parquet file in the folder.  ALL PASSED. (SQL below.)
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
>>
>> So it seems that for this table Drill could only get the metadata for 11
>> parquet files before it timed out.
>> The timeout is a calculation, and it varies with the size of the table.
>> I checked the source code but I could not find where the timeout of
>> "30000 ms" is calculated.
>> When I am lucky, Drill can resolve the metadata for all 29 files within
>> 30000 ms and the query passes.
>>
>> I plan to use Drill in production, but it bothers me that there is a
>> limit on the number of parquet files and that the timeout parameter cannot
>> be tuned.
>>
>> Does anyone have any ideas?
>>
>> Regards,
>> Herman
>> --------------  SQL BELOW -------------
>>
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_14_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_1_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_2_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_3_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_1.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_0.parquet`;
>> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_1.parquet`;
>>
>>
>> On Tue, Oct 2, 2018 at 4:44 PM Herman Tan <he...@redcubesg.com> wrote:
>>
>>> Hi Divya and everyone,
>>>
>>> The problem has disappeared.
>>> Drill was not restarted.
>>> This appears to be intermittent.
>>> Before I submitted the error report, I ran the script several times and
>>> it failed all the time.
>>> Today I ran it again and it succeeded.
>>> I will restart and test again.
>>>
>>> Regards,
>>> Herman
>>>
>>>
>>>
>>> On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <di...@gmail.com>
>>> wrote:
>>>
>>>> Hi Herman,
>>>> Just to ensure that your parquet files are not corrupted, can you please
>>>> query a single folder (like just 2011) or some of the files underneath it,
>>>> instead of querying the whole data set at once?
>>>>
>>>> Thanks,
>>>> Divya
>>>>
>>>> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
>>>>
>>>> > Hi Kunal,
>>>> >
>>>> > ----
>>>> > That said, could you provide some details about the parquet data you've
>>>> > created, like the schema, parquet version, and the tool used to generate it.
>>>> > Usually, the schema (and meta) provides most of these details for any
>>>> > parquet file.
>>>> > ----
>>>> >
>>>> > 1. The schema is under dfs.tmp; the queries used to generate it are all
>>>> > documented below.
>>>> > 2. I don't know how to find the parquet version of the data file.
>>>> > 3. The tool used to generate the parquet is Apache Drill.  The CTAS is
>>>> > detailed below.
>>>> >
>>>> > Regards,
>>>> > Herman
>>>> > ____________
>>>> >
>>>> > *This is the Text data*
>>>> >
>>>> > These are the folders of the files
>>>> > Total # of lines: about 50 million rows
>>>> > ----------
>>>> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>>> > ;
>>>> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
>>>> > ;
>>>> > -----
>>>> > sales_pos_detail
>>>> >   \pos_details_20180825
>>>> >     \2007
>>>> >     \2008
>>>> >     \2009
>>>> >     \2010
>>>> >     \2011
>>>> >       \pos_details_0.csv
>>>> >       \pos_details_1.csv
>>>> >       \pos_details_2.csv
>>>> >       \pos_details_3.csv
>>>> >       \pos_details_4.csv
>>>> >       \pos_details_5.csv
>>>> >       \pos_details_6.csv
>>>> >       \pos_details_7.csv
>>>> >       \pos_details_8.csv
>>>> >     \2012
>>>> >     \2013
>>>> >     \2014
>>>> >     \2015
>>>> >     \2016
>>>> >     \2017
>>>> >     \2018
>>>> >     \others
>>>> > -----
>>>> >
>>>> > *This is the view with the metadata defined:*
>>>> >
>>>> > create or replace view dfs.tmp.load_pos_sales_detail as
>>>> > SELECT
>>>> > -- dimension keys
>>>> >  cast(dim_date_key as int) dim_date_key
>>>> > ,cast(dim_site_key as int) dim_site_key
>>>> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>>>> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>>>> > ,cast(dim_card_number_key as int) dim_card_number_key
>>>> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>>>> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>>>> > ,cast(dim_product_key as int) dim_product_key
>>>> > ,cast(dim_pos_employee_purchase_key as int) dim_pos_employee_purchase_key
>>>> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>>>> > ,cast(dim_campaign_key as int) dim_campaign_key
>>>> > ,cast(dim_promo_key as int) dim_promo_key
>>>> > ,cast(case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end as int) dim_site_lfl_key
>>>> > -- derived from keys
>>>> > ,dim_date_str
>>>> > ,`year` as `trx_year`
>>>> > -- Measures
>>>> > ,Product_Sales_Qty
>>>> > ,Product_Sales_Price
>>>> > ,Product_Cost_Price
>>>> > ,Product_Cost_Amt
>>>> > ,Product_Sales_Gross_Amt
>>>> > ,Product_Sales_Promo_Disc_Amt
>>>> > ,Product_Sales_Add_Promo_Disc_Amt
>>>> > ,Product_Sales_Total_Promo_Disc_Amt
>>>> > ,Product_Sales_Retail_Promo_Amt
>>>> > ,Product_Sales_Retail_Amt
>>>> > ,Product_Sales_VAT_Amt
>>>> > ,Product_Sales_Product_Margin_Amt
>>>> > ,Product_Sales_Initial_Margin_Amt
>>>> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>>> > ;
>>>> >
>>>> >
>>>> > *This is the CTAS that generates the parquet from the view above:*
>>>> >
>>>> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>>>> > ;
>>>> >
>>>> > create table dfs.tmp.load_pos_sales_detail_tbl AS
>>>> > SELECT
>>>> > -- dimension keys
>>>> >  dim_date_key
>>>> > ,dim_site_key
>>>> > ,dim_pos_header_key
>>>> > ,dim_pos_cashier_key
>>>> > ,dim_card_number_key
>>>> > ,dim_hour_minute_key
>>>> > ,dim_pos_clerk_key
>>>> > ,dim_product_key
>>>> > ,dim_pos_employee_purchase_key
>>>> > ,dim_pos_terminal_key
>>>> > ,dim_campaign_key
>>>> > ,dim_promo_key
>>>> > ,dim_site_lfl_key
>>>> > -- derived from keys
>>>> > ,dim_date_str
>>>> > ,`trx_year`
>>>> > -- Measures
>>>> > ,Product_Sales_Qty Sales_Qty
>>>> > ,Product_Sales_Price Sales_Price
>>>> > ,Product_Cost_Price Cost_Price
>>>> > ,Product_Cost_Amt Cost_Amt
>>>> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>>>> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>>>> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>>>> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>>>> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>>>> > ,Product_Sales_Retail_Amt Retail_Amt
>>>> > ,Product_Sales_VAT_Amt VAT_Amt
>>>> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>>>> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>>>> > from dfs.tmp.load_pos_sales_detail
>>>> > ;
>>>> >
>>>> >
>>>> > *This is the select query that generated the error:*
>>>> >
>>>> > select *
>>>> > from dfs.tmp.load_pos_sales_detail_tbl
>>>> > ;
>>>> >
>>>> > ----- ERROR ----------------------------
>>>> >
>>>> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
>>>> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>>> > parallelism 16.
>>>> >
>>>> >
>>>> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org>
>>>> wrote:
>>>> >
>>>> > > Hi Herman
>>>> > >
>>>> > > Assuming that you're doing analytics on your data, parquet format is
>>>> > > the way to go.
>>>> > >
>>>> > > That said, could you provide some details about the parquet data you've
>>>> > > created, like the schema, parquet version, and the tool used to generate it.
>>>> > > Usually, the schema (and meta) provides most of these details for any
>>>> > > parquet file.
>>>> > >
>>>> > > It'll be useful to know if there is a pattern in the failure, which
>>>> > > might point to corruption occurring.
>>>> > >
>>>> > > Kunal
>>>> > >
>>>> > >
>>>> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
>>>> > > Hi Karthik,
>>>> > >
>>>> > > Thank you for pointing me to the mail archive in May 2018.
>>>> > > That is exactly the same problem I am facing.
>>>> > >
>>>> > > I thought of using Drill as an ETL where I load the warehouse parquet
>>>> > > tables from text source files.
>>>> > > Then I query the parquet tables.
>>>> > > It works on some parquet tables, but I am having problems with large
>>>> > > ones that consist of several files (I think).
>>>> > > Still investigating.
>>>> > > Does anyone in the community have other experience?
>>>> > > Should I work with all text files instead of parquet?
>>>> > >
>>>> > >
>>>> > > Herman
>>>> > >
>>>> > >
>>>> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan
>>>> > > <kmanivannan@mapr.com> wrote:
>>>> > >
>>>> > > > Hi Herman,
>>>> > > >
>>>> > > > I am not sure what the exact problem here is, but can you check to
>>>> > > > see if you are not hitting the problem described here:
>>>> > > >
>>>> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com%3E
>>>> > > >
>>>> > > > Thanks
>>>> > > >
>>>> > > > Karthik
>>>> > > >
>>>> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
>>>> > > >
>>>> > > > > Hi,
>>>> > > > >
>>>> > > > > I encountered the following error.
>>>> > > > > The steps I did are as follows:
>>>> > > > > 1. Create a view to fix the data types of fields with casts
>>>> > > > > 2. Create a table (parquet) using the view
>>>> > > > > 3. Query select * from the table (querying a field also does not work)
>>>> > > > >
>>>> > > > > The error:
>>>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks
>>>> > > > > for 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>>> > > > > parallelism 16.
>>>> > > > >
>>>> > > > > When I re-run this, the number of tasks will vary.
>>>> > > > >
>>>> > > > > What could be the problem?
>>>> > > > >
>>>> > > > > Regards,
>>>> > > > > Herman Tan
>>>> > > > >
>>>> > > > > More info below:
>>>> > > > >
>>>> > > > > This is the folders of the files
>>>> > > > > Total # of lines, 50 million
>>>> > > > > ----------
>>>> > > > > show files from
>>>> > > > >
>>>> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>>> > > > > ;
>>>> > > > > show files from
>>>> > > > >
>>>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
>>>> > > > > ;
>>>> > > > > -----
>>>> > > > > sales_pos_detail
>>>> > > > > \pos_details_20180825
>>>> > > > > \2007
>>>> > > > > \2008
>>>> > > > > \2009
>>>> > > > > \2010
>>>> > > > > \2011
>>>> > > > > \pos_details_0.csv
>>>> > > > > \pos_details_1.csv
>>>> > > > > \pos_details_2.csv
>>>> > > > > \pos_details_3.csv
>>>> > > > > \pos_details_4.csv
>>>> > > > > \pos_details_5.csv
>>>> > > > > \pos_details_6.csv
>>>> > > > > \pos_details_7.csv
>>>> > > > > \pos_details_8.csv
>>>> > > > > \2012
>>>> > > > > \2013
>>>> > > > > \2014
>>>> > > > > \2015
>>>> > > > > \2016
>>>> > > > > \2017
>>>> > > > > \2018
>>>> > > > > \others
>>>> > > > > -----
>>>> > > > >
>>>> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
>>>> > > > > SELECT
>>>> > > > > -- dimension keys
>>>> > > > > cast(dim_date_key as int) dim_date_key
>>>> > > > > ,cast(dim_site_key as int) dim_site_key
>>>> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>>>> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>>>> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
>>>> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>>>> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>>>> > > > > ,cast(dim_product_key as int) dim_product_key
>>>> > > > > ,cast(dim_pos_employee_purchase_key as int)
>>>> > > > dim_pos_employee_purchase_key
>>>> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>>>> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
>>>> > > > > ,cast(dim_promo_key as int) dim_promo_key
>>>> > > > > ,cast( case when dim_site_lfl_key = '' then 0 else
>>>> dim_site_lfl_key
>>>> > end
>>>> > > > as
>>>> > > > > int) dim_site_lfl_key
>>>> > > > > -- derived from keys
>>>> > > > > ,dim_date_str
>>>> > > > > ,`year` as `trx_year`
>>>> > > > > -- Measures
>>>> > > > > ,Product_Sales_Qty
>>>> > > > > ,Product_Sales_Price
>>>> > > > > ,Product_Cost_Price
>>>> > > > > ,Product_Cost_Amt
>>>> > > > > ,Product_Sales_Gross_Amt
>>>> > > > > ,Product_Sales_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Add_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Total_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Retail_Promo_Amt
>>>> > > > > ,Product_Sales_Retail_Amt
>>>> > > > > ,Product_Sales_VAT_Amt
>>>> > > > > ,Product_Sales_Product_Margin_Amt
>>>> > > > > ,Product_Sales_Initial_Margin_Amt
>>>> > > > > from
>>>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>>> > > > > ;
>>>> > > > >
>>>> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>>>> > > > > ;
>>>> > > > >
>>>> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
>>>> > > > > SELECT
>>>> > > > > -- dimension keys
>>>> > > > > dim_date_key
>>>> > > > > ,dim_site_key
>>>> > > > > ,dim_pos_header_key
>>>> > > > > ,dim_pos_cashier_key
>>>> > > > > ,dim_card_number_key
>>>> > > > > ,dim_hour_minute_key
>>>> > > > > ,dim_pos_clerk_key
>>>> > > > > ,dim_product_key
>>>> > > > > ,dim_pos_employee_purchase_key
>>>> > > > > ,dim_pos_terminal_key
>>>> > > > > ,dim_campaign_key
>>>> > > > > ,dim_promo_key
>>>> > > > > ,dim_site_lfl_key
>>>> > > > > -- derived from keys
>>>> > > > > ,dim_date_str
>>>> > > > > ,`trx_year`
>>>> > > > > -- Measures
>>>> > > > > ,Product_Sales_Qty Sales_Qty
>>>> > > > > ,Product_Sales_Price Sales_Price
>>>> > > > > ,Product_Cost_Price Cost_Price
>>>> > > > > ,Product_Cost_Amt Cost_Amt
>>>> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>>>> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>>>> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>>>> > > > > ,Product_Sales_Retail_Amt Retail_Amt
>>>> > > > > ,Product_Sales_VAT_Amt VAT_Amt
>>>> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>>>> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>>>> > > > > from dfs.tmp.load_pos_sales_detail
>>>> > > > > ;
>>>> > > > >
>>>> > > > > select *
>>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>>> > > > > ;
>>>> > > > >
>>>> > > > > ----- ERROR ----------------------------
>>>> > > > >
>>>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
>>>> tasks for
>>>> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>>> > > > > parallelism 16.
>>>> > > > >
>>>> > > > >
>>>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>>>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>>>> > > > >
>>>> > > > > ----------------------------------------
>>>> > > > > From Drill log:
>>>> > > > >
>>>> > > > > 2018-09-20 08:58:12,035
>>>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query
>>>> id
>>>> > > > > 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
>>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>>> > > > >
>>>> > > > > 2018-09-20 08:58:53,068
>>>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000
>>>> ms, but
>>>> > > > only
>>>> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total
>>>> number of
>>>> > > tasks
>>>> > > > > 29, parallelism 16.
>>>> > > > > java.util.concurrent.CancellationException: null
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>>>> > > > apply(TimedCallable.java:86)
>>>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>>>> > > > apply(TimedCallable.java:57)
>>>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.common.collections.Collectors.lambda$
>>>> > > > toList$2(Collectors.java:97)
>>>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>>>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
>>>> ~[na:1.8.0_172]
>>>> > > > > at
>>>> > > > > org.apache.drill.common.collections.Collectors.toList(
>>>> > > > Collectors.java:97)
>>>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>>>> > > > > at org.apache.drill.exec.store.TimedCallable.run(
>>>> > > TimedCallable.java:214)
>>>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>>>> > > > getParquetFileMetadata_v3(Metadata.java:340)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>>>> > > > getParquetTableMetadata(Metadata.java:324)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>>>> > > > getParquetTableMetadata(Metadata.java:305)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>>>> > > > getParquetTableMetadata(Metadata.java:124)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>>>> > > > initInternal(ParquetGroupScan.java:254)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
>>>> > > > AbstractParquetGroupScan.java:380)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>>>> > > > <init>(ParquetGroupScan.java:132)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>>>> > > > <init>(ParquetGroupScan.java:102)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>>>> > > > ParquetFormatPlugin.java:180)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>>>> > > > ParquetFormatPlugin.java:70)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
>>>> > > > FileSystemPlugin.java:136)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>>>> > > > AbstractStoragePlugin.java:116)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>>>> > > > AbstractStoragePlugin.java:111)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.logical.DrillTable.
>>>> > > > getGroupScan(DrillTable.java:99)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(
>>>> > > > DrillScanRel.java:89)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(
>>>> > > > DrillScanRel.java:69)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(
>>>> > > > DrillScanRel.java:62)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
>>>> > > > DrillScanRule.java:38)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
>>>> > > > onMatch(VolcanoRuleCall.java:212)
>>>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
>>>> > > > findBestExp(VolcanoPlanner.java:652)
>>>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
>>>> > > > Programs.java:368)
>>>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.handlers.
>>>> > > DefaultSqlHandler.transform(
>>>> > > > DefaultSqlHandler.java:429)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.handlers.
>>>> > > DefaultSqlHandler.transform(
>>>> > > > DefaultSqlHandler.java:369)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>>>> > > > convertToRawDrel(DefaultSqlHandler.java:255)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>>>> > > > convertToDrel(DefaultSqlHandler.java:318)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > >
>>>> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
>>>> > > > DefaultSqlHandler.java:180)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
>>>> > > > getQueryPlan(DrillSqlWorker.java:145)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
>>>> > > > DrillSqlWorker.java:83)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at org.apache.drill.exec.work
>>>> > .foreman.Foreman.runSQL(Foreman.java:567)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at org.apache.drill.exec.work
>>>> .foreman.Foreman.run(Foreman.java:266)
>>>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>>>> > > > > at
>>>> > > > >
>>>> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>> > > > ThreadPoolExecutor.java:1149)
>>>> > > > > [na:1.8.0_172]
>>>> > > > > at
>>>> > > > >
>>>> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>> > > > ThreadPoolExecutor.java:624)
>>>> > > > > [na:1.8.0_172]
>>>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>>>> > > > > 2018-09-20 08:58:53,080
>>>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>>> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred:
>>>> > Waited
>>>> > > > for
>>>> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>>> > complete.
>>>> > > > > Total number of tasks 29, parallelism 16. (null)
>>>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE
>>>> ERROR:
>>>> > > Waited
>>>> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>>> > > > complete.
>>>> > > > > Total number of tasks 29, parallelism 16.
>>>> > > > >
>>>> > > > >
>>>> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
>>>> > > > > [stack trace elided; identical to the one above]
>>>> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
>>>> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of
>>>> rpc
>>>> > > > type 3
>>>> > > > > took longer than 500ms. Actual duration was 2042ms.
>>>> > > > > 2018-09-20 09:02:10,608
>>>> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query
>>>> id
>>>> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
>>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>>> > > > >
>>>> > > > > 2018-09-20 09:02:42,615
>>>> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000
>>>> ms, but
>>>> > > > only
>>>> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total
>>>> number of
>>>> > > tasks
>>>> > > > > 29, parallelism 16.
>>>> > > > > [stack trace elided; identical to the one above]
>>>> > > > > 2018-09-20 09:02:42,625
>>>> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>>> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred:
>>>> > Waited
>>>> > > > for
>>>> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>>> > complete.
>>>> > > > > Total number of tasks 29, parallelism 16. (null)
>>>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE
>>>> ERROR:
>>>> > > Waited
>>>> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>>> > > > complete.
>>>> > > > > Total number of tasks 29, parallelism 16.
>>>> > > > >
>>>> > > > >
>>>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
>>>> > > > > [stack trace elided; identical to the one above]
>>>> > > > >
>>>> > > > >
>>>> > > > > ----------------------------------------
>>>> > > > > ----------
>>>> > > > >
>>>> > > >
>>>> > >
>>>> >
>>>>
>>>

Re: ERROR is reading parquet data after create table

Posted by Herman Tan <he...@redcubesg.com>.
Hi,

I found the hard-coded parameters in the source code:

https://github.com/apache/drill/blob/8edeb49873d1a1710cfe28e0b49364d07eb1aef4/exec/java-exec/src/main/java/org/apache/drill/exec/store/TimedCallable.java

LINE 52  : private static long TIMEOUT_PER_RUNNABLE_IN_MSECS = 15000;
LINE 210 : timeout = TIMEOUT_PER_RUNNABLE_IN_MSECS * ((tasks.size() - 1)/parallelism + 1);

The parallelism parameter is also hard-coded:
https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/metadata/Metadata.java
LINE 343: parallelism is fixed at 16

In my case the task size is 29, so with integer division:
15000 * ((29-1)/16 + 1) = 15000 * (1 + 1) = 30000.
16 runnables execute round-robin over the 29 tasks, and the batch is given
30000 ms before it times out.
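
To make the arithmetic concrete, here is a standalone sketch (my own
illustration, not Drill code; the names mirror TimedCallable.java):

public class TimeoutCalc {
    public static void main(String[] args) {
        long TIMEOUT_PER_RUNNABLE_IN_MSECS = 15000; // the LINE 52 constant
        int tasks = 29;                             // one task per parquet file
        int parallelism = 16;                       // hard-coded pool size
        // Integer division: (29 - 1) / 16 == 1, so the whole batch gets
        // 15000 * (1 + 1) = 30000 ms before the remaining futures are cancelled.
        long timeout = TIMEOUT_PER_RUNNABLE_IN_MSECS * ((tasks - 1) / parallelism + 1);
        System.out.println(timeout);                // prints 30000
    }
}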

This is the error message:
Waited for 30000 ms, but only 11 tasks for 'Fetch parquet metadata' are
complete. Total number of tasks 29, parallelism 16.

TimedCallable.java:
LINE 248: final String errMsg = String.format("Waited for %d ms, but only %d tasks for '%s' are complete." +
    " Total number of tasks %d, parallelism %d.", timeout, futureMapper.count, activity, tasks.size(), parallelism);

Shouldn't these be exposed as tunable "options", since the right values depend on the infrastructure?
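
For example, LINE 52 could read something like this (a hypothetical sketch
only, not actual Drill code; the system property name is invented):

// Hypothetical: keep 15000 ms as the default, but allow an override such as
// -Ddrill.store.timed_callable.timeout_ms=60000 on the JVM command line.
private static final long TIMEOUT_PER_RUNNABLE_IN_MSECS =
    Long.getLong("drill.store.timed_callable.timeout_ms", 15000L);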

Regards,
Herman




On Tue, Oct 2, 2018 at 6:47 PM Herman Tan <he...@redcubesg.com> wrote:

> Hi,
>
> I have restarted drill and run the script again.
>
> select * from dfs.tmp.`load_pos_sales_detail_tbl`;
> -- SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 11 tasks for
> 'Fetch parquet metadata' are complete. Total number of tasks 29,
> parallelism 16.
>
> The 29 tasks correspond to the 29 parquet files in the folder.
> To check whether any of the parquet files was corrupted, I ran the following
> SQL on each parquet file in the folder. ALL PASSED. (SQL below.)
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
>
> So it seems that for this table Drill can only fetch the metadata for 11
> parquet files before it times out.
> The timeout is calculated, and it varies with the size of the table.
> I checked the source code but could not find where the "30000 ms" timeout is
> calculated.
> When I am lucky, Drill resolves the metadata for all 29 files within 30000 ms
> and the query passes.
>
> I plan to use Drill in production, but it bothers me that there is in effect
> a limit on the number of parquet files and that the timeout parameter cannot
> be tuned.
>
> Does anyone have any ideas?
>
> Regards,
> Herman
> --------------  SQL BELOW -------------
>
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_14_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_1_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_2_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_3_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_1.parquet`;
>
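> One more thing I have not tried here yet (based on the Drill docs): building
> the parquet metadata cache up front, so that planning reads one cache file
> instead of fetching every footer at query time:
>
> refresh table metadata dfs.tmp.`load_pos_sales_detail_tbl`;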
>
> On Tue, Oct 2, 2018 at 4:44 PM Herman Tan <he...@redcubesg.com> wrote:
>
>> Hi Divya and everyone,
>>
>> The problem has disappeared.
>> Drill was not restarted.
>> This appears to be intermittent.
>> Before I submitted the error report, I ran the script several times and
>> it failed all the time.
>> Today I ran it again and it succeeded.
>> I will restart and test again.
>>
>> Regards,
>> Herman
>>
>>
>>
>> On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <di...@gmail.com>
>> wrote:
>>
>>> Hi Herman,
>>> Just to ensure that  your parquet file format is not corrupted , Can you
>>> please query a folder like just 2001 or some of the files underneath
>>> .Instead of querying the whole data set at once .
>>>
>>> Thanks,
>>> Divya
>>>
>>> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
>>>
>>> > Hi Kunal,
>>> >
>>> > ----
>>> > That said, could you provide some details about the parquet data you've
>>> > created, like the schema, parquet version and the tool used to
>>> generate.
>>> > Usually, the schema (and meta) provides most of these details for any
>>> > parquet file.
>>> > ----
>>> >
>>> > 1. The schema is under dfs.tmp, the queries to generate are all
>>> documented
>>> > below.
>>> > 2. I don't know how to find the parquet version of the data file
>>> > 3. The tool used to generate the parquest is apache drill.  The CTAS is
>>> > detailed below.
>>> >
>>> > Regards,
>>> > Herman
>>> > ____________
>>> >
>>> > *This is the Text data*
>>> >
>>> > This is the folders of the files
>>> > Total # of lines about 50 million rows
>>> > ----------
>>> > show files from
>>> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
>>> > 20180825`
>>> > ;
>>> > show files from
>>> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
>>> > 20180825\2011`
>>> > ;
>>> > -----
>>> > sales_pos_detail
>>> >   \pos_details_20180825
>>> >     \2007
>>> >     \2008
>>> >     \2009
>>> >     \2010
>>> >     \2011
>>> >   \pos_details_0.csv
>>> >   \pos_details_1.csv
>>> >   \pos_details_2.csv
>>> >   \pos_details_3.csv
>>> >   \pos_details_4.csv
>>> >   \pos_details_5.csv
>>> >   \pos_details_6.csv
>>> >   \pos_details_7.csv
>>> >   \pos_details_8.csv
>>> >     \2012
>>> >     \2013
>>> >     \2014
>>> >     \2015
>>> >     \2016
>>> >     \2017
>>> >     \2018
>>> >     \others
>>> > -----
>>> >
>>> > *This is the view with the metadata defined:*
>>> >
>>> > create or replace view dfs.tmp.load_pos_sales_detail as
>>> > SELECT
>>> > -- dimension keys
>>> >  cast(dim_date_key as int) dim_date_key
>>> > ,cast(dim_site_key as int) dim_site_key
>>> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>>> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>>> > ,cast(dim_card_number_key as int) dim_card_number_key
>>> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>>> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>>> > ,cast(dim_product_key as int) dim_product_key
>>> > ,cast(dim_pos_employee_purchase_key as int)
>>> dim_pos_employee_purchase_key
>>> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>>> > ,cast(dim_campaign_key as int) dim_campaign_key
>>> > ,cast(dim_promo_key as int) dim_promo_key
>>> > ,cast( case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key
>>> end as
>>> > int) dim_site_lfl_key
>>> > -- derived from keys
>>> > ,dim_date_str
>>> > ,`year` as `trx_year`
>>> > -- Measures
>>> > ,Product_Sales_Qty
>>> > ,Product_Sales_Price
>>> > ,Product_Cost_Price
>>> > ,Product_Cost_Amt
>>> > ,Product_Sales_Gross_Amt
>>> > ,Product_Sales_Promo_Disc_Amt
>>> > ,Product_Sales_Add_Promo_Disc_Amt
>>> > ,Product_Sales_Total_Promo_Disc_Amt
>>> > ,Product_Sales_Retail_Promo_Amt
>>> > ,Product_Sales_Retail_Amt
>>> > ,Product_Sales_VAT_Amt
>>> > ,Product_Sales_Product_Margin_Amt
>>> > ,Product_Sales_Initial_Margin_Amt
>>> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>> > ;
>>> >
>>> >
>>> > *This is the CTAS that generates the parquet from the view above:*
>>> >
>>> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>>> > ;
>>> >
>>> > create table dfs.tmp.load_pos_sales_detail_tbl AS
>>> > SELECT
>>> > -- dimension keys
>>> >  dim_date_key
>>> > ,dim_site_key
>>> > ,dim_pos_header_key
>>> > ,dim_pos_cashier_key
>>> > ,dim_card_number_key
>>> > ,dim_hour_minute_key
>>> > ,dim_pos_clerk_key
>>> > ,dim_product_key
>>> > ,dim_pos_employee_purchase_key
>>> > ,dim_pos_terminal_key
>>> > ,dim_campaign_key
>>> > ,dim_promo_key
>>> > ,dim_site_lfl_key
>>> > -- derived from keys
>>> > ,dim_date_str
>>> > ,`trx_year`
>>> > -- Measures
>>> > ,Product_Sales_Qty Sales_Qty
>>> > ,Product_Sales_Price Sales_Price
>>> > ,Product_Cost_Price Cost_Price
>>> > ,Product_Cost_Amt Cost_Amt
>>> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>>> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>>> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>>> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>>> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>>> > ,Product_Sales_Retail_Amt Retail_Amt
>>> > ,Product_Sales_VAT_Amt VAT_Amt
>>> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>>> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>>> > from dfs.tmp.load_pos_sales_detail
>>> > ;
>>> >
>>> >
>>> > *This is the select query that generated the error:*
>>> >
>>> > select *
>>> > from dfs.tmp.load_pos_sales_detail_tbl
>>> > ;
>>> >
>>> > ----- ERROR ----------------------------
>>> >
>>> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
>>> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>> > parallelism 16.
>>> >
>>> >
>>> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org>
>>> wrote:
>>> >
>>> > > Hi Herman
>>> > >
>>> > > Assuming that you're doing analytics on your data. If that's the
>>> case,
>>> > > parquet format is the way to go.
>>> > >
>>> > > That said, could you provide some details about the parquet data
>>> you've
>>> > > created, like the schema, parquet version and the tool used to
>>> generate.
>>> > > Usually, the schema (and meta) provides most of these details for any
>>> > > parquet file.
>>> > >
>>> > > It'll be useful to know if there is a pattern in the failure because
>>> of
>>> > > which there might be corruption occurring.
>>> > >
>>> > > Kunal
>>> > >
>>> > >
>>> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
>>> > > Hi Karthik,
>>> > >
>>> > > Thank you for pointing me to the mail archive in May 2018.
>>> > > That is exactly the same problem I am facing.
>>> > >
>>> > > I thought of using Drill as an ETL where I load the warehouse parquet
>>> > > tables from text source files.
>>> > > Then I query the parquet tables.
>>> > > It works on some parquet tables but am having problems with large
>>> ones
>>> > that
>>> > > consist of several files. (I think)
>>> > > Still investigating.
>>> > > Anyone in the community have other experience?
>>> > > Should I work with all text files instead of parquet?
>>> > >
>>> > >
>>> > > Herman
>>> > >
>>> > >
>>> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan
>>> > > kmanivannan@mapr.com> wrote:
>>> > >
>>> > > > Hi Herman,
>>> > > >
>>> > > > I am not sure what the exact problem here is but can you check to
>>> see
>>> > if
>>> > > > you are not hitting the problem described here:
>>> > > >
>>> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%
>>> > > >
>>> 3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com
>>> > %3E
>>> > > >
>>> > > > Thanks
>>> > > >
>>> > > > Karthik
>>> > > >
>>> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
>>> > > >
>>> > > > > Hi,
>>> > > > >
>>> > > > > I encountered the following error.
>>> > > > > The Steps I did are as follows:
>>> > > > > 1. Create a view to fix the data type of fields with cast
>>> > > > > 2. Create table (parquet) using the view
>>> > > > > 3. Query select * from table (query a field also does not work)
>>> > > > >
>>> > > > > The error:
>>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
>>> tasks for
>>> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>> > > > > parallelism 16.
>>> > > > >
>>> > > > > When I re-run this, the number of tasks will vary.
>>> > > > >
>>> > > > > What could be the problem?
>>> > > > >
>>> > > > > Regards,
>>> > > > > Herman Tan
>>> > > > >
>>> > > > > More info below:
>>> > > > >
>>> > > > > This is the folders of the files
>>> > > > > Total # of lines, 50 million
>>> > > > > ----------
>>> > > > > show files from
>>> > > > > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>> > > > > ;
>>> > > > > show files from
>>> > > > >
>>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
>>> > > > > ;
>>> > > > > -----
>>> > > > > sales_pos_detail
>>> > > > > \pos_details_20180825
>>> > > > > \2007
>>> > > > > \2008
>>> > > > > \2009
>>> > > > > \2010
>>> > > > > \2011
>>> > > > > \pos_details_0.csv
>>> > > > > \pos_details_1.csv
>>> > > > > \pos_details_2.csv
>>> > > > > \pos_details_3.csv
>>> > > > > \pos_details_4.csv
>>> > > > > \pos_details_5.csv
>>> > > > > \pos_details_6.csv
>>> > > > > \pos_details_7.csv
>>> > > > > \pos_details_8.csv
>>> > > > > \2012
>>> > > > > \2013
>>> > > > > \2014
>>> > > > > \2015
>>> > > > > \2016
>>> > > > > \2017
>>> > > > > \2018
>>> > > > > \others
>>> > > > > -----
>>> > > > >
>>> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
>>> > > > > SELECT
>>> > > > > -- dimension keys
>>> > > > > cast(dim_date_key as int) dim_date_key
>>> > > > > ,cast(dim_site_key as int) dim_site_key
>>> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>>> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>>> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
>>> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>>> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>>> > > > > ,cast(dim_product_key as int) dim_product_key
>>> > > > > ,cast(dim_pos_employee_purchase_key as int)
>>> > > > dim_pos_employee_purchase_key
>>> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>>> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
>>> > > > > ,cast(dim_promo_key as int) dim_promo_key
>>> > > > > ,cast( case when dim_site_lfl_key = '' then 0 else
>>> dim_site_lfl_key
>>> > end
>>> > > > as
>>> > > > > int) dim_site_lfl_key
>>> > > > > -- derived from keys
>>> > > > > ,dim_date_str
>>> > > > > ,`year` as `trx_year`
>>> > > > > -- Measures
>>> > > > > ,Product_Sales_Qty
>>> > > > > ,Product_Sales_Price
>>> > > > > ,Product_Cost_Price
>>> > > > > ,Product_Cost_Amt
>>> > > > > ,Product_Sales_Gross_Amt
>>> > > > > ,Product_Sales_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Add_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Total_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Retail_Promo_Amt
>>> > > > > ,Product_Sales_Retail_Amt
>>> > > > > ,Product_Sales_VAT_Amt
>>> > > > > ,Product_Sales_Product_Margin_Amt
>>> > > > > ,Product_Sales_Initial_Margin_Amt
>>> > > > > from
>>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>>> > > > > ;
>>> > > > >
>>> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>>> > > > > ;
>>> > > > >
>>> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
>>> > > > > SELECT
>>> > > > > -- dimension keys
>>> > > > > dim_date_key
>>> > > > > ,dim_site_key
>>> > > > > ,dim_pos_header_key
>>> > > > > ,dim_pos_cashier_key
>>> > > > > ,dim_card_number_key
>>> > > > > ,dim_hour_minute_key
>>> > > > > ,dim_pos_clerk_key
>>> > > > > ,dim_product_key
>>> > > > > ,dim_pos_employee_purchase_key
>>> > > > > ,dim_pos_terminal_key
>>> > > > > ,dim_campaign_key
>>> > > > > ,dim_promo_key
>>> > > > > ,dim_site_lfl_key
>>> > > > > -- derived from keys
>>> > > > > ,dim_date_str
>>> > > > > ,`trx_year`
>>> > > > > -- Measures
>>> > > > > ,Product_Sales_Qty Sales_Qty
>>> > > > > ,Product_Sales_Price Sales_Price
>>> > > > > ,Product_Cost_Price Cost_Price
>>> > > > > ,Product_Cost_Amt Cost_Amt
>>> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>>> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>>> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>>> > > > > ,Product_Sales_Retail_Amt Retail_Amt
>>> > > > > ,Product_Sales_VAT_Amt VAT_Amt
>>> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>>> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>>> > > > > from dfs.tmp.load_pos_sales_detail
>>> > > > > ;
>>> > > > >
>>> > > > > select *
>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>> > > > > ;
>>> > > > >
>>> > > > > ----- ERROR ----------------------------
>>> > > > >
>>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
>>> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>>> > > > > parallelism 16.
>>> > > > >
>>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>>> > > > >
>>> > > > > ----------------------------------------
>>> > > > > From Drill log:
>>> > > > >
>>> > > > > 2018-09-20 08:58:12,035 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
>>> > > > > 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>> > > > >
>>> > > > > 2018-09-20 08:58:53,068 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms, but
>>> > > > > only 10 tasks for 'Fetch parquet metadata' are complete. Total number
>>> > > > > of tasks 29, parallelism 16.
>>> > > > > java.util.concurrent.CancellationException: null
>>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
>>> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) ~[drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
>>> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
>>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>>> > > > > 2018-09-20 08:58:53,080 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>>> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited
>>> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>> > > > > complete. Total number of tasks 29, parallelism 16. (null)
>>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR:
>>> > > > > Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata'
>>> > > > > are complete. Total number of tasks 29, parallelism 16.
>>> > > > >
>>> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
>>> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:253) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > ... (remaining frames identical to the stack above)
>>> > > > > Caused by: java.util.concurrent.CancellationException: null
>>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
>>> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) [drill-java-exec-1.14.0.jar:1.14.0]
>>> > > > > ... 33 common frames omitted
>>> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
>>> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of rpc
>>> > > > > type 3 took longer than 500ms. Actual duration was 2042ms.
>>> > > > > 2018-09-20 09:02:10,608 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
>>> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
>>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>>> > > > >
>>> > > > > 2018-09-20 09:02:42,615 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms, but
>>> > > > > only 10 tasks for 'Fetch parquet metadata' are complete. Total number
>>> > > > > of tasks 29, parallelism 16.
>>> > > > > java.util.concurrent.CancellationException: null
>>> > > > > ... (stack trace identical to the one above)
>>> > > > > 2018-09-20 09:02:42,625 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>>> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited
>>> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>>> > > > > complete. Total number of tasks 29, parallelism 16. (null)
>>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR:
>>> > > > > Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata'
>>> > > > > are complete. Total number of tasks 29, parallelism 16.
>>> > > > >
>>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
>>> > > > > ... (stack trace identical to the one above, with the same
>>> > > > > CancellationException cause)
>>> > > > >
>>> > > > >
>>> > > > > ----------------------------------------
>>> > > > > ----------
>>> > > > >
>>> > > >
>>> > >
>>> >
>>>
>>

Re: ERROR is reading parquet data after create table

Posted by Herman Tan <he...@redcubesg.com>.
I am running a single instance in embedded mode under Windows.

On Thu, 11 Oct 2018, 3:26 PM Divya Gehlot, <di...@gmail.com> wrote:

> Hi,
> Can somebody clarify the number of tasks? What I understood from Herman
> is that if you have 29 parquet files, then Drill actually creates 29 tasks.
>
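> To see that relationship concretely, listing the table directory should
> report the same file count that the error message calls tasks (a sketch,
> assuming the dfs.tmp workspace used earlier in this thread):
>
> show files from dfs.tmp.`load_pos_sales_detail_tbl`;
> -- 29 parquet files listed here would line up with "Total number of tasks 29".
>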
> Herman, can I know whether you are running Drill in embedded mode or
> distributed mode?
>
> I am running Drill in production for multiple sources, and I do have many
> parquet files (about 2 years' worth of data) and have never encountered
> this issue.
> At times, when the parquet files sit in a multi-directory hierarchy or the
> files are small, I do get either a timeout or a very slow query.
>
> Thoughts please?
>
> Thanks,
> Divya
>
> On Tue, 2 Oct 2018 at 18:48, Herman Tan <he...@redcubesg.com> wrote:
>
> > Hi,
> >
> > I have restarted drill and run the script again.
> >
> > select * from dfs.tmp.`load_pos_sales_detail_tbl`;
> > -- SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 11 tasks for
> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> > parallelism 16.
> >
> > The 29 tasks correspond to the 29 parquet files in the folder.
> > To check if any of the parquet files has an error, I ran the following
> > SQL on each parquet file in the folder. ALL PASSED. (SQL below.)
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
> >
> > So it seems that for this table Drill can only get the metadata for 11
> > parquet files before it times out.
> > The timeout is calculated, and it varies with the size of the table.
> > I checked the source code, but I cannot find the calculation behind the
> > "30000 ms" timeout.
> > When I am lucky, Drill can resolve the metadata for all 29 files within
> > 30000 ms and the query passes.
> >
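> > A sketch of where the 30000 ms may come from (this is my reading of
> > TimedCallable, and the 15000 ms per-wave constant is an assumption, not
> > a verified copy of the source): the budget appears to be a fixed amount
> > per "wave" of parallel runners, times the number of waves needed to
> > cover all tasks. The arithmetic can be checked directly in Drill:
> >
> > select ceiling(29.0 / 16) * 15000 as timeout_ms from (values(1));
> > -- ceil(29 tasks / parallelism 16) = 2 waves; 2 * 15000 = 30000 ms,
> > -- which matches the error message.
> >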
> > I plan to use Drill in production, but it bothers me that there is
> > effectively a limit on the number of parquet files and that the timeout
> > parameter cannot be tuned.
> >
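> > One possible mitigation (untested here, so treat it as a sketch): build
> > Drill's parquet metadata cache for the table, so that planning can read
> > a single cache file instead of fetching every file's footer inside the
> > 30000 ms budget.
> >
> > refresh table metadata dfs.tmp.`load_pos_sales_detail_tbl`;
> > -- Subsequent queries should plan against the cached metadata:
> > select count(*) from dfs.tmp.`load_pos_sales_detail_tbl`;
> >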
> > Does anyone have any ideas?
> >
> > Regards,
> > Herman
> > --------------  SQL BELOW -------------
> >
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_14_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_1_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_2_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_3_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_1.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_0.parquet`;
> > select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_1.parquet`;
> >
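> > A more compact variant of the same per-file check (a sketch; it relies
> > on Drill's implicit `filename` column and, like any full-table query,
> > only works on the runs where planning beats the 30000 ms budget):
> >
> > select filename, count(*) as row_cnt
> > from dfs.tmp.`load_pos_sales_detail_tbl`
> > group by filename;
> >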
> >
> > On Tue, Oct 2, 2018 at 4:44 PM Herman Tan <he...@redcubesg.com> wrote:
> >
> > > Hi Divya and everyone,
> > >
> > > The problem has disappeared.
> > > Drill was not restarted.
> > > This appears to be intermittent.
> > > Before I submitted the error report, I ran the script several times
> > > and it failed every time.
> > > Today I ran it again and it succeeded.
> > > I will restart and test again.
> > >
> > > Regards,
> > > Herman
> > >
> > >
> > >
> > > On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <divya.htconex@gmail.com>
> > > wrote:
> > >
> > >> Hi Herman,
> > >> Just to ensure that your parquet file format is not corrupted, can
> > >> you please query a single folder (like just 2001) or some of the
> > >> files underneath, instead of querying the whole data set at once?
> > >>
> > >> Thanks,
> > >> Divya
> > >>
> > >> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
> > >>
> > >> > Hi Kunal,
> > >> >
> > >> > ----
> > >> > That said, could you provide some details about the parquet data
> > >> > you've created, like the schema, parquet version and the tool used
> > >> > to generate it. Usually, the schema (and meta) provides most of
> > >> > these details for any parquet file.
> > >> > ----
> > >> >
> > >> > 1. The schema is under dfs.tmp; the queries used to generate it are
> > >> > all documented below.
> > >> > 2. I don't know how to find the parquet version of the data file.
> > >> > 3. The tool used to generate the parquet is Apache Drill. The CTAS
> > >> > is detailed below.
> > >> >
> > >> > Regards,
> > >> > Herman
> > >> > ____________
> > >> >
> > >> > *This is the Text data*
> > >> >
> > >> > This is the folder layout of the files.
> > >> > Total # of lines: about 50 million rows
> > >> > ----------
> > >> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > >> > ;
> > >> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> > >> > ;
> > >> > -----
> > >> > sales_pos_detail
> > >> >   \pos_details_20180825
> > >> >     \2007
> > >> >     \2008
> > >> >     \2009
> > >> >     \2010
> > >> >     \2011
> > >> >   \pos_details_0.csv
> > >> >   \pos_details_1.csv
> > >> >   \pos_details_2.csv
> > >> >   \pos_details_3.csv
> > >> >   \pos_details_4.csv
> > >> >   \pos_details_5.csv
> > >> >   \pos_details_6.csv
> > >> >   \pos_details_7.csv
> > >> >   \pos_details_8.csv
> > >> >     \2012
> > >> >     \2013
> > >> >     \2014
> > >> >     \2015
> > >> >     \2016
> > >> >     \2017
> > >> >     \2018
> > >> >     \others
> > >> > -----
> > >> >
> > >> > *This is the view with the metadata defined:*
> > >> >
> > >> > create or replace view dfs.tmp.load_pos_sales_detail as
> > >> > SELECT
> > >> > -- dimension keys
> > >> >  cast(dim_date_key as int) dim_date_key
> > >> > ,cast(dim_site_key as int) dim_site_key
> > >> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> > >> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> > >> > ,cast(dim_card_number_key as int) dim_card_number_key
> > >> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> > >> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> > >> > ,cast(dim_product_key as int) dim_product_key
> > >> > ,cast(dim_pos_employee_purchase_key as int) dim_pos_employee_purchase_key
> > >> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> > >> > ,cast(dim_campaign_key as int) dim_campaign_key
> > >> > ,cast(dim_promo_key as int) dim_promo_key
> > >> > ,cast(case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end as int) dim_site_lfl_key
> > >> > -- derived from keys
> > >> > ,dim_date_str
> > >> > ,`year` as `trx_year`
> > >> > -- Measures
> > >> > ,Product_Sales_Qty
> > >> > ,Product_Sales_Price
> > >> > ,Product_Cost_Price
> > >> > ,Product_Cost_Amt
> > >> > ,Product_Sales_Gross_Amt
> > >> > ,Product_Sales_Promo_Disc_Amt
> > >> > ,Product_Sales_Add_Promo_Disc_Amt
> > >> > ,Product_Sales_Total_Promo_Disc_Amt
> > >> > ,Product_Sales_Retail_Promo_Amt
> > >> > ,Product_Sales_Retail_Amt
> > >> > ,Product_Sales_VAT_Amt
> > >> > ,Product_Sales_Product_Margin_Amt
> > >> > ,Product_Sales_Initial_Margin_Amt
> > >> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > >> > ;
> > >> >
> > >> >
> > >> > *This is the CTAS that generates the parquet from the view above:*
> > >> >
> > >> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> > >> > ;
> > >> >
> > >> > create table dfs.tmp.load_pos_sales_detail_tbl AS
> > >> > SELECT
> > >> > -- dimension keys
> > >> >  dim_date_key
> > >> > ,dim_site_key
> > >> > ,dim_pos_header_key
> > >> > ,dim_pos_cashier_key
> > >> > ,dim_card_number_key
> > >> > ,dim_hour_minute_key
> > >> > ,dim_pos_clerk_key
> > >> > ,dim_product_key
> > >> > ,dim_pos_employee_purchase_key
> > >> > ,dim_pos_terminal_key
> > >> > ,dim_campaign_key
> > >> > ,dim_promo_key
> > >> > ,dim_site_lfl_key
> > >> > -- derived from keys
> > >> > ,dim_date_str
> > >> > ,`trx_year`
> > >> > -- Measures
> > >> > ,Product_Sales_Qty Sales_Qty
> > >> > ,Product_Sales_Price Sales_Price
> > >> > ,Product_Cost_Price Cost_Price
> > >> > ,Product_Cost_Amt Cost_Amt
> > >> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> > >> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> > >> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> > >> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> > >> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> > >> > ,Product_Sales_Retail_Amt Retail_Amt
> > >> > ,Product_Sales_VAT_Amt VAT_Amt
> > >> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> > >> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> > >> > from dfs.tmp.load_pos_sales_detail
> > >> > ;
> > >> >
> > >> >
> > >> > *This is the select query that generated the error:*
> > >> >
> > >> > select *
> > >> > from dfs.tmp.load_pos_sales_detail_tbl
> > >> > ;
> > >> >
> > >> > ----- ERROR ----------------------------
> > >> >
> > >> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks
> > >> > for 'Fetch parquet metadata' are complete. Total number of tasks 29,
> > >> > parallelism 16.
> > >> >
> > >> >
> > >> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org>
> > >> > wrote:
> > >> >
> > >> > > Hi Herman
> > >> > >
> > >> > > Assuming that you're doing analytics on your data: if that's the
> > >> > > case, the parquet format is the way to go.
> > >> > >
> > >> > > That said, could you provide some details about the parquet data
> > >> > > you've created, like the schema, parquet version and the tool used
> > >> > > to generate it. Usually, the schema (and meta) provides most of
> > >> > > these details for any parquet file.
> > >> > >
> > >> > > It'll be useful to know if there is a pattern in the failures
> > >> > > that might point to corruption occurring.
> > >> > >
> > >> > > Kunal
> > >> > >
> > >> > >
> > >> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
> > >> > > Hi Karthik,
> > >> > >
> > >> > > Thank you for pointing me to the mail archive in May 2018.
> > >> > > That is exactly the same problem I am facing.
> > >> > >
> > >> > > I thought of using Drill as an ETL tool, where I load the
> > >> > > warehouse parquet tables from text source files.
> > >> > > Then I query the parquet tables.
> > >> > > It works on some parquet tables, but I am having problems with
> > >> > > large ones that consist of several files (I think).
> > >> > > Still investigating.
> > >> > > Does anyone in the community have other experience?
> > >> > > Should I work with all text files instead of parquet?
> > >> > >
> > >> > >
> > >> > > Herman
> > >> > >
> > >> > >
> > >> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan
> > >> > > <kmanivannan@mapr.com> wrote:
> > >> > >
> > >> > > > Hi Herman,
> > >> > > >
> > >> > > > I am not sure what the exact problem here is, but can you check
> > >> > > > to see if you are not hitting the problem described here:
> > >> > > >
> > >> > > >
> > >> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com%3E
> > >> > > >
> > >> > > > Thanks
> > >> > > >
> > >> > > > Karthik
> > >> > > >
> > >> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
> > >> > > >
> > >> > > > > Hi,
> > >> > > > >
> > >> > > > > I encountered the following error.
> > >> > > > > The Steps I did are as follows:
> > >> > > > > 1. Create a view to fix the data type of fields with cast
> > >> > > > > 2. Create table (parquet) using the view
> > >> > > > > 3. Query select * from the table (querying a single field
> > >> > > > > also does not work)
> > >> > > > >
> > >> > > > > The error:
> > >> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
> > >> > > > > tasks for 'Fetch parquet metadata' are complete. Total number
> > >> > > > > of tasks 29, parallelism 16.
> > >> > > > >
> > >> > > > > When I re-run this, the number of tasks will vary.
> > >> > > > >
> > >> > > > > What could be the problem?
> > >> > > > >
> > >> > > > > Regards,
> > >> > > > > Herman Tan
> > >> > > > >
> > >> > > > > More info below:
> > >> > > > >
> > >> > > > > This is the folder layout of the files.
> > >> > > > > Total # of lines: 50 million
> > >> > > > > ----------
> > >> > > > > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > >> > > > > ;
> > >> > > > > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> > >> > > > > ;
> > >> > > > > -----
> > >> > > > > -----
> > >> > > > > sales_pos_detail
> > >> > > > >   \pos_details_20180825
> > >> > > > >     \2007
> > >> > > > >     \2008
> > >> > > > >     \2009
> > >> > > > >     \2010
> > >> > > > >     \2011
> > >> > > > >   \pos_details_0.csv
> > >> > > > >   \pos_details_1.csv
> > >> > > > >   \pos_details_2.csv
> > >> > > > >   \pos_details_3.csv
> > >> > > > >   \pos_details_4.csv
> > >> > > > >   \pos_details_5.csv
> > >> > > > >   \pos_details_6.csv
> > >> > > > >   \pos_details_7.csv
> > >> > > > >   \pos_details_8.csv
> > >> > > > >     \2012
> > >> > > > >     \2013
> > >> > > > >     \2014
> > >> > > > >     \2015
> > >> > > > >     \2016
> > >> > > > >     \2017
> > >> > > > >     \2018
> > >> > > > >     \others
> > >> > > > > -----
> > >> > > > >
> > >> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
> > >> > > > > SELECT
> > >> > > > > -- dimension keys
> > >> > > > > cast(dim_date_key as int) dim_date_key
> > >> > > > > ,cast(dim_site_key as int) dim_site_key
> > >> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> > >> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> > >> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
> > >> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> > >> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> > >> > > > > ,cast(dim_product_key as int) dim_product_key
> > >> > > > > ,cast(dim_pos_employee_purchase_key as int) dim_pos_employee_purchase_key
> > >> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> > >> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
> > >> > > > > ,cast(dim_promo_key as int) dim_promo_key
> > >> > > > > ,cast(case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end as int) dim_site_lfl_key
> > >> > > > > -- derived from keys
> > >> > > > > ,dim_date_str
> > >> > > > > ,`year` as `trx_year`
> > >> > > > > -- Measures
> > >> > > > > ,Product_Sales_Qty
> > >> > > > > ,Product_Sales_Price
> > >> > > > > ,Product_Cost_Price
> > >> > > > > ,Product_Cost_Amt
> > >> > > > > ,Product_Sales_Gross_Amt
> > >> > > > > ,Product_Sales_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Add_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Total_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Retail_Promo_Amt
> > >> > > > > ,Product_Sales_Retail_Amt
> > >> > > > > ,Product_Sales_VAT_Amt
> > >> > > > > ,Product_Sales_Product_Margin_Amt
> > >> > > > > ,Product_Sales_Initial_Margin_Amt
> > >> > > > > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> > >> > > > > ;
> > >> > > > >
> > >> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> > >> > > > > ;
> > >> > > > >
> > >> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
> > >> > > > > SELECT
> > >> > > > > -- dimension keys
> > >> > > > > dim_date_key
> > >> > > > > ,dim_site_key
> > >> > > > > ,dim_pos_header_key
> > >> > > > > ,dim_pos_cashier_key
> > >> > > > > ,dim_card_number_key
> > >> > > > > ,dim_hour_minute_key
> > >> > > > > ,dim_pos_clerk_key
> > >> > > > > ,dim_product_key
> > >> > > > > ,dim_pos_employee_purchase_key
> > >> > > > > ,dim_pos_terminal_key
> > >> > > > > ,dim_campaign_key
> > >> > > > > ,dim_promo_key
> > >> > > > > ,dim_site_lfl_key
> > >> > > > > -- derived from keys
> > >> > > > > ,dim_date_str
> > >> > > > > ,`trx_year`
> > >> > > > > -- Measures
> > >> > > > > ,Product_Sales_Qty Sales_Qty
> > >> > > > > ,Product_Sales_Price Sales_Price
> > >> > > > > ,Product_Cost_Price Cost_Price
> > >> > > > > ,Product_Cost_Amt Cost_Amt
> > >> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> > >> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> > >> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> > >> > > > > ,Product_Sales_Retail_Amt Retail_Amt
> > >> > > > > ,Product_Sales_VAT_Amt VAT_Amt
> > >> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> > >> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> > >> > > > > from dfs.tmp.load_pos_sales_detail
> > >> > > > > ;
> > >> > > > >
> > >> > > > > select *
> > >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > >> > > > > ;
> > >> > > > >
> > >> > > > > ----- ERROR ----------------------------
> > >> > > > >
> > >> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
> > >> > > > > tasks for 'Fetch parquet metadata' are complete. Total number
> > >> > > > > of tasks 29, parallelism 16.
> > >> > > > >
> > >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> > >> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> > >> > > > >
> > >> > > > > ----------------------------------------
> > >> > > > > From Drill log:
> > >> > > > >
> > >> > > > > 2018-09-20 08:58:12,035 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > >> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
> > >> > > > > 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
> > >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > >> > > > >
> > >> > > > > 2018-09-20 08:58:53,068 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > >> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms,
> > >> > > > > but only 10 tasks for 'Fetch parquet metadata' are complete. Total
> > >> > > > > number of tasks 29, parallelism 16.
> > >> > > > > java.util.concurrent.CancellationException: null
> > >> > > > > ... (stack trace identical to the one shown earlier in the thread)
> > >> > > > > 2018-09-20 08:58:53,080
> > >> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
> > >> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error
> Occurred:
> > >> > Waited
> > >> > > > for
> > >> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
> > >> > complete.
> > >> > > > > Total number of tasks 29, parallelism 16. (null)
> > >> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE
> > ERROR:
> > >> > > Waited
> > >> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata'
> are
> > >> > > > complete.
> > >> > > > > Total number of tasks 29, parallelism 16.
> > >> > > > >
> > >> > > > >
> > >> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.common.exceptions.UserException$
> > >> > > > Builder.build(UserException.java:633)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> > >> > > TimedCallable.java:253)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetFileMetadata_v3(Metadata.java:340)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:324)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:305)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:124)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > initInternal(ParquetGroupScan.java:254)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
> > >> > > > AbstractParquetGroupScan.java:380)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:132)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:102)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:70)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
> > >> > > > FileSystemPlugin.java:136)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:116)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:111)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillTable.
> > >> > > > getGroupScan(DrillTable.java:99)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:89)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:69)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:62)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
> > >> > > > DrillScanRule.java:38)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
> > >> > > > onMatch(VolcanoRuleCall.java:212)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
> > >> > > > findBestExp(VolcanoPlanner.java:652)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
> > >> > > > Programs.java:368)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:429)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:369)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToDrel(DefaultSqlHandler.java:318)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> > >> > > > DefaultSqlHandler.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> > >> > > > getQueryPlan(DrillSqlWorker.java:145)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> > >> > > > DrillSqlWorker.java:83)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> > .foreman.Foreman.runSQL(Foreman.java:567)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> .foreman.Foreman.run(Foreman.java:266)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> > >> > > > ThreadPoolExecutor.java:1149)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> > >> > > > ThreadPoolExecutor.java:624)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > >> > > > > Caused by: java.util.concurrent.CancellationException: null
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:86)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:57)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> > >> > > > toList$2(Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> > >> ~[na:1.8.0_172]
> > >> > > > > at
> > >> > > > > org.apache.drill.common.collections.Collectors.toList(
> > >> > > > Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> > >> > > TimedCallable.java:214)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > ... 33 common frames omitted
> > >> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
> > >> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST
> of
> > >> rpc
> > >> > > > type 3
> > >> > > > > took longer than 500ms. Actual duration was 2042ms.
> > >> > > > > 2018-09-20 09:02:10,608
> > >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > >> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for
> query
> > id
> > >> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
> > >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> > >> > > > >
> > >> > > > > 2018-09-20 09:02:42,615
> > >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > >> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000
> ms,
> > >> but
> > >> > > > only
> > >> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total
> number
> > >> of
> > >> > > tasks
> > >> > > > > 29, parallelism 16.
> > >> > > > > java.util.concurrent.CancellationException: null
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:86)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:57)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> > >> > > > toList$2(Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> > >> ~[na:1.8.0_172]
> > >> > > > > at
> > >> > > > > org.apache.drill.common.collections.Collectors.toList(
> > >> > > > Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> > >> > > TimedCallable.java:214)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetFileMetadata_v3(Metadata.java:340)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:324)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:305)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:124)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > initInternal(ParquetGroupScan.java:254)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
> > >> > > > AbstractParquetGroupScan.java:380)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:132)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:102)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:70)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
> > >> > > > FileSystemPlugin.java:136)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:116)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:111)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillTable.
> > >> > > > getGroupScan(DrillTable.java:99)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:89)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:69)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:62)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
> > >> > > > DrillScanRule.java:38)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
> > >> > > > onMatch(VolcanoRuleCall.java:212)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
> > >> > > > findBestExp(VolcanoPlanner.java:652)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
> > >> > > > Programs.java:368)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:429)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:369)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToDrel(DefaultSqlHandler.java:318)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> > >> > > > DefaultSqlHandler.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> > >> > > > getQueryPlan(DrillSqlWorker.java:145)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> > >> > > > DrillSqlWorker.java:83)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> > .foreman.Foreman.runSQL(Foreman.java:567)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> .foreman.Foreman.run(Foreman.java:266)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> > >> > > > ThreadPoolExecutor.java:1149)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> > >> > > > ThreadPoolExecutor.java:624)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > >> > > > > 2018-09-20 09:02:42,625
> > >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> > >> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error
> Occurred:
> > >> > Waited
> > >> > > > for
> > >> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
> > >> > complete.
> > >> > > > > Total number of tasks 29, parallelism 16. (null)
> > >> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE
> > ERROR:
> > >> > > Waited
> > >> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata'
> are
> > >> > > > complete.
> > >> > > > > Total number of tasks 29, parallelism 16.
> > >> > > > >
> > >> > > > >
> > >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.common.exceptions.UserException$
> > >> > > > Builder.build(UserException.java:633)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> > >> > > TimedCallable.java:253)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetFileMetadata_v3(Metadata.java:340)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:324)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:305)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> > >> > > > getParquetTableMetadata(Metadata.java:124)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > initInternal(ParquetGroupScan.java:254)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
> > >> > > > AbstractParquetGroupScan.java:380)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:132)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> > >> > > > init>(ParquetGroupScan.java:102)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> > >> > > > ParquetFormatPlugin.java:70)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
> > >> > > > FileSystemPlugin.java:136)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:116)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> > >> > > > AbstractStoragePlugin.java:111)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillTable.
> > >> > > > getGroupScan(DrillTable.java:99)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:89)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:69)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> > >> > > > DrillScanRel.java:62)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
> > >> > > > DrillScanRule.java:38)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
> > >> > > > onMatch(VolcanoRuleCall.java:212)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
> > >> > > > findBestExp(VolcanoPlanner.java:652)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
> > >> > > > Programs.java:368)
> > >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:429)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.
> > >> > > DefaultSqlHandler.transform(
> > >> > > > DefaultSqlHandler.java:369)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> > >> > > > convertToDrel(DefaultSqlHandler.java:318)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > >
> > >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> > >> > > > DefaultSqlHandler.java:180)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> > >> > > > getQueryPlan(DrillSqlWorker.java:145)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> > >> > > > DrillSqlWorker.java:83)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> > .foreman.Foreman.runSQL(Foreman.java:567)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.work
> > >> .foreman.Foreman.run(Foreman.java:266)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> > >> > > > ThreadPoolExecutor.java:1149)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at
> > >> > > > >
> > >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> > >> > > > ThreadPoolExecutor.java:624)
> > >> > > > > [na:1.8.0_172]
> > >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> > >> > > > > Caused by: java.util.concurrent.CancellationException: null
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:86)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> > >> > > > apply(TimedCallable.java:57)
> > >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > at
> > >> > > > >
> > >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> > >> > > > toList$2(Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> > >> ~[na:1.8.0_172]
> > >> > > > > at
> > >> > > > > org.apache.drill.common.collections.Collectors.toList(
> > >> > > > Collectors.java:97)
> > >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> > >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> > >> > > TimedCallable.java:214)
> > >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> > >> > > > > ... 33 common frames omitted
> > >> > > > >
> > >> > > > >
> > >> > > > > ----------------------------------------
> > >> > > > > ----------
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> >
>

Re: ERROR is reading parquet data after create table

Posted by Divya Gehlot <di...@gmail.com>.
Hi,
Can somebody clarify the number of tasks? What I understood from Herman is
that if you have 29 parquet files, Drill actually creates 29 tasks.
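
For illustration, the TimedCallable frames in the quoted stack traces suggest
a fan-out of one metadata task per file onto a bounded worker pool, with a
single deadline for the whole batch. Below is a minimal sketch of that
pattern, not Drill's actual code: the file names are made up, and only the
counts (29 files, parallelism 16, 30000 ms) are taken from the error message.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class MetadataFanOut {
  public static void main(String[] args) throws InterruptedException {
    // Hypothetical file names standing in for the table's parquet files.
    List<String> files = new ArrayList<>();
    for (int i = 0; i < 29; i++) {
      files.add("1_" + i + "_0.parquet");
    }

    ExecutorService pool = Executors.newFixedThreadPool(16); // parallelism 16
    List<Future<String>> futures = new ArrayList<>();
    for (String file : files) {
      // One task per file; in Drill each task would read a parquet footer.
      futures.add(pool.submit(() -> "metadata for " + file));
    }

    pool.shutdown();
    // A single deadline for the whole batch; tasks still pending when it
    // expires are the "incomplete" ones counted in the error message.
    if (!pool.awaitTermination(30_000, TimeUnit.MILLISECONDS)) {
      long done = futures.stream().filter(Future::isDone).count();
      System.out.printf("Waited 30000 ms, but only %d of %d tasks complete%n",
          done, futures.size());
    }
  }
}

Under this reading, 29 files produce 29 tasks no matter how many threads
actually run them, which matches the numbers in the error.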

Herman, can I know whether you are running Drill in embedded mode or
distributed mode?

I am running Drill in production for multiple sources, with many parquet
files (about two years' worth of data), and I have never encountered this
issue.
That said, when the parquet files sit in a multi-directory hierarchy, or the
files are small, I do sometimes get either a timeout or a very slow query.

Thoughts please?

Thanks,
Divya

On Tue, 2 Oct 2018 at 18:48, Herman Tan <he...@redcubesg.com> wrote:

> Hi,
>
> I have restarted drill and run the script again.
>
> select * from dfs.tmp.`load_pos_sales_detail_tbl`;
> -- SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 11 tasks for
> 'Fetch parquet metadata' are complete. Total number of tasks 29,
> parallelism 16.
>
> The 29 tasks correspond to the 29 parquet files in the folder.
> To check whether any of the parquet files is corrupted, I ran the following
> SQL on each parquet file in the folder. ALL PASSED. (SQL below.)
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
>
> So it seems that for this table Drill can only fetch the metadata for 11
> parquet files before it times out.
> The timeout appears to be calculated, and it varies with the size of the
> table; I checked the source code but cannot find where the "30000 ms"
> figure is computed.
> When I am lucky, Drill resolves the metadata for all 29 files within 30000
> ms and the query passes.
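>
> For what it is worth, the numbers in the error fit a simple formula. If each
> task gets a fixed budget of 15000 ms and the tasks run in
> ceil(tasks / parallelism) waves, then 29 tasks at parallelism 16 give
> exactly 30000 ms. This is only a guess that happens to fit; the 15000 ms
> constant is an assumption, not something confirmed in this thread:
>
> public class TimeoutGuess {
>   public static void main(String[] args) {
>     int tasks = 29, parallelism = 16;
>     long perTaskBudgetMs = 15_000; // assumed fixed per-task budget
>     long waves = (long) Math.ceil((double) tasks / parallelism); // = 2
>     System.out.println(waves * perTaskBudgetMs + " ms"); // prints "30000 ms"
>   }
> }
>
> If something like this is what TimedCallable does, then precomputing the
> metadata cache with REFRESH TABLE METADATA dfs.tmp.`load_pos_sales_detail_tbl`
> may also help, since planning would no longer have to read every footer on
> each query.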
>
> I plan to use Drill in production, but it bothers me that there is an
> effective limit on the number of parquet files and that the timeout cannot
> be tuned.
>
> Does anyone have any ideas?
>
> Regards,
> Herman
> --------------  SQL BELOW -------------
>
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_14_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_1_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_2_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_3_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_1.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_0.parquet`;
> select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_1.parquet`;
>
>
> On Tue, Oct 2, 2018 at 4:44 PM Herman Tan <he...@redcubesg.com> wrote:
>
> > Hi Divya and everyone,
> >
> > The problem has disappeared.
> > Drill was not restarted.
> > This appears to be intermittent.
> > Before I submitted the error report, I ran the script several times and
> it
> > failed all the time.
> > Today I ran it again and it succeeded.
> > I will restart and test again.
> >
> > Regards,
> > Herman
> >
> >
> >
> > On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <di...@gmail.com>
> > wrote:
> >
> >> Hi Herman,
> >> Just to ensure that  your parquet file format is not corrupted , Can you
> >> please query a folder like just 2001 or some of the files underneath
> >> .Instead of querying the whole data set at once .
> >>
> >> Thanks,
> >> Divya
> >>
> >> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
> >>
> >> > Hi Kunal,
> >> >
> >> > ----
> >> > That said, could you provide some details about the parquet data
> you've
> >> > created, like the schema, parquet version and the tool used to
> generate.
> >> > Usually, the schema (and meta) provides most of these details for any
> >> > parquet file.
> >> > ----
> >> >
> >> > 1. The schema is under dfs.tmp, the queries to generate are all
> >> documented
> >> > below.
> >> > 2. I don't know how to find the parquet version of the data file
> >> > 3. The tool used to generate the parquest is apache drill.  The CTAS
> is
> >> > detailed below.
> >> >
> >> > Regards,
> >> > Herman
> >> > ____________
> >> >
> >> > *This is the Text data*
> >> >
> >> > This is the folders of the files
> >> > Total # of lines about 50 million rows
> >> > ----------
> >> > show files from
> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
> >> > 20180825`
> >> > ;
> >> > show files from
> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
> >> > 20180825\2011`
> >> > ;
> >> > -----
> >> > sales_pos_detail
> >> >   \pos_details_20180825
> >> >     \2007
> >> >     \2008
> >> >     \2009
> >> >     \2010
> >> >     \2011
> >> >   \pos_details_0.csv
> >> >   \pos_details_1.csv
> >> >   \pos_details_2.csv
> >> >   \pos_details_3.csv
> >> >   \pos_details_4.csv
> >> >   \pos_details_5.csv
> >> >   \pos_details_6.csv
> >> >   \pos_details_7.csv
> >> >   \pos_details_8.csv
> >> >     \2012
> >> >     \2013
> >> >     \2014
> >> >     \2015
> >> >     \2016
> >> >     \2017
> >> >     \2018
> >> >     \others
> >> > -----
> >> >
> >> > *This is the view with the metadata defined:*
> >> >
> >> > create or replace view dfs.tmp.load_pos_sales_detail as
> >> > SELECT
> >> > -- dimension keys
> >> >  cast(dim_date_key as int) dim_date_key
> >> > ,cast(dim_site_key as int) dim_site_key
> >> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> >> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> >> > ,cast(dim_card_number_key as int) dim_card_number_key
> >> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> >> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> >> > ,cast(dim_product_key as int) dim_product_key
> >> > ,cast(dim_pos_employee_purchase_key as int)
> >> dim_pos_employee_purchase_key
> >> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> >> > ,cast(dim_campaign_key as int) dim_campaign_key
> >> > ,cast(dim_promo_key as int) dim_promo_key
> >> > ,cast( case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key
> end
> >> as
> >> > int) dim_site_lfl_key
> >> > -- derived from keys
> >> > ,dim_date_str
> >> > ,`year` as `trx_year`
> >> > -- Measures
> >> > ,Product_Sales_Qty
> >> > ,Product_Sales_Price
> >> > ,Product_Cost_Price
> >> > ,Product_Cost_Amt
> >> > ,Product_Sales_Gross_Amt
> >> > ,Product_Sales_Promo_Disc_Amt
> >> > ,Product_Sales_Add_Promo_Disc_Amt
> >> > ,Product_Sales_Total_Promo_Disc_Amt
> >> > ,Product_Sales_Retail_Promo_Amt
> >> > ,Product_Sales_Retail_Amt
> >> > ,Product_Sales_VAT_Amt
> >> > ,Product_Sales_Product_Margin_Amt
> >> > ,Product_Sales_Initial_Margin_Amt
> >> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> >> > ;
> >> >
> >> >
> >> > *This is the CTAS that generates the parquet from the view above:*
> >> >
> >> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> >> > ;
> >> >
> >> > create table dfs.tmp.load_pos_sales_detail_tbl AS
> >> > SELECT
> >> > -- dimension keys
> >> >  dim_date_key
> >> > ,dim_site_key
> >> > ,dim_pos_header_key
> >> > ,dim_pos_cashier_key
> >> > ,dim_card_number_key
> >> > ,dim_hour_minute_key
> >> > ,dim_pos_clerk_key
> >> > ,dim_product_key
> >> > ,dim_pos_employee_purchase_key
> >> > ,dim_pos_terminal_key
> >> > ,dim_campaign_key
> >> > ,dim_promo_key
> >> > ,dim_site_lfl_key
> >> > -- derived from keys
> >> > ,dim_date_str
> >> > ,`trx_year`
> >> > -- Measures
> >> > ,Product_Sales_Qty Sales_Qty
> >> > ,Product_Sales_Price Sales_Price
> >> > ,Product_Cost_Price Cost_Price
> >> > ,Product_Cost_Amt Cost_Amt
> >> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> >> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> >> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> >> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> >> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> >> > ,Product_Sales_Retail_Amt Retail_Amt
> >> > ,Product_Sales_VAT_Amt VAT_Amt
> >> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> >> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> >> > from dfs.tmp.load_pos_sales_detail
> >> > ;
> >> >
> >> >
> >> > *This is the select query that generated the error:*
> >> >
> >> > select *
> >> > from dfs.tmp.load_pos_sales_detail_tbl
> >> > ;
> >> >
> >> > ----- ERROR ----------------------------
> >> >
> >> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> >> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> >> > parallelism 16.
> >> >
> >> >
> >> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org>
> wrote:
> >> >
> >> > > Hi Herman
> >> > >
> >> > > Assuming that you're doing analytics on your data. If that's the
> case,
> >> > > parquet format is the way to go.
> >> > >
> >> > > That said, could you provide some details about the parquet data
> >> you've
> >> > > created, like the schema, parquet version and the tool used to
> >> generate.
> >> > > Usually, the schema (and meta) provides most of these details for
> any
> >> > > parquet file.
> >> > >
> >> > > It'll be useful to know if there is a pattern in the failure because
> >> of
> >> > > which there might be corruption occurring.
> >> > >
> >> > > Kunal
> >> > >
> >> > >
> >> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
> >> > > Hi Karthik,
> >> > >
> >> > > Thank you for pointing me to the mail archive in May 2018.
> >> > > That is exactly the same problem I am facing.
> >> > >
> >> > > I thought of using Drill as an ETL where I load the warehouse
> parquet
> >> > > tables from text source files.
> >> > > Then I query the parquet tables.
> >> > > It works on some parquet tables but am having problems with large
> ones
> >> > that
> >> > > consist of several files. (I think)
> >> > > Still investigating.
> >> > > Anyone in the community have other experience?
> >> > > Should I work with all text files instead of parquet?
> >> > >
> >> > >
> >> > > Herman
> >> > >
> >> > >
> >> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan <
> >> > > kmanivannan@mapr.com> wrote:
> >> > >
> >> > > > Hi Herman,
> >> > > >
> >> > > > I am not sure what the exact problem here is but can you check to
> >> see
> >> > if
> >> > > > you are not hitting the problem described here:
> >> > > >
> >> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%
> >> > > >
> >> 3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com
> >> > %3E
> >> > > >
> >> > > > Thanks
> >> > > >
> >> > > > Karthik
> >> > > >
> >> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
> >> > > >
> >> > > > > Hi,
> >> > > > >
> >> > > > > I encountered the following error.
> >> > > > > The Steps I did are as follows:
> >> > > > > 1. Create a view to fix the data type of fields with cast
> >> > > > > 2. Create table (parquet) using the view
> >> > > > > 3. Query select * from table (query a field also does not work)
> >> > > > >
> >> > > > > The error:
> >> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
> tasks
> >> for
> >> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> >> > > > > parallelism 16.
> >> > > > >
> >> > > > > When I re-run this, the number of tasks will vary.
> >> > > > >
> >> > > > > What could be the problem?
> >> > > > >
> >> > > > > Regards,
> >> > > > > Herman Tan
> >> > > > >
> >> > > > > More info below:
> >> > > > >
> >> > > > > This is the folders of the files
> >> > > > > Total # of lines, 50 million
> >> > > > > ----------
> >> > > > > show files from
> >> > > > >
> dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> >> > > > > ;
> >> > > > > show files from
> >> > > > >
> >> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
> >> > > > > ;
> >> > > > > -----
> >> > > > > sales_pos_detail
> >> > > > > \pos_details_20180825
> >> > > > > \2007
> >> > > > > \2008
> >> > > > > \2009
> >> > > > > \2010
> >> > > > > \2011
> >> > > > > \pos_details_0.csv
> >> > > > > \pos_details_1.csv
> >> > > > > \pos_details_2.csv
> >> > > > > \pos_details_3.csv
> >> > > > > \pos_details_4.csv
> >> > > > > \pos_details_5.csv
> >> > > > > \pos_details_6.csv
> >> > > > > \pos_details_7.csv
> >> > > > > \pos_details_8.csv
> >> > > > > \2012
> >> > > > > \2013
> >> > > > > \2014
> >> > > > > \2015
> >> > > > > \2016
> >> > > > > \2017
> >> > > > > \2018
> >> > > > > \others
> >> > > > > -----
> >> > > > >
> >> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
> >> > > > > SELECT
> >> > > > > -- dimension keys
> >> > > > > cast(dim_date_key as int) dim_date_key
> >> > > > > ,cast(dim_site_key as int) dim_site_key
> >> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
> >> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
> >> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
> >> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
> >> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
> >> > > > > ,cast(dim_product_key as int) dim_product_key
> >> > > > > ,cast(dim_pos_employee_purchase_key as int)
> >> > > > dim_pos_employee_purchase_key
> >> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
> >> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
> >> > > > > ,cast(dim_promo_key as int) dim_promo_key
> >> > > > > ,cast( case when dim_site_lfl_key = '' then 0 else
> >> dim_site_lfl_key
> >> > end
> >> > > > as
> >> > > > > int) dim_site_lfl_key
> >> > > > > -- derived from keys
> >> > > > > ,dim_date_str
> >> > > > > ,`year` as `trx_year`
> >> > > > > -- Measures
> >> > > > > ,Product_Sales_Qty
> >> > > > > ,Product_Sales_Price
> >> > > > > ,Product_Cost_Price
> >> > > > > ,Product_Cost_Amt
> >> > > > > ,Product_Sales_Gross_Amt
> >> > > > > ,Product_Sales_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Add_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Total_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Retail_Promo_Amt
> >> > > > > ,Product_Sales_Retail_Amt
> >> > > > > ,Product_Sales_VAT_Amt
> >> > > > > ,Product_Sales_Product_Margin_Amt
> >> > > > > ,Product_Sales_Initial_Margin_Amt
> >> > > > > from
> >> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
> >> > > > > ;
> >> > > > >
> >> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
> >> > > > > ;
> >> > > > >
> >> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
> >> > > > > SELECT
> >> > > > > -- dimension keys
> >> > > > > dim_date_key
> >> > > > > ,dim_site_key
> >> > > > > ,dim_pos_header_key
> >> > > > > ,dim_pos_cashier_key
> >> > > > > ,dim_card_number_key
> >> > > > > ,dim_hour_minute_key
> >> > > > > ,dim_pos_clerk_key
> >> > > > > ,dim_product_key
> >> > > > > ,dim_pos_employee_purchase_key
> >> > > > > ,dim_pos_terminal_key
> >> > > > > ,dim_campaign_key
> >> > > > > ,dim_promo_key
> >> > > > > ,dim_site_lfl_key
> >> > > > > -- derived from keys
> >> > > > > ,dim_date_str
> >> > > > > ,`trx_year`
> >> > > > > -- Measures
> >> > > > > ,Product_Sales_Qty Sales_Qty
> >> > > > > ,Product_Sales_Price Sales_Price
> >> > > > > ,Product_Cost_Price Cost_Price
> >> > > > > ,Product_Cost_Amt Cost_Amt
> >> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
> >> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
> >> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
> >> > > > > ,Product_Sales_Retail_Amt Retail_Amt
> >> > > > > ,Product_Sales_VAT_Amt VAT_Amt
> >> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
> >> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
> >> > > > > from dfs.tmp.load_pos_sales_detail
> >> > > > > ;
> >> > > > >
> >> > > > > select *
> >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> >> > > > > ;
> >> > > > >
> >> > > > > ----- ERROR ----------------------------
> >> > > > >
> >> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10
> tasks
> >> for
> >> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
> >> > > > > parallelism 16.
> >> > > > >
> >> > > > >
> >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> >> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> >> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> 'Fetch
> >> > > > parquet
> >> > > > > metadata' are complete. Total number of tasks 29, parallelism
> 16.
> >> > > > >
> >> > > > >
> >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> >> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> >> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> 'Fetch
> >> > > > > parquet metadata' are complete. Total number of tasks 29,
> >> parallelism
> >> > > 16.
> >> > > > >
> >> > > > >
> >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> >> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> >> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
> 'Fetch
> >> > > > > parquet metadata' are complete. Total number of tasks 29,
> >> parallelism
> >> > > 16.
> >> > > > >
> >> > > > >
> >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
> >> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
> >> > > > >
> >> > > > > ----------------------------------------
> >> > > > > From Drill log:
> >> > > > >
> >> > > > > 2018-09-20 08:58:12,035 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
> >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> >> > > > >
> >> > > > > 2018-09-20 08:58:53,068 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman] ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks 29, parallelism 16.
> >> > > > > java.util.concurrent.CancellationException: null
> >> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
> >> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > ... remaining frames identical to the stack trace quoted earlier in this thread ...
> >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> >> > > > > 2018-09-20 08:58:53,080 [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman] INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks 29, parallelism 16. (null)
> >> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks 29, parallelism 16.
> >> > > > >
> >> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
> >> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:253) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:429)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:369)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToDrel(DefaultSqlHandler.java:318)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> >> > > > DefaultSqlHandler.java:180)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> >> > > > getQueryPlan(DrillSqlWorker.java:145)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> >> > > > DrillSqlWorker.java:83)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> > .foreman.Foreman.runSQL(Foreman.java:567)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> .foreman.Foreman.run(Foreman.java:266)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> >> > > > ThreadPoolExecutor.java:1149)
> >> > > > > [na:1.8.0_172]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> >> > > > ThreadPoolExecutor.java:624)
> >> > > > > [na:1.8.0_172]
> >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> >> > > > > Caused by: java.util.concurrent.CancellationException: null
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:86)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:57)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> >> > > > toList$2(Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> >> ~[na:1.8.0_172]
> >> > > > > at
> >> > > > > org.apache.drill.common.collections.Collectors.toList(
> >> > > > Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> >> > > TimedCallable.java:214)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > ... 33 common frames omitted
> >> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
> >> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of
> >> rpc
> >> > > > type 3
> >> > > > > took longer than 500ms. Actual duration was 2042ms.
> >> > > > > 2018-09-20 09:02:10,608
> >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> >> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query
> id
> >> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
> >> > > > > from dfs.tmp.load_pos_sales_detail_tbl
> >> > > > >
> >> > > > > 2018-09-20 09:02:42,615
> >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> >> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms,
> >> but
> >> > > > only
> >> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total number
> >> of
> >> > > tasks
> >> > > > > 29, parallelism 16.
> >> > > > > java.util.concurrent.CancellationException: null
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:86)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:57)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> >> > > > toList$2(Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> >> ~[na:1.8.0_172]
> >> > > > > at
> >> > > > > org.apache.drill.common.collections.Collectors.toList(
> >> > > > Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> >> > > TimedCallable.java:214)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetFileMetadata_v3(Metadata.java:340)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:324)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:305)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:124)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > initInternal(ParquetGroupScan.java:254)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
> >> > > > AbstractParquetGroupScan.java:380)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > init>(ParquetGroupScan.java:132)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > init>(ParquetGroupScan.java:102)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> >> > > > ParquetFormatPlugin.java:180)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> >> > > > ParquetFormatPlugin.java:70)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
> >> > > > FileSystemPlugin.java:136)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> >> > > > AbstractStoragePlugin.java:116)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> >> > > > AbstractStoragePlugin.java:111)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillTable.
> >> > > > getGroupScan(DrillTable.java:99)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:89)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:69)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:62)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
> >> > > > DrillScanRule.java:38)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
> >> > > > onMatch(VolcanoRuleCall.java:212)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
> >> > > > findBestExp(VolcanoPlanner.java:652)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
> >> > > > Programs.java:368)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:429)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:369)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToDrel(DefaultSqlHandler.java:318)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> >> > > > DefaultSqlHandler.java:180)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> >> > > > getQueryPlan(DrillSqlWorker.java:145)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> >> > > > DrillSqlWorker.java:83)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> > .foreman.Foreman.runSQL(Foreman.java:567)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> .foreman.Foreman.run(Foreman.java:266)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> >> > > > ThreadPoolExecutor.java:1149)
> >> > > > > [na:1.8.0_172]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> >> > > > ThreadPoolExecutor.java:624)
> >> > > > > [na:1.8.0_172]
> >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> >> > > > > 2018-09-20 09:02:42,625
> >> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
> >> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred:
> >> > Waited
> >> > > > for
> >> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
> >> > complete.
> >> > > > > Total number of tasks 29, parallelism 16. (null)
> >> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE
> ERROR:
> >> > > Waited
> >> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
> >> > > > complete.
> >> > > > > Total number of tasks 29, parallelism 16.
> >> > > > >
> >> > > > >
> >> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.common.exceptions.UserException$
> >> > > > Builder.build(UserException.java:633)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> >> > > TimedCallable.java:253)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetFileMetadata_v3(Metadata.java:340)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:324)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:305)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
> >> > > > getParquetTableMetadata(Metadata.java:124)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > initInternal(ParquetGroupScan.java:254)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
> >> > > > AbstractParquetGroupScan.java:380)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > init>(ParquetGroupScan.java:132)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
> >> > > > init>(ParquetGroupScan.java:102)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> >> > > > ParquetFormatPlugin.java:180)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
> >> > > > ParquetFormatPlugin.java:70)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
> >> > > > FileSystemPlugin.java:136)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> >> > > > AbstractStoragePlugin.java:116)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
> >> > > > AbstractStoragePlugin.java:111)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillTable.
> >> > > > getGroupScan(DrillTable.java:99)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:89)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:69)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
> >> > > > DrillScanRel.java:62)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
> >> > > > DrillScanRule.java:38)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
> >> > > > onMatch(VolcanoRuleCall.java:212)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
> >> > > > findBestExp(VolcanoPlanner.java:652)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
> >> > > > Programs.java:368)
> >> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:429)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.
> >> > > DefaultSqlHandler.transform(
> >> > > > DefaultSqlHandler.java:369)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToRawDrel(DefaultSqlHandler.java:255)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
> >> > > > convertToDrel(DefaultSqlHandler.java:318)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > >
> >> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
> >> > > > DefaultSqlHandler.java:180)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
> >> > > > getQueryPlan(DrillSqlWorker.java:145)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
> >> > > > DrillSqlWorker.java:83)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> > .foreman.Foreman.runSQL(Foreman.java:567)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.work
> >> .foreman.Foreman.run(Foreman.java:266)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
> >> > > > ThreadPoolExecutor.java:1149)
> >> > > > > [na:1.8.0_172]
> >> > > > > at
> >> > > > >
> >> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> >> > > > ThreadPoolExecutor.java:624)
> >> > > > > [na:1.8.0_172]
> >> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
> >> > > > > Caused by: java.util.concurrent.CancellationException: null
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:86)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
> >> > > > apply(TimedCallable.java:57)
> >> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > at
> >> > > > >
> >> > > > > org.apache.drill.common.collections.Collectors.lambda$
> >> > > > toList$2(Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
> >> ~[na:1.8.0_172]
> >> > > > > at
> >> > > > > org.apache.drill.common.collections.Collectors.toList(
> >> > > > Collectors.java:97)
> >> > > > > ~[drill-common-1.14.0.jar:1.14.0]
> >> > > > > at org.apache.drill.exec.store.TimedCallable.run(
> >> > > TimedCallable.java:214)
> >> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
> >> > > > > ... 33 common frames omitted
> >> > > > >
> >> > > > >
> >> > > > > ----------------------------------------
> >> > > > > ----------
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
>

Re: ERROR is reading parquet data after create table

Posted by Herman Tan <he...@redcubesg.com>.
Hi,

I have restarted Drill and run the script again.

select * from dfs.tmp.`load_pos_sales_detail_tbl`;
-- SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 11 tasks for
'Fetch parquet metadata' are complete. Total number of tasks 29,
parallelism 16.

The 29 tasks correspond to the 29 parquet files in the folder.
To check whether any individual parquet file is corrupted, I ran the following
SQL against each file in the folder, and ALL PASSED (full list of statements
below).
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;

So it seems that for this table Drill can only fetch the metadata for 11 of
the 29 parquet files before it times out.
The timeout appears to be computed rather than fixed, and it varies with the
size of the table.
I checked the source code but could not find where the "30000 ms" timeout is
calculated.
When I am lucky, Drill resolves the metadata for all 29 files within 30000 ms
and the query passes.
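
Looking at the numbers, I have a guess at the formula: if the planner grants a
fixed allowance of 15000 ms per "wave" of parallel tasks, then 29 tasks at
parallelism 16 need ceil(29 / 16) = 2 waves, and 2 x 15000 ms = 30000 ms,
which is exactly the figure in the error message. I could not confirm this in
the code, so please treat it as an assumption.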

I plan to use Drill in production, but it bothers me that there is an
effective limit on the number of parquet files a table can have, and that the
timeout parameter cannot be tuned.
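
One workaround I intend to try is building the parquet metadata cache ahead
of time, so that planning reads a single cached summary instead of opening
all 29 file footers. I have not verified yet that the cache is consulted on
this code path, so this is only a sketch:

-- Pre-compute the metadata cache once, after the CTAS finishes.
-- Later queries should read the generated .drill.parquet_metadata file
-- instead of fetching the footer of every parquet file at plan time.
REFRESH TABLE METADATA dfs.tmp.`load_pos_sales_detail_tbl`;

select * from dfs.tmp.`load_pos_sales_detail_tbl`;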

Does anyone have any ideas?

Regards,
Herman
--------------  SQL BELOW -------------

select * from dfs.tmp.`load_pos_sales_detail_tbl/1_0_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_10_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_11_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_12_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_13_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_14_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_15_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_16_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_1_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_2_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_3_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_4_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_5_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_6_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_7_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_8_1.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_0.parquet`;
select * from dfs.tmp.`load_pos_sales_detail_tbl/1_9_1.parquet`;


On Tue, Oct 2, 2018 at 4:44 PM Herman Tan <he...@redcubesg.com> wrote:

> Hi Divya and everyone,
>
> The problem has disappeared.
> Drill was not restarted.
> This appears to be intermittent.
> Before I submitted the error report, I ran the script several times and it
> failed all the time.
> Today I ran it again and it succeeded.
> I will restart and test again.
>
> Regards,
> Herman
>
>
>
> On Thu, Sep 27, 2018 at 11:50 AM Divya Gehlot <di...@gmail.com>
> wrote:
>
>> Hi Herman,
>> Just to ensure that  your parquet file format is not corrupted , Can you
>> please query a folder like just 2001 or some of the files underneath
>> .Instead of querying the whole data set at once .
>>
>> Thanks,
>> Divya
>>
>> On Wed, 26 Sep 2018 at 15:35, Herman Tan <he...@redcubesg.com> wrote:
>>
>> > Hi Kunal,
>> >
>> > ----
>> > That said, could you provide some details about the parquet data you've
>> > created, like the schema, parquet version and the tool used to generate.
>> > Usually, the schema (and meta) provides most of these details for any
>> > parquet file.
>> > ----
>> >
>> > 1. The schema is under dfs.tmp, the queries to generate are all
>> documented
>> > below.
>> > 2. I don't know how to find the parquet version of the data file
>> > 3. The tool used to generate the parquest is apache drill.  The CTAS is
>> > detailed below.
>> >
>> > Regards,
>> > Herman
>> > ____________
>> >
>> > *This is the Text data*
>> >
>> > This is the folders of the files
>> > Total # of lines about 50 million rows
>> > ----------
>> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
>> > 20180825`
>> > ;
>> > show files from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_
>> > 20180825\2011`
>> > ;
>> > -----
>> > sales_pos_detail
>> >   \pos_details_20180825
>> >     \2007
>> >     \2008
>> >     \2009
>> >     \2010
>> >     \2011
>> >   \pos_details_0.csv
>> >   \pos_details_1.csv
>> >   \pos_details_2.csv
>> >   \pos_details_3.csv
>> >   \pos_details_4.csv
>> >   \pos_details_5.csv
>> >   \pos_details_6.csv
>> >   \pos_details_7.csv
>> >   \pos_details_8.csv
>> >     \2012
>> >     \2013
>> >     \2014
>> >     \2015
>> >     \2016
>> >     \2017
>> >     \2018
>> >     \others
>> > -----
>> >
>> > *This is the view with the metadata defined:*
>> >
>> > create or replace view dfs.tmp.load_pos_sales_detail as
>> > SELECT
>> > -- dimension keys
>> >  cast(dim_date_key as int) dim_date_key
>> > ,cast(dim_site_key as int) dim_site_key
>> > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>> > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>> > ,cast(dim_card_number_key as int) dim_card_number_key
>> > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>> > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>> > ,cast(dim_product_key as int) dim_product_key
>> > ,cast(dim_pos_employee_purchase_key as int)
>> dim_pos_employee_purchase_key
>> > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>> > ,cast(dim_campaign_key as int) dim_campaign_key
>> > ,cast(dim_promo_key as int) dim_promo_key
>> > ,cast( case when dim_site_lfl_key = '' then 0 else dim_site_lfl_key end
>> as
>> > int) dim_site_lfl_key
>> > -- derived from keys
>> > ,dim_date_str
>> > ,`year` as `trx_year`
>> > -- Measures
>> > ,Product_Sales_Qty
>> > ,Product_Sales_Price
>> > ,Product_Cost_Price
>> > ,Product_Cost_Amt
>> > ,Product_Sales_Gross_Amt
>> > ,Product_Sales_Promo_Disc_Amt
>> > ,Product_Sales_Add_Promo_Disc_Amt
>> > ,Product_Sales_Total_Promo_Disc_Amt
>> > ,Product_Sales_Retail_Promo_Amt
>> > ,Product_Sales_Retail_Amt
>> > ,Product_Sales_VAT_Amt
>> > ,Product_Sales_Product_Margin_Amt
>> > ,Product_Sales_Initial_Margin_Amt
>> > from dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>> > ;
>> >
>> >
>> > *This is the CTAS that generates the parquet from the view above:*
>> >
>> > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>> > ;
>> >
>> > create table dfs.tmp.load_pos_sales_detail_tbl AS
>> > SELECT
>> > -- dimension keys
>> >  dim_date_key
>> > ,dim_site_key
>> > ,dim_pos_header_key
>> > ,dim_pos_cashier_key
>> > ,dim_card_number_key
>> > ,dim_hour_minute_key
>> > ,dim_pos_clerk_key
>> > ,dim_product_key
>> > ,dim_pos_employee_purchase_key
>> > ,dim_pos_terminal_key
>> > ,dim_campaign_key
>> > ,dim_promo_key
>> > ,dim_site_lfl_key
>> > -- derived from keys
>> > ,dim_date_str
>> > ,`trx_year`
>> > -- Measures
>> > ,Product_Sales_Qty Sales_Qty
>> > ,Product_Sales_Price Sales_Price
>> > ,Product_Cost_Price Cost_Price
>> > ,Product_Cost_Amt Cost_Amt
>> > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>> > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>> > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>> > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>> > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>> > ,Product_Sales_Retail_Amt Retail_Amt
>> > ,Product_Sales_VAT_Amt VAT_Amt
>> > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>> > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>> > from dfs.tmp.load_pos_sales_detail
>> > ;
>> >
>> >
>> > *This is the select query that generated the error:*
>> >
>> > select *
>> > from dfs.tmp.load_pos_sales_detail_tbl
>> > ;
>> >
>> > ----- ERROR ----------------------------
>> >
>> > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for
>> > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>> > parallelism 16.
>> >
>> >
>> > On Mon, Sep 24, 2018 at 9:08 AM, Kunal Khatua <ku...@apache.org> wrote:
>> >
>> > > Hi Herman
>> > >
>> > > Assuming that you're doing analytics on your data. If that's the case,
>> > > parquet format is the way to go.
>> > >
>> > > That said, could you provide some details about the parquet data
>> you've
>> > > created, like the schema, parquet version and the tool used to
>> generate.
>> > > Usually, the schema (and meta) provides most of these details for any
>> > > parquet file.
>> > >
>> > > It'll be useful to know if there is a pattern in the failure because
>> of
>> > > which there might be corruption occurring.
>> > >
>> > > Kunal
>> > >
>> > >
>> > > On 9/22/2018 11:49:36 PM, Herman Tan <he...@redcubesg.com> wrote:
>> > > Hi Karthik,
>> > >
>> > > Thank you for pointing me to the mail archive in May 2018.
>> > > That is exactly the same problem I am facing.
>> > >
>> > > I thought of using Drill as an ETL where I load the warehouse parquet
>> > > tables from text source files.
>> > > Then I query the parquet tables.
>> > > It works on some parquet tables but am having problems with large ones
>> > that
>> > > consist of several files. (I think)
>> > > Still investigating.
>> > > Anyone in the community have other experience?
>> > > Should I work with all text files instead of parquet?
>> > >
>> > >
>> > > Herman
>> > >
>> > >
>> > > On Fri, Sep 21, 2018 at 2:15 AM, Karthikeyan Manivannan
>> > > kmanivannan@mapr.com> wrote:
>> > >
>> > > > Hi Herman,
>> > > >
>> > > > I am not sure what the exact problem here is but can you check to
>> see
>> > if
>> > > > you are not hitting the problem described here:
>> > > >
>> > > > http://mail-archives.apache.org/mod_mbox/drill-user/201805.mbox/%
>> > > >
>> 3CCACwRgneXLXoP2vCYuGsA4Gwd1jGS8F+rcpzQ8rHuatFW5fmRaQ@mail.gmail.com
>> > %3E
>> > > >
>> > > > Thanks
>> > > >
>> > > > Karthik
>> > > >
>> > > > On Wed, Sep 19, 2018 at 7:02 PM Herman Tan wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > I encountered the following error.
>> > > > > The Steps I did are as follows:
>> > > > > 1. Create a view to fix the data type of fields with cast
>> > > > > 2. Create table (parquet) using the view
>> > > > > 3. Query select * from table (query a field also does not work)
>> > > > >
>> > > > > The error:
>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks
>> for
>> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>> > > > > parallelism 16.
>> > > > >
>> > > > > When I re-run this, the number of tasks will vary.
>> > > > >
>> > > > > What could be the problem?
>> > > > >
>> > > > > Regards,
>> > > > > Herman Tan
>> > > > >
>> > > > > More info below:
>> > > > >
>> > > > > This is the folders of the files
>> > > > > Total # of lines, 50 million
>> > > > > ----------
>> > > > > show files from
>> > > > > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>> > > > > ;
>> > > > > show files from
>> > > > >
>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825\2011`
>> > > > > ;
>> > > > > -----
>> > > > > sales_pos_detail
>> > > > > \pos_details_20180825
>> > > > > \2007
>> > > > > \2008
>> > > > > \2009
>> > > > > \2010
>> > > > > \2011
>> > > > > \pos_details_0.csv
>> > > > > \pos_details_1.csv
>> > > > > \pos_details_2.csv
>> > > > > \pos_details_3.csv
>> > > > > \pos_details_4.csv
>> > > > > \pos_details_5.csv
>> > > > > \pos_details_6.csv
>> > > > > \pos_details_7.csv
>> > > > > \pos_details_8.csv
>> > > > > \2012
>> > > > > \2013
>> > > > > \2014
>> > > > > \2015
>> > > > > \2016
>> > > > > \2017
>> > > > > \2018
>> > > > > \others
>> > > > > -----
>> > > > >
>> > > > > create or replace view dfs.tmp.load_pos_sales_detail as
>> > > > > SELECT
>> > > > > -- dimension keys
>> > > > > cast(dim_date_key as int) dim_date_key
>> > > > > ,cast(dim_site_key as int) dim_site_key
>> > > > > ,cast(dim_pos_header_key as bigint) dim_pos_header_key
>> > > > > ,cast(dim_pos_cashier_key as int) dim_pos_cashier_key
>> > > > > ,cast(dim_card_number_key as int) dim_card_number_key
>> > > > > ,cast(dim_hour_minute_key as int) dim_hour_minute_key
>> > > > > ,cast(dim_pos_clerk_key as int) dim_pos_clerk_key
>> > > > > ,cast(dim_product_key as int) dim_product_key
>> > > > > ,cast(dim_pos_employee_purchase_key as int)
>> > > > dim_pos_employee_purchase_key
>> > > > > ,cast(dim_pos_terminal_key as int) dim_pos_terminal_key
>> > > > > ,cast(dim_campaign_key as int) dim_campaign_key
>> > > > > ,cast(dim_promo_key as int) dim_promo_key
>> > > > > ,cast( case when dim_site_lfl_key = '' then 0 else
>> dim_site_lfl_key
>> > end
>> > > > as
>> > > > > int) dim_site_lfl_key
>> > > > > -- derived from keys
>> > > > > ,dim_date_str
>> > > > > ,`year` as `trx_year`
>> > > > > -- Measures
>> > > > > ,Product_Sales_Qty
>> > > > > ,Product_Sales_Price
>> > > > > ,Product_Cost_Price
>> > > > > ,Product_Cost_Amt
>> > > > > ,Product_Sales_Gross_Amt
>> > > > > ,Product_Sales_Promo_Disc_Amt
>> > > > > ,Product_Sales_Add_Promo_Disc_Amt
>> > > > > ,Product_Sales_Total_Promo_Disc_Amt
>> > > > > ,Product_Sales_Retail_Promo_Amt
>> > > > > ,Product_Sales_Retail_Amt
>> > > > > ,Product_Sales_VAT_Amt
>> > > > > ,Product_Sales_Product_Margin_Amt
>> > > > > ,Product_Sales_Initial_Margin_Amt
>> > > > > from
>> > dfs.`D:\retail_sandbox\pos\sales_pos_detail\pos_details_20180825`
>> > > > > ;
>> > > > >
>> > > > > drop table if exists dfs.tmp.load_pos_sales_detail_tbl
>> > > > > ;
>> > > > >
>> > > > > create table dfs.tmp.load_pos_sales_detail_tbl AS
>> > > > > SELECT
>> > > > > -- dimension keys
>> > > > > dim_date_key
>> > > > > ,dim_site_key
>> > > > > ,dim_pos_header_key
>> > > > > ,dim_pos_cashier_key
>> > > > > ,dim_card_number_key
>> > > > > ,dim_hour_minute_key
>> > > > > ,dim_pos_clerk_key
>> > > > > ,dim_product_key
>> > > > > ,dim_pos_employee_purchase_key
>> > > > > ,dim_pos_terminal_key
>> > > > > ,dim_campaign_key
>> > > > > ,dim_promo_key
>> > > > > ,dim_site_lfl_key
>> > > > > -- derived from keys
>> > > > > ,dim_date_str
>> > > > > ,`trx_year`
>> > > > > -- Measures
>> > > > > ,Product_Sales_Qty Sales_Qty
>> > > > > ,Product_Sales_Price Sales_Price
>> > > > > ,Product_Cost_Price Cost_Price
>> > > > > ,Product_Cost_Amt Cost_Amt
>> > > > > ,Product_Sales_Gross_Amt Sales_Gross_Amt
>> > > > > ,Product_Sales_Promo_Disc_Amt Sales_Promo_Disc_Amt
>> > > > > ,Product_Sales_Add_Promo_Disc_Amt Add_Promo_Disc_Amt
>> > > > > ,Product_Sales_Total_Promo_Disc_Amt Total_Promo_Disc_Amt
>> > > > > ,Product_Sales_Retail_Promo_Amt Retail_Promo_Amt
>> > > > > ,Product_Sales_Retail_Amt Retail_Amt
>> > > > > ,Product_Sales_VAT_Amt VAT_Amt
>> > > > > ,Product_Sales_Product_Margin_Amt Product_Margin_Amt
>> > > > > ,Product_Sales_Initial_Margin_Amt Initial_Margin_Amt
>> > > > > from dfs.tmp.load_pos_sales_detail
>> > > > > ;
>> > > > >
>> > > > > select *
>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>> > > > > ;
>> > > > >
>> > > > > ----- ERROR ----------------------------
>> > > > >
>> > > > > SQL Error: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks
>> for
>> > > > > 'Fetch parquet metadata' are complete. Total number of tasks 29,
>> > > > > parallelism 16.
>> > > > >
>> > > > >
>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
>> > > > parquet
>> > > > > metadata' are complete. Total number of tasks 29, parallelism 16.
>> > > > >
>> > > > >
>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
>> > > > > parquet metadata' are complete. Total number of tasks 29,
>> parallelism
>> > > 16.
>> > > > >
>> > > > >
>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>> > > > > RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch
>> > > > > parquet metadata' are complete. Total number of tasks 29,
>> parallelism
>> > > 16.
>> > > > >
>> > > > >
>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 on
>> > > > > IORA-G9KY9P2.stf.nus.edu.sg:31010]
>> > > > >
>> > > > > ----------------------------------------
>> > > > > From Drill log:
>> > > > >
>> > > > > 2018-09-20 08:58:12,035
>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
>> > > > > 245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf: select *
>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>> > > > >
>> > > > > 2018-09-20 08:58:53,068
>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms,
>> but
>> > > > only
>> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total number
>> of
>> > > tasks
>> > > > > 29, parallelism 16.
>> > > > > java.util.concurrent.CancellationException: null
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>> > > > apply(TimedCallable.java:86)
>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>> > > > apply(TimedCallable.java:57)
>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.common.collections.Collectors.lambda$
>> > > > toList$2(Collectors.java:97)
>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
>> ~[na:1.8.0_172]
>> > > > > at
>> > > > > org.apache.drill.common.collections.Collectors.toList(
>> > > > Collectors.java:97)
>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(
>> > > TimedCallable.java:214)
>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetFileMetadata_v3(Metadata.java:340)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:324)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:305)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:124)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > initInternal(ParquetGroupScan.java:254)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
>> > > > AbstractParquetGroupScan.java:380)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > init>(ParquetGroupScan.java:132)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > init>(ParquetGroupScan.java:102)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>> > > > ParquetFormatPlugin.java:180)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>> > > > ParquetFormatPlugin.java:70)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
>> > > > FileSystemPlugin.java:136)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>> > > > AbstractStoragePlugin.java:116)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>> > > > AbstractStoragePlugin.java:111)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillTable.
>> > > > getGroupScan(DrillTable.java:99)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:89)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:69)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:62)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
>> > > > DrillScanRule.java:38)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
>> > > > onMatch(VolcanoRuleCall.java:212)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at
>> > > > >
>> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
>> > > > findBestExp(VolcanoPlanner.java:652)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
>> > > > Programs.java:368)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.
>> > > DefaultSqlHandler.transform(
>> > > > DefaultSqlHandler.java:429)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.
>> > > DefaultSqlHandler.transform(
>> > > > DefaultSqlHandler.java:369)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>> > > > convertToRawDrel(DefaultSqlHandler.java:255)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>> > > > convertToDrel(DefaultSqlHandler.java:318)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
>> > > > DefaultSqlHandler.java:180)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
>> > > > getQueryPlan(DrillSqlWorker.java:145)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
>> > > > DrillSqlWorker.java:83)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work
>> > .foreman.Foreman.runSQL(Foreman.java:567)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work
>> .foreman.Foreman.run(Foreman.java:266)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
>> > > > ThreadPoolExecutor.java:1149)
>> > > > > [na:1.8.0_172]
>> > > > > at
>> > > > >
>> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> > > > ThreadPoolExecutor.java:624)
>> > > > > [na:1.8.0_172]
>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>> > > > > 2018-09-20 08:58:53,080
>> > [245d0f5a-ae5f-bfa2-ff04-40f7bdd1c2bf:foreman]
>> > > > > INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred:
>> > Waited
>> > > > for
>> > > > > 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>> > complete.
>> > > > > Total number of tasks 29, parallelism 16. (null)
>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR:
>> > > Waited
>> > > > > for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are
>> > > > complete.
>> > > > > Total number of tasks 29, parallelism 16.
>> > > > >
>> > > > >
>> > > > > [Error Id: f887dcae-9f55-469c-be52-b6ce2a37eeb0 ]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.common.exceptions.UserException$
>> > > > Builder.build(UserException.java:633)
>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(
>> > > TimedCallable.java:253)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetFileMetadata_v3(Metadata.java:340)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:324)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:305)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.metadata.Metadata.
>> > > > getParquetTableMetadata(Metadata.java:124)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > initInternal(ParquetGroupScan.java:254)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(
>> > > > AbstractParquetGroupScan.java:380)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > init>(ParquetGroupScan.java:132)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.parquet.ParquetGroupScan.
>> > > > init>(ParquetGroupScan.java:102)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>> > > > ParquetFormatPlugin.java:180)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(
>> > > > ParquetFormatPlugin.java:70)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(
>> > > > FileSystemPlugin.java:136)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>> > > > AbstractStoragePlugin.java:116)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(
>> > > > AbstractStoragePlugin.java:111)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillTable.
>> > > > getGroupScan(DrillTable.java:99)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:89)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:69)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRel.(
>> > > > DrillScanRel.java:62)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(
>> > > > DrillScanRule.java:38)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.calcite.plan.volcano.VolcanoRuleCall.
>> > > > onMatch(VolcanoRuleCall.java:212)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at
>> > > > >
>> > > > > org.apache.calcite.plan.volcano.VolcanoPlanner.
>> > > > findBestExp(VolcanoPlanner.java:652)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(
>> > > > Programs.java:368)
>> > > > > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.
>> > > DefaultSqlHandler.transform(
>> > > > DefaultSqlHandler.java:429)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.
>> > > DefaultSqlHandler.transform(
>> > > > DefaultSqlHandler.java:369)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>> > > > convertToRawDrel(DefaultSqlHandler.java:255)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.
>> > > > convertToDrel(DefaultSqlHandler.java:318)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > >
>> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(
>> > > > DefaultSqlHandler.java:180)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.
>> > > > getQueryPlan(DrillSqlWorker.java:145)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(
>> > > > DrillSqlWorker.java:83)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work
>> > .foreman.Foreman.runSQL(Foreman.java:567)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work
>> .foreman.Foreman.run(Foreman.java:266)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > java.util.concurrent.ThreadPoolExecutor.runWorker(
>> > > > ThreadPoolExecutor.java:1149)
>> > > > > [na:1.8.0_172]
>> > > > > at
>> > > > >
>> > > > > java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> > > > ThreadPoolExecutor.java:624)
>> > > > > [na:1.8.0_172]
>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>> > > > > Caused by: java.util.concurrent.CancellationException: null
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>> > > > apply(TimedCallable.java:86)
>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.exec.store.TimedCallable$FutureMapper.
>> > > > apply(TimedCallable.java:57)
>> > > > > ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at
>> > > > >
>> > > > > org.apache.drill.common.collections.Collectors.lambda$
>> > > > toList$2(Collectors.java:97)
>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257)
>> ~[na:1.8.0_172]
>> > > > > at
>> > > > > org.apache.drill.common.collections.Collectors.toList(
>> > > > Collectors.java:97)
>> > > > > ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(
>> > > TimedCallable.java:214)
>> > > > > [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > ... 33 common frames omitted
>> > > > > 2018-09-20 09:02:10,608 [UserServer-1] WARN
>> > > > > o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of
>> rpc
>> > > > type 3
>> > > > > took longer than 500ms. Actual duration was 2042ms.
>> > > > > 2018-09-20 09:02:10,608
>> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>> > > > > INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id
>> > > > > 245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc: select *
>> > > > > from dfs.tmp.load_pos_sales_detail_tbl
>> > > > >
>> > > > > 2018-09-20 09:02:42,615
>> > [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman]
>> > > > > ERROR o.a.d.e.s.parquet.metadata.Metadata - Waited for 30000 ms,
>> but
>> > > > only
>> > > > > 10 tasks for 'Fetch parquet metadata' are complete. Total number
>> of
>> > > tasks
>> > > > > 29, parallelism 16.
>> > > > > java.util.concurrent.CancellationException: null
>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
>> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
>> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>> > > > > 2018-09-20 09:02:42,625 [245d0e6f-0dc1-2a4b-12a4-b9aaad4182fc:foreman] INFO o.a.d.e.s.parquet.metadata.Metadata - User Error Occurred: Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks 29, parallelism 16. (null)
>> > > > > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Waited for 30000 ms, but only 10 tasks for 'Fetch parquet metadata' are complete. Total number of tasks 29, parallelism 16.
>> > > > >
>> > > > > [Error Id: 3b079174-f5d0-4313-8097-25a0b3070854 ]
>> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:253) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetFileMetadata_v3(Metadata.java:340) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:324) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:305) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.metadata.Metadata.getParquetTableMetadata(Metadata.java:124) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.initInternal(ParquetGroupScan.java:254) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.AbstractParquetGroupScan.init(AbstractParquetGroupScan.java:380) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:132) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:102) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:70) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:136) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:116) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:145) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:83) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:266) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_172]
>> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_172]
>> > > > > at java.lang.Thread.run(Thread.java:748) [na:1.8.0_172]
>> > > > > Caused by: java.util.concurrent.CancellationException: null
>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:86) ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable$FutureMapper.apply(TimedCallable.java:57) ~[drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.common.collections.Collectors.lambda$toList$2(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_172]
>> > > > > at org.apache.drill.common.collections.Collectors.toList(Collectors.java:97) ~[drill-common-1.14.0.jar:1.14.0]
>> > > > > at org.apache.drill.exec.store.TimedCallable.run(TimedCallable.java:214) [drill-java-exec-1.14.0.jar:1.14.0]
>> > > > > ... 33 common frames omitted
>> > > > >
>> > > > >
>> > > > > ----------------------------------------
>> > > > > ----------
>> > > > >
>> > > >
>> > >
>> >
>>
>
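
A note on the failure mode, for anyone hitting this later: the RESOURCE ERROR in the log above is raised at planning time, in the 'Fetch parquet metadata' step (29 footer-read tasks, parallelism 16, only 10 finished within the 30000 ms budget), not while reading row data. One mitigation that may help is to pre-build Drill's parquet metadata cache, so planning reads a single cache file instead of opening every parquet footer. A minimal sketch, assuming the table name shown in the logged query text and a stock Drill 1.14 install:

-----
-- Build (or rebuild) the parquet metadata cache for the table that timed out;
-- Drill persists it as a .drill.parquet_metadata file alongside the data.
REFRESH TABLE METADATA dfs.tmp.load_pos_sales_detail_tbl;

-- Re-run the query that previously failed during planning.
select * from dfs.tmp.load_pos_sales_detail_tbl;
-----

REFRESH TABLE METADATA is standard Drill SQL; whether it removes this intermittent timeout is untested here, since the underlying slowness appears to be in reading the parquet footers themselves.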