Posted to issues@drill.apache.org by "James Turton (Jira)" <ji...@apache.org> on 2021/07/06 09:08:00 UTC

[jira] [Updated] (DRILL-7968) ANALYZE TABLE ... REFRESH METADATA fails with FLOAT4 column

     [ https://issues.apache.org/jira/browse/DRILL-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Turton updated DRILL-7968:
--------------------------------
    Description: 
For tables with fewer than ~200 rows and a FLOAT4 column there is no bug: the ANALYZE command succeeds and, indeed, this case is exercised by tests in TestMetastoreCommands.java using alltypes_{required,optional}.parquet, both of which contain a FLOAT4 column.

But for tables with more than ~200 rows and a FLOAT4 column the ANALYZE command fails with

```
SQL Error: EXECUTION_ERROR ERROR: PojoRecordReader doesn't yet support conversions from the type [class java.lang.Float].

Failed to setup reader: DynamicPojoRecordReader
```

For example, you can reproduce the above with:

```
create table dfs.tmp.test_analyze as
select cast(1 as float) from cp.`employee.json`;

analyze table dfs.tmp.test_analyze refresh metadata;
```
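For contrast, a variant of the reproduction that keeps the row count below the ~200 threshold should complete without error. This is a sketch: the LIMIT value of 150 and the table name test_analyze_small are illustrative, not taken from the issue.

```
-- Hypothetical contrast case: fewer than ~200 rows, so the failing
-- PojoRecordReader code path is not triggered and ANALYZE succeeds.
create table dfs.tmp.test_analyze_small as
select cast(1 as float) from cp.`employee.json` limit 150;

analyze table dfs.tmp.test_analyze_small refresh metadata;
```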

The bug is easily fixed by adding support for java.lang.Float to PojoRecordReader.  I do not know why more than ~200 rows are required for the code path through PojoRecordReader to come into play, and for this bug to manifest.


> ANALYZE TABLE ... REFRESH METADATA fails with FLOAT4 column 
> ------------------------------------------------------------
>
>                 Key: DRILL-7968
>                 URL: https://issues.apache.org/jira/browse/DRILL-7968
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Metadata
>    Affects Versions: 1.19.0
>            Reporter: James Turton
>            Assignee: James Turton
>            Priority: Minor
>             Fix For: 1.20.0
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)