Posted to issues@kylin.apache.org by "xuekaiqi (Jira)" <ji...@apache.org> on 2020/04/20 08:30:00 UTC

[jira] [Updated] (KYLIN-4453) Query on refreshed cube failed with FileNotFoundException

     [ https://issues.apache.org/jira/browse/KYLIN-4453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xuekaiqi updated KYLIN-4453:
----------------------------
    Component/s: Query Engine

> Query on refreshed cube failed with FileNotFoundException
> ---------------------------------------------------------
>
>                 Key: KYLIN-4453
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4453
>             Project: Kylin
>          Issue Type: New Feature
>          Components: Query Engine, Storage - Parquet
>            Reporter: xuekaiqi
>            Assignee: nichunen
>            Priority: Major
>             Fix For: v4.0.0
>
>
> Steps to reproduce:
>  # Build a segment of any cube
>  # Refresh the segment
>  # Query the cube; the query fails with an error like the following
>  
> {code:java}
> java.io.FileNotFoundException: File file:/Users/kyligence/Downloads/localmeta_n/vvv/parquet/gg/20200401000000_20200403000000/4/part-00000-cdaa5f21-34dd-432d-865e-92089a7ffa03-c000.snappy.parquet does not exist
> It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
>     at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
>     at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
>     at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at
> {code}
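>
> The trace suggests Spark is still reading a cached file listing after the refresh replaced the segment's parquet files. Below is a minimal sketch of the cache invalidation the Spark error message points at, assuming direct access to the query SparkSession; the class name and path are placeholders, not Kylin's actual API or segment layout.
>
> {code:java}
> import org.apache.spark.sql.SparkSession;
>
> public class RefreshCachedSegment {
>     public static void main(String[] args) {
>         SparkSession spark = SparkSession.builder()
>                 .appName("refresh-cached-parquet")
>                 .master("local[*]")
>                 .getOrCreate();
>
>         // Placeholder path; the real location comes from the cube segment metadata.
>         String segmentPath = "file:/path/to/cube/parquet/segment";
>
>         // Drop Spark's cached file listing for this path so the next query
>         // re-lists the replaced parquet files instead of the deleted ones.
>         spark.catalog().refreshByPath(segmentPath);
>
>         // Equivalent for data registered as a table:
>         // spark.sql("REFRESH TABLE my_table");
>     }
> }
> {code}
>
> The real fix likely belongs in the query engine, refreshing or rebuilding the cached DataFrame when a segment is swapped, but the sketch above shows the Spark-side mechanism involved.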



--
This message was sent by Atlassian Jira
(v8.3.4#803005)