Posted to dev@kylin.apache.org by "Preeti V (Jira)" <ji...@apache.org> on 2020/03/13 00:44:00 UTC

[jira] [Created] (KYLIN-4427) Wrong FileSystem error when trying to enable system cubes and Dashboard in Kylin 2.6.4

Preeti V created KYLIN-4427:
-------------------------------

             Summary: Wrong FileSystem error when trying to enable system cubes and Dashboard in Kylin 2.6.4
                 Key: KYLIN-4427
                 URL: https://issues.apache.org/jira/browse/KYLIN-4427
             Project: Kylin
          Issue Type: Bug
          Components: Metrics
    Affects Versions: v2.6.4
            Reporter: Preeti V


 I am trying to enable system cubes for the Dashboard using Kylin version 2.6.4. The tables are created correctly and the cube builds successfully, but there is no query or job data on the Dashboard; it shows 0.
 
We use Azure storage for Hive (the wasb:// file system). I can see that no data is being written to the Hive_Metrics tables in Azure. In the Kylin logs I see the error below:

 
 
2020-03-12 20:02:41,790 ERROR [metrics-blocking-reservoir-scheduler-0] hive.HiveReservoirReporter:119 : Wrong FS: wasb://*****.blob.core.windows.net/hive/warehouse/kylin.db/hive_metrics_query_cube_qa/kday_date=2020-03-12, expected: hdfs://*****-prod-bn01
java.lang.IllegalArgumentException: Wrong FS: wasb://*****.blob.core.windows.net/hive/warehouse/kylin.db/hive_metrics_query_cube_qa/kday_date=2020-03-12, expected: hdfs://*****-prod-bn01
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:666)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
        at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
        at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1454)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1448)
        at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.write(HiveProducer.java:137)
        at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.send(HiveProducer.java:122)
        at org.apache.kylin.metrics.lib.impl.hive.HiveReservoirReporter$HiveReservoirListener.onRecordUpdate(HiveReservoirReporter.java:117)
        at org.apache.kylin.metrics.lib.impl.BlockingReservoir.notifyListenerOfUpdatedRecord(BlockingReservoir.java:105)
 
  
I checked the Hive configs, and the metastore warehouse dir correctly points to Azure. I found another thread describing a similar problem, where S3 is used instead of HDFS: [http://apache-kylin.74782.x6.nabble.com/jira-Created-KYLIN-4385-KYLIN-system-cube-failing-to-update-table-when-run-on-EMR-with-S3-as-storageS-td14234.html] 
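For context on the error itself: Hadoop's FileSystem.checkPath() rejects any path whose scheme/authority do not match the FileSystem instance it is called on. Since the stack trace shows a DistributedFileSystem (bound to hdfs://) receiving a wasb:// partition path, it looks like HiveProducer is using the default FileSystem rather than one derived from the path (e.g. via Path.getFileSystem(conf)). A simplified, self-contained sketch of that check (my own reconstruction for illustration, not Hadoop's actual code; the class and method names here are made up):

```java
import java.net.URI;

// Simplified sketch of the scheme/authority comparison behind the
// "Wrong FS" IllegalArgumentException. The real checkPath() also
// handles default ports and a few other cases.
public class WrongFsDemo {

    // True when the path's scheme/authority match the filesystem's URI.
    static boolean sameFileSystem(URI fsUri, URI path) {
        String pathScheme = path.getScheme();
        if (pathScheme == null) {
            return true; // relative path: resolved against fsUri
        }
        if (!fsUri.getScheme().equalsIgnoreCase(pathScheme)) {
            return false; // e.g. hdfs vs wasb -> "Wrong FS"
        }
        String fsAuth = fsUri.getAuthority();
        String pathAuth = path.getAuthority();
        return fsAuth == null ? pathAuth == null
                              : fsAuth.equalsIgnoreCase(pathAuth);
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://namenode");
        URI partition = URI.create(
            "wasb://account.blob.core.windows.net/hive/warehouse/part");
        if (!sameFileSystem(defaultFs, partition)) {
            // The condition under which checkPath throws:
            System.out.println("Wrong FS: " + partition
                + ", expected: " + defaultFs);
        }
    }
}
```

So regardless of configuration, an HDFS-backed FileSystem instance can never pass this check for a wasb:// path; the producer would need to obtain a FileSystem from the partition path itself.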
 
I also followed the recommendations in [https://www.mail-archive.com/user@kylin.apache.org/msg04347.html] and enabled all the necessary config values.
 Is this a bug in Kylin, or a configuration issue on my cluster? Any help or guidance is appreciated.
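For reference, the metrics-related entries I enabled in kylin.properties look roughly like this (property names as I understand them from the linked threads; treat this as an assumption rather than an authoritative list):

```
## Enable the Dashboard and the system-cube metrics reporters
kylin.web.dashboard-enabled=true
kylin.metrics.monitor-enabled=true
kylin.metrics.reporter-query-enabled=true
kylin.metrics.reporter-job-enabled=true
```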
 
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)