Posted to issues@kylin.apache.org by "wangrupeng (Jira)" <ji...@apache.org> on 2020/07/08 09:50:00 UTC

[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

     [ https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wangrupeng updated KYLIN-4625:
------------------------------
    Description: 
Currently, Kylin on Parquet already supports debugging the source code with local CSV files, but the process is a little complex. The steps are as follows:
* edit $KYLIN_SOURCE_DIR/examples/test_case_data/sandbox/kylin.properties so that it points to local resources
   ```properties
   kylin.metadata.url=$LOCAL_META_DIR
   kylin.env.zookeeper-is-local=true
   kylin.env.hdfs-working-dir=file:///path/to/local/dir
   kylin.engine.spark-conf.spark.master=local
   kylin.engine.spark-conf.spark.eventLog.dir=/path/to/local/dir
   kylin.env=UT
   ```
* debug org.apache.kylin.rest.DebugTomcat in IDEA and add the VM option "-Dspark.local=true" (see the sketch after this list)
    !image-2020-07-08-17-41-35-954.png! 
* load the CSV data source via "Data Source->Load CSV File as Table" on the "Model" page, set the schema for your table, and then press "Submit" to save
     !image-2020-07-08-17-42-09-603.png! 
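
The sketch below only illustrates what the "-Dspark.local=true" switch implies: when the flag is set, Spark runs embedded in the DebugTomcat JVM instead of being submitted to a cluster, so no Hadoop environment is needed. The class name and config values here are assumptions for illustration, not Kylin's actual bootstrap code.
```java
import org.apache.spark.sql.SparkSession;

public class LocalSparkBootstrap {

    // Create a SparkSession; if -Dspark.local=true was passed to the JVM,
    // run Spark in local mode inside the same process.
    public static SparkSession getOrCreateSession() {
        boolean sparkLocal = Boolean.parseBoolean(System.getProperty("spark.local", "false"));
        SparkSession.Builder builder = SparkSession.builder().appName("kylin-debug");
        if (sparkLocal) {
            builder = builder
                    .master("local[*]")                           // embedded Spark, no YARN/HDFS
                    .config("spark.sql.shuffle.partitions", "1"); // keep local shuffles cheap
        }
        return builder.getOrCreate();
    }
}
```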

Most of the time, we debug just to build and query a cube easily, but the current approach is cumbersome: you have to load CSV tables and create a model and cube by hand. So I want to add a CSV source that directly uses the model of the Kylin sample data when DebugTomcat starts.
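
As a rough illustration of that idea, the snippet below shows how a bundled sample CSV file could be registered as a queryable table when DebugTomcat starts in local mode. The class, method, and file locations are hypothetical and only sketch the intent; they are not existing Kylin APIs.
```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SampleCsvSource {

    // Load one sample CSV file (hypothetical path) and expose it as a temp view,
    // so the sample model/cube could be built and queried without Hadoop.
    public static void registerSampleTable(SparkSession spark, String csvPath, String tableName) {
        Dataset<Row> df = spark.read()
                .option("header", "false")      // sample CSVs assumed to have no header row
                .option("inferSchema", "true")
                .csv(csvPath);
        df.createOrReplaceTempView(tableName);
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[*]").appName("csv-source-demo").getOrCreate();
        // Hypothetical location of the sample data shipped with the Kylin source tree.
        registerSampleTable(spark, "examples/sample_cube/data/DEFAULT.KYLIN_SALES.csv", "KYLIN_SALES");
        spark.sql("SELECT COUNT(*) FROM KYLIN_SALES").show();
    }
}
```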

  was:
Currently, Kylin on Parquet already supports debugging the source code with local CSV files, but the process is a little complex. The steps are as follows:
* edit $KYLIN_SOURCE_DIR/examples/test_case_data/sandbox/kylin.properties so that it points to local resources
   ```properties
   kylin.metadata.url=$LOCAL_META_DIR
   kylin.env.zookeeper-is-local=true
   kylin.env.hdfs-working-dir=file:///path/to/local/dir
   kylin.engine.spark-conf.spark.master=local
   kylin.engine.spark-conf.spark.eventLog.dir=/path/to/local/dir
   ```
* debug org.apache.kylin.rest.DebugTomcat in IDEA and add the VM option "-Dspark.local=true"
    !image-2020-07-08-17-41-35-954.png! 
* load the CSV data source via "Data Source->Load CSV File as Table" on the "Model" page, set the schema for your table, and then press "Submit" to save
     !image-2020-07-08-17-42-09-603.png! 

Most of the time, we debug just to build and query a cube easily, but the current approach is cumbersome: you have to load CSV tables and create a model and cube by hand. So I want to add a CSV source that directly uses the model of the Kylin sample data when DebugTomcat starts.


> Debug the code of Kylin on Parquet without hadoop environment
> -------------------------------------------------------------
>
>                 Key: KYLIN-4625
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4625
>             Project: Kylin
>          Issue Type: Improvement
>          Components: Spark Engine
>            Reporter: wangrupeng
>            Assignee: wangrupeng
>            Priority: Major
>         Attachments: image-2020-07-08-17-41-35-954.png, image-2020-07-08-17-42-09-603.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)